From weixu365 at gmail.com Thu Nov 2 09:41:16 2017 From: weixu365 at gmail.com (Wei Xu) Date: Thu, 2 Nov 2017 20:41:16 +1100 Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter In-Reply-To: References: Message-ID: Hi I saw there's an issue talking about "implement keepalive timeout for upstream ". I have a different scenario for this requirement. I'm using Node.js web server as upstream, and set keep alive time out to 60 second in nodejs server. The problem is I found more than a hundred "Connection reset by peer" errors everyday. Because there's no any errors on nodejs side, I guess it was because of the upstream has disconnected, and at the same time, nginx send a new request, then received a TCP RST. I tried Tengine which is a taobao cloned version of nginx, and set upstream keep alive timeout to 30s, then there's no errors any more. So I want to know is there any plan to work on this enhancement? or can I submit a patch for it? Best Regards Wei Xu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: connection-reset-errors.png Type: image/png Size: 166098 bytes Desc: not available URL: From hongzhidao at gmail.com Mon Nov 6 13:35:14 2017 From: hongzhidao at gmail.com (=?UTF-8?B?5rSq5b+X6YGT?=) Date: Mon, 6 Nov 2017 21:35:14 +0800 Subject: [upstream] consistent hash support backup? Message-ID: Hi! We know that consistent hash upstream improve its selection in the latest version. - if (hp->tries >= points->number) { - pc->name = hp->rrp.peers->name; + if (hp->tries > 20) { ngx_http_upstream_rr_peers_unlock(hp->rrp.peers); - return NGX_BUSY; + return hp->get_rr_peer(pc, &hp->rrp); Does it mean that "backup" option is allowed in the module? |NGX_HTTP_UPSTREAM_MAX_CONNS |NGX_HTTP_UPSTREAM_MAX_FAILS |NGX_HTTP_UPSTREAM_FAIL_TIMEOUT + |NGX_HTTP_UPSTREAM_BACKUP |NGX_HTTP_UPSTREAM_DOWN; I wonder how to archive the effect of "backup" in hash like round robin, even if we don't want to use error_page. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Wed Nov 8 13:11:44 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 8 Nov 2017 16:11:44 +0300 Subject: [upstream] consistent hash support backup? In-Reply-To: References: Message-ID: <20171108131144.GF48259@lo0.su> On Mon, Nov 06, 2017 at 09:35:14PM +0800, ??? wrote: > Hi! > We know that consistent hash upstream improve its selection in the latest > version. > > - if (hp->tries >= points->number) { > - pc->name = hp->rrp.peers->name; > + if (hp->tries > 20) { > ngx_http_upstream_rr_peers_unlock(hp->rrp.peers); > - return NGX_BUSY; > + return hp->get_rr_peer(pc, &hp->rrp); > > Does it mean that "backup" option is allowed in the module? It just means that if, after 20 tries, we weren't able to select a peer using the hash algorithm, then we'll continue a selection process using the round-robin algorithm. This is also consistent with the ip_hash module. > |NGX_HTTP_UPSTREAM_MAX_CONNS > |NGX_HTTP_UPSTREAM_MAX_FAILS > |NGX_HTTP_UPSTREAM_FAIL_TIMEOUT > + |NGX_HTTP_UPSTREAM_BACKUP > |NGX_HTTP_UPSTREAM_DOWN; We do not support backup servers with ip_hash and hash balancers, though there's currently a bypass that allows to have backup servers in configurations with hash balancers. But these backup servers will be used only when falling back to round-robin, which is unlikely. 
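For illustration only, a sketch of a configuration that relies on that bypass might look like the following (the upstream name, addresses and hash key here are placeholders, not taken from any real setup; this depends on directive ordering, is not recommended, and is not guaranteed to keep working):

    upstream backends {
        server 10.0.0.1;
        server 10.0.0.2;
        server 10.0.0.3 backup;   # accepted only because "hash" appears after the "server" lines
        hash $uri consistent;     # the backup server is used only if hashing falls back to round-robin
    }
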
> I wonder how to archive the effect of "backup" in hash like round robin, > even if we don't want to use error_page. I'm not sure what did you mean here. From hongzhidao at gmail.com Wed Nov 8 14:00:44 2017 From: hongzhidao at gmail.com (=?UTF-8?B?5rSq5b+X6YGT?=) Date: Wed, 8 Nov 2017 22:00:44 +0800 Subject: [upstream] consistent hash support backup? In-Reply-To: <20171108131144.GF48259@lo0.su> References: <20171108131144.GF48259@lo0.su> Message-ID: Thank you for your reply. What I want to solve is that backup servers can be selected when all the primary servers are unavailable. Now I use error_page to solve it, but it's not convenient in the case of multi-servers-locations. Especially some locations have already config error_page directive. upstream backends { hash $uri consistent; server 10.0.0.1; server 10.0.0.2; } upstream backup { server 10.0.0.3; } server { ... location / { error_page 502 504 = @fallback; proxy_pass http://backends; } location @fallback { proxy_pass http://backup; } } But if hash support backup, it would be handier, such as following. upstream backends { hash $uri consistent; server 10.0.0.1; server 10.0.0.2; server 10.0.0.3 backup; # Unfortunately it's now allowed. } server { ... location / { proxy_pass http://backends; } } Anyway. 1. Can you share the reason for "backup" option is not allowed combined with the hash module? 2. Is there any problem if I add the flag 'NGX_HTTP_UPSTREAM_BACKUP' in the hash module? I know it's not an ideal design. Thanks again. On Wed, Nov 8, 2017 at 9:11 PM, Ruslan Ermilov wrote: > On Mon, Nov 06, 2017 at 09:35:14PM +0800, ??? wrote: > > Hi! > > We know that consistent hash upstream improve its selection in the > latest > > version. > > > > - if (hp->tries >= points->number) { > > - pc->name = hp->rrp.peers->name; > > + if (hp->tries > 20) { > > ngx_http_upstream_rr_peers_unlock(hp->rrp.peers); > > - return NGX_BUSY; > > + return hp->get_rr_peer(pc, &hp->rrp); > > > > Does it mean that "backup" option is allowed in the module? > > It just means that if, after 20 tries, we weren't able to select > a peer using the hash algorithm, then we'll continue a selection > process using the round-robin algorithm. This is also consistent > with the ip_hash module. > > > |NGX_HTTP_UPSTREAM_MAX_CONNS > > |NGX_HTTP_UPSTREAM_MAX_FAILS > > |NGX_HTTP_UPSTREAM_FAIL_TIMEOUT > > + |NGX_HTTP_UPSTREAM_BACKUP > > |NGX_HTTP_UPSTREAM_DOWN; > > We do not support backup servers with ip_hash and hash balancers, > though there's currently a bypass that allows to have backup > servers in configurations with hash balancers. But these backup > servers will be used only when falling back to round-robin, which > is unlikely. > > > I wonder how to archive the effect of "backup" in hash like round robin, > > even if we don't want to use error_page. > > I'm not sure what did you mean here. > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Wed Nov 8 20:11:26 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 8 Nov 2017 23:11:26 +0300 Subject: [upstream] consistent hash support backup? In-Reply-To: References: <20171108131144.GF48259@lo0.su> Message-ID: <20171108201126.GC2623@lo0.su> On Wed, Nov 08, 2017 at 10:00:44PM +0800, ??? wrote: > Thank you for your reply. 
> > What I want to solve is that backup servers can be selected when all the > primary servers are unavailable. > > Now I use error_page to solve it, but it's not convenient in the case of > multi-servers-locations. > Especially some locations have already config error_page directive. > > upstream backends { > hash $uri consistent; > server 10.0.0.1; > server 10.0.0.2; > } > > upstream backup { > server 10.0.0.3; > } > > server { > ... > location / { > error_page 502 504 = @fallback; > proxy_pass http://backends; > } > > location @fallback { > proxy_pass http://backup; > } > } > > But if hash support backup, it would be handier, such as following. > > upstream backends { > hash $uri consistent; > server 10.0.0.1; > server 10.0.0.2; > server 10.0.0.3 backup; # Unfortunately it's now allowed. > } > > server { > ... > location / { > proxy_pass http://backends; > } > } > > Anyway. > > 1. Can you share the reason for "backup" option is not allowed combined > with the hash module? Well, "backup" just doesn't make much sense in case of hash/ip_hash. As explained previously, if "backup" was allowed with hash, backup servers could only be selected when falling back from hash to round robin after 20 unsuccessful tries. > 2. Is there any problem if I add the flag 'NGX_HTTP_UPSTREAM_BACKUP' in > the hash module? Not that I'm aware of. Moreover, if you put the "hash" directive after the "server" directives in your example, then due to the bug you'll be able to use "backup", and things should just work the way you describe them. But it's not recommended, nor is guaranteed to work, use at your own risk. It's this bug that I called "bypass" in my previous email. > I know it's not an ideal design. > > Thanks again. > > > On Wed, Nov 8, 2017 at 9:11 PM, Ruslan Ermilov wrote: > > > On Mon, Nov 06, 2017 at 09:35:14PM +0800, ??? wrote: > > > Hi! > > > We know that consistent hash upstream improve its selection in the > > latest > > > version. > > > > > > - if (hp->tries >= points->number) { > > > - pc->name = hp->rrp.peers->name; > > > + if (hp->tries > 20) { > > > ngx_http_upstream_rr_peers_unlock(hp->rrp.peers); > > > - return NGX_BUSY; > > > + return hp->get_rr_peer(pc, &hp->rrp); > > > > > > Does it mean that "backup" option is allowed in the module? > > > > It just means that if, after 20 tries, we weren't able to select > > a peer using the hash algorithm, then we'll continue a selection > > process using the round-robin algorithm. This is also consistent > > with the ip_hash module. > > > > > |NGX_HTTP_UPSTREAM_MAX_CONNS > > > |NGX_HTTP_UPSTREAM_MAX_FAILS > > > |NGX_HTTP_UPSTREAM_FAIL_TIMEOUT > > > + |NGX_HTTP_UPSTREAM_BACKUP > > > |NGX_HTTP_UPSTREAM_DOWN; > > > > We do not support backup servers with ip_hash and hash balancers, > > though there's currently a bypass that allows to have backup > > servers in configurations with hash balancers. But these backup > > servers will be used only when falling back to round-robin, which > > is unlikely. > > > > > I wonder how to archive the effect of "backup" in hash like round robin, > > > even if we don't want to use error_page. > > > > I'm not sure what did you mean here. From mdounin at mdounin.ru Thu Nov 9 12:38:30 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 09 Nov 2017 12:38:30 +0000 Subject: [nginx] FastCGI: adjust buffer position when parsing incomplete records. 
Message-ID: details: http://hg.nginx.org/nginx/rev/3b635e8fd499 branches: changeset: 7152:3b635e8fd499 user: Maxim Dounin date: Thu Nov 09 15:35:20 2017 +0300 description: FastCGI: adjust buffer position when parsing incomplete records. Previously, nginx failed to move buffer position when parsing an incomplete record header, and due to this wasn't be able to continue parsing once remaining bytes of the record header were received. This can affect response header parsing, potentially generating spurious errors like "upstream sent unexpected FastCGI request id high byte: 1 while reading response header from upstream". While this is very unlikely, since usually record headers are written in a single buffer, this still can happen in real life, for example, if a record header will be split across two TCP packets and the second packet will be delayed. This does not affect non-buffered response body proxying, due to "buf->pos = buf->last;" at the start of the ngx_http_fastcgi_non_buffered_filter() function. Also this does not affect buffered response body proxying, as each input buffer is only passed to the filter once. diffstat: src/http/modules/ngx_http_fastcgi_module.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -2646,6 +2646,7 @@ ngx_http_fastcgi_process_record(ngx_http } } + f->pos = p; f->state = state; return NGX_AGAIN; From mdounin at mdounin.ru Thu Nov 9 14:14:20 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 09 Nov 2017 14:14:20 +0000 Subject: [nginx] SSI: fixed type. Message-ID: details: http://hg.nginx.org/nginx/rev/32f83fe5747b branches: changeset: 7153:32f83fe5747b user: hucongcong date: Fri Oct 27 00:30:38 2017 +0800 description: SSI: fixed type. 
diffstat: src/http/modules/ngx_http_ssi_filter_module.c | 15 +++++++-------- 1 files changed, 7 insertions(+), 8 deletions(-) diffs (59 lines): diff --git a/src/http/modules/ngx_http_ssi_filter_module.c b/src/http/modules/ngx_http_ssi_filter_module.c --- a/src/http/modules/ngx_http_ssi_filter_module.c +++ b/src/http/modules/ngx_http_ssi_filter_module.c @@ -1630,8 +1630,7 @@ ngx_http_ssi_evaluate_string(ngx_http_re u_char ch, *p, **value, *data, *part_data; size_t *size, len, prefix, part_len; ngx_str_t var, *val; - ngx_int_t key; - ngx_uint_t i, n, bracket, quoted; + ngx_uint_t i, n, bracket, quoted, key; ngx_array_t lengths, values; ngx_http_variable_value_t *vv; @@ -1883,9 +1882,8 @@ ngx_http_ssi_regex_match(ngx_http_reques int rc, *captures; u_char *p, errstr[NGX_MAX_CONF_ERRSTR]; size_t size; - ngx_int_t key; ngx_str_t *vv, name, value; - ngx_uint_t i, n; + ngx_uint_t i, n, key; ngx_http_ssi_ctx_t *ctx; ngx_http_ssi_var_t *var; ngx_regex_compile_t rgc; @@ -1988,10 +1986,10 @@ static ngx_int_t ngx_http_ssi_include(ngx_http_request_t *r, ngx_http_ssi_ctx_t *ctx, ngx_str_t **params) { - ngx_int_t rc, key; + ngx_int_t rc; ngx_str_t *uri, *file, *wait, *set, *stub, args; ngx_buf_t *b; - ngx_uint_t flags, i; + ngx_uint_t flags, i, key; ngx_chain_t *cl, *tl, **ll, *out; ngx_http_request_t *sr; ngx_http_ssi_var_t *var; @@ -2248,9 +2246,9 @@ ngx_http_ssi_echo(ngx_http_request_t *r, { u_char *p; uintptr_t len; - ngx_int_t key; ngx_buf_t *b; ngx_str_t *var, *value, *enc, text; + ngx_uint_t key; ngx_chain_t *cl; ngx_http_variable_value_t *vv; @@ -2410,8 +2408,9 @@ static ngx_int_t ngx_http_ssi_set(ngx_http_request_t *r, ngx_http_ssi_ctx_t *ctx, ngx_str_t **params) { - ngx_int_t key, rc; + ngx_int_t rc; ngx_str_t *name, *value, *vv; + ngx_uint_t key; ngx_http_ssi_var_t *var; ngx_http_ssi_ctx_t *mctx; From mdounin at mdounin.ru Thu Nov 9 14:14:47 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Nov 2017 17:14:47 +0300 Subject: [patch] SSI: fixed type. In-Reply-To: References: Message-ID: <20171109141447.GB26836@mdounin.ru> Hello! On Fri, Oct 27, 2017 at 02:23:47AM +0800, ?? (hucc) wrote: > # HG changeset patch > # User hucongcong > # Date 1509035438 -28800 > # Fri Oct 27 00:30:38 2017 +0800 > # Node ID 0dc91aea7cd6e8398872ac3615ce1294b06e80af > # Parent 9ef704d8563af4aff6817ab1c694fb40591f20b3 > SSI: fixed type. > > diff -r 9ef704d8563a -r 0dc91aea7cd6 src/http/modules/ngx_http_ssi_filter_module.c > --- a/src/http/modules/ngx_http_ssi_filter_module.c Tue Oct 17 19:52:16 2017 +0300 > +++ b/src/http/modules/ngx_http_ssi_filter_module.c Fri Oct 27 00:30:38 2017 +0800 > @@ -1630,8 +1630,7 @@ ngx_http_ssi_evaluate_string(ngx_http_re > u_char ch, *p, **value, *data, *part_data; > size_t *size, len, prefix, part_len; > ngx_str_t var, *val; > - ngx_int_t key; > - ngx_uint_t i, n, bracket, quoted; > + ngx_uint_t i, n, bracket, quoted, key; > ngx_array_t lengths, values; > ngx_http_variable_value_t *vv; [...] Committed, thanks. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Nov 9 14:18:45 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Nov 2017 17:18:45 +0300 Subject: [bugfix] Range filter: more appropriate restriction on max ranges. In-Reply-To: References: Message-ID: <20171109141845.GC26836@mdounin.ru> Hello! On Fri, Oct 27, 2017 at 06:48:52PM +0800, ?? 
(hucc) wrote: > # HG changeset patch > # User hucongcong > # Date 1509099660 -28800 > # Fri Oct 27 18:21:00 2017 +0800 > # Node ID b9850d3deb277bd433a689712c40a84401443520 > # Parent 9ef704d8563af4aff6817ab1c694fb40591f20b3 > Range filter: more appropriate restriction on max ranges. > > diff -r 9ef704d8563a -r b9850d3deb27 src/http/modules/ngx_http_range_filter_module.c > --- a/src/http/modules/ngx_http_range_filter_module.c Tue Oct 17 19:52:16 2017 +0300 > +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 18:21:00 2017 +0800 > @@ -369,6 +369,11 @@ ngx_http_range_parse(ngx_http_request_t > found: > > if (start < end) { > + > + if (ranges-- == 0) { > + return NGX_DECLINED; > + } > + > range = ngx_array_push(&ctx->ranges); > if (range == NULL) { > return NGX_ERROR; > @@ -383,10 +388,6 @@ ngx_http_range_parse(ngx_http_request_t > > size += end - start; > > - if (ranges-- == 0) { > - return NGX_DECLINED; > - } > - > } else if (start == 0) { > return NGX_DECLINED; > } There is no real difference, and the current code looks slightly more readable for me, so I would rather leave it as is. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Nov 9 14:48:24 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Nov 2017 17:48:24 +0300 Subject: [patch-1] Range filter: support multiple ranges. In-Reply-To: References: Message-ID: <20171109144824.GD26836@mdounin.ru> Hello! On Fri, Oct 27, 2017 at 06:50:32PM +0800, ?? (hucc) wrote: > # HG changeset patch > # User hucongcong > # Date 1509099940 -28800 > # Fri Oct 27 18:25:40 2017 +0800 > # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217 > # Parent b9850d3deb277bd433a689712c40a84401443520 > Range filter: support multiple ranges. This summary line is at least misleading. > When multiple ranges are requested, nginx will coalesce any of the ranges > that overlap, or that are separated by a gap that is smaller than the > NGX_HTTP_RANGE_MULTIPART_GAP macro. (Note that the patch also does reordering of ranges. For some reason this is not mentioned in the commit log. There are also other changes not mentioned in the commit log - for example, I see ngx_http_range_t was moved to ngx_http_request.h. These are probably do not belong to the patch at all.) Reordering and/or coalescing ranges is not something that clients usually expect to happen. This was widely discussed at the time of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 introduced the "MAY coalesce" clause. But this doesn't make clients, especially old ones, magically prepared for this. Moreover, this will certainly break some use cases like "request some metadata first, and then rest of the file". So this is certainly not a good idea to always reorder / coalesce ranges unless this is really needed for some reason. (Or even at all, as just returning 200 might be much more compatible with various clients, as outlined above.) It is also not clear what you are trying to achieve with this patch. You may want to elaborate more on what problem you are trying to solve, may be there are better solutions. [...] -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Nov 9 16:10:16 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Nov 2017 19:10:16 +0300 Subject: [PATCH] Add 'log_index_denied' directive In-Reply-To: References: Message-ID: <20171109161016.GF26836@mdounin.ru> Hello! 
On Mon, Oct 30, 2017 at 09:17:59PM +0000, Ben Brown wrote: > # HG changeset patch > # User Ben Brown > # Date 1509396532 0 > # Mon Oct 30 20:48:52 2017 +0000 > # Node ID 0c415222a6959147151422463261443275e69373 > # Parent 9ef704d8563af4aff6817ab1c694fb40591f20b3 > Add 'log_index_denied' directive > > This is similar to the 'log_not_found' directive but instead of > suppressing 404 errors this can be used to suppress the 'index of > directory...' error messages. > > It defaults to 'on', which is the current behaviour. It is valid in the > same contexts as the 'log_not_found' directive. > > This was suggested by IRC user MacroMan to aid debugging where the logs > contained a lot of these messages. This seems very similar to quite a few other errors, including "access forbidden by rule" message as logged in ngx_http_core_post_access_phase(), and "password mismatch" as logged in ngx_http_auth_basic_crypt_handler(). Trying to control these errors on-by-one does not look like a good idea to me. Also, such errors can be easily avoided by using a site-wide index file, for example: index index.html /403; location = /403 { return 403; } As such, I don't think we need to introduce a special directive to control "directory index of ... forbidden" messages. Se below for some additional code-related comments, though probably they aren't relevant. > > diff -r 9ef704d8563a -r 0c415222a695 contrib/vim/syntax/nginx.vim > --- a/contrib/vim/syntax/nginx.vim Tue Oct 17 19:52:16 2017 +0300 > +++ b/contrib/vim/syntax/nginx.vim Mon Oct 30 20:48:52 2017 +0000 > @@ -313,6 +313,7 @@ > syn keyword ngxDirective contained load_module > syn keyword ngxDirective contained lock_file > syn keyword ngxDirective contained log_format > +syn keyword ngxDirective contained log_index_denied > syn keyword ngxDirective contained log_not_found > syn keyword ngxDirective contained log_subrequest > syn keyword ngxDirective contained map_hash_bucket_size > diff -r 9ef704d8563a -r 0c415222a695 src/http/ngx_http_core_module.c > --- a/src/http/ngx_http_core_module.c Tue Oct 17 19:52:16 2017 +0300 > +++ b/src/http/ngx_http_core_module.c Mon Oct 30 20:48:52 2017 +0000 > @@ -583,6 +583,13 @@ > offsetof(ngx_http_core_loc_conf_t, msie_refresh), > NULL }, > > + { ngx_string("log_index_denied"), > + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, > + ngx_conf_set_flag_slot, > + NGX_HTTP_LOC_CONF_OFFSET, > + offsetof(ngx_http_core_loc_conf_t, log_index_denied), > + NULL }, > + > { ngx_string("log_not_found"), > NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, > ngx_conf_set_flag_slot, The order here isn't alphabetical, so I would recommend adding the new directive after log_not_found (if at all). > @@ -1153,9 +1160,10 @@ > ngx_http_core_content_phase(ngx_http_request_t *r, > ngx_http_phase_handler_t *ph) > { > - size_t root; > - ngx_int_t rc; > - ngx_str_t path; > + size_t root; > + ngx_int_t rc; > + ngx_str_t path; > + ngx_http_core_loc_conf_t *clcf; > > if (r->content_handler) { > r->write_event_handler = ngx_http_request_empty_handler; > @@ -1187,8 +1195,12 @@ > if (r->uri.data[r->uri.len - 1] == '/') { > > if (ngx_http_map_uri_to_path(r, &path, &root, 0) != NULL) { > - ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > - "directory index of \"%s\" is forbidden", path.data); > + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); > + if (clcf->log_index_denied) { Are there any reasons to call ngx_http_map_uri_to_path() if we are not going to use the result? 
> + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > + "directory index of \"%s\" is forbidden", > + path.data); > + } > } > > ngx_http_finalize_request(r, NGX_HTTP_FORBIDDEN); > @@ -3384,6 +3396,7 @@ > clcf->port_in_redirect = NGX_CONF_UNSET; > clcf->msie_padding = NGX_CONF_UNSET; > clcf->msie_refresh = NGX_CONF_UNSET; > + clcf->log_index_denied = NGX_CONF_UNSET; > clcf->log_not_found = NGX_CONF_UNSET; > clcf->log_subrequest = NGX_CONF_UNSET; > clcf->recursive_error_pages = NGX_CONF_UNSET; > @@ -3649,6 +3662,7 @@ > ngx_conf_merge_value(conf->port_in_redirect, prev->port_in_redirect, 1); > ngx_conf_merge_value(conf->msie_padding, prev->msie_padding, 1); > ngx_conf_merge_value(conf->msie_refresh, prev->msie_refresh, 0); > + ngx_conf_merge_value(conf->log_index_denied, prev->log_index_denied, 1); > ngx_conf_merge_value(conf->log_not_found, prev->log_not_found, 1); > ngx_conf_merge_value(conf->log_subrequest, prev->log_subrequest, 0); > ngx_conf_merge_value(conf->recursive_error_pages, > diff -r 9ef704d8563a -r 0c415222a695 src/http/ngx_http_core_module.h > --- a/src/http/ngx_http_core_module.h Tue Oct 17 19:52:16 2017 +0300 > +++ b/src/http/ngx_http_core_module.h Mon Oct 30 20:48:52 2017 +0000 > @@ -385,6 +385,7 @@ > ngx_flag_t port_in_redirect; /* port_in_redirect */ > ngx_flag_t msie_padding; /* msie_padding */ > ngx_flag_t msie_refresh; /* msie_refresh */ > + ngx_flag_t log_index_denied; /* log_index_denied */ > ngx_flag_t log_not_found; /* log_not_found */ > ngx_flag_t log_subrequest; /* log_subrequest */ > ngx_flag_t recursive_error_pages; /* recursive_error_pages */ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Nov 9 17:07:02 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Nov 2017 20:07:02 +0300 Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter In-Reply-To: References: Message-ID: <20171109170702.GH26836@mdounin.ru> Hello! On Thu, Nov 02, 2017 at 08:41:16PM +1100, Wei Xu wrote: > Hi > I saw there's an issue talking about "implement keepalive timeout for > upstream ". > > I have a different scenario for this requirement. > > I'm using Node.js web server as upstream, and set keep alive time out to 60 > second in nodejs server. The problem is I found more than a hundred > "Connection reset by peer" errors everyday. > > Because there's no any errors on nodejs side, I guess it was because of the > upstream has disconnected, and at the same time, nginx send a new request, > then received a TCP RST. Could you please trace what actually happens on the network level to confirm the guess is correct? Also, please check that there are no stateful firewalls between nginx and the backend. A firewall which drops the state before the timeout expires looks like a much likely cause for such errors. -- Maxim Dounin http://mdounin.ru/ From hucong.c at foxmail.com Thu Nov 9 17:08:58 2017 From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=) Date: Fri, 10 Nov 2017 01:08:58 +0800 Subject: [bugfix] Range filter: more appropriate restriction on max ranges. In-Reply-To: <20171109141845.GC26836@mdounin.ru> References: <20171109141845.GC26836@mdounin.ru> Message-ID: Hi, On Thursday, Nov 9, 2017 10:18 PM +0300, Maxim Dounin wrote: >On Fri, Oct 27, 2017 at 06:48:52PM +0800, ?? 
(hucc) wrote: > >> # HG changeset patch >> # User hucongcong >> # Date 1509099660 -28800 >> # Fri Oct 27 18:21:00 2017 +0800 >> # Node ID b9850d3deb277bd433a689712c40a84401443520 >> # Parent 9ef704d8563af4aff6817ab1c694fb40591f20b3 >> Range filter: more appropriate restriction on max ranges. >> >> diff -r 9ef704d8563a -r b9850d3deb27 src/http/modules/ngx_http_range_filter_module.c >> --- a/src/http/modules/ngx_http_range_filter_module.c Tue Oct 17 19:52:16 2017 +0300 >> +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 18:21:00 2017 +0800 >> @@ -369,6 +369,11 @@ ngx_http_range_parse(ngx_http_request_t >> found: >> >> if (start < end) { >> + >> + if (ranges-- == 0) { >> + return NGX_DECLINED; >> + } >> + >> range = ngx_array_push(&ctx->ranges); >> if (range == NULL) { >> return NGX_ERROR; >> @@ -383,10 +388,6 @@ ngx_http_range_parse(ngx_http_request_t >> >> size += end - start; >> >> - if (ranges-- == 0) { >> - return NGX_DECLINED; >> - } >> - >> } else if (start == 0) { >> return NGX_DECLINED; >> } > >There is no real difference, and the current code looks slightly >more readable for me, so I would rather leave it as is. We assume that max_ranges is configured as 1, CL is the length of representation and is equal to NGX_MAX_OFF_T_VALUE. Based on this, 416 will be returned if the byte-range-set is '0-9, 7-CL'; and 200 will be returned if the byte-range-set is '0-9, 7-100'. This is the difference, although the situation is rare, but it exists. I knew that the current code is slightly more readable. The problem here is consistency and predictability from user point of view, which is the rule I learned from http://mailman.nginx.org/pipermail/nginx-devel/2017-March/009687.html. Not to mention that correctness is more important than readability from server (nginx) point of view. From mdounin at mdounin.ru Thu Nov 9 17:40:50 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Nov 2017 20:40:50 +0300 Subject: [bugfix] Range filter: more appropriate restriction on max ranges. In-Reply-To: References: <20171109141845.GC26836@mdounin.ru> Message-ID: <20171109174050.GI26836@mdounin.ru> Hello! On Fri, Nov 10, 2017 at 01:08:58AM +0800, ?? (hucc) wrote: > Hi, > > On Thursday, Nov 9, 2017 10:18 PM +0300, Maxim Dounin wrote: > > >On Fri, Oct 27, 2017 at 06:48:52PM +0800, ?? (hucc) wrote: > > > >> # HG changeset patch > >> # User hucongcong > >> # Date 1509099660 -28800 > >> # Fri Oct 27 18:21:00 2017 +0800 > >> # Node ID b9850d3deb277bd433a689712c40a84401443520 > >> # Parent 9ef704d8563af4aff6817ab1c694fb40591f20b3 > >> Range filter: more appropriate restriction on max ranges. > >> > >> diff -r 9ef704d8563a -r b9850d3deb27 src/http/modules/ngx_http_range_filter_module.c > >> --- a/src/http/modules/ngx_http_range_filter_module.c Tue Oct 17 19:52:16 2017 +0300 > >> +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 18:21:00 2017 +0800 > >> @@ -369,6 +369,11 @@ ngx_http_range_parse(ngx_http_request_t > >> found: > >> > >> if (start < end) { > >> + > >> + if (ranges-- == 0) { > >> + return NGX_DECLINED; > >> + } > >> + > >> range = ngx_array_push(&ctx->ranges); > >> if (range == NULL) { > >> return NGX_ERROR; > >> @@ -383,10 +388,6 @@ ngx_http_range_parse(ngx_http_request_t > >> > >> size += end - start; > >> > >> - if (ranges-- == 0) { > >> - return NGX_DECLINED; > >> - } > >> - > >> } else if (start == 0) { > >> return NGX_DECLINED; > >> } > > > >There is no real difference, and the current code looks slightly > >more readable for me, so I would rather leave it as is. 
> > We assume that max_ranges is configured as 1, CL is the length of > representation and is equal to NGX_MAX_OFF_T_VALUE. Based on this, > 416 will be returned if the byte-range-set is '0-9, 7-CL'; > and 200 will be returned if the byte-range-set is '0-9, 7-100'. > This is the difference, although the situation is rare, but it exists. > > I knew that the current code is slightly more readable. The problem > here is consistency and predictability from user point of view, which > is the rule I learned from > http://mailman.nginx.org/pipermail/nginx-devel/2017-March/009687.html. > Not to mention that correctness is more important than readability > from server (nginx) point of view. Both 200 and 416 are correct responses in the particular situation described: 416 is quite normal when a user tries to request more than NGX_MAX_OFF_T_VALUE bytes, and 200 is quite normal when a user tries to request more ranges than allowed by max_ranges. Which status to use doesn't really matter, as in both cases no range processing will be done and both status codes are correct. In this particular case I would prefer 416 as currently returned, as it is more in line with what is returned when there is no max_ranges limit configured. (It might be also a good idea to apply max_ranges limit only after parsing all ranges, so that "Range: bytes=0-0,1-1,foo" would consistently result in 416 regardless of max_ranges configured, but this does not seem to be important enough to care.) -- Maxim Dounin http://mdounin.ru/ From hucong.c at foxmail.com Thu Nov 9 19:56:00 2017 From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=) Date: Fri, 10 Nov 2017 03:56:00 +0800 Subject: [patch-1] Range filter: support multiple ranges. Message-ID: Hi, On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote: >On Fri, Oct 27, 2017 at 06:50:32PM +0800, ?? (hucc) wrote: > >> # HG changeset patch >> # User hucongcong >> # Date 1509099940 -28800 >> # Fri Oct 27 18:25:40 2017 +0800 >> # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217 >> # Parent b9850d3deb277bd433a689712c40a84401443520 >> Range filter: support multiple ranges. > >This summary line is at least misleading. Ok, maybe the summary line is support multiple ranges when body is in multiple buffers. >> When multiple ranges are requested, nginx will coalesce any of the ranges >> that overlap, or that are separated by a gap that is smaller than the >> NGX_HTTP_RANGE_MULTIPART_GAP macro. > >(Note that the patch also does reordering of ranges. For some >reason this is not mentioned in the commit log. There are also >other changes not mentioned in the commit log - for example, I see >ngx_http_range_t was moved to ngx_http_request.h. These are >probably do not belong to the patch at all.) I actually wait for you to give better advice. I tried my best to make the changes easier and more readable and I will split it into multiple patches based on your suggestions if these changes will be accepted. >Reordering and/or coalescing ranges is not something that clients >usually expect to happen. This was widely discussed at the time >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 >introduced the "MAY coalesce" clause. But this doesn't make >clients, especially old ones, magically prepared for this. I did not know the CVE-2011-3192. If multiple ranges list in ascending order and there are no overlapping ranges, the code will be much simpler. This is what I think. 
>Moreover, this will certainly break some use cases like "request >some metadata first, and then rest of the file". So this is >certainly not a good idea to always reorder / coalesce ranges >unless this is really needed for some reason. (Or even at all, >as just returning 200 might be much more compatible with various >clients, as outlined above.) > >It is also not clear what you are trying to achieve with this >patch. You may want to elaborate more on what problem you are >trying to solve, may be there are better solutions. I am trying to support multiple ranges when proxy_buffering is off and sometimes slice is enabled. The data is always cached in the backend which is not nginx. As far as I know, similar architecture is widely used in CDN. So the implementation of multiple ranges in the architecture I mentioned above is required and inevitable. Besides, P2P clients desire for this feature to gather data-pieces. Hope I already made it clear. All these changes have been tested. Hope it helps! Temporarily, the changes are as follows: diff -r 32f83fe5747b src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 02:56:17 2017 +0800 @@ -46,16 +46,10 @@ typedef struct { - off_t start; - off_t end; - ngx_str_t content_range; -} ngx_http_range_t; + off_t offset; + ngx_uint_t index; /* start with 1 */ - -typedef struct { - off_t offset; - ngx_str_t boundary_header; - ngx_array_t ranges; + ngx_str_t boundary_header; } ngx_http_range_filter_ctx_t; @@ -66,12 +60,14 @@ static ngx_int_t ngx_http_range_singlepa static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx); static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); @@ -234,7 +230,7 @@ parse: r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; r->headers_out.status_line.len = 0; - if (ctx->ranges.nelts == 1) { + if (r->headers_out.ranges->nelts == 1) { return ngx_http_range_singlepart_header(r, ctx); } @@ -270,8 +266,8 @@ ngx_http_range_parse(ngx_http_request_t ngx_uint_t ranges) { u_char *p; - off_t start, end, size, content_length, cutoff, - cutlim; + off_t start, end, content_length, + cutoff, cutlim; ngx_uint_t suffix; ngx_http_range_t *range; ngx_http_range_filter_ctx_t *mctx; @@ -280,19 +276,20 @@ ngx_http_range_parse(ngx_http_request_t mctx = ngx_http_get_module_ctx(r->main, ngx_http_range_body_filter_module); if (mctx) { - ctx->ranges = mctx->ranges; + r->headers_out.ranges = r->main->headers_out.ranges; + ctx->boundary_header = mctx->boundary_header; return NGX_OK; } } - if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t)) - != NGX_OK) - { + r->headers_out.ranges = 
ngx_array_create(r->pool, 1, + sizeof(ngx_http_range_t)); + if (r->headers_out.ranges == NULL) { return NGX_ERROR; } p = r->headers_in.range->value.data + 6; - size = 0; + range = NULL; content_length = r->headers_out.content_length_n; cutoff = NGX_MAX_OFF_T_VALUE / 10; @@ -369,7 +366,12 @@ ngx_http_range_parse(ngx_http_request_t found: if (start < end) { - range = ngx_array_push(&ctx->ranges); + + if (range && start < range->end) { + return NGX_DECLINED; + } + + range = ngx_array_push(r->headers_out.ranges); if (range == NULL) { return NGX_ERROR; } @@ -377,16 +379,6 @@ ngx_http_range_parse(ngx_http_request_t range->start = start; range->end = end; - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { - return NGX_HTTP_RANGE_NOT_SATISFIABLE; - } - - size += end - start; - - if (ranges-- == 0) { - return NGX_DECLINED; - } - } else if (start == 0) { return NGX_DECLINED; } @@ -396,12 +388,12 @@ ngx_http_range_parse(ngx_http_request_t } } - if (ctx->ranges.nelts == 0) { + if (r->headers_out.ranges->nelts == 0) { return NGX_HTTP_RANGE_NOT_SATISFIABLE; } - if (size > content_length) { - return NGX_DECLINED; + if (r->headers_out.ranges->nelts > ranges) { + r->headers_out.ranges->nelts = ranges; } return NGX_OK; @@ -439,7 +431,7 @@ ngx_http_range_singlepart_header(ngx_htt /* "Content-Range: bytes SSSS-EEEE/TTTT" header */ - range = ctx->ranges.elts; + range = r->headers_out.ranges->elts; content_range->value.len = ngx_sprintf(content_range->value.data, "bytes %O-%O/%O", @@ -469,6 +461,10 @@ ngx_http_range_multipart_header(ngx_http ngx_http_range_t *range; ngx_atomic_uint_t boundary; + if (r != r->main) { + return ngx_http_next_header_filter(r); + } + size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof(CRLF "Content-Type: ") - 1 + r->headers_out.content_type.len @@ -551,8 +547,8 @@ ngx_http_range_multipart_header(ngx_http len = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1; - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { + range = r->headers_out.ranges->elts; + for (i = 0; i < r->headers_out.ranges->nelts; i++) { /* the size of the range: "SSSS-EEEE/TTTT" CRLF CRLF */ @@ -570,10 +566,11 @@ ngx_http_range_multipart_header(ngx_http - range[i].content_range.data; len += ctx->boundary_header.len + range[i].content_range.len - + (range[i].end - range[i].start); + + (range[i].end - range[i].start); } r->headers_out.content_length_n = len; + r->headers_out.content_offset = range[0].start; if (r->headers_out.content_length) { r->headers_out.content_length->hash = 0; @@ -635,67 +632,19 @@ ngx_http_range_body_filter(ngx_http_requ return ngx_http_next_body_filter(r, in); } - if (ctx->ranges.nelts == 1) { + if (r->headers_out.ranges->nelts == 1) { return ngx_http_range_singlepart_body(r, ctx, in); } - /* - * multipart ranges are supported only if whole body is in a single buffer - */ - if (ngx_buf_special(in->buf)) { return ngx_http_next_body_filter(r, in); } - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { - return NGX_ERROR; - } - return ngx_http_range_multipart_body(r, ctx, in); } static ngx_int_t -ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) -{ - off_t start, last; - ngx_buf_t *buf; - ngx_uint_t i; - ngx_http_range_t *range; - - if (ctx->offset) { - goto overlapped; - } - - buf = in->buf; - - if (!buf->last_buf) { - start = ctx->offset; - last = ctx->offset + ngx_buf_size(buf); - - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { - if (start > range[i].start || last 
< range[i].end) { - goto overlapped; - } - } - } - - ctx->offset = ngx_buf_size(buf); - - return NGX_OK; - -overlapped: - - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "range in overlapped buffers"); - - return NGX_ERROR; -} - - -static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { @@ -706,7 +655,7 @@ ngx_http_range_singlepart_body(ngx_http_ out = NULL; ll = &out; - range = ctx->ranges.elts; + range = r->headers_out.ranges->elts; for (cl = in; cl; cl = cl->next) { @@ -786,96 +735,227 @@ static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { - ngx_buf_t *b, *buf; - ngx_uint_t i; - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; - ngx_http_range_t *range; + off_t start, last, back; + ngx_buf_t *buf, *b; + ngx_uint_t i, finished; + ngx_chain_t *out, *cl, *ncl, **ll; + ngx_http_range_t *range, *tail; + + range = r->headers_out.ranges->elts; - ll = &out; - buf = in->buf; - range = ctx->ranges.elts; + if (!ctx->index) { + for (i = 0; i < r->headers_out.ranges->nelts; i++) { + if (ctx->offset < range[i].end) { + ctx->index = i + 1; + break; + } + } + } - for (i = 0; i < ctx->ranges.nelts; i++) { + tail = range + r->headers_out.ranges->nelts - 1; + range += ctx->index - 1; - /* - * The boundary header of the range: - * CRLF - * "--0123456789" CRLF - * "Content-Type: image/jpeg" CRLF - * "Content-Range: bytes " - */ + out = NULL; + ll = &out; + finished = 0; + + for (cl = in; cl; cl = cl->next) { + + buf = cl->buf; - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + start = ctx->offset; + last = ctx->offset + ngx_buf_size(buf); + + ctx->offset = last; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body buf: %O-%O", start, last); + + if (ngx_buf_special(buf)) { + *ll = cl; + ll = &cl->next; + continue; } - b->memory = 1; - b->pos = ctx->boundary_header.data; - b->last = ctx->boundary_header.data + ctx->boundary_header.len; + if (range->end <= start || range->start >= last) { + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body skip"); - hcl = ngx_alloc_chain_link(r->pool); - if (hcl == NULL) { - return NGX_ERROR; + if (buf->in_file) { + buf->file_pos = buf->file_last; + } + + buf->pos = buf->last; + buf->sync = 1; + + continue; } - hcl->buf = b; + if (range->start >= start) { + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { + return NGX_ERROR; + } - /* "SSSS-EEEE/TTTT" CRLF CRLF */ + if (buf->in_file) { + buf->file_pos += range->start - start; + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (ngx_buf_in_memory(buf)) { + buf->pos += (size_t) (range->start - start); + } } - b->temporary = 1; - b->pos = range[i].content_range.data; - b->last = range[i].content_range.data + range[i].content_range.len; + if (range->end <= last) { + + if (range < tail && range[1].start < last) { + + b = ngx_alloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } + + ncl = ngx_alloc_chain_link(r->pool); + if (ncl == NULL) { + return NGX_ERROR; + } - rcl = ngx_alloc_chain_link(r->pool); - if (rcl == NULL) { - return NGX_ERROR; - } + ncl->buf = b; + ncl->next = cl; + + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); + b->last_in_chain = 0; + b->last_buf = 0; + + back = last - range->end; + ctx->offset -= back; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body reuse buf: %O-%O", + ctx->offset, 
ctx->offset + back); - rcl->buf = b; + if (buf->in_file) { + buf->file_pos = buf->file_last - back; + } + + if (ngx_buf_in_memory(buf)) { + buf->pos = buf->last - back; + } + cl = ncl; + buf = cl->buf; + } + + if (buf->in_file) { + buf->file_last -= last - range->end; + } - /* the range data */ + if (ngx_buf_in_memory(buf)) { + buf->last -= (size_t) (last - range->end); + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (range == tail) { + buf->last_buf = (r == r->main) ? 1 : 0; + buf->last_in_chain = 1; + *ll = cl; + ll = &cl->next; + + finished = 1; + break; + } + + range++; + ctx->index++; } - b->in_file = buf->in_file; - b->temporary = buf->temporary; - b->memory = buf->memory; - b->mmap = buf->mmap; - b->file = buf->file; + *ll = cl; + ll = &cl->next; + } + + if (out == NULL) { + return NGX_OK; + } + + *ll = NULL; + + if (finished + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) + { + return NGX_ERROR; + } + + return ngx_http_next_body_filter(r, out); +} + - if (buf->in_file) { - b->file_pos = buf->file_pos + range[i].start; - b->file_last = buf->file_pos + range[i].end; - } +static ngx_int_t +ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl, *rcl; + ngx_http_range_t *range; + + /* + * The boundary header of the range: + * CRLF + * "--0123456789" CRLF + * "Content-Type: image/jpeg" CRLF + * "Content-Range: bytes " + */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - if (ngx_buf_in_memory(buf)) { - b->pos = buf->pos + (size_t) range[i].start; - b->last = buf->pos + (size_t) range[i].end; - } + b->memory = 1; + b->pos = ctx->boundary_header.data; + b->last = ctx->boundary_header.data + ctx->boundary_header.len; + + hcl = ngx_alloc_chain_link(r->pool); + if (hcl == NULL) { + return NGX_ERROR; + } + + hcl->buf = b; + + + /* "SSSS-EEEE/TTTT" CRLF CRLF */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - dcl = ngx_alloc_chain_link(r->pool); - if (dcl == NULL) { - return NGX_ERROR; - } + range = r->headers_out.ranges->elts; + b->temporary = 1; + b->pos = range[ctx->index - 1].content_range.data; + b->last = range[ctx->index - 1].content_range.data + + range[ctx->index - 1].content_range.len; + + rcl = ngx_alloc_chain_link(r->pool); + if (rcl == NULL) { + return NGX_ERROR; + } + + rcl->buf = b; - dcl->buf = b; + **lll = hcl; + hcl->next = rcl; + *lll = &rcl->next; + + return NGX_OK; +} - *ll = hcl; - hcl->next = rcl; - rcl->next = dcl; - ll = &dcl->next; - } + +static ngx_int_t +ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl; /* the last boundary CRLF "--0123456789--" CRLF */ @@ -885,7 +965,8 @@ ngx_http_range_multipart_body(ngx_http_r } b->temporary = 1; - b->last_buf = 1; + b->last_in_chain = 1; + b->last_buf = (r == r->main) ? 
1 : 0; b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1); @@ -908,7 +989,7 @@ ngx_http_range_multipart_body(ngx_http_r *ll = hcl; - return ngx_http_next_body_filter(r, out); + return NGX_OK; } diff -r 32f83fe5747b src/http/modules/ngx_http_slice_filter_module.c --- a/src/http/modules/ngx_http_slice_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_slice_filter_module.c Fri Nov 10 02:56:17 2017 +0800 @@ -22,6 +22,8 @@ typedef struct { ngx_str_t etag; unsigned last:1; unsigned active:1; + unsigned multipart:1; + ngx_uint_t index; ngx_http_request_t *sr; } ngx_http_slice_ctx_t; @@ -103,7 +105,9 @@ ngx_http_slice_header_filter(ngx_http_re { off_t end; ngx_int_t rc; + ngx_uint_t i; ngx_table_elt_t *h; + ngx_http_range_t *range; ngx_http_slice_ctx_t *ctx; ngx_http_slice_loc_conf_t *slcf; ngx_http_slice_content_range_t cr; @@ -182,27 +186,48 @@ ngx_http_slice_header_filter(ngx_http_re r->allow_ranges = 1; r->subrequest_ranges = 1; - r->single_range = 1; rc = ngx_http_next_header_filter(r); - if (r != r->main) { - return rc; + if (r == r->main) { + r->preserve_body = 1; + + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { + ctx->multipart = (r->headers_out.ranges->nelts != 1); + range = r->headers_out.ranges->elts; + + if (ctx->start + (off_t) slcf->size <= range[0].start) { + ctx->start = slcf->size * (range[0].start / slcf->size); + } + + ctx->end = range[r->headers_out.ranges->nelts - 1].end; + + } else { + ctx->end = cr.complete_length; + } } - r->preserve_body = 1; + if (ctx->multipart) { + range = r->headers_out.ranges->elts; + + for (i = ctx->index; i < r->headers_out.ranges->nelts - 1; i++) { + + if (ctx->start < range[i].end) { + ctx->index = i; + break; + } - if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { - if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { - ctx->start = slcf->size - * (r->headers_out.content_offset / slcf->size); + if (ctx->start + (off_t) slcf->size <= range[i + 1].start) { + i++; + ctx->index = i; + ctx->start = slcf->size * (range[i].start / slcf->size); + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "range multipart so fast forward to %O-%O @%O", + range[i].start, range[i].end, ctx->start); + break; + } } - - ctx->end = r->headers_out.content_offset - + r->headers_out.content_length_n; - - } else { - ctx->end = cr.complete_length; } return rc; diff -r 32f83fe5747b src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/ngx_http_request.h Fri Nov 10 02:56:17 2017 +0800 @@ -251,6 +251,13 @@ typedef struct { typedef struct { + off_t start; + off_t end; + ngx_str_t content_range; +} ngx_http_range_t; + + +typedef struct { ngx_list_t headers; ngx_list_t trailers; @@ -278,6 +285,7 @@ typedef struct { u_char *content_type_lowcase; ngx_uint_t content_type_hash; + ngx_array_t *ranges; /* ngx_http_range_t */ ngx_array_t cache_control; off_t content_length_n; From hucong.c at foxmail.com Thu Nov 9 20:24:43 2017 From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=) Date: Fri, 10 Nov 2017 04:24:43 +0800 Subject: [bugfix] Range filter: more appropriate restriction on max ranges. In-Reply-To: <20171109174050.GI26836@mdounin.ru> References: <20171109141845.GC26836@mdounin.ru> <20171109174050.GI26836@mdounin.ru> Message-ID: Hi, On Friday, Nov 10, 2017 1:40 AM +0300, Maxim Dounin wrote: >On Fri, Nov 10, 2017 at 01:08:58AM +0800, ?? 
(hucc) wrote: > >> Hi, >> >> On Thursday, Nov 9, 2017 10:18 PM +0300, Maxim Dounin wrote: >> >> >On Fri, Oct 27, 2017 at 06:48:52PM +0800, ?? (hucc) wrote: >> > >> >> # HG changeset patch >> >> # User hucongcong >> >> # Date 1509099660 -28800 >> >> # Fri Oct 27 18:21:00 2017 +0800 >> >> # Node ID b9850d3deb277bd433a689712c40a84401443520 >> >> # Parent 9ef704d8563af4aff6817ab1c694fb40591f20b3 >> >> Range filter: more appropriate restriction on max ranges. >> > >> >There is no real difference, and the current code looks slightly >> >more readable for me, so I would rather leave it as is. >> >> We assume that max_ranges is configured as 1, CL is the length of >> representation and is equal to NGX_MAX_OFF_T_VALUE. Based on this, >> 416 will be returned if the byte-range-set is '0-9, 7-CL'; >> and 200 will be returned if the byte-range-set is '0-9, 7-100'. >> This is the difference, although the situation is rare, but it exists. >> >> I knew that the current code is slightly more readable. The problem >> here is consistency and predictability from user point of view, which >> is the rule I learned from >> http://mailman.nginx.org/pipermail/nginx-devel/2017-March/009687.html. >> Not to mention that correctness is more important than readability >> from server (nginx) point of view. > >Both 200 and 416 are correct responses in the particular situation >described: 416 is quite normal when a user tries to request more >than NGX_MAX_OFF_T_VALUE bytes, and 200 is quite normal when a >user tries to request more ranges than allowed by max_ranges. > >Which status to use doesn't really matter, as in both cases no >range processing will be done and both status codes are correct. >In this particular case I would prefer 416 as currently returned, >as it is more in line with what is returned when there is no >max_ranges limit configured. > >(It might be also a good idea to apply max_ranges limit only after >parsing all ranges, so that "Range: bytes=0-0,1-1,foo" would >consistently result in 416 regardless of max_ranges configured, >but this does not seem to be important enough to care.) Wow, it seems that you are pondering over all the details. Now I got it. From hucong.c at foxmail.com Thu Nov 9 20:41:57 2017 From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=) Date: Fri, 10 Nov 2017 04:41:57 +0800 Subject: [patch-1] Range filter: support multiple ranges. Message-ID: Hi, Please ignore the previous reply. The updated patch is placed at the end. On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote: >On Fri, Oct 27, 2017 at 06:50:32PM +0800, ?? (hucc) wrote: > >> # HG changeset patch >> # User hucongcong >> # Date 1509099940 -28800 >> # Fri Oct 27 18:25:40 2017 +0800 >> # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217 >> # Parent b9850d3deb277bd433a689712c40a84401443520 >> Range filter: support multiple ranges. > >This summary line is at least misleading. Ok, maybe the summary line is support multiple ranges when body is in multiple buffers. >> When multiple ranges are requested, nginx will coalesce any of the ranges >> that overlap, or that are separated by a gap that is smaller than the >> NGX_HTTP_RANGE_MULTIPART_GAP macro. > >(Note that the patch also does reordering of ranges. For some >reason this is not mentioned in the commit log. There are also >other changes not mentioned in the commit log - for example, I see >ngx_http_range_t was moved to ngx_http_request.h. These are >probably do not belong to the patch at all.) I actually wait for you to give better advice. 
I tried my best to make the changes easier and more readable and I will split it into multiple patches based on your suggestions if these changes will be accepted. >Reordering and/or coalescing ranges is not something that clients >usually expect to happen. This was widely discussed at the time >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 >introduced the "MAY coalesce" clause. But this doesn't make >clients, especially old ones, magically prepared for this. I did not know the CVE-2011-3192. If multiple ranges list in ascending order and there are no overlapping ranges, the code will be much simpler. This is what I think. >Moreover, this will certainly break some use cases like "request >some metadata first, and then rest of the file". So this is >certainly not a good idea to always reorder / coalesce ranges >unless this is really needed for some reason. (Or even at all, >as just returning 200 might be much more compatible with various >clients, as outlined above.) > >It is also not clear what you are trying to achieve with this >patch. You may want to elaborate more on what problem you are >trying to solve, may be there are better solutions. I am trying to support multiple ranges when proxy_buffering is off and sometimes slice is enabled. The data is always cached in the backend which is not nginx. As far as I know, similar architecture is widely used in CDN. So the implementation of multiple ranges in the architecture I mentioned above is required and inevitable. Besides, P2P clients desire for this feature to gather data-pieces. Hope I already made it clear. All these changes have been tested. Hope it helps! Temporarily, the changes are as follows: diff -r 32f83fe5747b src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 04:31:52 2017 +0800 @@ -46,16 +46,10 @@ typedef struct { - off_t start; - off_t end; - ngx_str_t content_range; -} ngx_http_range_t; + off_t offset; + ngx_uint_t index; /* start with 1 */ - -typedef struct { - off_t offset; - ngx_str_t boundary_header; - ngx_array_t ranges; + ngx_str_t boundary_header; } ngx_http_range_filter_ctx_t; @@ -66,12 +60,14 @@ static ngx_int_t ngx_http_range_singlepa static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx); static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); @@ -234,7 +230,7 @@ parse: r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; r->headers_out.status_line.len = 0; - if (ctx->ranges.nelts == 1) { + if (r->headers_out.ranges->nelts == 1) { return ngx_http_range_singlepart_header(r, ctx); } @@ -270,9 +266,9 @@ 
ngx_http_range_parse(ngx_http_request_t ngx_uint_t ranges) { u_char *p; - off_t start, end, size, content_length, cutoff, - cutlim; - ngx_uint_t suffix; + off_t start, end, content_length, + cutoff, cutlim; + ngx_uint_t suffix, descending; ngx_http_range_t *range; ngx_http_range_filter_ctx_t *mctx; @@ -280,19 +276,21 @@ ngx_http_range_parse(ngx_http_request_t mctx = ngx_http_get_module_ctx(r->main, ngx_http_range_body_filter_module); if (mctx) { - ctx->ranges = mctx->ranges; + r->headers_out.ranges = r->main->headers_out.ranges; + ctx->boundary_header = mctx->boundary_header; return NGX_OK; } } - if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t)) - != NGX_OK) - { + r->headers_out.ranges = ngx_array_create(r->pool, 1, + sizeof(ngx_http_range_t)); + if (r->headers_out.ranges == NULL) { return NGX_ERROR; } p = r->headers_in.range->value.data + 6; - size = 0; + range = NULL; + descending = 0; content_length = r->headers_out.content_length_n; cutoff = NGX_MAX_OFF_T_VALUE / 10; @@ -369,7 +367,12 @@ ngx_http_range_parse(ngx_http_request_t found: if (start < end) { - range = ngx_array_push(&ctx->ranges); + + if (range && start < range->end) { + descending++; + } + + range = ngx_array_push(r->headers_out.ranges); if (range == NULL) { return NGX_ERROR; } @@ -377,16 +380,6 @@ ngx_http_range_parse(ngx_http_request_t range->start = start; range->end = end; - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { - return NGX_HTTP_RANGE_NOT_SATISFIABLE; - } - - size += end - start; - - if (ranges-- == 0) { - return NGX_DECLINED; - } - } else if (start == 0) { return NGX_DECLINED; } @@ -396,11 +389,15 @@ ngx_http_range_parse(ngx_http_request_t } } - if (ctx->ranges.nelts == 0) { + if (r->headers_out.ranges->nelts == 0) { return NGX_HTTP_RANGE_NOT_SATISFIABLE; } - if (size > content_length) { + if (r->headers_out.ranges->nelts > ranges) { + r->headers_out.ranges->nelts = ranges; + } + + if (descending) { return NGX_DECLINED; } @@ -439,7 +436,7 @@ ngx_http_range_singlepart_header(ngx_htt /* "Content-Range: bytes SSSS-EEEE/TTTT" header */ - range = ctx->ranges.elts; + range = r->headers_out.ranges->elts; content_range->value.len = ngx_sprintf(content_range->value.data, "bytes %O-%O/%O", @@ -469,6 +466,10 @@ ngx_http_range_multipart_header(ngx_http ngx_http_range_t *range; ngx_atomic_uint_t boundary; + if (r != r->main) { + return ngx_http_next_header_filter(r); + } + size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof(CRLF "Content-Type: ") - 1 + r->headers_out.content_type.len @@ -551,8 +552,8 @@ ngx_http_range_multipart_header(ngx_http len = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1; - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { + range = r->headers_out.ranges->elts; + for (i = 0; i < r->headers_out.ranges->nelts; i++) { /* the size of the range: "SSSS-EEEE/TTTT" CRLF CRLF */ @@ -570,10 +571,11 @@ ngx_http_range_multipart_header(ngx_http - range[i].content_range.data; len += ctx->boundary_header.len + range[i].content_range.len - + (range[i].end - range[i].start); + + (range[i].end - range[i].start); } r->headers_out.content_length_n = len; + r->headers_out.content_offset = range[0].start; if (r->headers_out.content_length) { r->headers_out.content_length->hash = 0; @@ -635,67 +637,19 @@ ngx_http_range_body_filter(ngx_http_requ return ngx_http_next_body_filter(r, in); } - if (ctx->ranges.nelts == 1) { + if (r->headers_out.ranges->nelts == 1) { return ngx_http_range_singlepart_body(r, ctx, in); } - /* - * multipart ranges are supported 
only if whole body is in a single buffer - */ - if (ngx_buf_special(in->buf)) { return ngx_http_next_body_filter(r, in); } - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { - return NGX_ERROR; - } - return ngx_http_range_multipart_body(r, ctx, in); } static ngx_int_t -ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) -{ - off_t start, last; - ngx_buf_t *buf; - ngx_uint_t i; - ngx_http_range_t *range; - - if (ctx->offset) { - goto overlapped; - } - - buf = in->buf; - - if (!buf->last_buf) { - start = ctx->offset; - last = ctx->offset + ngx_buf_size(buf); - - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { - if (start > range[i].start || last < range[i].end) { - goto overlapped; - } - } - } - - ctx->offset = ngx_buf_size(buf); - - return NGX_OK; - -overlapped: - - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "range in overlapped buffers"); - - return NGX_ERROR; -} - - -static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { @@ -706,7 +660,7 @@ ngx_http_range_singlepart_body(ngx_http_ out = NULL; ll = &out; - range = ctx->ranges.elts; + range = r->headers_out.ranges->elts; for (cl = in; cl; cl = cl->next) { @@ -786,96 +740,227 @@ static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { - ngx_buf_t *b, *buf; - ngx_uint_t i; - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; - ngx_http_range_t *range; + off_t start, last, back; + ngx_buf_t *buf, *b; + ngx_uint_t i, finished; + ngx_chain_t *out, *cl, *ncl, **ll; + ngx_http_range_t *range, *tail; + + range = r->headers_out.ranges->elts; - ll = &out; - buf = in->buf; - range = ctx->ranges.elts; + if (!ctx->index) { + for (i = 0; i < r->headers_out.ranges->nelts; i++) { + if (ctx->offset < range[i].end) { + ctx->index = i + 1; + break; + } + } + } - for (i = 0; i < ctx->ranges.nelts; i++) { + tail = range + r->headers_out.ranges->nelts - 1; + range += ctx->index - 1; - /* - * The boundary header of the range: - * CRLF - * "--0123456789" CRLF - * "Content-Type: image/jpeg" CRLF - * "Content-Range: bytes " - */ + out = NULL; + ll = &out; + finished = 0; + + for (cl = in; cl; cl = cl->next) { + + buf = cl->buf; - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + start = ctx->offset; + last = ctx->offset + ngx_buf_size(buf); + + ctx->offset = last; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body buf: %O-%O", start, last); + + if (ngx_buf_special(buf)) { + *ll = cl; + ll = &cl->next; + continue; } - b->memory = 1; - b->pos = ctx->boundary_header.data; - b->last = ctx->boundary_header.data + ctx->boundary_header.len; + if (range->end <= start || range->start >= last) { + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body skip"); - hcl = ngx_alloc_chain_link(r->pool); - if (hcl == NULL) { - return NGX_ERROR; + if (buf->in_file) { + buf->file_pos = buf->file_last; + } + + buf->pos = buf->last; + buf->sync = 1; + + continue; } - hcl->buf = b; + if (range->start >= start) { + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { + return NGX_ERROR; + } - /* "SSSS-EEEE/TTTT" CRLF CRLF */ + if (buf->in_file) { + buf->file_pos += range->start - start; + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (ngx_buf_in_memory(buf)) { + buf->pos += (size_t) (range->start - start); + } } - b->temporary = 1; 
- b->pos = range[i].content_range.data; - b->last = range[i].content_range.data + range[i].content_range.len; + if (range->end <= last) { + + if (range < tail && range[1].start < last) { + + b = ngx_alloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } + + ncl = ngx_alloc_chain_link(r->pool); + if (ncl == NULL) { + return NGX_ERROR; + } - rcl = ngx_alloc_chain_link(r->pool); - if (rcl == NULL) { - return NGX_ERROR; - } + ncl->buf = b; + ncl->next = cl; + + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); + b->last_in_chain = 0; + b->last_buf = 0; + + back = last - range->end; + ctx->offset -= back; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body reuse buf: %O-%O", + ctx->offset, ctx->offset + back); - rcl->buf = b; + if (buf->in_file) { + buf->file_pos = buf->file_last - back; + } + + if (ngx_buf_in_memory(buf)) { + buf->pos = buf->last - back; + } + cl = ncl; + buf = cl->buf; + } + + if (buf->in_file) { + buf->file_last -= last - range->end; + } - /* the range data */ + if (ngx_buf_in_memory(buf)) { + buf->last -= (size_t) (last - range->end); + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (range == tail) { + buf->last_buf = (r == r->main) ? 1 : 0; + buf->last_in_chain = 1; + *ll = cl; + ll = &cl->next; + + finished = 1; + break; + } + + range++; + ctx->index++; } - b->in_file = buf->in_file; - b->temporary = buf->temporary; - b->memory = buf->memory; - b->mmap = buf->mmap; - b->file = buf->file; + *ll = cl; + ll = &cl->next; + } + + if (out == NULL) { + return NGX_OK; + } + + *ll = NULL; + + if (finished + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) + { + return NGX_ERROR; + } + + return ngx_http_next_body_filter(r, out); +} + - if (buf->in_file) { - b->file_pos = buf->file_pos + range[i].start; - b->file_last = buf->file_pos + range[i].end; - } +static ngx_int_t +ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl, *rcl; + ngx_http_range_t *range; + + /* + * The boundary header of the range: + * CRLF + * "--0123456789" CRLF + * "Content-Type: image/jpeg" CRLF + * "Content-Range: bytes " + */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - if (ngx_buf_in_memory(buf)) { - b->pos = buf->pos + (size_t) range[i].start; - b->last = buf->pos + (size_t) range[i].end; - } + b->memory = 1; + b->pos = ctx->boundary_header.data; + b->last = ctx->boundary_header.data + ctx->boundary_header.len; + + hcl = ngx_alloc_chain_link(r->pool); + if (hcl == NULL) { + return NGX_ERROR; + } + + hcl->buf = b; + + + /* "SSSS-EEEE/TTTT" CRLF CRLF */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - dcl = ngx_alloc_chain_link(r->pool); - if (dcl == NULL) { - return NGX_ERROR; - } + range = r->headers_out.ranges->elts; + b->temporary = 1; + b->pos = range[ctx->index - 1].content_range.data; + b->last = range[ctx->index - 1].content_range.data + + range[ctx->index - 1].content_range.len; + + rcl = ngx_alloc_chain_link(r->pool); + if (rcl == NULL) { + return NGX_ERROR; + } + + rcl->buf = b; - dcl->buf = b; + **lll = hcl; + hcl->next = rcl; + *lll = &rcl->next; + + return NGX_OK; +} - *ll = hcl; - hcl->next = rcl; - rcl->next = dcl; - ll = &dcl->next; - } + +static ngx_int_t +ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl; /* the last boundary CRLF "--0123456789--" CRLF */ 
@@ -885,7 +970,8 @@ ngx_http_range_multipart_body(ngx_http_r } b->temporary = 1; - b->last_buf = 1; + b->last_in_chain = 1; + b->last_buf = (r == r->main) ? 1 : 0; b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1); @@ -908,7 +994,7 @@ ngx_http_range_multipart_body(ngx_http_r *ll = hcl; - return ngx_http_next_body_filter(r, out); + return NGX_OK; } diff -r 32f83fe5747b src/http/modules/ngx_http_slice_filter_module.c --- a/src/http/modules/ngx_http_slice_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_slice_filter_module.c Fri Nov 10 04:31:52 2017 +0800 @@ -22,6 +22,8 @@ typedef struct { ngx_str_t etag; unsigned last:1; unsigned active:1; + unsigned multipart:1; + ngx_uint_t index; ngx_http_request_t *sr; } ngx_http_slice_ctx_t; @@ -103,7 +105,9 @@ ngx_http_slice_header_filter(ngx_http_re { off_t end; ngx_int_t rc; + ngx_uint_t i; ngx_table_elt_t *h; + ngx_http_range_t *range; ngx_http_slice_ctx_t *ctx; ngx_http_slice_loc_conf_t *slcf; ngx_http_slice_content_range_t cr; @@ -182,27 +186,48 @@ ngx_http_slice_header_filter(ngx_http_re r->allow_ranges = 1; r->subrequest_ranges = 1; - r->single_range = 1; rc = ngx_http_next_header_filter(r); - if (r != r->main) { - return rc; + if (r == r->main) { + r->preserve_body = 1; + + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { + ctx->multipart = (r->headers_out.ranges->nelts != 1); + range = r->headers_out.ranges->elts; + + if (ctx->start + (off_t) slcf->size <= range[0].start) { + ctx->start = slcf->size * (range[0].start / slcf->size); + } + + ctx->end = range[r->headers_out.ranges->nelts - 1].end; + + } else { + ctx->end = cr.complete_length; + } } - r->preserve_body = 1; + if (ctx->multipart) { + range = r->headers_out.ranges->elts; + + for (i = ctx->index; i < r->headers_out.ranges->nelts - 1; i++) { + + if (ctx->start < range[i].end) { + ctx->index = i; + break; + } - if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { - if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { - ctx->start = slcf->size - * (r->headers_out.content_offset / slcf->size); + if (ctx->start + (off_t) slcf->size <= range[i + 1].start) { + i++; + ctx->index = i; + ctx->start = slcf->size * (range[i].start / slcf->size); + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "range multipart so fast forward to %O-%O @%O", + range[i].start, range[i].end, ctx->start); + break; + } } - - ctx->end = r->headers_out.content_offset - + r->headers_out.content_length_n; - - } else { - ctx->end = cr.complete_length; } return rc; diff -r 32f83fe5747b src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/ngx_http_request.h Fri Nov 10 04:31:52 2017 +0800 @@ -251,6 +251,13 @@ typedef struct { typedef struct { + off_t start; + off_t end; + ngx_str_t content_range; +} ngx_http_range_t; + + +typedef struct { ngx_list_t headers; ngx_list_t trailers; @@ -278,6 +285,7 @@ typedef struct { u_char *content_type_lowcase; ngx_uint_t content_type_hash; + ngx_array_t *ranges; /* ngx_http_range_t */ ngx_array_t cache_control; off_t content_length_n; From hucong.c at foxmail.com Thu Nov 9 21:11:50 2017 From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=) Date: Fri, 10 Nov 2017 05:11:50 +0800 Subject: [nginx] Range filter: allowed ranges on empty files (ticket #1031). 
In-Reply-To: 
References: 
Message-ID: 

Hi,

>details: http://hg.nginx.org/nginx/rev/aeaac3ccee4f
>branches:
>changeset: 7043:aeaac3ccee4f
>user: Maxim Dounin
>date: Tue Jun 27 00:53:46 2017 +0300
>description:
>Range filter: allowed ranges on empty files (ticket #1031).
>
>As per RFC 2616 / RFC 7233, any range request to an empty file
>is expected to result in 416 Range Not Satisfiable response, as
>there cannot be a "byte-range-spec whose first-byte-pos is less
>than the current length of the entity-body". On the other hand,
>this makes use of byte-range requests inconvenient in some cases,
>as reported for the slice module here:
>
>http://mailman.nginx.org/pipermail/nginx-devel/2017-June/010177.html
>
>This commit changes range filter to instead return 200 if the file
>is empty and the range requested starts at 0.

With this commit, the problem I mentioned is still unsolved:
http://mailman.nginx.org/pipermail/nginx-devel/2017-June/010198.html
http://mailman.nginx.org/pipermail/nginx-devel/2017-June/010178.html

We cannot require that the backend be nginx, and it is nginx itself that
converts an ordinary request into a range request in the slice module.
So is there a safe solution?

From hucong.c at foxmail.com Fri Nov 10 11:03:01 2017
From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=)
Date: Fri, 10 Nov 2017 19:03:01 +0800
Subject: [patch-1] Range filter: support multiple ranges.
In-Reply-To: 
References: 
Message-ID: 

Hi,

How about this as the first patch?

# HG changeset patch
# User hucongcong
# Date 1510309868 -28800
# Fri Nov 10 18:31:08 2017 +0800
# Node ID c32fddd15a26b00f8f293f6b0d8762cd9f2bfbdb
# Parent 32f83fe5747b55ef341595b18069bee3891874d0
Range filter: support multipart responses in a wider range of cases.

Before the patch, multipart ranges were supported only if the whole body
was in a single buffer. Now this limit is removed. If there are no
overlapping ranges and all ranges are listed in ascending order, nginx
returns 206 with a multipart response; otherwise it returns 200 (OK).
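To make the behaviour described above concrete, here is a small
illustration of the intended outcomes (my own example, not part of the
patch; the byte offsets are arbitrary):

    # assuming the patched range filter and a sufficiently large resource
    #
    #   Range: bytes=0-99,200-299    ->  206, multipart/byteranges
    #   Range: bytes=0-99,100-199    ->  206, ascending and non-overlapping
    #   Range: bytes=200-299,0-99    ->  200, not in ascending order
    #   Range: bytes=0-199,100-299   ->  200, overlapping ranges

The diff follows.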
diff -r 32f83fe5747b -r c32fddd15a26 src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800 @@ -54,6 +54,7 @@ typedef struct { typedef struct { off_t offset; + ngx_uint_t index; /* start with 1 */ ngx_str_t boundary_header; ngx_array_t ranges; } ngx_http_range_filter_ctx_t; @@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx); static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); @@ -270,9 +273,8 @@ ngx_http_range_parse(ngx_http_request_t ngx_uint_t ranges) { u_char *p; - off_t start, end, size, content_length, cutoff, - cutlim; - ngx_uint_t suffix; + off_t start, end, content_length, cutoff, cutlim; + ngx_uint_t suffix, descending; ngx_http_range_t *range; ngx_http_range_filter_ctx_t *mctx; @@ -281,6 +283,7 @@ ngx_http_range_parse(ngx_http_request_t ngx_http_range_body_filter_module); if (mctx) { ctx->ranges = mctx->ranges; + ctx->boundary_header = ctx->boundary_header; return NGX_OK; } } @@ -292,7 +295,8 @@ ngx_http_range_parse(ngx_http_request_t } p = r->headers_in.range->value.data + 6; - size = 0; + range = NULL; + descending = 0; content_length = r->headers_out.content_length_n; cutoff = NGX_MAX_OFF_T_VALUE / 10; @@ -369,6 +373,11 @@ ngx_http_range_parse(ngx_http_request_t found: if (start < end) { + + if (range && start < range->end) { + descending++; + } + range = ngx_array_push(&ctx->ranges); if (range == NULL) { return NGX_ERROR; @@ -377,16 +386,6 @@ ngx_http_range_parse(ngx_http_request_t range->start = start; range->end = end; - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { - return NGX_HTTP_RANGE_NOT_SATISFIABLE; - } - - size += end - start; - - if (ranges-- == 0) { - return NGX_DECLINED; - } - } else if (start == 0) { return NGX_DECLINED; } @@ -400,7 +399,7 @@ ngx_http_range_parse(ngx_http_request_t return NGX_HTTP_RANGE_NOT_SATISFIABLE; } - if (size > content_length) { + if (ctx->ranges.nelts > ranges || descending) { return NGX_DECLINED; } @@ -469,6 +468,10 @@ ngx_http_range_multipart_header(ngx_http ngx_http_range_t *range; ngx_atomic_uint_t boundary; + if (r != r->main) { + return ngx_http_next_header_filter(r); + } + size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof(CRLF "Content-Type: ") - 1 + r->headers_out.content_type.len @@ -570,10 +573,11 @@ ngx_http_range_multipart_header(ngx_http - range[i].content_range.data; len += ctx->boundary_header.len + range[i].content_range.len - + (range[i].end - range[i].start); + + (range[i].end - range[i].start); } r->headers_out.content_length_n = len; + 
r->headers_out.content_offset = range[0].start; if (r->headers_out.content_length) { r->headers_out.content_length->hash = 0; @@ -639,63 +643,15 @@ ngx_http_range_body_filter(ngx_http_requ return ngx_http_range_singlepart_body(r, ctx, in); } - /* - * multipart ranges are supported only if whole body is in a single buffer - */ - if (ngx_buf_special(in->buf)) { return ngx_http_next_body_filter(r, in); } - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { - return NGX_ERROR; - } - return ngx_http_range_multipart_body(r, ctx, in); } static ngx_int_t -ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) -{ - off_t start, last; - ngx_buf_t *buf; - ngx_uint_t i; - ngx_http_range_t *range; - - if (ctx->offset) { - goto overlapped; - } - - buf = in->buf; - - if (!buf->last_buf) { - start = ctx->offset; - last = ctx->offset + ngx_buf_size(buf); - - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { - if (start > range[i].start || last < range[i].end) { - goto overlapped; - } - } - } - - ctx->offset = ngx_buf_size(buf); - - return NGX_OK; - -overlapped: - - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "range in overlapped buffers"); - - return NGX_ERROR; -} - - -static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { @@ -786,96 +742,227 @@ static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { - ngx_buf_t *b, *buf; - ngx_uint_t i; - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; - ngx_http_range_t *range; + off_t start, last, back; + ngx_buf_t *buf, *b; + ngx_uint_t i, finished; + ngx_chain_t *out, *cl, *ncl, **ll; + ngx_http_range_t *range, *tail; - ll = &out; - buf = in->buf; range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { + if (!ctx->index) { + for (i = 0; i < ctx->ranges.nelts; i++) { + if (ctx->offset < range[i].end) { + ctx->index = i + 1; + break; + } + } + } + + tail = range + ctx->ranges.nelts - 1; + range += ctx->index - 1; + + out = NULL; + ll = &out; + finished = 0; - /* - * The boundary header of the range: - * CRLF - * "--0123456789" CRLF - * "Content-Type: image/jpeg" CRLF - * "Content-Range: bytes " - */ + for (cl = in; cl; cl = cl->next) { + + buf = cl->buf; + + start = ctx->offset; + last = ctx->offset + ngx_buf_size(buf); - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + ctx->offset = last; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body buf: %O-%O", start, last); + + if (ngx_buf_special(buf)) { + *ll = cl; + ll = &cl->next; + continue; } - b->memory = 1; - b->pos = ctx->boundary_header.data; - b->last = ctx->boundary_header.data + ctx->boundary_header.len; + if (range->end <= start || range->start >= last) { + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body skip"); - hcl = ngx_alloc_chain_link(r->pool); - if (hcl == NULL) { - return NGX_ERROR; + if (buf->in_file) { + buf->file_pos = buf->file_last; + } + + buf->pos = buf->last; + buf->sync = 1; + + continue; } - hcl->buf = b; + if (range->start >= start) { + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { + return NGX_ERROR; + } - /* "SSSS-EEEE/TTTT" CRLF CRLF */ + if (buf->in_file) { + buf->file_pos += range->start - start; + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (ngx_buf_in_memory(buf)) { + buf->pos += (size_t) (range->start - 
start); + } } - b->temporary = 1; - b->pos = range[i].content_range.data; - b->last = range[i].content_range.data + range[i].content_range.len; + if (range->end <= last) { + + if (range < tail && range[1].start < last) { + + b = ngx_alloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } + + ncl = ngx_alloc_chain_link(r->pool); + if (ncl == NULL) { + return NGX_ERROR; + } - rcl = ngx_alloc_chain_link(r->pool); - if (rcl == NULL) { - return NGX_ERROR; - } + ncl->buf = b; + ncl->next = cl; + + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); + b->last_in_chain = 0; + b->last_buf = 0; + + back = last - range->end; + ctx->offset -= back; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body reuse buf: %O-%O", + ctx->offset, ctx->offset + back); - rcl->buf = b; + if (buf->in_file) { + buf->file_pos = buf->file_last - back; + } + + if (ngx_buf_in_memory(buf)) { + buf->pos = buf->last - back; + } + cl = ncl; + buf = cl->buf; + } + + if (buf->in_file) { + buf->file_last -= last - range->end; + } - /* the range data */ + if (ngx_buf_in_memory(buf)) { + buf->last -= (size_t) (last - range->end); + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (range == tail) { + buf->last_buf = (r == r->main) ? 1 : 0; + buf->last_in_chain = 1; + *ll = cl; + ll = &cl->next; + + finished = 1; + break; + } + + range++; + ctx->index++; } - b->in_file = buf->in_file; - b->temporary = buf->temporary; - b->memory = buf->memory; - b->mmap = buf->mmap; - b->file = buf->file; + *ll = cl; + ll = &cl->next; + } + + if (out == NULL) { + return NGX_OK; + } + + *ll = NULL; + + if (finished + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) + { + return NGX_ERROR; + } + + return ngx_http_next_body_filter(r, out); +} + - if (buf->in_file) { - b->file_pos = buf->file_pos + range[i].start; - b->file_last = buf->file_pos + range[i].end; - } +static ngx_int_t +ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl, *rcl; + ngx_http_range_t *range; + + /* + * The boundary header of the range: + * CRLF + * "--0123456789" CRLF + * "Content-Type: image/jpeg" CRLF + * "Content-Range: bytes " + */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - if (ngx_buf_in_memory(buf)) { - b->pos = buf->pos + (size_t) range[i].start; - b->last = buf->pos + (size_t) range[i].end; - } + b->memory = 1; + b->pos = ctx->boundary_header.data; + b->last = ctx->boundary_header.data + ctx->boundary_header.len; + + hcl = ngx_alloc_chain_link(r->pool); + if (hcl == NULL) { + return NGX_ERROR; + } + + hcl->buf = b; + + + /* "SSSS-EEEE/TTTT" CRLF CRLF */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - dcl = ngx_alloc_chain_link(r->pool); - if (dcl == NULL) { - return NGX_ERROR; - } + range = ctx->ranges.elts; + b->temporary = 1; + b->pos = range[ctx->index - 1].content_range.data; + b->last = range[ctx->index - 1].content_range.data + + range[ctx->index - 1].content_range.len; + + rcl = ngx_alloc_chain_link(r->pool); + if (rcl == NULL) { + return NGX_ERROR; + } + + rcl->buf = b; - dcl->buf = b; + **lll = hcl; + hcl->next = rcl; + *lll = &rcl->next; + + return NGX_OK; +} - *ll = hcl; - hcl->next = rcl; - rcl->next = dcl; - ll = &dcl->next; - } + +static ngx_int_t +ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl; /* the last boundary CRLF 
"--0123456789--" CRLF */ @@ -885,7 +972,8 @@ ngx_http_range_multipart_body(ngx_http_r } b->temporary = 1; - b->last_buf = 1; + b->last_in_chain = 1; + b->last_buf = (r == r->main) ? 1 : 0; b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1); @@ -908,7 +996,7 @@ ngx_http_range_multipart_body(ngx_http_r *ll = hcl; - return ngx_http_next_body_filter(r, out); + return NGX_OK; } ------------------ Original ------------------ From: "?? (hucc)";; Send time: Friday, Nov 10, 2017 4:41 AM To: "nginx-devel"; Subject: Re: [patch-1] Range filter: support multiple ranges. Hi, Please ignore the previous reply. The updated patch is placed at the end. On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote: >On Fri, Oct 27, 2017 at 06:50:32PM +0800, ?? (hucc) wrote: > >> # HG changeset patch >> # User hucongcong >> # Date 1509099940 -28800 >> # Fri Oct 27 18:25:40 2017 +0800 >> # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217 >> # Parent b9850d3deb277bd433a689712c40a84401443520 >> Range filter: support multiple ranges. > >This summary line is at least misleading. Ok, maybe the summary line is support multiple ranges when body is in multiple buffers. >> When multiple ranges are requested, nginx will coalesce any of the ranges >> that overlap, or that are separated by a gap that is smaller than the >> NGX_HTTP_RANGE_MULTIPART_GAP macro. > >(Note that the patch also does reordering of ranges. For some >reason this is not mentioned in the commit log. There are also >other changes not mentioned in the commit log - for example, I see >ngx_http_range_t was moved to ngx_http_request.h. These are >probably do not belong to the patch at all.) I actually wait for you to give better advice. I tried my best to make the changes easier and more readable and I will split it into multiple patches based on your suggestions if these changes will be accepted. >Reordering and/or coalescing ranges is not something that clients >usually expect to happen. This was widely discussed at the time >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 >introduced the "MAY coalesce" clause. But this doesn't make >clients, especially old ones, magically prepared for this. I did not know the CVE-2011-3192. If multiple ranges list in ascending order and there are no overlapping ranges, the code will be much simpler. This is what I think. >Moreover, this will certainly break some use cases like "request >some metadata first, and then rest of the file". So this is >certainly not a good idea to always reorder / coalesce ranges >unless this is really needed for some reason. (Or even at all, >as just returning 200 might be much more compatible with various >clients, as outlined above.) > >It is also not clear what you are trying to achieve with this >patch. You may want to elaborate more on what problem you are >trying to solve, may be there are better solutions. I am trying to support multiple ranges when proxy_buffering is off and sometimes slice is enabled. The data is always cached in the backend which is not nginx. As far as I know, similar architecture is widely used in CDN. So the implementation of multiple ranges in the architecture I mentioned above is required and inevitable. Besides, P2P clients desire for this feature to gather data-pieces. Hope I already made it clear. All these changes have been tested. Hope it helps! 
Temporarily, the changes are as follows: diff -r 32f83fe5747b src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 04:31:52 2017 +0800 @@ -46,16 +46,10 @@ typedef struct { - off_t start; - off_t end; - ngx_str_t content_range; -} ngx_http_range_t; + off_t offset; + ngx_uint_t index; /* start with 1 */ - -typedef struct { - off_t offset; - ngx_str_t boundary_header; - ngx_array_t ranges; + ngx_str_t boundary_header; } ngx_http_range_filter_ctx_t; @@ -66,12 +60,14 @@ static ngx_int_t ngx_http_range_singlepa static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx); static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); @@ -234,7 +230,7 @@ parse: r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; r->headers_out.status_line.len = 0; - if (ctx->ranges.nelts == 1) { + if (r->headers_out.ranges->nelts == 1) { return ngx_http_range_singlepart_header(r, ctx); } @@ -270,9 +266,9 @@ ngx_http_range_parse(ngx_http_request_t ngx_uint_t ranges) { u_char *p; - off_t start, end, size, content_length, cutoff, - cutlim; - ngx_uint_t suffix; + off_t start, end, content_length, + cutoff, cutlim; + ngx_uint_t suffix, descending; ngx_http_range_t *range; ngx_http_range_filter_ctx_t *mctx; @@ -280,19 +276,21 @@ ngx_http_range_parse(ngx_http_request_t mctx = ngx_http_get_module_ctx(r->main, ngx_http_range_body_filter_module); if (mctx) { - ctx->ranges = mctx->ranges; + r->headers_out.ranges = r->main->headers_out.ranges; + ctx->boundary_header = mctx->boundary_header; return NGX_OK; } } - if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t)) - != NGX_OK) - { + r->headers_out.ranges = ngx_array_create(r->pool, 1, + sizeof(ngx_http_range_t)); + if (r->headers_out.ranges == NULL) { return NGX_ERROR; } p = r->headers_in.range->value.data + 6; - size = 0; + range = NULL; + descending = 0; content_length = r->headers_out.content_length_n; cutoff = NGX_MAX_OFF_T_VALUE / 10; @@ -369,7 +367,12 @@ ngx_http_range_parse(ngx_http_request_t found: if (start < end) { - range = ngx_array_push(&ctx->ranges); + + if (range && start < range->end) { + descending++; + } + + range = ngx_array_push(r->headers_out.ranges); if (range == NULL) { return NGX_ERROR; } @@ -377,16 +380,6 @@ ngx_http_range_parse(ngx_http_request_t range->start = start; range->end = end; - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { - return NGX_HTTP_RANGE_NOT_SATISFIABLE; - } - - size += end - start; - - if (ranges-- == 0) { - return NGX_DECLINED; - } - } else if (start == 0) { return NGX_DECLINED; } @@ -396,11 +389,15 @@ ngx_http_range_parse(ngx_http_request_t } } - if 
(ctx->ranges.nelts == 0) { + if (r->headers_out.ranges->nelts == 0) { return NGX_HTTP_RANGE_NOT_SATISFIABLE; } - if (size > content_length) { + if (r->headers_out.ranges->nelts > ranges) { + r->headers_out.ranges->nelts = ranges; + } + + if (descending) { return NGX_DECLINED; } @@ -439,7 +436,7 @@ ngx_http_range_singlepart_header(ngx_htt /* "Content-Range: bytes SSSS-EEEE/TTTT" header */ - range = ctx->ranges.elts; + range = r->headers_out.ranges->elts; content_range->value.len = ngx_sprintf(content_range->value.data, "bytes %O-%O/%O", @@ -469,6 +466,10 @@ ngx_http_range_multipart_header(ngx_http ngx_http_range_t *range; ngx_atomic_uint_t boundary; + if (r != r->main) { + return ngx_http_next_header_filter(r); + } + size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof(CRLF "Content-Type: ") - 1 + r->headers_out.content_type.len @@ -551,8 +552,8 @@ ngx_http_range_multipart_header(ngx_http len = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1; - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { + range = r->headers_out.ranges->elts; + for (i = 0; i < r->headers_out.ranges->nelts; i++) { /* the size of the range: "SSSS-EEEE/TTTT" CRLF CRLF */ @@ -570,10 +571,11 @@ ngx_http_range_multipart_header(ngx_http - range[i].content_range.data; len += ctx->boundary_header.len + range[i].content_range.len - + (range[i].end - range[i].start); + + (range[i].end - range[i].start); } r->headers_out.content_length_n = len; + r->headers_out.content_offset = range[0].start; if (r->headers_out.content_length) { r->headers_out.content_length->hash = 0; @@ -635,67 +637,19 @@ ngx_http_range_body_filter(ngx_http_requ return ngx_http_next_body_filter(r, in); } - if (ctx->ranges.nelts == 1) { + if (r->headers_out.ranges->nelts == 1) { return ngx_http_range_singlepart_body(r, ctx, in); } - /* - * multipart ranges are supported only if whole body is in a single buffer - */ - if (ngx_buf_special(in->buf)) { return ngx_http_next_body_filter(r, in); } - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { - return NGX_ERROR; - } - return ngx_http_range_multipart_body(r, ctx, in); } static ngx_int_t -ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) -{ - off_t start, last; - ngx_buf_t *buf; - ngx_uint_t i; - ngx_http_range_t *range; - - if (ctx->offset) { - goto overlapped; - } - - buf = in->buf; - - if (!buf->last_buf) { - start = ctx->offset; - last = ctx->offset + ngx_buf_size(buf); - - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { - if (start > range[i].start || last < range[i].end) { - goto overlapped; - } - } - } - - ctx->offset = ngx_buf_size(buf); - - return NGX_OK; - -overlapped: - - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "range in overlapped buffers"); - - return NGX_ERROR; -} - - -static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { @@ -706,7 +660,7 @@ ngx_http_range_singlepart_body(ngx_http_ out = NULL; ll = &out; - range = ctx->ranges.elts; + range = r->headers_out.ranges->elts; for (cl = in; cl; cl = cl->next) { @@ -786,96 +740,227 @@ static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { - ngx_buf_t *b, *buf; - ngx_uint_t i; - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; - ngx_http_range_t *range; + off_t start, last, back; + ngx_buf_t *buf, *b; + ngx_uint_t i, finished; + ngx_chain_t *out, *cl, *ncl, **ll; + ngx_http_range_t *range, 
*tail; + + range = r->headers_out.ranges->elts; - ll = &out; - buf = in->buf; - range = ctx->ranges.elts; + if (!ctx->index) { + for (i = 0; i < r->headers_out.ranges->nelts; i++) { + if (ctx->offset < range[i].end) { + ctx->index = i + 1; + break; + } + } + } - for (i = 0; i < ctx->ranges.nelts; i++) { + tail = range + r->headers_out.ranges->nelts - 1; + range += ctx->index - 1; - /* - * The boundary header of the range: - * CRLF - * "--0123456789" CRLF - * "Content-Type: image/jpeg" CRLF - * "Content-Range: bytes " - */ + out = NULL; + ll = &out; + finished = 0; + + for (cl = in; cl; cl = cl->next) { + + buf = cl->buf; - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + start = ctx->offset; + last = ctx->offset + ngx_buf_size(buf); + + ctx->offset = last; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body buf: %O-%O", start, last); + + if (ngx_buf_special(buf)) { + *ll = cl; + ll = &cl->next; + continue; } - b->memory = 1; - b->pos = ctx->boundary_header.data; - b->last = ctx->boundary_header.data + ctx->boundary_header.len; + if (range->end <= start || range->start >= last) { + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body skip"); - hcl = ngx_alloc_chain_link(r->pool); - if (hcl == NULL) { - return NGX_ERROR; + if (buf->in_file) { + buf->file_pos = buf->file_last; + } + + buf->pos = buf->last; + buf->sync = 1; + + continue; } - hcl->buf = b; + if (range->start >= start) { + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { + return NGX_ERROR; + } - /* "SSSS-EEEE/TTTT" CRLF CRLF */ + if (buf->in_file) { + buf->file_pos += range->start - start; + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (ngx_buf_in_memory(buf)) { + buf->pos += (size_t) (range->start - start); + } } - b->temporary = 1; - b->pos = range[i].content_range.data; - b->last = range[i].content_range.data + range[i].content_range.len; + if (range->end <= last) { + + if (range < tail && range[1].start < last) { + + b = ngx_alloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } + + ncl = ngx_alloc_chain_link(r->pool); + if (ncl == NULL) { + return NGX_ERROR; + } - rcl = ngx_alloc_chain_link(r->pool); - if (rcl == NULL) { - return NGX_ERROR; - } + ncl->buf = b; + ncl->next = cl; + + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); + b->last_in_chain = 0; + b->last_buf = 0; + + back = last - range->end; + ctx->offset -= back; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body reuse buf: %O-%O", + ctx->offset, ctx->offset + back); - rcl->buf = b; + if (buf->in_file) { + buf->file_pos = buf->file_last - back; + } + + if (ngx_buf_in_memory(buf)) { + buf->pos = buf->last - back; + } + cl = ncl; + buf = cl->buf; + } + + if (buf->in_file) { + buf->file_last -= last - range->end; + } - /* the range data */ + if (ngx_buf_in_memory(buf)) { + buf->last -= (size_t) (last - range->end); + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (range == tail) { + buf->last_buf = (r == r->main) ? 
1 : 0; + buf->last_in_chain = 1; + *ll = cl; + ll = &cl->next; + + finished = 1; + break; + } + + range++; + ctx->index++; } - b->in_file = buf->in_file; - b->temporary = buf->temporary; - b->memory = buf->memory; - b->mmap = buf->mmap; - b->file = buf->file; + *ll = cl; + ll = &cl->next; + } + + if (out == NULL) { + return NGX_OK; + } + + *ll = NULL; + + if (finished + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) + { + return NGX_ERROR; + } + + return ngx_http_next_body_filter(r, out); +} + - if (buf->in_file) { - b->file_pos = buf->file_pos + range[i].start; - b->file_last = buf->file_pos + range[i].end; - } +static ngx_int_t +ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl, *rcl; + ngx_http_range_t *range; + + /* + * The boundary header of the range: + * CRLF + * "--0123456789" CRLF + * "Content-Type: image/jpeg" CRLF + * "Content-Range: bytes " + */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - if (ngx_buf_in_memory(buf)) { - b->pos = buf->pos + (size_t) range[i].start; - b->last = buf->pos + (size_t) range[i].end; - } + b->memory = 1; + b->pos = ctx->boundary_header.data; + b->last = ctx->boundary_header.data + ctx->boundary_header.len; + + hcl = ngx_alloc_chain_link(r->pool); + if (hcl == NULL) { + return NGX_ERROR; + } + + hcl->buf = b; + + + /* "SSSS-EEEE/TTTT" CRLF CRLF */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - dcl = ngx_alloc_chain_link(r->pool); - if (dcl == NULL) { - return NGX_ERROR; - } + range = r->headers_out.ranges->elts; + b->temporary = 1; + b->pos = range[ctx->index - 1].content_range.data; + b->last = range[ctx->index - 1].content_range.data + + range[ctx->index - 1].content_range.len; + + rcl = ngx_alloc_chain_link(r->pool); + if (rcl == NULL) { + return NGX_ERROR; + } + + rcl->buf = b; - dcl->buf = b; + **lll = hcl; + hcl->next = rcl; + *lll = &rcl->next; + + return NGX_OK; +} - *ll = hcl; - hcl->next = rcl; - rcl->next = dcl; - ll = &dcl->next; - } + +static ngx_int_t +ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl; /* the last boundary CRLF "--0123456789--" CRLF */ @@ -885,7 +970,8 @@ ngx_http_range_multipart_body(ngx_http_r } b->temporary = 1; - b->last_buf = 1; + b->last_in_chain = 1; + b->last_buf = (r == r->main) ? 
1 : 0; b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1); @@ -908,7 +994,7 @@ ngx_http_range_multipart_body(ngx_http_r *ll = hcl; - return ngx_http_next_body_filter(r, out); + return NGX_OK; } diff -r 32f83fe5747b src/http/modules/ngx_http_slice_filter_module.c --- a/src/http/modules/ngx_http_slice_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_slice_filter_module.c Fri Nov 10 04:31:52 2017 +0800 @@ -22,6 +22,8 @@ typedef struct { ngx_str_t etag; unsigned last:1; unsigned active:1; + unsigned multipart:1; + ngx_uint_t index; ngx_http_request_t *sr; } ngx_http_slice_ctx_t; @@ -103,7 +105,9 @@ ngx_http_slice_header_filter(ngx_http_re { off_t end; ngx_int_t rc; + ngx_uint_t i; ngx_table_elt_t *h; + ngx_http_range_t *range; ngx_http_slice_ctx_t *ctx; ngx_http_slice_loc_conf_t *slcf; ngx_http_slice_content_range_t cr; @@ -182,27 +186,48 @@ ngx_http_slice_header_filter(ngx_http_re r->allow_ranges = 1; r->subrequest_ranges = 1; - r->single_range = 1; rc = ngx_http_next_header_filter(r); - if (r != r->main) { - return rc; + if (r == r->main) { + r->preserve_body = 1; + + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { + ctx->multipart = (r->headers_out.ranges->nelts != 1); + range = r->headers_out.ranges->elts; + + if (ctx->start + (off_t) slcf->size <= range[0].start) { + ctx->start = slcf->size * (range[0].start / slcf->size); + } + + ctx->end = range[r->headers_out.ranges->nelts - 1].end; + + } else { + ctx->end = cr.complete_length; + } } - r->preserve_body = 1; + if (ctx->multipart) { + range = r->headers_out.ranges->elts; + + for (i = ctx->index; i < r->headers_out.ranges->nelts - 1; i++) { + + if (ctx->start < range[i].end) { + ctx->index = i; + break; + } - if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { - if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { - ctx->start = slcf->size - * (r->headers_out.content_offset / slcf->size); + if (ctx->start + (off_t) slcf->size <= range[i + 1].start) { + i++; + ctx->index = i; + ctx->start = slcf->size * (range[i].start / slcf->size); + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "range multipart so fast forward to %O-%O @%O", + range[i].start, range[i].end, ctx->start); + break; + } } - - ctx->end = r->headers_out.content_offset - + r->headers_out.content_length_n; - - } else { - ctx->end = cr.complete_length; } return rc; diff -r 32f83fe5747b src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/ngx_http_request.h Fri Nov 10 04:31:52 2017 +0800 @@ -251,6 +251,13 @@ typedef struct { typedef struct { + off_t start; + off_t end; + ngx_str_t content_range; +} ngx_http_range_t; + + +typedef struct { ngx_list_t headers; ngx_list_t trailers; @@ -278,6 +285,7 @@ typedef struct { u_char *content_type_lowcase; ngx_uint_t content_type_hash; + ngx_array_t *ranges; /* ngx_http_range_t */ ngx_array_t cache_control; off_t content_length_n; _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Fri Nov 10 11:55:51 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Nov 2017 14:55:51 +0300 Subject: [nginx] Range filter: allowed ranges on empty files (ticket #1031). In-Reply-To: References: Message-ID: <20171110115551.GL26836@mdounin.ru> Hello! On Fri, Nov 10, 2017 at 05:11:50AM +0800, ?? 
(hucc) wrote: > Hi, > > >details: http://hg.nginx.org/nginx/rev/aeaac3ccee4f > >branches: > >changeset: 7043:aeaac3ccee4f > >user: Maxim Dounin > >date: Tue Jun 27 00:53:46 2017 +0300 > >description: > >Range filter: allowed ranges on empty files (ticket #1031). > > > >As per RFC 2616 / RFC 7233, any range request to an empty file > >is expected to result in 416 Range Not Satisfiable response, as > >there cannot be a "byte-range-spec whose first-byte-pos is less > >than the current length of the entity-body". On the other hand, > >this makes use of byte-range requests inconvenient in some cases, > >as reported for the slice module here: > > > >http://mailman.nginx.org/pipermail/nginx-devel/2017-June/010177.html > > > >This commit changes range filter to instead return 200 if the file > >is empty and the range requested starts at 0. > > With this commit, the problem I mentioned is still unsolved. > http://mailman.nginx.org/pipermail/nginx-devel/2017-June/010198.html > http://mailman.nginx.org/pipermail/nginx-devel/2017-June/010178.html > > Because we can not ask that the backend must be nginx, and it is > nginx who convert ordinary request to a range request in slice > module. So is there a safe solution? There are at least a couple of them: - avoid using slice module if there are empty files and your backend return 416 on range requests to them; - change your backend to allow range requests on empty files. -- Maxim Dounin http://mdounin.ru/ From weixu365 at gmail.com Sun Nov 12 12:25:20 2017 From: weixu365 at gmail.com (Wei Xu) Date: Sun, 12 Nov 2017 23:25:20 +1100 Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter In-Reply-To: <20171109170702.GH26836@mdounin.ru> References: <20171109170702.GH26836@mdounin.ru> Message-ID: We are running Nginx and upstream on the same machine using docker, so there's no firewall. I did a test locally and captured the network packages. For the normal requests, upstream send a [FIN, ACK] to nginx after keep-alive timeout (500 ms), and nginx also send a [FIN, ACK] back, then upstream send a [ACK] to close the connection completely. 1 2 3 4 5 6 7 8 9 10 11 No. Time Source Destination Protocol Length Info 1 2017-11-12 17:11:04.299146 172.18.0.3 172.18.0.2 TCP 74 48528 ? 8000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=32031305 TSecr=0 WS=128 2 2017-11-12 17:11:04.299171 172.18.0.2 172.18.0.3 TCP 74 8000 ? 48528 [SYN, ACK] Seq=0 Ack=1 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=32031305 TSecr=32031305 WS=128 3 2017-11-12 17:11:04.299194 172.18.0.3 172.18.0.2 TCP 66 48528 ? 8000 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=32031305 TSecr=32031305 4 2017-11-12 17:11:04.299259 172.18.0.3 172.18.0.2 HTTP 241 GET /_healthcheck HTTP/1.1 5 2017-11-12 17:11:04.299267 172.18.0.2 172.18.0.3 TCP 66 8000 ? 48528 [ACK] Seq=1 Ack=176 Win=30080 Len=0 TSval=32031305 TSecr=32031305 6 2017-11-12 17:11:04.299809 172.18.0.2 172.18.0.3 HTTP 271 HTTP/1.1 200 OK (text/html) 7 2017-11-12 17:11:04.299852 172.18.0.3 172.18.0.2 TCP 66 48528 ? 8000 [ACK] Seq=176 Ack=206 Win=30336 Len=0 TSval=32031305 TSecr=32031305 8 2017-11-12 17:11:04.800805 172.18.0.2 172.18.0.3 TCP 66 8000 ? 48528 [FIN, ACK] Seq=206 Ack=176 Win=30080 Len=0 TSval=32031355 TSecr=32031305 9 2017-11-12 17:11:04.801120 172.18.0.3 172.18.0.2 TCP 66 48528 ? 8000 [FIN, ACK] Seq=176 Ack=207 Win=30336 Len=0 TSval=32031355 TSecr=32031355 10 2017-11-12 17:11:04.801151 172.18.0.2 172.18.0.3 TCP 66 8000 ? 
48528 [ACK] Seq=207 Ack=177 Win=30080 Len=0 TSval=32031355 TSecr=32031355 For the failed requests, upstream received a new http request when it had closed the connection after keep-alive timeout (500 ms) and hasn?t got a chance to send the [FIN] package. Because of the connection has been closed from upstream?s perspective, so it send a [RST] response for this request. 1 2 3 4 5 6 7 8 9 10 11 12 13 No. Time Source Destination Protocol Length Info 433 2017-11-12 17:11:26.548449 172.18.0.3 172.18.0.2 TCP 74 48702 ? 8000 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=32033530 TSecr=0 WS=128 434 2017-11-12 17:11:26.548476 172.18.0.2 172.18.0.3 TCP 74 8000 ? 48702 [SYN, ACK] Seq=0 Ack=1 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=32033530 TSecr=32033530 WS=128 435 2017-11-12 17:11:26.548502 172.18.0.3 172.18.0.2 TCP 66 48702 ? 8000 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=32033530 TSecr=32033530 436 2017-11-12 17:11:26.548609 172.18.0.3 172.18.0.2 HTTP 241 GET /_healthcheck HTTP/1.1 437 2017-11-12 17:11:26.548618 172.18.0.2 172.18.0.3 TCP 66 8000 ? 48702 [ACK] Seq=1 Ack=176 Win=30080 Len=0 TSval=32033530 TSecr=32033530 438 2017-11-12 17:11:26.549173 172.18.0.2 172.18.0.3 HTTP 271 HTTP/1.1 200 OK (text/html) 439 2017-11-12 17:11:26.549230 172.18.0.3 172.18.0.2 TCP 66 48702 ? 8000 [ACK] Seq=176 Ack=206 Win=30336 Len=0 TSval=32033530 TSecr=32033530 440 2017-11-12 17:11:27.049668 172.18.0.3 172.18.0.2 HTTP 241 GET /_healthcheck HTTP/1.1 441 2017-11-12 17:11:27.050324 172.18.0.2 172.18.0.3 HTTP 271 HTTP/1.1 200 OK (text/html) 442 2017-11-12 17:11:27.050378 172.18.0.3 172.18.0.2 TCP 66 48702 ? 8000 [ACK] Seq=351 Ack=411 Win=31360 Len=0 TSval=32033580 TSecr=32033580 443 2017-11-12 17:11:27.551182 172.18.0.3 172.18.0.2 HTTP 241 GET /_healthcheck HTTP/1.1 444 2017-11-12 17:11:27.551294 172.18.0.2 172.18.0.3 TCP 66 8000 ? 48702 [RST, ACK] Seq=411 Ack=526 Win=32256 Len=0 TSval=32033630 TSecr=32033630 When nginx receives the [RST] package, it will log a ?Connection reset? error. I'm testing by set up the environment: Upstream (Node.js server): - Set keep-alive timeout to 500 ms Test client: - Keep sending requests with an interval - Interval starts from 500 ms and decrease 0.1 ms after each request For more detailed description of the test process, you can reference my post at: https://theantway.com/2017/11/analyze-connection-reset-error-in-nginx-upstream-with-keep-alive-enabled/ To Fix the issue, I tried to add a timeout for keep-alived upstream, and you can check the patch at: https://github.com/weixu365/nginx/blob/docker-1.13.6/docker/stretch/patches/01-http-upstream-keepalive-timeout.patch The patch is for my current testing, and I can create a different format if you need. Regards Wei Xu On Fri, Nov 10, 2017 at 4:07 AM, Maxim Dounin wrote: > Hello! > > On Thu, Nov 02, 2017 at 08:41:16PM +1100, Wei Xu wrote: > > > Hi > > I saw there's an issue talking about "implement keepalive timeout for > > upstream ". > > > > I have a different scenario for this requirement. > > > > I'm using Node.js web server as upstream, and set keep alive time out to > 60 > > second in nodejs server. The problem is I found more than a hundred > > "Connection reset by peer" errors everyday. > > > > Because there's no any errors on nodejs side, I guess it was because of > the > > upstream has disconnected, and at the same time, nginx send a new > request, > > then received a TCP RST. > > Could you please trace what actually happens on the network level > to confirm the guess is correct? 
> > Also, please check that there are no stateful firewalls between > nginx and the backend. A firewall which drops the state before > the timeout expires looks like a much likely cause for such > errors. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 13 19:49:46 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Nov 2017 22:49:46 +0300 Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter In-Reply-To: References: <20171109170702.GH26836@mdounin.ru> Message-ID: <20171113194946.GR26836@mdounin.ru> Hello! On Sun, Nov 12, 2017 at 11:25:20PM +1100, Wei Xu wrote: > We are running Nginx and upstream on the same machine using docker, so > there's no firewall. Note that this isn't usually true. Docker uses iptables implicitly, and unless you specifically checked your iptables configuration - likely you are using firewall. > I did a test locally and captured the network packages. > > For the normal requests, upstream send a [FIN, ACK] to nginx after > keep-alive timeout (500 ms), and nginx also send a [FIN, ACK] back, then > upstream send a [ACK] to close the connection completely. [...] > For more detailed description of the test process, you can reference my > post at: > https://theantway.com/2017/11/analyze-connection-reset-error-in-nginx-upstream-with-keep-alive-enabled/ The test demonstrates that it is indeed possible to trigger the problem in question. Unfortunately, it doesn't provide any proof that what you observed in production is the same issue though. While it is more or less clear that the race condition in question is real, it seems to be very unlikely with typical workloads. And even when triggered, in most cases nginx handles this good enough, re-trying the request per proxy_next_upstream. Nevertheless, thank you for detailed testing. A simple test case that reliably demonstrates the race is appreciated, and I was able to reduce it to your client script and nginx with the following trivial configuration: upstream u { server 127.0.0.1:8082; keepalive 10; } server { listen 8080; location / { proxy_pass http://u; proxy_http_version 1.1; proxy_set_header Connection ""; } } server { listen 8082; keepalive_timeout 500ms; location / { return 200 ok\n; } } > To Fix the issue, I tried to add a timeout for keep-alived upstream, and > you can check the patch at: > https://github.com/weixu365/nginx/blob/docker-1.13.6/docker/stretch/patches/01-http-upstream-keepalive-timeout.patch > > The patch is for my current testing, and I can create a different format if > you need. The patch looks good enough for testing, though there are various minor issues - notably testing timeout for NGX_CONF_UNSET_MSEC at runtime, using wrong type for timeout during parsing (time_t instead of ngx_msec_t). Also I tend to think that using a separate keepalive_timeout directive should be easier, and we probably want to introduce some default value for it. Please take a look if the following patch works for you: # HG changeset patch # User Maxim Dounin # Date 1510601341 -10800 # Mon Nov 13 22:29:01 2017 +0300 # Node ID 9ba0a577601b7c1b714eb088bc0b0d21c6354699 # Parent 6f592a42570898e1539d2e0b86017f32bbf665c8 Upstream keepalive: keepalive_timeout directive. 
The directive configures maximum time a connection can be kept in the cache. By configuring a time which is smaller than the corresponding timeout on the backend side one can avoid the race between closing a connection by the backend and nginx trying to use the same connection to send a request at the same time. diff --git a/src/http/modules/ngx_http_upstream_keepalive_module.c b/src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c @@ -12,6 +12,7 @@ typedef struct { ngx_uint_t max_cached; + ngx_msec_t timeout; ngx_queue_t cache; ngx_queue_t free; @@ -84,6 +85,13 @@ static ngx_command_t ngx_http_upstream_ 0, NULL }, + { ngx_string("keepalive_timeout"), + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_upstream_keepalive_srv_conf_t, timeout), + NULL }, + ngx_null_command }; @@ -141,6 +149,8 @@ ngx_http_upstream_init_keepalive(ngx_con us->peer.init = ngx_http_upstream_init_keepalive_peer; + ngx_conf_init_msec_value(kcf->timeout, 60000); + /* allocate cache items and add to free queue */ cached = ngx_pcalloc(cf->pool, @@ -261,6 +271,10 @@ found: c->write->log = pc->log; c->pool->log = pc->log; + if (c->read->timer_set) { + ngx_del_timer(c->read); + } + pc->connection = c; pc->cached = 1; @@ -339,9 +353,8 @@ ngx_http_upstream_free_keepalive_peer(ng pc->connection = NULL; - if (c->read->timer_set) { - ngx_del_timer(c->read); - } + ngx_add_timer(c->read, kp->conf->timeout); + if (c->write->timer_set) { ngx_del_timer(c->write); } @@ -392,7 +405,7 @@ ngx_http_upstream_keepalive_close_handle c = ev->data; - if (c->close) { + if (c->close || c->read->timedout) { goto close; } @@ -485,6 +498,8 @@ ngx_http_upstream_keepalive_create_conf( * conf->max_cached = 0; */ + conf->timeout = NGX_CONF_UNSET_MSEC; + return conf; } -- Maxim Dounin http://mdounin.ru/ From weixu365 at gmail.com Tue Nov 14 03:03:04 2017 From: weixu365 at gmail.com (Wei Xu) Date: Tue, 14 Nov 2017 14:03:04 +1100 Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter In-Reply-To: <20171113194946.GR26836@mdounin.ru> References: <20171109170702.GH26836@mdounin.ru> <20171113194946.GR26836@mdounin.ru> Message-ID: Hi, Really nice, much simpler than my patch. It's great to have a default timeout value. thanks for you time. On Tue, Nov 14, 2017 at 6:49 AM, Maxim Dounin wrote: > Hello! > > On Sun, Nov 12, 2017 at 11:25:20PM +1100, Wei Xu wrote: > > > We are running Nginx and upstream on the same machine using docker, so > > there's no firewall. > > Note that this isn't usually true. Docker uses iptables > implicitly, and unless you specifically checked your iptables > configuration - likely you are using firewall. > > > I did a test locally and captured the network packages. > > > > For the normal requests, upstream send a [FIN, ACK] to nginx after > > keep-alive timeout (500 ms), and nginx also send a [FIN, ACK] back, then > > upstream send a [ACK] to close the connection completely. > > [...] > > > For more detailed description of the test process, you can reference my > > post at: > > https://theantway.com/2017/11/analyze-connection-reset- > error-in-nginx-upstream-with-keep-alive-enabled/ > > The test demonstrates that it is indeed possible to trigger the > problem in question. Unfortunately, it doesn't provide any proof > that what you observed in production is the same issue though. 
> > While it is more or less clear that the race condition in question > is real, it seems to be very unlikely with typical workloads. And > even when triggered, in most cases nginx handles this good enough, > re-trying the request per proxy_next_upstream. > > Nevertheless, thank you for detailed testing. A simple test case > that reliably demonstrates the race is appreciated, and I was able > to reduce it to your client script and nginx with the following > trivial configuration: > > upstream u { > server 127.0.0.1:8082; > keepalive 10; > } > > server { > listen 8080; > > location / { > proxy_pass http://u; > proxy_http_version 1.1; > proxy_set_header Connection ""; > } > } > > server { > listen 8082; > > keepalive_timeout 500ms; > > location / { > return 200 ok\n; > } > } > > > To Fix the issue, I tried to add a timeout for keep-alived upstream, and > > you can check the patch at: > > https://github.com/weixu365/nginx/blob/docker-1.13.6/ > docker/stretch/patches/01-http-upstream-keepalive-timeout.patch > > > > The patch is for my current testing, and I can create a different format > if > > you need. > > The patch looks good enough for testing, though there are various > minor issues - notably testing timeout for NGX_CONF_UNSET_MSEC at > runtime, using wrong type for timeout during parsing (time_t > instead of ngx_msec_t). > > Also I tend to think that using a separate keepalive_timeout > directive should be easier, and we probably want to introduce some > default value for it. > > Please take a look if the following patch works for you: > > # HG changeset patch > # User Maxim Dounin > # Date 1510601341 -10800 > # Mon Nov 13 22:29:01 2017 +0300 > # Node ID 9ba0a577601b7c1b714eb088bc0b0d21c6354699 > # Parent 6f592a42570898e1539d2e0b86017f32bbf665c8 > Upstream keepalive: keepalive_timeout directive. > > The directive configures maximum time a connection can be kept in the > cache. By configuring a time which is smaller than the corresponding > timeout on the backend side one can avoid the race between closing > a connection by the backend and nginx trying to use the same connection > to send a request at the same time. 
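For reference, a minimal illustration of how the proposed directive could be used once the patch above is applied; the addresses and the 30s value are only examples, the point being to keep the value below the backend's own idle timeout (the patch defaults to 60s when the directive is absent):

    upstream backend {
        server 127.0.0.1:3000;

        keepalive 16;
        keepalive_timeout 30s;   # smaller than the backend's keep-alive timeout
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }

With such a configuration nginx closes an idle cached connection itself before the backend's timer fires, so the race described above cannot turn into a reset on a freshly sent request.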
> > diff --git a/src/http/modules/ngx_http_upstream_keepalive_module.c > b/src/http/modules/ngx_http_upstream_keepalive_module.c > --- a/src/http/modules/ngx_http_upstream_keepalive_module.c > +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c > @@ -12,6 +12,7 @@ > > typedef struct { > ngx_uint_t max_cached; > + ngx_msec_t timeout; > > ngx_queue_t cache; > ngx_queue_t free; > @@ -84,6 +85,13 @@ static ngx_command_t ngx_http_upstream_ > 0, > NULL }, > > + { ngx_string("keepalive_timeout"), > + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_msec_slot, > + NGX_HTTP_SRV_CONF_OFFSET, > + offsetof(ngx_http_upstream_keepalive_srv_conf_t, timeout), > + NULL }, > + > ngx_null_command > }; > > @@ -141,6 +149,8 @@ ngx_http_upstream_init_keepalive(ngx_con > > us->peer.init = ngx_http_upstream_init_keepalive_peer; > > + ngx_conf_init_msec_value(kcf->timeout, 60000); > + > /* allocate cache items and add to free queue */ > > cached = ngx_pcalloc(cf->pool, > @@ -261,6 +271,10 @@ found: > c->write->log = pc->log; > c->pool->log = pc->log; > > + if (c->read->timer_set) { > + ngx_del_timer(c->read); > + } > + > pc->connection = c; > pc->cached = 1; > > @@ -339,9 +353,8 @@ ngx_http_upstream_free_keepalive_peer(ng > > pc->connection = NULL; > > - if (c->read->timer_set) { > - ngx_del_timer(c->read); > - } > + ngx_add_timer(c->read, kp->conf->timeout); > + > if (c->write->timer_set) { > ngx_del_timer(c->write); > } > @@ -392,7 +405,7 @@ ngx_http_upstream_keepalive_close_handle > > c = ev->data; > > - if (c->close) { > + if (c->close || c->read->timedout) { > goto close; > } > > @@ -485,6 +498,8 @@ ngx_http_upstream_keepalive_create_conf( > * conf->max_cached = 0; > */ > > + conf->timeout = NGX_CONF_UNSET_MSEC; > + > return conf; > } > > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at isitdoneyet.co.uk Tue Nov 14 09:13:06 2017 From: ben at isitdoneyet.co.uk (Ben Brown) Date: Tue, 14 Nov 2017 09:13:06 +0000 Subject: [PATCH] Add 'log_index_denied' directive In-Reply-To: <20171109161016.GF26836@mdounin.ru> References: <20171109161016.GF26836@mdounin.ru> Message-ID: Hi! Thanks for taking the time to look at my patch and give such constructive feedback. On 9 November 2017 at 16:10, Maxim Dounin wrote: > Also, such errors can be easily avoided by using a site-wide index > file, for example: > > index index.html /403; > > location = /403 { > return 403; > } > > As such, I don't think we need to introduce a special directive to > control "directory index of ... forbidden" messages. > I agree, I wasn't aware of such a trivial way to avoid these errors. > The order here isn't alphabetical, so I would recommend adding the > new directive after log_not_found (if at all). > Noted. >> if (ngx_http_map_uri_to_path(r, &path, &root, 0) != NULL) { >> - ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, >> - "directory index of \"%s\" is forbidden", path.data); >> + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); >> + if (clcf->log_index_denied) { > > Are there any reasons to call ngx_http_map_uri_to_path() if we are > not going to use the result? > I honestly don't know - that code is already there. I'm very inexperienced with this code base so just looked for where this message is generated and wrapped it in an 'if'. 
Should this call not be here in the current codebase? Thanks, Ben From mdounin at mdounin.ru Tue Nov 14 12:53:15 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Nov 2017 15:53:15 +0300 Subject: [PATCH] Add 'log_index_denied' directive In-Reply-To: References: <20171109161016.GF26836@mdounin.ru> Message-ID: <20171114125315.GU26836@mdounin.ru> Hello! On Tue, Nov 14, 2017 at 09:13:06AM +0000, Ben Brown wrote: [...] > >> if (ngx_http_map_uri_to_path(r, &path, &root, 0) != NULL) { > >> - ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > >> - "directory index of \"%s\" is forbidden", path.data); > >> + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); > >> + if (clcf->log_index_denied) { > > > > Are there any reasons to call ngx_http_map_uri_to_path() if we are > > not going to use the result? > > > > I honestly don't know - that code is already there. I'm very inexperienced with > this code base so just looked for where this message is generated and wrapped > it in an 'if'. Should this call not be here in the current codebase? As far as I see, the call is only needed to generate path as used in the error message. If we are not going to use the result (assuming clcf->log_index_denied is 0 with your patch), probably there are no reasons to call it at all, and clcf->log_index_denied check should be done before the call. -- Maxim Dounin http://mdounin.ru/ From ben at isitdoneyet.co.uk Tue Nov 14 13:00:07 2017 From: ben at isitdoneyet.co.uk (Ben Brown) Date: Tue, 14 Nov 2017 13:00:07 +0000 Subject: [PATCH] Add 'log_index_denied' directive In-Reply-To: <20171114125315.GU26836@mdounin.ru> References: <20171109161016.GF26836@mdounin.ru> <20171114125315.GU26836@mdounin.ru> Message-ID: Hi! On 14 November 2017 at 12:53, Maxim Dounin wrote: > As far as I see, the call is only needed to generate path as used > in the error message. If we are not going to use the result > (assuming clcf->log_index_denied is 0 with your patch), probably > there are no reasons to call it at all, and clcf->log_index_denied > check should be done before the call. > Ahh - of course, I can see that now. Thanks for enlightening me! Ben From mdounin at mdounin.ru Tue Nov 14 14:58:22 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Nov 2017 17:58:22 +0300 Subject: [patch-1] Range filter: support multiple ranges. In-Reply-To: References: Message-ID: <20171114145822.GW26836@mdounin.ru> Hello! On Fri, Nov 10, 2017 at 04:41:57AM +0800, ?? (hucc) wrote: > On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote: [...] > >> When multiple ranges are requested, nginx will coalesce any of the ranges > >> that overlap, or that are separated by a gap that is smaller than the > >> NGX_HTTP_RANGE_MULTIPART_GAP macro. > > > >(Note that the patch also does reordering of ranges. For some > >reason this is not mentioned in the commit log. There are also > >other changes not mentioned in the commit log - for example, I see > >ngx_http_range_t was moved to ngx_http_request.h. These are > >probably do not belong to the patch at all.) > > I actually wait for you to give better advice. I tried my best to > make the changes easier and more readable and I will split it into > multiple patches based on your suggestions if these changes will be > accepted. General rule is: keep distinct changes in separate patches. I don't think I have anything better to suggest here. > >Reordering and/or coalescing ranges is not something that clients > >usually expect to happen. 
This was widely discussed at the time > >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 > >introduced the "MAY coalesce" clause. But this doesn't make > >clients, especially old ones, magically prepared for this. > > I did not know the CVE-2011-3192. If multiple ranges list in > ascending order and there are no overlapping ranges, the code will > be much simpler. This is what I think. While your intention is understood, this is certainly not something we should do. As far as it is possible, we should preserve exact order and range sizes instead, to avoid compatibility problems and to preserve various use cases which require non-sequential order. > >Moreover, this will certainly break some use cases like "request > >some metadata first, and then rest of the file". So this is > >certainly not a good idea to always reorder / coalesce ranges > >unless this is really needed for some reason. (Or even at all, > >as just returning 200 might be much more compatible with various > >clients, as outlined above.) > > > >It is also not clear what you are trying to achieve with this > >patch. You may want to elaborate more on what problem you are > >trying to solve, may be there are better solutions. > > I am trying to support multiple ranges when proxy_buffering is off > and sometimes slice is enabled. The data is always cached in the > backend which is not nginx. As far as I know, similar architecture > is widely used in CDN. So the implementation of multiple ranges in > the architecture I mentioned above is required and inevitable. > Besides, P2P clients desire for this feature to gather data-pieces. > Hope I already made it clear. So you are trying to support multi-range requests to resources which use slice module, correct? Do you have any specific clients in mind? I've seen very few legitimate clients which use multi-range requests. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Nov 14 16:57:03 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Nov 2017 19:57:03 +0300 Subject: [patch-1] Range filter: support multiple ranges. In-Reply-To: References: Message-ID: <20171114165703.GX26836@mdounin.ru> Hello! On Fri, Nov 10, 2017 at 07:03:01PM +0800, ?? (hucc) wrote: > Hi, > > How about this as the first patch? > > # HG changeset patch > # User hucongcong > # Date 1510309868 -28800 > # Fri Nov 10 18:31:08 2017 +0800 > # Node ID c32fddd15a26b00f8f293f6b0d8762cd9f2bfbdb > # Parent 32f83fe5747b55ef341595b18069bee3891874d0 > Range filter: support for multipart response in wider range. > > Before the patch multipart ranges are supported only if whole body > is in a single buffer. Now, the limit is canceled. If there are no > overlapping ranges and all ranges list in ascending order, nginx > will return 206 with multipart response, otherwise return 200 (OK). Introducing support for multipart ranges if the response body is not in the single buffer as long as requested ranges do not overlap and properly ordered looks like a much better idea to me. That's basically what I have in mind as possible futher enhancement of the range filter if we'll ever need better support for multipart ranges. There are various questions about the patch itself though, see below. 
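For background, the exchange being discussed looks roughly as follows when the requested ranges are ascending and non-overlapping (offsets, total length, boundary value and content type are illustrative only; the part layout follows the boundary_header comments in the code below):

    GET /data HTTP/1.1
    Range: bytes=0-99,200-299

    HTTP/1.1 206 Partial Content
    Content-Type: multipart/byteranges; boundary=00000000000000000001

    --00000000000000000001
    Content-Type: application/octet-stream
    Content-Range: bytes 0-99/1000

    ... first 100 bytes ...
    --00000000000000000001
    Content-Type: application/octet-stream
    Content-Range: bytes 200-299/1000

    ... 100 bytes starting at offset 200 ...
    --00000000000000000001--

The point of the change is to be able to produce such a response while the body arrives in several buffers, emitting each part as soon as the data covering it has been seen.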
> diff -r 32f83fe5747b -r c32fddd15a26 src/http/modules/ngx_http_range_filter_module.c > --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800 > @@ -54,6 +54,7 @@ typedef struct { > > typedef struct { > off_t offset; > + ngx_uint_t index; /* start with 1 */ > ngx_str_t boundary_header; > ngx_array_t ranges; > } ngx_http_range_filter_ctx_t; > @@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa > static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx); > static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); > -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); > +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); > > static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); > static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); > @@ -270,9 +273,8 @@ ngx_http_range_parse(ngx_http_request_t > ngx_uint_t ranges) > { > u_char *p; > - off_t start, end, size, content_length, cutoff, > - cutlim; > - ngx_uint_t suffix; > + off_t start, end, content_length, cutoff, cutlim; > + ngx_uint_t suffix, descending; > ngx_http_range_t *range; > ngx_http_range_filter_ctx_t *mctx; > > @@ -281,6 +283,7 @@ ngx_http_range_parse(ngx_http_request_t > ngx_http_range_body_filter_module); > if (mctx) { > ctx->ranges = mctx->ranges; > + ctx->boundary_header = ctx->boundary_header; > return NGX_OK; > } > } > @@ -292,7 +295,8 @@ ngx_http_range_parse(ngx_http_request_t > } > > p = r->headers_in.range->value.data + 6; > - size = 0; > + range = NULL; > + descending = 0; > content_length = r->headers_out.content_length_n; > > cutoff = NGX_MAX_OFF_T_VALUE / 10; > @@ -369,6 +373,11 @@ ngx_http_range_parse(ngx_http_request_t > found: > > if (start < end) { > + > + if (range && start < range->end) { > + descending++; > + } > + > range = ngx_array_push(&ctx->ranges); > if (range == NULL) { > return NGX_ERROR; > @@ -377,16 +386,6 @@ ngx_http_range_parse(ngx_http_request_t > range->start = start; > range->end = end; > > - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { > - return NGX_HTTP_RANGE_NOT_SATISFIABLE; > - } > - > - size += end - start; > - > - if (ranges-- == 0) { > - return NGX_DECLINED; > - } > - > } else if (start == 0) { > return NGX_DECLINED; > } > @@ -400,7 +399,7 @@ ngx_http_range_parse(ngx_http_request_t > return NGX_HTTP_RANGE_NOT_SATISFIABLE; > } > > - if (size > content_length) { > + if (ctx->ranges.nelts > ranges || descending) { > return NGX_DECLINED; > } This change basically disables support for non-ascending ranges. As previously suggested, this will break various legitimate use cases, and certainly this is not something we should do. 
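As a concrete illustration of such a use case (purely hypothetical offsets): a client that needs an index stored at the tail of a file plus the beginning of the data can issue a single request like

    GET /video.mp4 HTTP/1.1
    Range: bytes=104857600-104861695,0-1048575

With the quoted change such a request would be answered with the full representation (200) instead of 206, forcing the client either to download everything or to split the work into two round trips.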
> > @@ -469,6 +468,10 @@ ngx_http_range_multipart_header(ngx_http > ngx_http_range_t *range; > ngx_atomic_uint_t boundary; > > + if (r != r->main) { > + return ngx_http_next_header_filter(r); > + } > + > size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof(CRLF "Content-Type: ") - 1 > + r->headers_out.content_type.len > @@ -570,10 +573,11 @@ ngx_http_range_multipart_header(ngx_http > - range[i].content_range.data; > > len += ctx->boundary_header.len + range[i].content_range.len > - + (range[i].end - range[i].start); > + + (range[i].end - range[i].start); This looks like an unrelated whitespace change. > } > > r->headers_out.content_length_n = len; > + r->headers_out.content_offset = range[0].start; > > if (r->headers_out.content_length) { > r->headers_out.content_length->hash = 0; > @@ -639,63 +643,15 @@ ngx_http_range_body_filter(ngx_http_requ > return ngx_http_range_singlepart_body(r, ctx, in); > } > > - /* > - * multipart ranges are supported only if whole body is in a single buffer > - */ > - > if (ngx_buf_special(in->buf)) { > return ngx_http_next_body_filter(r, in); > } The ngx_buf_special() check should not be needed here as long as ngx_http_range_multipart_body() is modified to properly support multiple buffers. > > - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { > - return NGX_ERROR; > - } > - > return ngx_http_range_multipart_body(r, ctx, in); > } > > > static ngx_int_t > -ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > -{ > - off_t start, last; > - ngx_buf_t *buf; > - ngx_uint_t i; > - ngx_http_range_t *range; > - > - if (ctx->offset) { > - goto overlapped; > - } > - > - buf = in->buf; > - > - if (!buf->last_buf) { > - start = ctx->offset; > - last = ctx->offset + ngx_buf_size(buf); > - > - range = ctx->ranges.elts; > - for (i = 0; i < ctx->ranges.nelts; i++) { > - if (start > range[i].start || last < range[i].end) { > - goto overlapped; > - } > - } > - } > - > - ctx->offset = ngx_buf_size(buf); > - > - return NGX_OK; > - > -overlapped: > - > - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, > - "range in overlapped buffers"); > - > - return NGX_ERROR; > -} > - > - > -static ngx_int_t > ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > @@ -786,96 +742,227 @@ static ngx_int_t > ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > - ngx_buf_t *b, *buf; > - ngx_uint_t i; > - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; > - ngx_http_range_t *range; > + off_t start, last, back; > + ngx_buf_t *buf, *b; > + ngx_uint_t i, finished; > + ngx_chain_t *out, *cl, *ncl, **ll; > + ngx_http_range_t *range, *tail; > > - ll = &out; > - buf = in->buf; > range = ctx->ranges.elts; > > - for (i = 0; i < ctx->ranges.nelts; i++) { > + if (!ctx->index) { > + for (i = 0; i < ctx->ranges.nelts; i++) { > + if (ctx->offset < range[i].end) { > + ctx->index = i + 1; > + break; > + } > + } > + } All this logic with using ctx->index as range index plus 1 looks counter-intuitive and unneeded. A much better options would be (in no particular order): - use a special value to mean "uninitialized", like -1; - always initialize ctx->index to 0 and move it futher to the next range once we see that ctx->offset is larger than range[i].end; - do proper initialization somewhere in ngx_http_range_header_filter() or ngx_http_range_multipart_header(). 
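As an illustration only, the second option could look something like this in the body filter (an untested sketch against the patch as posted, keeping ctx->index zero-based):

    range = ctx->ranges.elts;

    /* advance lazily: skip ranges that end before the current offset */

    while (ctx->index < ctx->ranges.nelts
           && ctx->offset >= range[ctx->index].end)
    {
        ctx->index++;
    }

This avoids both the "index + 1" encoding and the separate initialization pass at the top of ngx_http_range_multipart_body().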
> + > + tail = range + ctx->ranges.nelts - 1; > + range += ctx->index - 1; > + > + out = NULL; > + ll = &out; > + finished = 0; > > - /* > - * The boundary header of the range: > - * CRLF > - * "--0123456789" CRLF > - * "Content-Type: image/jpeg" CRLF > - * "Content-Range: bytes " > - */ > + for (cl = in; cl; cl = cl->next) { > + > + buf = cl->buf; > + > + start = ctx->offset; > + last = ctx->offset + ngx_buf_size(buf); > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + ctx->offset = last; > + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body buf: %O-%O", start, last); > + > + if (ngx_buf_special(buf)) { > + *ll = cl; > + ll = &cl->next; > + continue; > } > > - b->memory = 1; > - b->pos = ctx->boundary_header.data; > - b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + if (range->end <= start || range->start >= last) { > + > + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body skip"); > > - hcl = ngx_alloc_chain_link(r->pool); > - if (hcl == NULL) { > - return NGX_ERROR; > + if (buf->in_file) { > + buf->file_pos = buf->file_last; > + } > + > + buf->pos = buf->last; > + buf->sync = 1; > + > + continue; Looking at this code I tend to think that our existing ngx_http_range_singlepart_body() implementation you've used as a reference is incorrect. It removes buffers from the original chain as passed to the filter - this can result in a buffer being lost from tracking by the module who owns the buffer, and a request hang if/when all available buffers will be lost. Instead, it should either preserve all existing chain links, or create a new chain. I'll take a look how to properly fix this. > } > > - hcl->buf = b; > + if (range->start >= start) { > > + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { > + return NGX_ERROR; > + } > > - /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + if (buf->in_file) { > + buf->file_pos += range->start - start; > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (ngx_buf_in_memory(buf)) { > + buf->pos += (size_t) (range->start - start); > + } > } > > - b->temporary = 1; > - b->pos = range[i].content_range.data; > - b->last = range[i].content_range.data + range[i].content_range.len; > + if (range->end <= last) { > + > + if (range < tail && range[1].start < last) { The "tail" name is not immediately obvious, and it might be better idea to name it differently. Also, range[1] looks strange when we are using range as a pointer and not array. Hopefully this test will be unneeded when code will be cleaned up to avoid moving ctx->offset backwards, see below. > + > + b = ngx_alloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > + > + ncl = ngx_alloc_chain_link(r->pool); > + if (ncl == NULL) { > + return NGX_ERROR; > + } Note: usual names for temporary chain links are "ln" and "tl". > > - rcl = ngx_alloc_chain_link(r->pool); > - if (rcl == NULL) { > - return NGX_ERROR; > - } > + ncl->buf = b; > + ncl->next = cl; > + > + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); > + b->last_in_chain = 0; > + b->last_buf = 0; > + > + back = last - range->end; > + ctx->offset -= back; This looks like a hack, there should be no need to adjust ctx->offset backwards. Instead, we should move ctx->offset only when we've done with a buffer. 
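In other words, the per-buffer arithmetic can stay in local variables, with ctx->offset advanced exactly once per buffer; a rough shape of the loop (sketch only, details omitted):

    for (cl = in; cl; cl = cl->next) {
        start = ctx->offset;
        last = start + ngx_buf_size(cl->buf);

        /* emit boundary headers and data slices for every range
         * intersecting [start, last), using start/last only */

        ctx->offset = last;
    }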
> + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body reuse buf: %O-%O", > + ctx->offset, ctx->offset + back); > > - rcl->buf = b; > + if (buf->in_file) { > + buf->file_pos = buf->file_last - back; > + } > + > + if (ngx_buf_in_memory(buf)) { > + buf->pos = buf->last - back; > + } > > + cl = ncl; > + buf = cl->buf; > + } > + > + if (buf->in_file) { > + buf->file_last -= last - range->end; > + } > > - /* the range data */ > + if (ngx_buf_in_memory(buf)) { > + buf->last -= (size_t) (last - range->end); > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (range == tail) { > + buf->last_buf = (r == r->main) ? 1 : 0; > + buf->last_in_chain = 1; > + *ll = cl; > + ll = &cl->next; > + > + finished = 1; It is not clear why to use the "finished" flag instead of adding the boundary here. > + break; > + } > + > + range++; > + ctx->index++; > } > > - b->in_file = buf->in_file; > - b->temporary = buf->temporary; > - b->memory = buf->memory; > - b->mmap = buf->mmap; > - b->file = buf->file; > + *ll = cl; > + ll = &cl->next; > + } > + > + if (out == NULL) { > + return NGX_OK; > + } > + > + *ll = NULL; > + > + if (finished > + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) > + { > + return NGX_ERROR; > + } > + > + return ngx_http_next_body_filter(r, out); > +} > + > > - if (buf->in_file) { > - b->file_pos = buf->file_pos + range[i].start; > - b->file_last = buf->file_pos + range[i].end; > - } > +static ngx_int_t > +ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) The "ngx_chain_t ***lll" argument suggests it might be a good idea to somehow improve the interface. > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl, *rcl; > + ngx_http_range_t *range; > + > + /* > + * The boundary header of the range: > + * CRLF > + * "--0123456789" CRLF > + * "Content-Type: image/jpeg" CRLF > + * "Content-Range: bytes " > + */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - if (ngx_buf_in_memory(buf)) { > - b->pos = buf->pos + (size_t) range[i].start; > - b->last = buf->pos + (size_t) range[i].end; > - } > + b->memory = 1; > + b->pos = ctx->boundary_header.data; > + b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + > + hcl = ngx_alloc_chain_link(r->pool); > + if (hcl == NULL) { > + return NGX_ERROR; > + } > + > + hcl->buf = b; > + > + > + /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - dcl = ngx_alloc_chain_link(r->pool); > - if (dcl == NULL) { > - return NGX_ERROR; > - } > + range = ctx->ranges.elts; > + b->temporary = 1; > + b->pos = range[ctx->index - 1].content_range.data; > + b->last = range[ctx->index - 1].content_range.data > + + range[ctx->index - 1].content_range.len; > + > + rcl = ngx_alloc_chain_link(r->pool); > + if (rcl == NULL) { > + return NGX_ERROR; > + } > + > + rcl->buf = b; > > - dcl->buf = b; > + **lll = hcl; > + hcl->next = rcl; > + *lll = &rcl->next; > + > + return NGX_OK; > +} > > - *ll = hcl; > - hcl->next = rcl; > - rcl->next = dcl; > - ll = &dcl->next; > - } > + > +static ngx_int_t > +ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl; > > /* the last boundary CRLF "--0123456789--" CRLF */ > > @@ -885,7 +972,8 @@ ngx_http_range_multipart_body(ngx_http_r > } > > b->temporary = 1; > - b->last_buf = 1; > + 
b->last_in_chain = 1; > + b->last_buf = (r == r->main) ? 1 : 0; > > b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof("--" CRLF) - 1); > @@ -908,7 +996,7 @@ ngx_http_range_multipart_body(ngx_http_r > > *ll = hcl; > > - return ngx_http_next_body_filter(r, out); > + return NGX_OK; > } > > > ------------------ Original ------------------ > From: "?? (hucc)";; > Send time: Friday, Nov 10, 2017 4:41 AM > To: "nginx-devel"; > Subject: Re: [patch-1] Range filter: support multiple ranges. > > Hi, > > Please ignore the previous reply. The updated patch is placed at the end. > > On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote: > > >On Fri, Oct 27, 2017 at 06:50:32PM +0800, ?? (hucc) wrote: > > > >> # HG changeset patch > >> # User hucongcong > >> # Date 1509099940 -28800 > >> # Fri Oct 27 18:25:40 2017 +0800 > >> # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217 > >> # Parent b9850d3deb277bd433a689712c40a84401443520 > >> Range filter: support multiple ranges. > > > >This summary line is at least misleading. > > Ok, maybe the summary line is support multiple ranges when body is > in multiple buffers. > > >> When multiple ranges are requested, nginx will coalesce any of the ranges > >> that overlap, or that are separated by a gap that is smaller than the > >> NGX_HTTP_RANGE_MULTIPART_GAP macro. > > > >(Note that the patch also does reordering of ranges. For some > >reason this is not mentioned in the commit log. There are also > >other changes not mentioned in the commit log - for example, I see > >ngx_http_range_t was moved to ngx_http_request.h. These are > >probably do not belong to the patch at all.) > > I actually wait for you to give better advice. I tried my best to > make the changes easier and more readable and I will split it into > multiple patches based on your suggestions if these changes will be > accepted. > > >Reordering and/or coalescing ranges is not something that clients > >usually expect to happen. This was widely discussed at the time > >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 > >introduced the "MAY coalesce" clause. But this doesn't make > >clients, especially old ones, magically prepared for this. > > I did not know the CVE-2011-3192. If multiple ranges list in > ascending order and there are no overlapping ranges, the code will > be much simpler. This is what I think. > > >Moreover, this will certainly break some use cases like "request > >some metadata first, and then rest of the file". So this is > >certainly not a good idea to always reorder / coalesce ranges > >unless this is really needed for some reason. (Or even at all, > >as just returning 200 might be much more compatible with various > >clients, as outlined above.) > > > >It is also not clear what you are trying to achieve with this > >patch. You may want to elaborate more on what problem you are > >trying to solve, may be there are better solutions. > > I am trying to support multiple ranges when proxy_buffering is off > and sometimes slice is enabled. The data is always cached in the > backend which is not nginx. As far as I know, similar architecture > is widely used in CDN. So the implementation of multiple ranges in > the architecture I mentioned above is required and inevitable. > Besides, P2P clients desire for this feature to gather data-pieces. > Hope I already made it clear. > > All these changes have been tested. Hope it helps! 
Temporarily, > the changes are as follows: > > diff -r 32f83fe5747b src/http/modules/ngx_http_range_filter_module.c > --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 04:31:52 2017 +0800 > @@ -46,16 +46,10 @@ > > > typedef struct { > - off_t start; > - off_t end; > - ngx_str_t content_range; > -} ngx_http_range_t; > + off_t offset; > + ngx_uint_t index; /* start with 1 */ > > - > -typedef struct { > - off_t offset; > - ngx_str_t boundary_header; > - ngx_array_t ranges; > + ngx_str_t boundary_header; > } ngx_http_range_filter_ctx_t; > > > @@ -66,12 +60,14 @@ static ngx_int_t ngx_http_range_singlepa > static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx); > static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); > -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); > +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); > > static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); > static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); > @@ -234,7 +230,7 @@ parse: > r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; > r->headers_out.status_line.len = 0; > > - if (ctx->ranges.nelts == 1) { > + if (r->headers_out.ranges->nelts == 1) { > return ngx_http_range_singlepart_header(r, ctx); > } > > @@ -270,9 +266,9 @@ ngx_http_range_parse(ngx_http_request_t > ngx_uint_t ranges) > { > u_char *p; > - off_t start, end, size, content_length, cutoff, > - cutlim; > - ngx_uint_t suffix; > + off_t start, end, content_length, > + cutoff, cutlim; > + ngx_uint_t suffix, descending; > ngx_http_range_t *range; > ngx_http_range_filter_ctx_t *mctx; > > @@ -280,19 +276,21 @@ ngx_http_range_parse(ngx_http_request_t > mctx = ngx_http_get_module_ctx(r->main, > ngx_http_range_body_filter_module); > if (mctx) { > - ctx->ranges = mctx->ranges; > + r->headers_out.ranges = r->main->headers_out.ranges; > + ctx->boundary_header = mctx->boundary_header; > return NGX_OK; > } > } > > - if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t)) > - != NGX_OK) > - { > + r->headers_out.ranges = ngx_array_create(r->pool, 1, > + sizeof(ngx_http_range_t)); > + if (r->headers_out.ranges == NULL) { > return NGX_ERROR; > } > > p = r->headers_in.range->value.data + 6; > - size = 0; > + range = NULL; > + descending = 0; > content_length = r->headers_out.content_length_n; > > cutoff = NGX_MAX_OFF_T_VALUE / 10; > @@ -369,7 +367,12 @@ ngx_http_range_parse(ngx_http_request_t > found: > > if (start < end) { > - range = ngx_array_push(&ctx->ranges); > + > + if (range && start < range->end) { > + descending++; > + } > + > + range = ngx_array_push(r->headers_out.ranges); > if (range == NULL) { > return NGX_ERROR; > } > @@ -377,16 +380,6 @@ ngx_http_range_parse(ngx_http_request_t > range->start = start; > range->end = end; > > - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { > - return 
NGX_HTTP_RANGE_NOT_SATISFIABLE; > - } > - > - size += end - start; > - > - if (ranges-- == 0) { > - return NGX_DECLINED; > - } > - > } else if (start == 0) { > return NGX_DECLINED; > } > @@ -396,11 +389,15 @@ ngx_http_range_parse(ngx_http_request_t > } > } > > - if (ctx->ranges.nelts == 0) { > + if (r->headers_out.ranges->nelts == 0) { > return NGX_HTTP_RANGE_NOT_SATISFIABLE; > } > > - if (size > content_length) { > + if (r->headers_out.ranges->nelts > ranges) { > + r->headers_out.ranges->nelts = ranges; > + } > + > + if (descending) { > return NGX_DECLINED; > } > > @@ -439,7 +436,7 @@ ngx_http_range_singlepart_header(ngx_htt > > /* "Content-Range: bytes SSSS-EEEE/TTTT" header */ > > - range = ctx->ranges.elts; > + range = r->headers_out.ranges->elts; > > content_range->value.len = ngx_sprintf(content_range->value.data, > "bytes %O-%O/%O", > @@ -469,6 +466,10 @@ ngx_http_range_multipart_header(ngx_http > ngx_http_range_t *range; > ngx_atomic_uint_t boundary; > > + if (r != r->main) { > + return ngx_http_next_header_filter(r); > + } > + > size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof(CRLF "Content-Type: ") - 1 > + r->headers_out.content_type.len > @@ -551,8 +552,8 @@ ngx_http_range_multipart_header(ngx_http > > len = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1; > > - range = ctx->ranges.elts; > - for (i = 0; i < ctx->ranges.nelts; i++) { > + range = r->headers_out.ranges->elts; > + for (i = 0; i < r->headers_out.ranges->nelts; i++) { > > /* the size of the range: "SSSS-EEEE/TTTT" CRLF CRLF */ > > @@ -570,10 +571,11 @@ ngx_http_range_multipart_header(ngx_http > - range[i].content_range.data; > > len += ctx->boundary_header.len + range[i].content_range.len > - + (range[i].end - range[i].start); > + + (range[i].end - range[i].start); > } > > r->headers_out.content_length_n = len; > + r->headers_out.content_offset = range[0].start; > > if (r->headers_out.content_length) { > r->headers_out.content_length->hash = 0; > @@ -635,67 +637,19 @@ ngx_http_range_body_filter(ngx_http_requ > return ngx_http_next_body_filter(r, in); > } > > - if (ctx->ranges.nelts == 1) { > + if (r->headers_out.ranges->nelts == 1) { > return ngx_http_range_singlepart_body(r, ctx, in); > } > > - /* > - * multipart ranges are supported only if whole body is in a single buffer > - */ > - > if (ngx_buf_special(in->buf)) { > return ngx_http_next_body_filter(r, in); > } > > - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { > - return NGX_ERROR; > - } > - > return ngx_http_range_multipart_body(r, ctx, in); > } > > > static ngx_int_t > -ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > -{ > - off_t start, last; > - ngx_buf_t *buf; > - ngx_uint_t i; > - ngx_http_range_t *range; > - > - if (ctx->offset) { > - goto overlapped; > - } > - > - buf = in->buf; > - > - if (!buf->last_buf) { > - start = ctx->offset; > - last = ctx->offset + ngx_buf_size(buf); > - > - range = ctx->ranges.elts; > - for (i = 0; i < ctx->ranges.nelts; i++) { > - if (start > range[i].start || last < range[i].end) { > - goto overlapped; > - } > - } > - } > - > - ctx->offset = ngx_buf_size(buf); > - > - return NGX_OK; > - > -overlapped: > - > - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, > - "range in overlapped buffers"); > - > - return NGX_ERROR; > -} > - > - > -static ngx_int_t > ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > @@ -706,7 +660,7 @@ 
ngx_http_range_singlepart_body(ngx_http_ > > out = NULL; > ll = &out; > - range = ctx->ranges.elts; > + range = r->headers_out.ranges->elts; > > for (cl = in; cl; cl = cl->next) { > > @@ -786,96 +740,227 @@ static ngx_int_t > ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > - ngx_buf_t *b, *buf; > - ngx_uint_t i; > - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; > - ngx_http_range_t *range; > + off_t start, last, back; > + ngx_buf_t *buf, *b; > + ngx_uint_t i, finished; > + ngx_chain_t *out, *cl, *ncl, **ll; > + ngx_http_range_t *range, *tail; > + > + range = r->headers_out.ranges->elts; > > - ll = &out; > - buf = in->buf; > - range = ctx->ranges.elts; > + if (!ctx->index) { > + for (i = 0; i < r->headers_out.ranges->nelts; i++) { > + if (ctx->offset < range[i].end) { > + ctx->index = i + 1; > + break; > + } > + } > + } > > - for (i = 0; i < ctx->ranges.nelts; i++) { > + tail = range + r->headers_out.ranges->nelts - 1; > + range += ctx->index - 1; > > - /* > - * The boundary header of the range: > - * CRLF > - * "--0123456789" CRLF > - * "Content-Type: image/jpeg" CRLF > - * "Content-Range: bytes " > - */ > + out = NULL; > + ll = &out; > + finished = 0; > + > + for (cl = in; cl; cl = cl->next) { > + > + buf = cl->buf; > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + start = ctx->offset; > + last = ctx->offset + ngx_buf_size(buf); > + > + ctx->offset = last; > + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body buf: %O-%O", start, last); > + > + if (ngx_buf_special(buf)) { > + *ll = cl; > + ll = &cl->next; > + continue; > } > > - b->memory = 1; > - b->pos = ctx->boundary_header.data; > - b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + if (range->end <= start || range->start >= last) { > + > + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body skip"); > > - hcl = ngx_alloc_chain_link(r->pool); > - if (hcl == NULL) { > - return NGX_ERROR; > + if (buf->in_file) { > + buf->file_pos = buf->file_last; > + } > + > + buf->pos = buf->last; > + buf->sync = 1; > + > + continue; > } > > - hcl->buf = b; > + if (range->start >= start) { > > + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { > + return NGX_ERROR; > + } > > - /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + if (buf->in_file) { > + buf->file_pos += range->start - start; > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (ngx_buf_in_memory(buf)) { > + buf->pos += (size_t) (range->start - start); > + } > } > > - b->temporary = 1; > - b->pos = range[i].content_range.data; > - b->last = range[i].content_range.data + range[i].content_range.len; > + if (range->end <= last) { > + > + if (range < tail && range[1].start < last) { > + > + b = ngx_alloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > + > + ncl = ngx_alloc_chain_link(r->pool); > + if (ncl == NULL) { > + return NGX_ERROR; > + } > > - rcl = ngx_alloc_chain_link(r->pool); > - if (rcl == NULL) { > - return NGX_ERROR; > - } > + ncl->buf = b; > + ncl->next = cl; > + > + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); > + b->last_in_chain = 0; > + b->last_buf = 0; > + > + back = last - range->end; > + ctx->offset -= back; > + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body reuse buf: %O-%O", > + ctx->offset, ctx->offset + back); > > - rcl->buf = b; > + if (buf->in_file) { > + buf->file_pos = buf->file_last 
- back; > + } > + > + if (ngx_buf_in_memory(buf)) { > + buf->pos = buf->last - back; > + } > > + cl = ncl; > + buf = cl->buf; > + } > + > + if (buf->in_file) { > + buf->file_last -= last - range->end; > + } > > - /* the range data */ > + if (ngx_buf_in_memory(buf)) { > + buf->last -= (size_t) (last - range->end); > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (range == tail) { > + buf->last_buf = (r == r->main) ? 1 : 0; > + buf->last_in_chain = 1; > + *ll = cl; > + ll = &cl->next; > + > + finished = 1; > + break; > + } > + > + range++; > + ctx->index++; > } > > - b->in_file = buf->in_file; > - b->temporary = buf->temporary; > - b->memory = buf->memory; > - b->mmap = buf->mmap; > - b->file = buf->file; > + *ll = cl; > + ll = &cl->next; > + } > + > + if (out == NULL) { > + return NGX_OK; > + } > + > + *ll = NULL; > + > + if (finished > + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) > + { > + return NGX_ERROR; > + } > + > + return ngx_http_next_body_filter(r, out); > +} > + > > - if (buf->in_file) { > - b->file_pos = buf->file_pos + range[i].start; > - b->file_last = buf->file_pos + range[i].end; > - } > +static ngx_int_t > +ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl, *rcl; > + ngx_http_range_t *range; > + > + /* > + * The boundary header of the range: > + * CRLF > + * "--0123456789" CRLF > + * "Content-Type: image/jpeg" CRLF > + * "Content-Range: bytes " > + */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - if (ngx_buf_in_memory(buf)) { > - b->pos = buf->pos + (size_t) range[i].start; > - b->last = buf->pos + (size_t) range[i].end; > - } > + b->memory = 1; > + b->pos = ctx->boundary_header.data; > + b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + > + hcl = ngx_alloc_chain_link(r->pool); > + if (hcl == NULL) { > + return NGX_ERROR; > + } > + > + hcl->buf = b; > + > + > + /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - dcl = ngx_alloc_chain_link(r->pool); > - if (dcl == NULL) { > - return NGX_ERROR; > - } > + range = r->headers_out.ranges->elts; > + b->temporary = 1; > + b->pos = range[ctx->index - 1].content_range.data; > + b->last = range[ctx->index - 1].content_range.data > + + range[ctx->index - 1].content_range.len; > + > + rcl = ngx_alloc_chain_link(r->pool); > + if (rcl == NULL) { > + return NGX_ERROR; > + } > + > + rcl->buf = b; > > - dcl->buf = b; > + **lll = hcl; > + hcl->next = rcl; > + *lll = &rcl->next; > + > + return NGX_OK; > +} > > - *ll = hcl; > - hcl->next = rcl; > - rcl->next = dcl; > - ll = &dcl->next; > - } > + > +static ngx_int_t > +ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl; > > /* the last boundary CRLF "--0123456789--" CRLF */ > > @@ -885,7 +970,8 @@ ngx_http_range_multipart_body(ngx_http_r > } > > b->temporary = 1; > - b->last_buf = 1; > + b->last_in_chain = 1; > + b->last_buf = (r == r->main) ? 
1 : 0; > > b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof("--" CRLF) - 1); > @@ -908,7 +994,7 @@ ngx_http_range_multipart_body(ngx_http_r > > *ll = hcl; > > - return ngx_http_next_body_filter(r, out); > + return NGX_OK; > } > > > diff -r 32f83fe5747b src/http/modules/ngx_http_slice_filter_module.c > --- a/src/http/modules/ngx_http_slice_filter_module.c Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/modules/ngx_http_slice_filter_module.c Fri Nov 10 04:31:52 2017 +0800 > @@ -22,6 +22,8 @@ typedef struct { > ngx_str_t etag; > unsigned last:1; > unsigned active:1; > + unsigned multipart:1; > + ngx_uint_t index; > ngx_http_request_t *sr; > } ngx_http_slice_ctx_t; > > @@ -103,7 +105,9 @@ ngx_http_slice_header_filter(ngx_http_re > { > off_t end; > ngx_int_t rc; > + ngx_uint_t i; > ngx_table_elt_t *h; > + ngx_http_range_t *range; > ngx_http_slice_ctx_t *ctx; > ngx_http_slice_loc_conf_t *slcf; > ngx_http_slice_content_range_t cr; > @@ -182,27 +186,48 @@ ngx_http_slice_header_filter(ngx_http_re > > r->allow_ranges = 1; > r->subrequest_ranges = 1; > - r->single_range = 1; > > rc = ngx_http_next_header_filter(r); > > - if (r != r->main) { > - return rc; > + if (r == r->main) { > + r->preserve_body = 1; > + > + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { > + ctx->multipart = (r->headers_out.ranges->nelts != 1); > + range = r->headers_out.ranges->elts; > + > + if (ctx->start + (off_t) slcf->size <= range[0].start) { > + ctx->start = slcf->size * (range[0].start / slcf->size); > + } > + > + ctx->end = range[r->headers_out.ranges->nelts - 1].end; > + > + } else { > + ctx->end = cr.complete_length; > + } > } > > - r->preserve_body = 1; > + if (ctx->multipart) { > + range = r->headers_out.ranges->elts; > + > + for (i = ctx->index; i < r->headers_out.ranges->nelts - 1; i++) { > + > + if (ctx->start < range[i].end) { > + ctx->index = i; > + break; > + } > > - if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { > - if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { > - ctx->start = slcf->size > - * (r->headers_out.content_offset / slcf->size); > + if (ctx->start + (off_t) slcf->size <= range[i + 1].start) { > + i++; > + ctx->index = i; > + ctx->start = slcf->size * (range[i].start / slcf->size); > + > + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "range multipart so fast forward to %O-%O @%O", > + range[i].start, range[i].end, ctx->start); > + break; > + } > } > - > - ctx->end = r->headers_out.content_offset > - + r->headers_out.content_length_n; > - > - } else { > - ctx->end = cr.complete_length; > } > > return rc; > diff -r 32f83fe5747b src/http/ngx_http_request.h > --- a/src/http/ngx_http_request.h Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/ngx_http_request.h Fri Nov 10 04:31:52 2017 +0800 > @@ -251,6 +251,13 @@ typedef struct { > > > typedef struct { > + off_t start; > + off_t end; > + ngx_str_t content_range; > +} ngx_http_range_t; > + > + > +typedef struct { > ngx_list_t headers; > ngx_list_t trailers; > > @@ -278,6 +285,7 @@ typedef struct { > u_char *content_type_lowcase; > ngx_uint_t content_type_hash; > > + ngx_array_t *ranges; /* ngx_http_range_t */ > ngx_array_t cache_control; > > off_t content_length_n; > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > 
http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://mdounin.ru/ From karim at malhas.de Wed Nov 15 20:57:28 2017 From: karim at malhas.de (Karim Malhas) Date: Wed, 15 Nov 2017 21:57:28 +0100 Subject: patch for #1416 xslt_stylesheet parameter only works on first request Message-ID: <20171115205728.GA31813@klx> Hi all, I recently discovered a bug [0] concerning the xslt_stylesheet directive as documented here [1]. The root cause of this issue is that the parser of the directive also modifies it instead of just reading it. I have created a patch which lets the parser work on a copy of that data, so that changes to it will not affect subsequent requests. In my opinion this is merely a workaround, the parser probably should not modify the data it parsed from the configuration file, but I didn't feel comfortable to make such a big change. Since this is my first time working with nginx I am seeking feedback on the approach I used. Kind regards, Karim Malhas [0] https://trac.nginx.org/nginx/ticket/1416 [1] https://nginx.org/en/docs/http/ngx_http_xslt_module.html#xslt_stylesheet -- -------------- next part -------------- A non-text attachment was scrubbed... Name: 1416.patch Type: text/x-diff Size: 1665 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 2991 bytes Desc: not available URL: From hucong.c at foxmail.com Thu Nov 16 02:11:13 2017 From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=) Date: Thu, 16 Nov 2017 10:11:13 +0800 Subject: [patch-1] Range filter: support multiple ranges. Message-ID: Hi, On Tuesday, Nov 14, 2017 10:58 PM +0300, Maxim Dounin wrote: >Hello! > >On Fri, Nov 10, 2017 at 04:41:57AM +0800, ?? (hucc) wrote: > >> On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote: > >[...] > >> >> When multiple ranges are requested, nginx will coalesce any of the ranges >> >> that overlap, or that are separated by a gap that is smaller than the >> >> NGX_HTTP_RANGE_MULTIPART_GAP macro. >> > >> >(Note that the patch also does reordering of ranges. For some >> >reason this is not mentioned in the commit log. There are also >> >other changes not mentioned in the commit log - for example, I see >> >ngx_http_range_t was moved to ngx_http_request.h. These are >> >probably do not belong to the patch at all.) >> >> I actually wait for you to give better advice. I tried my best to >> make the changes easier and more readable and I will split it into >> multiple patches based on your suggestions if these changes will be >> accepted. > >General rule is: keep distinct changes in separate patches. I >don't think I have anything better to suggest here. > >> >Reordering and/or coalescing ranges is not something that clients >> >usually expect to happen. This was widely discussed at the time >> >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 >> >introduced the "MAY coalesce" clause. But this doesn't make >> >clients, especially old ones, magically prepared for this. >> >> I did not know the CVE-2011-3192. If multiple ranges list in >> ascending order and there are no overlapping ranges, the code will >> be much simpler. This is what I think. > >While your intention is understood, this is certainly not >something we should do. As far as it is possible, we should >preserve exact order and range sizes instead, to avoid >compatibility problems and to preserve various use cases which >require non-sequential order. 
> >> >Moreover, this will certainly break some use cases like "request >> >some metadata first, and then rest of the file". So this is >> >certainly not a good idea to always reorder / coalesce ranges >> >unless this is really needed for some reason. (Or even at all, >> >as just returning 200 might be much more compatible with various >> >clients, as outlined above.) >> > >> >It is also not clear what you are trying to achieve with this >> >patch. You may want to elaborate more on what problem you are >> >trying to solve, may be there are better solutions. >> >> I am trying to support multiple ranges when proxy_buffering is off >> and sometimes slice is enabled. The data is always cached in the >> backend which is not nginx. As far as I know, similar architecture >> is widely used in CDN. So the implementation of multiple ranges in >> the architecture I mentioned above is required and inevitable. >> Besides, P2P clients desire for this feature to gather data-pieces. >> Hope I already made it clear. > >So you are trying to support multi-range requests to resources >which use slice module, correct? Thank you very much for giving a detailed reply. Yes, I am trying to support multi-range requests to resources which use slice module. >Do you have any specific clients in mind? I've seen very few >legitimate clients which use multi-range requests. In the beginning, we have just one such customer, and the client of the customer gets 200 instead of 206. Now, we have some P2P clients need to gather pieces that can not get from nearby. It is a huge waste of resources if the clients get 200. In fact, this patch is mainly to solve P2P's problem, and the requested ranges are indeed in sequential order. This is the original request/idea. But since the current implementation is too simple and too restrictive, I want to make it better so I sent the patch. From ru at nginx.com Thu Nov 16 10:14:57 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 16 Nov 2017 13:14:57 +0300 Subject: patch for #1416 xslt_stylesheet parameter only works on first request In-Reply-To: <20171115205728.GA31813@klx> References: <20171115205728.GA31813@klx> Message-ID: <20171116101457.GD96177@lo0.su> Hi, On Wed, Nov 15, 2017 at 09:57:28PM +0100, Karim Malhas wrote: > Hi all, > > I recently discovered a bug [0] concerning the xslt_stylesheet directive as > documented here [1]. > > The root cause of this issue is that the parser of the directive also > modifies it instead of just reading it. > > I have created a patch which lets the parser work on a copy of that > data, so that changes to it will not affect subsequent requests. > > In my opinion this is merely a workaround, the parser probably should > not modify the data it parsed from the configuration file, but I didn't > feel comfortable to make such a big change. > > Since this is my first time working with nginx I am seeking feedback on > the approach I used. > > > Kind regards, > Karim Malhas > > > > [0] https://trac.nginx.org/nginx/ticket/1416 > [1] https://nginx.org/en/docs/http/ngx_http_xslt_module.html#xslt_stylesheet My patch has just got a positive review from Maxim Dounin, and I'm going to commit it later today. Here it is, for reference: # HG changeset patch # User Ruslan Ermilov # Date 1510827158 -10800 # Thu Nov 16 13:12:38 2017 +0300 # Node ID e8062e0dd60c8f594106cf4bee8761429702c8e5 # Parent 687a9344627a48bc307c942148e07a95fc893382 Xslt: fixed parameters parsing (ticket #1416). 
If parameters were specified in xslt_stylesheet without variables, any request except the first would cause an internal server error. diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c +++ b/src/http/modules/ngx_http_xslt_filter_module.c @@ -686,8 +686,19 @@ ngx_http_xslt_params(ngx_http_request_t * specified in xslt_stylesheet directives */ - p = string.data; - last = string.data + string.len; + if (param[i].value.lengths) { + p = string.data; + + } else { + p = ngx_pnalloc(r->pool, string.len + 1); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, string.data, string.len + 1); + } + + last = p + string.len; while (p && *p) { From ru at nginx.com Thu Nov 16 10:25:08 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 16 Nov 2017 10:25:08 +0000 Subject: [nginx] Xslt: fixed parameters parsing (ticket #1416). Message-ID: details: http://hg.nginx.org/nginx/rev/595a3de03e91 branches: changeset: 7154:595a3de03e91 user: Ruslan Ermilov date: Thu Nov 16 13:20:47 2017 +0300 description: Xslt: fixed parameters parsing (ticket #1416). If parameters were specified in xslt_stylesheet without variables, any request except the first would cause an internal server error. diffstat: src/http/modules/ngx_http_xslt_filter_module.c | 15 +++++++++++++-- 1 files changed, 13 insertions(+), 2 deletions(-) diffs (25 lines): diff -r 32f83fe5747b -r 595a3de03e91 src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_xslt_filter_module.c Thu Nov 16 13:20:47 2017 +0300 @@ -686,8 +686,19 @@ ngx_http_xslt_params(ngx_http_request_t * specified in xslt_stylesheet directives */ - p = string.data; - last = string.data + string.len; + if (param[i].value.lengths) { + p = string.data; + + } else { + p = ngx_pnalloc(r->pool, string.len + 1); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, string.data, string.len + 1); + } + + last = p + string.len; while (p && *p) { From karim at malhas.de Thu Nov 16 11:14:57 2017 From: karim at malhas.de (Karim Malhas) Date: Thu, 16 Nov 2017 12:14:57 +0100 Subject: patch for #1416 xslt_stylesheet parameter only works on first request In-Reply-To: <20171116101457.GD96177@lo0.su> References: <20171115205728.GA31813@klx> <20171116101457.GD96177@lo0.su> Message-ID: <20171116111457.GA8580@klx> Ah, I didn't see that. Thank you! On Thu, Nov 16, 2017 at 01:14:57PM +0300, Ruslan Ermilov wrote: > Hi, > > On Wed, Nov 15, 2017 at 09:57:28PM +0100, Karim Malhas wrote: > > Hi all, > > > > I recently discovered a bug [0] concerning the xslt_stylesheet directive as > > documented here [1]. > > > > The root cause of this issue is that the parser of the directive also > > modifies it instead of just reading it. > > > > I have created a patch which lets the parser work on a copy of that > > data, so that changes to it will not affect subsequent requests. > > > > In my opinion this is merely a workaround, the parser probably should > > not modify the data it parsed from the configuration file, but I didn't > > feel comfortable to make such a big change. > > > > Since this is my first time working with nginx I am seeking feedback on > > the approach I used. 
> > > > > > Kind regards, > > Karim Malhas > > > > > > > > [0] https://trac.nginx.org/nginx/ticket/1416 > > [1] https://nginx.org/en/docs/http/ngx_http_xslt_module.html#xslt_stylesheet > > My patch has just got a positive review from Maxim Dounin, and > I'm going to commit it later today. Here it is, for reference: > > # HG changeset patch > # User Ruslan Ermilov > # Date 1510827158 -10800 > # Thu Nov 16 13:12:38 2017 +0300 > # Node ID e8062e0dd60c8f594106cf4bee8761429702c8e5 > # Parent 687a9344627a48bc307c942148e07a95fc893382 > Xslt: fixed parameters parsing (ticket #1416). > > If parameters were specified in xslt_stylesheet without variables, > any request except the first would cause an internal server error. > > diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c > --- a/src/http/modules/ngx_http_xslt_filter_module.c > +++ b/src/http/modules/ngx_http_xslt_filter_module.c > @@ -686,8 +686,19 @@ ngx_http_xslt_params(ngx_http_request_t > * specified in xslt_stylesheet directives > */ > > - p = string.data; > - last = string.data + string.len; > + if (param[i].value.lengths) { > + p = string.data; > + > + } else { > + p = ngx_pnalloc(r->pool, string.len + 1); > + if (p == NULL) { > + return NGX_ERROR; > + } > + > + ngx_memcpy(p, string.data, string.len + 1); > + } > + > + last = p + string.len; > > while (p && *p) { > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 2971 bytes Desc: not available URL: From xeioex at nginx.com Fri Nov 17 16:06:04 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:04 +0000 Subject: [njs] Fixed JSON.stringify() for objects inherited from Object prototype. Message-ID: details: http://hg.nginx.org/njs/rev/a6af47aab3f2 branches: changeset: 419:a6af47aab3f2 user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Fixed JSON.stringify() for objects inherited from Object prototype. 
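In practical terms: the stringifier previously descended only into plain
NJS_OBJECT and NJS_ARRAY values; the new njs_json_is_object() predicate
also accepts the remaining object types (type >= NJS_REGEXP), so such
values are now serialized as ordinary objects. The observable effect is
the unit test added by this changeset, quoted here for readability:

    { nxt_string("JSON.stringify(RegExp())"),
      nxt_string("{}") },
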
diffstat: njs/njs_json.c | 12 +++++++----- njs/test/njs_unit_test.c | 3 +++ 2 files changed, 10 insertions(+), 5 deletions(-) diffs (56 lines): diff -r 06aadeb164a3 -r a6af47aab3f2 njs/njs_json.c --- a/njs/njs_json.c Mon Oct 09 20:37:02 2017 +0300 +++ b/njs/njs_json.c Fri Nov 17 18:55:07 2017 +0300 @@ -1162,8 +1162,10 @@ njs_json_parse_exception(njs_json_parse_ } -#define njs_is_object_or_array(value) \ - (((value)->type == NJS_OBJECT) || ((value)->type == NJS_ARRAY)) +#define njs_json_is_object(value) \ + (((value)->type == NJS_OBJECT) \ + || ((value)->type == NJS_ARRAY) \ + || ((value)->type >= NJS_REGEXP)) #define njs_json_stringify_append(str, len) \ @@ -1280,7 +1282,7 @@ njs_json_stringify_continuation(njs_vm_t njs_json_stringify_append_key(&prop->name); - if (njs_is_object_or_array(&prop->value)) { + if (njs_json_is_object(&prop->value)) { state = njs_json_push_stringify_state(vm, stringify, &prop->value); if (state == NULL) { @@ -1371,7 +1373,7 @@ njs_json_stringify_continuation(njs_vm_t return njs_json_stringify_replacer(vm, stringify, NULL, value); } - if (njs_is_object_or_array(value)) { + if (njs_json_is_object(value)) { state = njs_json_push_stringify_state(vm, stringify, value); if (state == NULL) { return NXT_ERROR; @@ -1397,7 +1399,7 @@ njs_json_stringify_continuation(njs_vm_t case NJS_JSON_ARRAY_REPLACED: state->type = NJS_JSON_ARRAY_CONTINUE; - if (njs_is_object_or_array(&stringify->retval)) { + if (njs_json_is_object(&stringify->retval)) { state = njs_json_push_stringify_state(vm, stringify, &stringify->retval); if (state == NULL) { diff -r 06aadeb164a3 -r a6af47aab3f2 njs/test/njs_unit_test.c --- a/njs/test/njs_unit_test.c Mon Oct 09 20:37:02 2017 +0300 +++ b/njs/test/njs_unit_test.c Fri Nov 17 18:55:07 2017 +0300 @@ -8199,6 +8199,9 @@ static njs_unit_test_t njs_test[] = { nxt_string("JSON.stringify({a:{}, b:[function(v){}]})"), nxt_string("{\"a\":{},\"b\":[null]}") }, + { nxt_string("JSON.stringify(RegExp())"), + nxt_string("{}") }, + /* Ignoring named properties of an array. */ { nxt_string("var a = [1,2]; a.a = 1;" From xeioex at nginx.com Fri Nov 17 16:06:04 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:04 +0000 Subject: [njs] Checking that backtrace is available before accessing it. Message-ID: details: http://hg.nginx.org/njs/rev/84a95e20f93a branches: changeset: 421:84a95e20f93a user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Checking that backtrace is available before accessing it. diffstat: njs/njscript.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 5bc8d7c25e4f -r 84a95e20f93a njs/njscript.c --- a/njs/njscript.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njscript.c Fri Nov 17 18:55:07 2017 +0300 @@ -533,7 +533,7 @@ njs_vm_exception(njs_vm_t *vm, nxt_str_t nxt_array_t * njs_vm_backtrace(njs_vm_t *vm) { - if (!nxt_array_is_empty(vm->backtrace)) { + if (vm->backtrace != NULL && !nxt_array_is_empty(vm->backtrace)) { return vm->backtrace; } From xeioex at nginx.com Fri Nov 17 16:06:04 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:04 +0000 Subject: [njs] Fixed inheriting debug metadata while cloning a VM. Message-ID: details: http://hg.nginx.org/njs/rev/5637024772aa branches: changeset: 423:5637024772aa user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Fixed inheriting debug metadata while cloning a VM. 
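The debug flag was not copied into the clone, so a VM created with
njs_vm_clone() (for example, the per-request VMs used by the nginx
modules) did not inherit its parent's debug (backtrace) setting. A
minimal sketch of the effect, using only calls that appear elsewhere in
this series; ctx->vm and func stand for a cloned VM and a function
looked up in it:

    nxt_str_t    exception;
    nxt_array_t  *backtrace;

    if (njs_vm_call(ctx->vm, func, ctx->args, 2) != NJS_OK) {
        (void) njs_vm_retval(ctx->vm, &exception);

        /* with vm->debug inherited, the cloned VM can now collect
         * a backtrace for the failed call as well */
        backtrace = njs_vm_backtrace(ctx->vm);
    }
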
diffstat: njs/njscript.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (12 lines): diff -r a83775113025 -r 5637024772aa njs/njscript.c --- a/njs/njscript.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njscript.c Fri Nov 17 18:55:07 2017 +0300 @@ -332,6 +332,8 @@ njs_vm_clone(njs_vm_t *vm, nxt_mem_cache nvm->global_scope = vm->global_scope; nvm->scope_size = vm->scope_size; + nvm->debug = vm->debug; + ret = njs_vm_init(nvm); if (nxt_slow_path(ret != NXT_OK)) { goto fail; From xeioex at nginx.com Fri Nov 17 16:06:04 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:04 +0000 Subject: [njs] Error builtin objects. Message-ID: details: http://hg.nginx.org/njs/rev/5bc8d7c25e4f branches: changeset: 420:5bc8d7c25e4f user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Error builtin objects. diffstat: Makefile | 16 + njs/njs_array.c | 9 +- njs/njs_boolean.c | 5 +- njs/njs_builtin.c | 87 ++++- njs/njs_date.c | 5 +- njs/njs_error.c | 865 +++++++++++++++++++++++++++++++++++++++++++ njs/njs_error.h | 77 +++ njs/njs_function.c | 14 +- njs/njs_generator.c | 14 +- njs/njs_json.c | 100 +--- njs/njs_lexer_keyword.c | 9 + njs/njs_number.c | 7 +- njs/njs_object.c | 71 ++- njs/njs_object_hash.h | 19 + njs/njs_parser.c | 36 + njs/njs_parser.h | 9 + njs/njs_regexp.c | 9 +- njs/njs_string.c | 15 +- njs/njs_vm.c | 57 ++- njs/njs_vm.h | 131 ++++- njs/njscript.c | 4 + njs/test/njs_expect_test.exp | 8 +- njs/test/njs_unit_test.c | 274 +++++++++++++ nxt/nxt_string.h | 3 + 24 files changed, 1661 insertions(+), 183 deletions(-) diffs (truncated from 2877 to 1000 lines): diff -r a6af47aab3f2 -r 5bc8d7c25e4f Makefile --- a/Makefile Fri Nov 17 18:55:07 2017 +0300 +++ b/Makefile Fri Nov 17 18:55:07 2017 +0300 @@ -20,6 +20,7 @@ NXT_BUILDDIR = build $(NXT_BUILDDIR)/njs_function.o \ $(NXT_BUILDDIR)/njs_regexp.o \ $(NXT_BUILDDIR)/njs_date.o \ + $(NXT_BUILDDIR)/njs_error.o \ $(NXT_BUILDDIR)/njs_math.o \ $(NXT_BUILDDIR)/njs_extern.o \ $(NXT_BUILDDIR)/njs_variable.o \ @@ -53,6 +54,7 @@ NXT_BUILDDIR = build $(NXT_BUILDDIR)/njs_function.o \ $(NXT_BUILDDIR)/njs_regexp.o \ $(NXT_BUILDDIR)/njs_date.o \ + $(NXT_BUILDDIR)/njs_error.o \ $(NXT_BUILDDIR)/njs_math.o \ $(NXT_BUILDDIR)/njs_extern.o \ $(NXT_BUILDDIR)/njs_variable.o \ @@ -271,6 +273,20 @@ dist: -I$(NXT_LIB) -Injs $(NXT_PCRE_CFLAGS) \ njs/njs_date.c +$(NXT_BUILDDIR)/njs_error.o: \ + $(NXT_BUILDDIR)/libnxt.a \ + njs/njscript.h \ + njs/njs_vm.h \ + njs/njs_string.h \ + njs/njs_object.h \ + njs/njs_function.h \ + njs/njs_error.h \ + njs/njs_error.c \ + + $(NXT_CC) -c -o $(NXT_BUILDDIR)/njs_error.o $(NXT_CFLAGS) \ + -I$(NXT_LIB) -Injs $(NXT_PCRE_CFLAGS) \ + njs/njs_error.c + $(NXT_BUILDDIR)/njs_math.o: \ $(NXT_BUILDDIR)/libnxt.a \ njs/njscript.h \ diff -r a6af47aab3f2 -r 5bc8d7c25e4f njs/njs_array.c --- a/njs/njs_array.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_array.c Fri Nov 17 18:55:07 2017 +0300 @@ -23,6 +23,7 @@ #include #include #include +#include #include @@ -248,7 +249,7 @@ njs_array_constructor(njs_vm_t *vm, njs_ size = (uint32_t) num; if ((double) size != num) { - vm->exception = &njs_exception_range_error; + njs_exception_range_error(vm, NULL, NULL); return NXT_ERROR; } @@ -1713,7 +1714,7 @@ njs_array_prototype_reduce(njs_vm_t *vm, n = njs_array_iterator_index(array, iter); if (n == NJS_ARRAY_INVALID_INDEX) { - vm->exception = &njs_exception_type_error; + njs_exception_type_error(vm, NULL, NULL); return NXT_ERROR; } @@ -1774,7 +1775,7 @@ njs_array_iterator_args(njs_vm_t *vm, nj return NXT_OK; } - vm->exception 
= &njs_exception_type_error; + njs_exception_type_error(vm, NULL, NULL); return NXT_ERROR; } @@ -1858,7 +1859,7 @@ njs_array_prototype_reduce_right(njs_vm_ unused); type_error: - vm->exception = &njs_exception_type_error; + njs_exception_type_error(vm, NULL, NULL); return NXT_ERROR; } diff -r a6af47aab3f2 -r 5bc8d7c25e4f njs/njs_boolean.c --- a/njs/njs_boolean.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_boolean.c Fri Nov 17 18:55:07 2017 +0300 @@ -18,6 +18,7 @@ #include #include #include +#include njs_ret_t @@ -98,7 +99,7 @@ njs_boolean_prototype_value_of(njs_vm_t value = &value->data.u.object_value->value; } else { - vm->exception = &njs_exception_type_error; + njs_exception_type_error(vm, NULL, NULL); return NXT_ERROR; } } @@ -123,7 +124,7 @@ njs_boolean_prototype_to_string(njs_vm_t value = &value->data.u.object_value->value; } else { - vm->exception = &njs_exception_type_error; + njs_exception_type_error(vm, NULL, NULL); return NXT_ERROR; } } diff -r a6af47aab3f2 -r 5bc8d7c25e4f njs/njs_builtin.c --- a/njs/njs_builtin.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_builtin.c Fri Nov 17 18:55:07 2017 +0300 @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -62,6 +63,15 @@ const njs_object_init_t *njs_prototype_ &njs_function_prototype_init, &njs_regexp_prototype_init, &njs_date_prototype_init, + &njs_error_prototype_init, + &njs_eval_error_prototype_init, + &njs_internal_error_prototype_init, + &njs_range_error_prototype_init, + &njs_ref_error_prototype_init, + &njs_syntax_error_prototype_init, + &njs_type_error_prototype_init, + &njs_uri_error_prototype_init, + &njs_memory_error_prototype_init, }; @@ -74,6 +84,15 @@ const njs_object_init_t *njs_construc &njs_function_constructor_init, &njs_regexp_constructor_init, &njs_date_constructor_init, + &njs_error_constructor_init, + &njs_eval_error_constructor_init, + &njs_internal_error_constructor_init, + &njs_range_error_constructor_init, + &njs_ref_error_constructor_init, + &njs_syntax_error_constructor_init, + &njs_type_error_constructor_init, + &njs_uri_error_constructor_init, + &njs_memory_error_constructor_init, }; @@ -126,6 +145,16 @@ njs_builtin_objects_create(njs_vm_t *vm) { .date = { .time = NAN, .object = { .type = NJS_DATE } } }, + + { .object = { .type = NJS_OBJECT_ERROR } }, + { .object = { .type = NJS_OBJECT_EVAL_ERROR } }, + { .object = { .type = NJS_OBJECT_INTERNAL_ERROR } }, + { .object = { .type = NJS_OBJECT_RANGE_ERROR } }, + { .object = { .type = NJS_OBJECT_REF_ERROR } }, + { .object = { .type = NJS_OBJECT_SYNTAX_ERROR } }, + { .object = { .type = NJS_OBJECT_TYPE_ERROR } }, + { .object = { .type = NJS_OBJECT_URI_ERROR } }, + { .object = { .type = NJS_OBJECT_INTERNAL_ERROR } }, }; static const njs_function_init_t native_constructors[] = { @@ -139,6 +168,18 @@ njs_builtin_objects_create(njs_vm_t *vm) { njs_regexp_constructor, { NJS_SKIP_ARG, NJS_STRING_ARG, NJS_STRING_ARG } }, { njs_date_constructor, { 0 } }, + { njs_error_constructor, { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_eval_error_constructor, { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_internal_error_constructor, + { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_range_error_constructor, + { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_ref_error_constructor, { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_syntax_error_constructor, + { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_type_error_constructor, { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_uri_error_constructor, { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_memory_error_constructor, { 
NJS_SKIP_ARG, NJS_STRING_ARG } }, }; static const njs_object_init_t *function_init[] = { @@ -309,6 +350,42 @@ njs_builtin_objects_create(njs_vm_t *vm) * Date.__proto__ -> Function_Prototype, * Date_Prototype.__proto__ -> Object_Prototype, * + * Error(), + * Error.__proto__ -> Function_Prototype, + * Error_Prototype.__proto__ -> Object_Prototype, + * + * EvalError(), + * EvalError.__proto__ -> Function_Prototype, + * EvalError_Prototype.__proto__ -> Error_Prototype, + * + * InternalError(), + * InternalError.__proto__ -> Function_Prototype, + * InternalError_Prototype.__proto__ -> Error_Prototype, + * + * RangeError(), + * RangeError.__proto__ -> Function_Prototype, + * RangeError_Prototype.__proto__ -> Error_Prototype, + * + * ReferenceError(), + * ReferenceError.__proto__ -> Function_Prototype, + * ReferenceError_Prototype.__proto__ -> Error_Prototype, + * + * SyntaxError(), + * SyntaxError.__proto__ -> Function_Prototype, + * SyntaxError_Prototype.__proto__ -> Error_Prototype, + * + * TypeError(), + * TypeError.__proto__ -> Function_Prototype, + * TypeError_Prototype.__proto__ -> Error_Prototype, + * + * URIError(), + * URIError.__proto__ -> Function_Prototype, + * URIError_Prototype.__proto__ -> Error_Prototype, + * + * MemoryError(), + * MemoryError.__proto__ -> Function_Prototype, + * MemoryError_Prototype.__proto__ -> Error_Prototype, + * * eval(), * eval.__proto__ -> Function_Prototype. */ @@ -319,7 +396,7 @@ njs_builtin_objects_clone(njs_vm_t *vm) size_t size; nxt_uint_t i; njs_value_t *values; - njs_object_t *object_prototype, *function_prototype; + njs_object_t *object_prototype, *function_prototype, *error_prototype; /* * Copy both prototypes and constructors arrays by one memcpy() @@ -332,10 +409,16 @@ njs_builtin_objects_clone(njs_vm_t *vm) object_prototype = &vm->prototypes[NJS_PROTOTYPE_OBJECT].object; - for (i = NJS_PROTOTYPE_ARRAY; i < NJS_PROTOTYPE_MAX; i++) { + for (i = NJS_PROTOTYPE_ARRAY; i < NJS_PROTOTYPE_EVAL_ERROR; i++) { vm->prototypes[i].object.__proto__ = object_prototype; } + error_prototype = &vm->prototypes[NJS_PROTOTYPE_ERROR].object; + + for (i = NJS_PROTOTYPE_EVAL_ERROR; i < NJS_PROTOTYPE_MAX; i++) { + vm->prototypes[i].object.__proto__ = error_prototype; + } + function_prototype = &vm->prototypes[NJS_CONSTRUCTOR_FUNCTION].object; values = vm->scopes[NJS_SCOPE_GLOBAL]; diff -r a6af47aab3f2 -r 5bc8d7c25e4f njs/njs_date.c --- a/njs/njs_date.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_date.c Fri Nov 17 18:55:07 2017 +0300 @@ -25,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -1062,7 +1063,7 @@ njs_date_prototype_to_iso_string(njs_vm_ return njs_string_new(vm, &vm->retval, buf, size, size); } - vm->exception = &njs_exception_range_error; + njs_exception_range_error(vm, NULL, NULL); return NXT_ERROR; } @@ -1910,7 +1911,7 @@ njs_date_prototype_to_json(njs_vm_t *vm, } } - vm->exception = &njs_exception_type_error; + njs_exception_type_error(vm, NULL, NULL); return NXT_ERROR; } diff -r a6af47aab3f2 -r 5bc8d7c25e4f njs/njs_error.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/njs/njs_error.c Fri Nov 17 18:55:07 2017 +0300 @@ -0,0 +1,865 @@ + +/* + * Copyright (C) Dmitry Volyntsev + * Copyright (C) NGINX, Inc. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +static const njs_value_t njs_error_message_string = njs_string("message"); +static const njs_value_t njs_error_name_string = njs_string("name"); + + +void +njs_exception_error_create(njs_vm_t *vm, njs_value_type_t type, + const char* fmt, ...) +{ + size_t size; + va_list args; + nxt_int_t ret; + njs_value_t string, *value; + njs_object_t *error; + + static char buf[256]; + + if (fmt != NULL) { + va_start(args, fmt); + size = vsnprintf(buf, sizeof(buf), fmt, args); + va_end(args); + + } else { + size = 0; + } + + ret = njs_string_new(vm, &string, (const u_char *) buf, size, size); + if (nxt_slow_path(ret != NXT_OK)) { + goto memory_error; + } + + error = njs_error_alloc(vm, type, NULL, &string); + if (nxt_slow_path(error == NULL)) { + goto memory_error; + } + + value = nxt_mem_cache_alloc(vm->mem_cache_pool, sizeof(njs_value_t)); + if (nxt_slow_path(value == NULL)) { + goto memory_error; + } + + value->data.u.object = error; + value->type = type; + value->data.truth = 1; + + vm->exception = value; + + return; + +memory_error: + + njs_exception_memory_error(vm); +} + + +nxt_noinline njs_object_t * +njs_error_alloc(njs_vm_t *vm, njs_value_type_t type, const njs_value_t *name, + const njs_value_t *message) +{ + nxt_int_t ret; + njs_object_t *error; + njs_object_prop_t *prop; + nxt_lvlhsh_query_t lhq; + + error = nxt_mem_cache_alloc(vm->mem_cache_pool, sizeof(njs_object_t)); + if (nxt_slow_path(error == NULL)) { + return NULL; + } + + nxt_lvlhsh_init(&error->hash); + nxt_lvlhsh_init(&error->shared_hash); + error->type = type; + error->shared = 0; + error->extensible = 1; + error->__proto__ = &vm->prototypes[njs_error_prototype_index(type)].object; + + lhq.replace = 0; + lhq.pool = vm->mem_cache_pool; + + if (name != NULL) { + lhq.key = nxt_string_value("name"); + lhq.key_hash = NJS_NAME_HASH; + lhq.proto = &njs_object_hash_proto; + + prop = njs_object_prop_alloc(vm, &njs_error_name_string, name, 1); + if (nxt_slow_path(prop == NULL)) { + return NULL; + } + + lhq.value = prop; + + ret = nxt_lvlhsh_insert(&error->hash, &lhq); + if (nxt_slow_path(ret != NXT_OK)) { + return NULL; + } + } + + if (message!= NULL) { + lhq.key = nxt_string_value("message"); + lhq.key_hash = NJS_MESSAGE_HASH; + lhq.proto = &njs_object_hash_proto; + + prop = njs_object_prop_alloc(vm, &njs_error_message_string, message, 1); + if (nxt_slow_path(prop == NULL)) { + return NULL; + } + + prop->enumerable = 0; + + lhq.value = prop; + + ret = nxt_lvlhsh_insert(&error->hash, &lhq); + if (nxt_slow_path(ret != NXT_OK)) { + return NULL; + } + } + + return error; +} + + +static njs_ret_t +njs_error_create(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_value_type_t type) +{ + njs_object_t *error; + const njs_value_t *value; + + if (nargs == 1) { + value = &njs_string_empty; + + } else { + value = &args[1]; + } + + error = njs_error_alloc(vm, type, NULL, value); + if (nxt_slow_path(error == NULL)) { + njs_exception_memory_error(vm); + return NXT_ERROR; + } + + vm->retval.data.u.object = error; + vm->retval.type = type; + vm->retval.data.truth = 1; + + return NXT_OK; +} + + +njs_ret_t +njs_error_constructor(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_ERROR); +} + + +static const njs_object_prop_t njs_error_constructor_properties[] = 
+{ + /* Error.name == "Error". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("Error"), + }, + + /* Error.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* Error.prototype. */ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_error_constructor_init = { + nxt_string("Error"), + njs_error_constructor_properties, + nxt_nitems(njs_error_constructor_properties), +}; + + +njs_ret_t +njs_eval_error_constructor(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_EVAL_ERROR); +} + + +static const njs_object_prop_t njs_eval_error_constructor_properties[] = +{ + /* EvalError.name == "EvalError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("EvalError"), + }, + + /* EvalError.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* EvalError.prototype. */ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_eval_error_constructor_init = { + nxt_string("EvalError"), + njs_eval_error_constructor_properties, + nxt_nitems(njs_eval_error_constructor_properties), +}; + + +njs_ret_t +njs_internal_error_constructor(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_INTERNAL_ERROR); +} + + +static const njs_object_prop_t njs_internal_error_constructor_properties[] = +{ + /* InternalError.name == "InternalError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("InternalError"), + }, + + /* InternalError.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* InternalError.prototype. */ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_internal_error_constructor_init = { + nxt_string("InternalError"), + njs_internal_error_constructor_properties, + nxt_nitems(njs_internal_error_constructor_properties), +}; + + +njs_ret_t +njs_range_error_constructor(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_RANGE_ERROR); +} + + +static const njs_object_prop_t njs_range_error_constructor_properties[] = +{ + /* RangeError.name == "RangeError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("RangeError"), + }, + + /* RangeError.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* RangeError.prototype. 
*/ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_range_error_constructor_init = { + nxt_string("RangeError"), + njs_range_error_constructor_properties, + nxt_nitems(njs_range_error_constructor_properties), +}; + + +njs_ret_t +njs_ref_error_constructor(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_REF_ERROR); +} + + +static const njs_object_prop_t njs_ref_error_constructor_properties[] = +{ + /* ReferenceError.name == "ReferenceError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("ReferenceError"), + }, + + /* ReferenceError.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* ReferenceError.prototype. */ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_ref_error_constructor_init = { + nxt_string("ReferenceError"), + njs_ref_error_constructor_properties, + nxt_nitems(njs_ref_error_constructor_properties), +}; + + +njs_ret_t +njs_syntax_error_constructor(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_SYNTAX_ERROR); +} + + +static const njs_object_prop_t njs_syntax_error_constructor_properties[] = +{ + /* SyntaxError.name == "SyntaxError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("SyntaxError"), + }, + + /* SyntaxError.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* SyntaxError.prototype. */ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_syntax_error_constructor_init = { + nxt_string("SyntaxError"), + njs_syntax_error_constructor_properties, + nxt_nitems(njs_syntax_error_constructor_properties), +}; + + +njs_ret_t +njs_type_error_constructor(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_TYPE_ERROR); +} + + +static const njs_object_prop_t njs_type_error_constructor_properties[] = +{ + /* TypeError.name == "TypeError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("TypeError"), + }, + + /* TypeError.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* TypeError.prototype. */ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_type_error_constructor_init = { + nxt_string("TypeError"), + njs_type_error_constructor_properties, + nxt_nitems(njs_type_error_constructor_properties), +}; + + +njs_ret_t +njs_uri_error_constructor(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_error_create(vm, args, nargs, NJS_OBJECT_URI_ERROR); +} + + +static const njs_object_prop_t njs_uri_error_constructor_properties[] = +{ + /* URIError.name == "URIError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("URIError"), + }, + + /* URIError.length == 1. 
*/ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* URIError.prototype. */ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_uri_error_constructor_init = { + nxt_string("URIError"), + njs_uri_error_constructor_properties, + nxt_nitems(njs_uri_error_constructor_properties), +}; + + +static void +njs_init_memory_error(njs_vm_t *vm) +{ + njs_value_t *value; + njs_object_t *object; + njs_object_prototype_t *prototypes; + + prototypes = vm->prototypes; + object = &vm->memory_error_object; + + nxt_lvlhsh_init(&object->hash); + nxt_lvlhsh_init(&object->shared_hash); + object->__proto__ = &prototypes[NJS_PROTOTYPE_MEMORY_ERROR].object; + object->type = NJS_OBJECT_INTERNAL_ERROR; + object->shared = 1; + + /* + * Marking it nonextensible to differentiate + * it from ordinary internal errors. + */ + object->extensible = 0; + + value = &vm->memory_error; + + value->data.type = NJS_OBJECT_INTERNAL_ERROR; + value->data.truth = 1; + value->data.u.number = NAN; + value->data.u.object = object; +} + + +void +njs_exception_memory_error(njs_vm_t *vm) +{ + njs_init_memory_error(vm); + + vm->exception = &vm->memory_error; +} + + +njs_ret_t +njs_memory_error_constructor(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + njs_init_memory_error(vm); + + vm->retval = vm->memory_error; + + return NXT_OK; +} + + +static const njs_object_prop_t njs_memory_error_constructor_properties[] = +{ + /* MemoryError.name == "MemoryError". */ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("MemoryError"), + }, + + /* MemoryError.length == 1. */ + { + .type = NJS_PROPERTY, + .name = njs_string("length"), + .value = njs_value(NJS_NUMBER, 1, 1.0), + }, + + /* MemoryError.prototype. 
*/ + { + .type = NJS_NATIVE_GETTER, + .name = njs_string("prototype"), + .value = njs_native_getter(njs_object_prototype_create), + }, +}; + + +const njs_object_init_t njs_memory_error_constructor_init = { + nxt_string("MemoryError"), + njs_memory_error_constructor_properties, + nxt_nitems(njs_memory_error_constructor_properties), +}; + + +static njs_ret_t +njs_error_prototype_value_of(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + vm->retval = args[0]; + + return NXT_OK; +} + + +static njs_ret_t +njs_error_prototype_to_string(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + size_t size; + u_char *p; + nxt_str_t name, message; + const njs_value_t *name_value, *message_value; + njs_object_prop_t *prop; + nxt_lvlhsh_query_t lhq; + + static const njs_value_t default_name = njs_string("Error"); + + if (nargs < 1 || !njs_is_object(&args[0])) { + njs_exception_type_error(vm, NULL, NULL); + return NXT_ERROR; + } + + lhq.key_hash = NJS_NAME_HASH; + lhq.key = nxt_string_value("name"); + lhq.proto = &njs_object_hash_proto; + + prop = njs_object_property(vm, args[0].data.u.object, &lhq); + + if (prop != NULL) { + name_value = &prop->value; + + } else { + name_value = &default_name; + } + + njs_string_get(name_value, &name); + + lhq.key_hash = NJS_MESSAGE_HASH; + lhq.key = nxt_string_value("message"); + + prop = njs_object_property(vm, args[0].data.u.object, &lhq); + + if (prop != NULL) { + message_value = &prop->value; + + } else { + message_value = &njs_string_empty; + } + + njs_string_get(message_value, &message); + + if (name.length == 0) { + vm->retval = *message_value; + return NJS_OK; + } + + if (message.length == 0) { + vm->retval = *name_value; + return NJS_OK; + } + + size = name.length + message.length + 2; + + p = njs_string_alloc(vm, &vm->retval, size, size); + + if (nxt_fast_path(p != NULL)) { + p = nxt_cpymem(p, name.start, name.length); + *p++ = ':'; + *p++ = ' '; + memcpy(p, message.start, message.length); + + return NJS_OK; + } + + njs_exception_memory_error(vm); + return NJS_ERROR; +} + + +static const njs_object_prop_t njs_error_prototype_properties[] = +{ + { + .type = NJS_PROPERTY, + .name = njs_string("name"), + .value = njs_string("Error"), + }, + + { + .type = NJS_PROPERTY, + .name = njs_string("message"), + .value = njs_string(""), + }, + + { + .type = NJS_METHOD, + .name = njs_string("valueOf"), + .value = njs_native_function(njs_error_prototype_value_of, 0, 0), + }, + + { + .type = NJS_METHOD, + .name = njs_string("toString"), + .value = njs_native_function(njs_error_prototype_to_string, 0, 0), + }, +}; + + +const njs_object_init_t njs_error_prototype_init = { + nxt_string("Error"), + njs_error_prototype_properties, + nxt_nitems(njs_error_prototype_properties), +}; + + +static const njs_object_prop_t njs_eval_error_prototype_properties[] = +{ From xeioex at nginx.com Fri Nov 17 16:06:04 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:04 +0000 Subject: [njs] Fixed exception handling. Message-ID: details: http://hg.nginx.org/njs/rev/a83775113025 branches: changeset: 422:a83775113025 user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Fixed exception handling. njs_vm_exception() is removed and combined with njs_vm_retval(). vm->exception is removed either, exceptions are now stored in vm->retval. It simplifies the client logic, because previously njs_vm_exception() had to be called if njs_vm_retval() fails. 
Additonally, stack traces are now appended to the retval if an exception happens. diffstat: nginx/ngx_http_js_module.c | 6 +- nginx/ngx_stream_js_module.c | 8 +- njs/njs.c | 56 +++----------------- njs/njs_error.c | 36 ++++--------- njs/njs_function.c | 2 + njs/njs_generator.c | 6 +- njs/njs_parser.c | 108 ++++++++++++++++++++++++--------------- njs/njs_parser.h | 4 + njs/njs_parser_expression.c | 25 ++++---- njs/njs_regexp.c | 34 +++++------- njs/njs_variable.c | 16 ++-- njs/njs_vm.c | 19 ------- njs/njs_vm.h | 5 - njs/njscript.c | 91 ++++++++++++++++++++++++++++---- njs/njscript.h | 1 - njs/test/njs_benchmark.c | 9 +-- njs/test/njs_expect_test.exp | 16 +++++ njs/test/njs_interactive_test.c | 99 +++++++++++------------------------ njs/test/njs_unit_test.c | 46 +++++++++++----- 19 files changed, 293 insertions(+), 294 deletions(-) diffs (truncated from 1269 to 1000 lines): diff -r 84a95e20f93a -r a83775113025 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Fri Nov 17 18:55:07 2017 +0300 +++ b/nginx/ngx_http_js_module.c Fri Nov 17 18:55:07 2017 +0300 @@ -445,7 +445,7 @@ ngx_http_js_handler(ngx_http_request_t * } if (njs_vm_call(ctx->vm, func, ctx->args, 2) != NJS_OK) { - njs_vm_exception(ctx->vm, &exception); + njs_vm_retval(ctx->vm, &exception); ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "js exception: %*s", exception.length, exception.start); @@ -496,7 +496,7 @@ ngx_http_js_variable(ngx_http_request_t } if (njs_vm_call(ctx->vm, func, ctx->args, 2) != NJS_OK) { - njs_vm_exception(ctx->vm, &exception); + njs_vm_retval(ctx->vm, &exception); ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "js exception: %*s", exception.length, exception.start); @@ -1333,7 +1333,7 @@ ngx_http_js_include(ngx_conf_t *cf, ngx_ rc = njs_vm_compile(jlcf->vm, &start, end); if (rc != NJS_OK) { - njs_vm_exception(jlcf->vm, &text); + njs_vm_retval(jlcf->vm, &text); ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "%*s, included", diff -r 84a95e20f93a -r a83775113025 nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Fri Nov 17 18:55:07 2017 +0300 +++ b/nginx/ngx_stream_js_module.c Fri Nov 17 18:55:07 2017 +0300 @@ -408,7 +408,7 @@ ngx_stream_js_phase_handler(ngx_stream_s } if (njs_vm_call(ctx->vm, func, ctx->arg, 1) != NJS_OK) { - njs_vm_exception(ctx->vm, &exception); + njs_vm_retval(ctx->vm, &exception); ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", exception.length, exception.start); @@ -495,7 +495,7 @@ ngx_stream_js_body_filter(ngx_stream_ses ctx->buf = in->buf; if (njs_vm_call(ctx->vm, func, ctx->arg, 1) != NJS_OK) { - njs_vm_exception(ctx->vm, &exception); + njs_vm_retval(ctx->vm, &exception); ngx_log_error(NGX_LOG_ERR, c->log, 0, "js exception: %*s", exception.length, exception.start); @@ -593,7 +593,7 @@ ngx_stream_js_variable(ngx_stream_sessio } if (njs_vm_call(ctx->vm, func, ctx->arg, 1) != NJS_OK) { - njs_vm_exception(ctx->vm, &exception); + njs_vm_retval(ctx->vm, &exception); ngx_log_error(NGX_LOG_ERR, s->connection->log, 0, "js exception: %*s", exception.length, exception.start); @@ -1043,7 +1043,7 @@ ngx_stream_js_include(ngx_conf_t *cf, ng rc = njs_vm_compile(jscf->vm, &start, end); if (rc != NJS_OK) { - njs_vm_exception(jscf->vm, &text); + njs_vm_retval(jscf->vm, &text); ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "%*s, included", diff -r 84a95e20f93a -r a83775113025 njs/njs.c --- a/njs/njs.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs.c Fri Nov 17 18:55:07 2017 +0300 @@ -64,7 +64,6 @@ static nxt_int_t njs_interactive_shell(n static nxt_int_t 
njs_process_file(njs_opts_t *opts, njs_vm_opt_t *vm_options); static nxt_int_t njs_process_script(njs_vm_t *vm, njs_opts_t *opts, const nxt_str_t *script, nxt_str_t *out); -static void njs_print_backtrace(nxt_array_t *backtrace); static nxt_int_t njs_editline_init(njs_vm_t *vm); static char **njs_completion_handler(const char *text, int start, int end); static char *njs_completion_generator(const char *text, int state); @@ -240,10 +239,9 @@ njs_externals_init(njs_opts_t *opts, njs static nxt_int_t njs_interactive_shell(njs_opts_t *opts, njs_vm_opt_t *vm_options) { - njs_vm_t *vm; - nxt_int_t ret; - nxt_str_t line, out; - nxt_array_t *backtrace; + njs_vm_t *vm; + nxt_int_t ret; + nxt_str_t line, out; vm = njs_vm_create(vm_options); if (vm == NULL) { @@ -282,11 +280,6 @@ njs_interactive_shell(njs_opts_t *opts, printf("%.*s\n", (int) out.length, out.start); - backtrace = njs_vm_backtrace(vm); - if (backtrace != NULL) { - njs_print_backtrace(backtrace); - } - /* editline allocs a new buffer every time. */ free(line.start); } @@ -307,7 +300,6 @@ njs_process_file(njs_opts_t *opts, njs_v nxt_int_t ret; nxt_str_t out, script; struct stat sb; - nxt_array_t *backtrace; file = opts->file; @@ -399,11 +391,6 @@ njs_process_file(njs_opts_t *opts, njs_v if (!opts->disassemble) { printf("%.*s\n", (int) out.length, out.start); - - backtrace = njs_vm_backtrace(vm); - if (backtrace != NULL) { - njs_print_backtrace(backtrace); - } } ret = NXT_OK; @@ -442,44 +429,19 @@ njs_process_script(njs_vm_t *vm, njs_opt } ret = njs_vm_run(vm); - - if (ret == NXT_OK) { - if (njs_vm_retval(vm, out) != NXT_OK) { - return NXT_ERROR; - } + if (ret == NXT_AGAIN) { + return ret; + } + } - } else { - njs_vm_exception(vm, out); - } - - } else { - njs_vm_exception(vm, out); + if (njs_vm_retval(vm, out) != NXT_OK) { + return NXT_ERROR; } return NXT_OK; } -static void -njs_print_backtrace(nxt_array_t *backtrace) -{ - nxt_uint_t i; - njs_backtrace_entry_t *be; - - be = backtrace->start; - - for (i = 0; i < backtrace->items; i++) { - if (be[i].line != 0) { - printf("at %.*s (:%d)\n", (int) be[i].name.length, be[i].name.start, - be[i].line); - - } else { - printf("at %.*s\n", (int) be[i].name.length, be[i].name.start); - } - } -} - - static nxt_int_t njs_editline_init(njs_vm_t *vm) { diff -r 84a95e20f93a -r a83775113025 njs/njs_error.c --- a/njs/njs_error.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_error.c Fri Nov 17 18:55:07 2017 +0300 @@ -37,7 +37,7 @@ njs_exception_error_create(njs_vm_t *vm, size_t size; va_list args; nxt_int_t ret; - njs_value_t string, *value; + njs_value_t string; njs_object_t *error; static char buf[256]; @@ -61,16 +61,9 @@ njs_exception_error_create(njs_vm_t *vm, goto memory_error; } - value = nxt_mem_cache_alloc(vm->mem_cache_pool, sizeof(njs_value_t)); - if (nxt_slow_path(value == NULL)) { - goto memory_error; - } - - value->data.u.object = error; - value->type = type; - value->data.truth = 1; - - vm->exception = value; + vm->retval.data.u.object = error; + vm->retval.type = type; + vm->retval.data.truth = 1; return; @@ -495,9 +488,8 @@ const njs_object_init_t njs_uri_error_c static void -njs_init_memory_error(njs_vm_t *vm) +njs_set_memory_error(njs_vm_t *vm) { - njs_value_t *value; njs_object_t *object; njs_object_prototype_t *prototypes; @@ -516,21 +508,17 @@ njs_init_memory_error(njs_vm_t *vm) */ object->extensible = 0; - value = &vm->memory_error; - - value->data.type = NJS_OBJECT_INTERNAL_ERROR; - value->data.truth = 1; - value->data.u.number = NAN; - value->data.u.object = object; + 
vm->retval.data.type = NJS_OBJECT_INTERNAL_ERROR; + vm->retval.data.truth = 1; + vm->retval.data.u.number = NAN; + vm->retval.data.u.object = object; } void njs_exception_memory_error(njs_vm_t *vm) { - njs_init_memory_error(vm); - - vm->exception = &vm->memory_error; + njs_set_memory_error(vm); } @@ -538,9 +526,7 @@ njs_ret_t njs_memory_error_constructor(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, njs_index_t unused) { - njs_init_memory_error(vm); - - vm->retval = vm->memory_error; + njs_set_memory_error(vm); return NXT_OK; } diff -r 84a95e20f93a -r a83775113025 njs/njs_function.c --- a/njs/njs_function.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_function.c Fri Nov 17 18:55:07 2017 +0300 @@ -701,6 +701,8 @@ njs_ret_t njs_eval_function(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, njs_index_t unused) { + njs_exception_internal_error(vm, "Not implemented", NULL); + return NXT_ERROR; } diff -r 84a95e20f93a -r a83775113025 njs/njs_generator.c --- a/njs/njs_generator.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_generator.c Fri Nov 17 18:55:07 2017 +0300 @@ -1150,8 +1150,7 @@ njs_generate_continue_statement(njs_vm_t } } - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Illegal continue statement"); + njs_parser_syntax_error(vm, parser, "Illegal continue statement", NULL); return NXT_ERROR; @@ -1194,8 +1193,7 @@ njs_generate_break_statement(njs_vm_t *v } } - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Illegal break statement"); + njs_parser_syntax_error(vm, parser, "Illegal break statement", NULL); return NXT_ERROR; diff -r 84a95e20f93a -r a83775113025 njs/njs_parser.c --- a/njs/njs_parser.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_parser.c Fri Nov 17 18:55:07 2017 +0300 @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -196,9 +197,9 @@ njs_parser_scope_begin(njs_vm_t *vm, njs break; } - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, "SyntaxError: " - "The maximum function nesting level is \"%d\"", - NJS_MAX_NESTING); + njs_parser_syntax_error(vm, parser, + "The maximum function nesting " + "level is \"%d\"", NJS_MAX_NESTING); return NXT_ERROR; } @@ -310,7 +311,7 @@ njs_parser_statement_chain(njs_vm_t *vm, } } - } else if (vm->exception == NULL) { + } else if (!njs_is_error(&vm->retval)) { (void) njs_parser_unexpected_token(vm, parser, token); } @@ -770,8 +771,8 @@ njs_parser_return_statement(njs_vm_t *vm scope = scope->parent) { if (scope->type == NJS_SCOPE_GLOBAL) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Illegal return statement"); + njs_parser_syntax_error(vm, parser, "Illegal return statement", + NULL); return NXT_ERROR; } @@ -1053,9 +1054,9 @@ njs_parser_switch_statement(njs_vm_t *vm } else { if (dflt != NULL) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: More than one default clause " - "in switch statement"); + njs_parser_syntax_error(vm, parser, + "More than one default clause " + "in switch statement", NULL); return NJS_TOKEN_ILLEGAL; } @@ -1461,9 +1462,9 @@ njs_parser_for_in_statement(njs_vm_t *vm node = parser->node->left; if (node->token != NJS_TOKEN_NAME) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, "ReferenceError: Invalid " - "left-hand side \"%.*s\" in for-in statement", - (int) name->length, name->start); + njs_parser_ref_error(vm, parser, "Invalid left-hand side \"%.*s\" " + "in for-in statement", (int) name->length, + name->start); return NJS_TOKEN_ILLEGAL; } @@ -1686,8 +1687,8 @@ njs_parser_try_statement(njs_vm_t *vm, n } if (try->right == NULL) { - nxt_alert(&vm->trace, 
NXT_LEVEL_ERROR, - "SyntaxError: Missing catch or finally after try"); + njs_parser_syntax_error(vm, parser, "Missing catch or " + "finally after try", NULL); return NJS_TOKEN_ILLEGAL; } @@ -1936,9 +1937,9 @@ njs_parser_terminal(njs_vm_t *vm, njs_pa break; case NJS_TOKEN_UNTERMINATED_STRING: - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Unterminated string \"%.*s\"", - (int) parser->lexer->text.length, parser->lexer->text.start); + njs_parser_syntax_error(vm, parser, "Unterminated string \"%.*s\"", + (int) parser->lexer->text.length, + parser->lexer->text.start); return NJS_TOKEN_ILLEGAL; @@ -2540,9 +2541,9 @@ njs_parser_escape_string_create(njs_vm_t invalid: - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Invalid Unicode code point \"%.*s\"", - (int) parser->lexer->text.length, parser->lexer->text.start); + njs_parser_syntax_error(vm, parser, "Invalid Unicode code point \"%.*s\"", + (int) parser->lexer->text.length, + parser->lexer->text.start); return NJS_TOKEN_ILLEGAL; } @@ -2584,13 +2585,12 @@ njs_parser_unexpected_token(njs_vm_t *vm njs_token_t token) { if (token != NJS_TOKEN_END) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Unexpected token \"%.*s\"", - (int) parser->lexer->text.length, parser->lexer->text.start); + njs_parser_syntax_error(vm, parser, "Unexpected token \"%.*s\"", + (int) parser->lexer->text.length, + parser->lexer->text.start); } else { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Unexpected end of input"); + njs_parser_syntax_error(vm, parser, "Unexpected end of input", NULL); } return NJS_TOKEN_ILLEGAL; @@ -2601,18 +2601,13 @@ u_char * njs_parser_trace_handler(nxt_trace_t *trace, nxt_trace_data_t *td, u_char *start) { - int n; u_char *p; - ssize_t size; + size_t size; njs_vm_t *vm; - p = start; - - if (td->level == NXT_LEVEL_CRIT) { - size = sizeof("InternalError: ") - 1; - memcpy(p, "InternalError: ", size); - p = start + size; - } + size = sizeof("InternalError: ") - 1; + memcpy(start, "InternalError: ", size); + p = start + size; vm = trace->data; @@ -2620,16 +2615,43 @@ njs_parser_trace_handler(nxt_trace_t *tr p = trace->handler(trace, td, p); if (vm->parser != NULL) { - size = td->end - start; - - n = snprintf((char *) p, size, " in %u", vm->parser->lexer->line); - - if (n < size) { - p += n; - } + njs_exception_internal_error(vm, "%s in %u", start, + vm->parser->lexer->line); + } else { + njs_exception_internal_error(vm, "%s", start); } - njs_vm_throw_exception(vm, start, p - start); - return p; } + + +void +njs_parser_syntax_error(njs_vm_t *vm, njs_parser_t *parser, const char* fmt, + ...) +{ + va_list args; + + static char buf[256]; + + va_start(args, fmt); + (void) vsnprintf(buf, sizeof(buf), fmt, args); + va_end(args); + + njs_exception_syntax_error(vm, "%s in %u", buf, parser->lexer->line); +} + + +void +njs_parser_ref_error(njs_vm_t *vm, njs_parser_t *parser, const char* fmt, + ...) 
+{ + va_list args; + + static char buf[256]; + + va_start(args, fmt); + (void) vsnprintf(buf, sizeof(buf), fmt, args); + va_end(args); + + njs_exception_ref_error(vm, "%s in %u", buf, parser->lexer->line); +} diff -r 84a95e20f93a -r a83775113025 njs/njs_parser.h --- a/njs/njs_parser.h Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_parser.h Fri Nov 17 18:55:07 2017 +0300 @@ -381,6 +381,10 @@ njs_index_t njs_variable_index(njs_vm_t nxt_bool_t njs_parser_has_side_effect(njs_parser_node_t *node); u_char *njs_parser_trace_handler(nxt_trace_t *trace, nxt_trace_data_t *td, u_char *start); +void njs_parser_syntax_error(njs_vm_t *vm, njs_parser_t *parser, + const char* fmt, ...); +void njs_parser_ref_error(njs_vm_t *vm, njs_parser_t *parser, const char* fmt, + ...); nxt_int_t njs_generate_scope(njs_vm_t *vm, njs_parser_t *parser, njs_parser_node_t *node); diff -r 84a95e20f93a -r a83775113025 njs/njs_parser_expression.c --- a/njs/njs_parser_expression.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_parser_expression.c Fri Nov 17 18:55:07 2017 +0300 @@ -294,8 +294,8 @@ njs_parser_var_expression(njs_vm_t *vm, } if (!njs_parser_is_lvalue(parser->node)) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "ReferenceError: Invalid left-hand side in assignment"); + njs_parser_ref_error(vm, parser, + "Invalid left-hand side in assignment", NULL); return NJS_TOKEN_ILLEGAL; } @@ -432,8 +432,8 @@ njs_parser_assignment_expression(njs_vm_ } if (!njs_parser_is_lvalue(parser->node)) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "ReferenceError: Invalid left-hand side in assignment"); + njs_parser_ref_error(vm, parser, "Invalid left-hand side " + "in assignment", NULL); return NJS_TOKEN_ILLEGAL; } @@ -756,9 +756,8 @@ njs_parser_unary_expression(njs_vm_t *vm } if (next == NJS_TOKEN_EXPONENTIATION) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Either left-hand side or entire exponentiation " - "must be parenthesized"); + njs_parser_syntax_error(vm, parser, "Either left-hand side or entire " + "exponentiation must be parenthesized"); return NJS_TOKEN_ILLEGAL; } @@ -796,8 +795,8 @@ njs_parser_unary_expression(njs_vm_t *vm case NJS_TOKEN_NAME: case NJS_TOKEN_UNDEFINED: - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Delete of an unqualified identifier"); + njs_parser_syntax_error(vm, parser, + "Delete of an unqualified identifier", NULL); return NJS_TOKEN_ILLEGAL; @@ -856,8 +855,8 @@ njs_parser_inc_dec_expression(njs_vm_t * } if (!njs_parser_is_lvalue(parser->node)) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "ReferenceError: Invalid left-hand side in prefix operation"); + njs_parser_ref_error(vm, parser, "Invalid left-hand side " + "in prefix operation", NULL); return NJS_TOKEN_ILLEGAL; } @@ -911,8 +910,8 @@ njs_parser_post_inc_dec_expression(njs_v } if (!njs_parser_is_lvalue(parser->node)) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "ReferenceError: Invalid left-hand side in postfix operation"); + njs_parser_ref_error(vm, parser, "Invalid left-hand side " + "in postfix operation", NULL); return NJS_TOKEN_ILLEGAL; } diff -r 84a95e20f93a -r a83775113025 njs/njs_regexp.c --- a/njs/njs_regexp.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_regexp.c Fri Nov 17 18:55:07 2017 +0300 @@ -178,9 +178,9 @@ njs_regexp_literal(njs_vm_t *vm, njs_par flags = njs_regexp_flags(&p, lexer->end, 0); if (nxt_slow_path(flags < 0)) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Invalid RegExp flags \"%.*s\"", - p - lexer->start, lexer->start); + njs_parser_syntax_error(vm, parser, + "Invalid RegExp flags \"%.*s\"", + 
p - lexer->start, lexer->start); return NJS_TOKEN_ILLEGAL; } @@ -199,9 +199,8 @@ njs_regexp_literal(njs_vm_t *vm, njs_par } } - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Unterminated RegExp \"%.*s\"", - p - lexer->start - 1, lexer->start - 1); + njs_parser_syntax_error(vm, parser, "Unterminated RegExp \"%.*s\"", + p - lexer->start - 1, lexer->start - 1); return NJS_TOKEN_ILLEGAL; } @@ -386,9 +385,8 @@ static u_char * njs_regexp_compile_trace_handler(nxt_trace_t *trace, nxt_trace_data_t *td, u_char *start) { - int n; u_char *p; - ssize_t size; + size_t size; njs_vm_t *vm; size = sizeof("SyntaxError: ") - 1; @@ -398,20 +396,16 @@ njs_regexp_compile_trace_handler(nxt_tra vm = trace->data; trace = trace->next; - p = trace->handler(trace, td, p); + p = trace->handler(trace, td, start); if (vm->parser != NULL) { - size = td->end - start; - - n = snprintf((char *) p, size, " in %u", vm->parser->lexer->line); + njs_exception_syntax_error(vm, "%s in %u", start, + vm->parser->lexer->line); - if (n < size) { - p += n; - } + } else { + njs_exception_syntax_error(vm, "%s", start); } - njs_vm_throw_exception(vm, start, p - start); - return p; } @@ -442,8 +436,8 @@ njs_regexp_match_trace_handler(nxt_trace size_t size; njs_vm_t *vm; - size = sizeof("RegExpError: ") - 1; - memcpy(start, "RegExpError: ", size); + size = sizeof("InternalError: ") - 1; + memcpy(start, "InternalError: ", size); p = start + size; vm = trace->data; @@ -451,7 +445,7 @@ njs_regexp_match_trace_handler(nxt_trace trace = trace->next; p = trace->handler(trace, td, p); - njs_vm_throw_exception(vm, start, p - start); + njs_exception_internal_error(vm, (const char *) start, NULL); return p; } diff -r 84a95e20f93a -r a83775113025 njs/njs_variable.c --- a/njs/njs_variable.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_variable.c Fri Nov 17 18:55:07 2017 +0300 @@ -20,6 +20,7 @@ #include #include #include +#include #include @@ -156,9 +157,9 @@ njs_variable_add(njs_vm_t *vm, njs_parse /* ret == NXT_DECLINED. */ - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "SyntaxError: Identifier \"%.*s\" has already been declared", - (int) lhq.key.length, lhq.key.start); + njs_parser_syntax_error(vm, parser, "Identifier \"%.*s\" " + "has already been declared", + (int) lhq.key.length, lhq.key.start); return NULL; } @@ -342,8 +343,8 @@ njs_variable_get(njs_vm_t *vm, njs_parse index = (index >> NJS_SCOPE_SHIFT) + 1; if (index > 255 || vs.scope->argument_closures == 0) { - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "InternalError: too many argument closures"); + njs_exception_internal_error(vm, "too many argument closures", + NULL); return NULL; } @@ -405,9 +406,8 @@ njs_variable_get(njs_vm_t *vm, njs_parse not_found: - nxt_alert(&vm->trace, NXT_LEVEL_ERROR, - "ReferenceError: \"%.*s\" is not defined", - (int) vs.lhq.key.length, vs.lhq.key.start); + njs_parser_ref_error(vm, vm->parser, "\"%.*s\" is not defined", + (int) vs.lhq.key.length, vs.lhq.key.start); return NULL; } diff -r 84a95e20f93a -r a83775113025 njs/njs_vm.c --- a/njs/njs_vm.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_vm.c Fri Nov 17 18:55:07 2017 +0300 @@ -3454,25 +3454,6 @@ njs_value_string_copy(njs_vm_t *vm, nxt_ } -void -njs_vm_throw_exception(njs_vm_t *vm, const u_char *buf, uint32_t size) -{ - int32_t length; - njs_value_t *value; - - value = nxt_mem_cache_alloc(vm->mem_cache_pool, sizeof(njs_value_t)); - - if (nxt_fast_path(value != NULL)) { - vm->exception = value; - - length = nxt_utf8_length(buf, size); - length = (length >= 0) ? 
length : 0; - - (void) njs_string_new(vm, value, buf, size, length); - } -} - - static njs_ret_t njs_vm_add_backtrace_entry(njs_vm_t *vm, njs_frame_t *frame) { diff -r 84a95e20f93a -r a83775113025 njs/njs_vm.h --- a/njs/njs_vm.h Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_vm.h Fri Nov 17 18:55:07 2017 +0300 @@ -931,8 +931,6 @@ struct njs_vm_s { njs_native_frame_t *top_frame; njs_frame_t *active_frame; - const njs_value_t *exception; - nxt_lvlhsh_t externals_hash; nxt_lvlhsh_t variables_hash; nxt_lvlhsh_t values_hash; @@ -962,7 +960,6 @@ struct njs_vm_s { * with the generic type NJS_OBJECT_INTERNAL_ERROR but its own prototype * object NJS_PROTOTYPE_MEMORY_ERROR. */ - njs_value_t memory_error; njs_object_t memory_error_object; nxt_array_t *code; /* of njs_vm_code_t */ @@ -1146,8 +1143,6 @@ njs_ret_t njs_value_to_ext_string(njs_vm const njs_value_t *src); void njs_number_set(njs_value_t *value, double num); -void njs_vm_throw_exception(njs_vm_t *vm, const u_char *buf, uint32_t size); - nxt_int_t njs_builtin_objects_create(njs_vm_t *vm); nxt_int_t njs_builtin_objects_clone(njs_vm_t *vm); nxt_int_t njs_builtin_match_native_function(njs_vm_t *vm, diff -r 84a95e20f93a -r a83775113025 njs/njscript.c --- a/njs/njscript.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njscript.c Fri Nov 17 18:55:07 2017 +0300 @@ -24,6 +24,7 @@ #include #include #include +#include static nxt_int_t njs_vm_init(njs_vm_t *vm); @@ -198,6 +199,8 @@ njs_vm_create(njs_vm_opt_t *options) if (nxt_slow_path(ret != NXT_OK)) { return NULL; } + + vm->retval = njs_value_void; } } @@ -246,6 +249,10 @@ njs_vm_compile(njs_vm_t *vm, u_char **st parser->code_size = sizeof(njs_vmcode_stop_t); parser->scope_offset = NJS_INDEX_GLOBAL_OFFSET; + if (vm->backtrace != NULL) { + nxt_array_reset(vm->backtrace); + } + node = njs_parser(vm, parser, prev); if (nxt_slow_path(node == NULL)) { goto fail; @@ -264,10 +271,6 @@ njs_vm_compile(njs_vm_t *vm, u_char **st */ vm->code = NULL; - if (vm->backtrace != NULL) { - nxt_array_reset(vm->backtrace); - } - ret = njs_generate_scope(vm, parser, node); if (nxt_slow_path(ret != NXT_OK)) { goto fail; @@ -334,6 +337,8 @@ njs_vm_clone(njs_vm_t *vm, nxt_mem_cache goto fail; } + nvm->retval = njs_value_void; + return nvm; } @@ -402,8 +407,6 @@ njs_vm_init(njs_vm_t *vm) vm->backtrace = backtrace; } - vm->retval = njs_value_void; - vm->trace.level = NXT_LEVEL_TRACE; vm->trace.size = 2048; vm->trace.handler = njs_parser_trace_handler; @@ -464,6 +467,10 @@ njs_vm_run(njs_vm_t *vm) nxt_thread_log_debug("RUN:"); + if (vm->backtrace != NULL) { + nxt_array_reset(vm->backtrace); + } + ret = njs_vmcode_interpreter(vm); if (nxt_slow_path(ret == NXT_AGAIN)) { @@ -515,18 +522,76 @@ njs_vm_return_string(njs_vm_t *vm, u_cha nxt_int_t njs_vm_retval(njs_vm_t *vm, nxt_str_t *retval) { - return njs_value_to_ext_string(vm, retval, &vm->retval); -} + u_char *p, *start; + size_t len; + nxt_int_t ret; + nxt_uint_t i; + nxt_array_t *backtrace; + njs_backtrace_entry_t *be; + if (vm->top_frame == NULL) { + /* An exception was thrown during compilation. */ -nxt_int_t -njs_vm_exception(njs_vm_t *vm, nxt_str_t *retval) -{ - if (vm->top_frame != NULL) { + njs_vm_init(vm); + } + + ret = njs_value_to_ext_string(vm, retval, &vm->retval); + + if (ret != NXT_OK) { + /* retval evaluation threw an exception. 
*/ + vm->top_frame->trap_tries = 0; + + ret = njs_value_to_ext_string(vm, retval, &vm->retval); + if (ret != NXT_OK) { + return ret; + } } - return njs_value_to_ext_string(vm, retval, vm->exception); + backtrace = njs_vm_backtrace(vm); + + if (backtrace != NULL) { + + len = retval->length + 1; + + be = backtrace->start; + + for (i = 0; i < backtrace->items; i++) { + if (be[i].line != 0) { + len += sizeof(" at (:)\n") - 1 + 10 + be[i].name.length; + + } else { + len += sizeof(" at (native)\n") - 1 + be[i].name.length; + } + } + + p = nxt_mem_cache_alloc(vm->mem_cache_pool, len); + if (p == NULL) { + return NXT_ERROR; + } + + start = p; + + p = nxt_cpymem(p, retval->start, retval->length); + *p++ = '\n'; + + for (i = 0; i < backtrace->items; i++) { + if (be[i].line != 0) { + p += sprintf((char *) p, " at %.*s (:%u)\n", + (int) be[i].name.length, be[i].name.start, + be[i].line); + + } else { + p += sprintf((char *) p, " at %.*s (native)\n", + (int) be[i].name.length, be[i].name.start); + } + } + + retval->start = start; + retval->length = p - retval->start; + } + + return NXT_OK; } diff -r 84a95e20f93a -r a83775113025 njs/njscript.h --- a/njs/njscript.h Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njscript.h Fri Nov 17 18:55:07 2017 +0300 @@ -110,7 +110,6 @@ NXT_EXPORT njs_function_t *njs_vm_functi NXT_EXPORT njs_ret_t njs_vm_return_string(njs_vm_t *vm, u_char *start, size_t size); NXT_EXPORT nxt_int_t njs_vm_retval(njs_vm_t *vm, nxt_str_t *retval); -NXT_EXPORT nxt_int_t njs_vm_exception(njs_vm_t *vm, nxt_str_t *retval); NXT_EXPORT nxt_array_t *njs_vm_backtrace(njs_vm_t *vm); NXT_EXPORT void njs_disassembler(njs_vm_t *vm); diff -r 84a95e20f93a -r a83775113025 njs/test/njs_benchmark.c --- a/njs/test/njs_benchmark.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/test/njs_benchmark.c Fri Nov 17 18:55:07 2017 +0300 @@ -111,13 +111,10 @@ njs_unit_test_benchmark(nxt_str_t *scrip return NXT_ERROR; } - if (njs_vm_run(nvm) == NXT_OK) { - if (njs_vm_retval(nvm, &s) != NXT_OK) { - return NXT_ERROR; - } + (void) njs_vm_run(nvm); - } else { - njs_vm_exception(nvm, &s); + if (njs_vm_retval(nvm, &s) != NXT_OK) { + return NXT_ERROR; } success = nxt_strstr_eq(result, &s); diff -r 84a95e20f93a -r a83775113025 njs/test/njs_expect_test.exp --- a/njs/test/njs_expect_test.exp Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/test/njs_expect_test.exp Fri Nov 17 18:55:07 2017 +0300 @@ -166,3 +166,19 @@ njs_test { {"console.help()\r\n" "console.help()\r\nVM built-in objects:"} } + +# Exception in njs_vm_retval() +njs_test { + {"var o = { toString: function() { return [1] } }\r\n" + "undefined\r\n>> "} + {"o\r\n" + "TypeError"} +} + +# Backtraces are reset between invocations +njs_test { + {"JSON.parse(Error())\r\n" + "JSON.parse(Error())\r\nSyntaxError: Unexpected token at position 0*at JSON.parse (native)"} + {"JSON.parse(Error()\r\n" + "JSON.parse(Error()\r\nSyntaxError: Unexpected token \"\" in 1"} +} diff -r 84a95e20f93a -r a83775113025 njs/test/njs_interactive_test.c --- a/njs/test/njs_interactive_test.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/test/njs_interactive_test.c Fri Nov 17 18:55:07 2017 +0300 @@ -121,69 +121,73 @@ static njs_interactive_test_t njs_test[ "function f(o) {return ff(o)}" ENTER "f({})" ENTER), nxt_string("TypeError\n" - "at ff (:1)\n" - "at f (:1)\n" - "at main\n") }, + " at ff (:1)\n" + " at f (:1)\n" + " at main (native)\n") }, { nxt_string("function ff(o) {return o.a.a}" ENTER "function f(o) {try {return ff(o)} " "finally {return o.a.a}}" ENTER "f({})" ENTER), nxt_string("TypeError\n" - "at f (:1)\n" - "at 
main\n") }, + " at f (:1)\n" + " at main (native)\n") }, From xeioex at nginx.com Fri Nov 17 16:06:05 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:05 +0000 Subject: [njs] Added support of oct literals. Message-ID: details: http://hg.nginx.org/njs/rev/17909969892f branches: changeset: 425:17909969892f user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Added support of oct literals. diffstat: njs/njs_lexer.c | 47 ++++++++++++++++++++++++++++++++++++++--------- njs/njs_number.c | 28 ++++++++++++++++++++++++++++ njs/njs_number.h | 1 + njs/test/njs_unit_test.c | 31 +++++++++++++++++++++++++++++++ 4 files changed, 98 insertions(+), 9 deletions(-) diffs (151 lines): diff -r 779156b4b930 -r 17909969892f njs/njs_lexer.c --- a/njs/njs_lexer.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_lexer.c Fri Nov 17 18:55:07 2017 +0300 @@ -543,19 +543,48 @@ njs_lexer_number(njs_lexer_t *lexer) p = lexer->start; c = p[-1]; - /* Hexadecimal literal values. */ + if (c == '0' && p != lexer->end) { + + /* Hexadecimal literal values. */ + + if (*p == 'x' || *p == 'X') { + p++; + + if (p == lexer->end) { + return NJS_TOKEN_ILLEGAL; + } + + lexer->start = p; + lexer->number = njs_number_hex_parse(&lexer->start, lexer->end); + + return NJS_TOKEN_NUMBER; + } + + /* Octal literal values. */ - if (c == '0' && p != lexer->end && (*p == 'x' || *p == 'X')) { - p++; + if (*p == 'o') { + p++; + + if (p == lexer->end) { + return NJS_TOKEN_ILLEGAL; + } + + lexer->start = p; + lexer->number = njs_number_oct_parse(&lexer->start, lexer->end); + p = lexer->start; - if (p == lexer->end) { + if (p < lexer->end && (*p == '8' || *p == '9')) { + return NJS_TOKEN_ILLEGAL; + } + + return NJS_TOKEN_NUMBER; + } + + /* Legacy Octal literals are deprecated. */ + + if (*p >= '0' && *p <= '9') { return NJS_TOKEN_ILLEGAL; } - - lexer->start = p; - lexer->number = njs_number_hex_parse(&lexer->start, lexer->end); - - return NJS_TOKEN_NUMBER; } lexer->start = p - 1; diff -r 779156b4b930 -r 17909969892f njs/njs_number.c --- a/njs/njs_number.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_number.c Fri Nov 17 18:55:07 2017 +0300 @@ -169,6 +169,34 @@ njs_number_dec_parse(u_char **start, u_c uint64_t +njs_number_oct_parse(u_char **start, u_char *end) +{ + u_char c, *p; + uint64_t num; + + p = *start; + + num = 0; + + while (p < end) { + /* Values less than '0' become >= 208. */ + c = *p - '0'; + + if (nxt_slow_path(c > 7)) { + break; + } + + num = num * 8 + c; + p++; + } + + *start = p; + + return num; +} + + +uint64_t njs_number_hex_parse(u_char **start, u_char *end) { u_char c, *p; diff -r 779156b4b930 -r 17909969892f njs/njs_number.h --- a/njs/njs_number.h Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_number.h Fri Nov 17 18:55:07 2017 +0300 @@ -13,6 +13,7 @@ uint32_t njs_value_to_index(njs_value_t *value); double njs_number_dec_parse(u_char **start, u_char *end); +uint64_t njs_number_oct_parse(u_char **start, u_char *end); uint64_t njs_number_hex_parse(u_char **start, u_char *end); int64_t njs_number_radix_parse(u_char **start, u_char *end, uint8_t radix); njs_ret_t njs_number_to_string(njs_vm_t *vm, njs_value_t *string, diff -r 779156b4b930 -r 17909969892f njs/test/njs_unit_test.c --- a/njs/test/njs_unit_test.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/test/njs_unit_test.c Fri Nov 17 18:55:07 2017 +0300 @@ -121,6 +121,37 @@ static njs_unit_test_t njs_test[] = { nxt_string("+1\n"), nxt_string("1") }, + /* Octal Numbers. 
*/ + + { nxt_string("0o0"), + nxt_string("0") }, + + { nxt_string("0o011"), + nxt_string("9") }, + + { nxt_string("-0o777"), + nxt_string("-511") }, + + /* Legacy Octal Numbers are deprecated. */ + + { nxt_string("00"), + nxt_string("SyntaxError: Unexpected token \"\" in 1") }, + + { nxt_string("08"), + nxt_string("SyntaxError: Unexpected token \"\" in 1") }, + + { nxt_string("09"), + nxt_string("SyntaxError: Unexpected token \"\" in 1") }, + + { nxt_string("0011"), + nxt_string("SyntaxError: Unexpected token \"\" in 1") }, + + { nxt_string("0o"), + nxt_string("SyntaxError: Unexpected token \"\" in 1") }, + + { nxt_string("0o778"), + nxt_string("SyntaxError: Unexpected token \"\" in 1") }, + /* Hex Numbers. */ { nxt_string("0x0"), From xeioex at nginx.com Fri Nov 17 16:06:05 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:05 +0000 Subject: [njs] Enabling exception backtraces in nginx modules. Message-ID: details: http://hg.nginx.org/njs/rev/779156b4b930 branches: changeset: 424:779156b4b930 user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Enabling exception backtraces in nginx modules. diffstat: nginx/ngx_http_js_module.c | 1 + nginx/ngx_stream_js_module.c | 1 + 2 files changed, 2 insertions(+), 0 deletions(-) diffs (22 lines): diff -r 5637024772aa -r 779156b4b930 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Fri Nov 17 18:55:07 2017 +0300 +++ b/nginx/ngx_http_js_module.c Fri Nov 17 18:55:07 2017 +0300 @@ -1322,6 +1322,7 @@ ngx_http_js_include(ngx_conf_t *cf, ngx_ ngx_memzero(&options, sizeof(njs_vm_opt_t)); options.mcp = mcp; + options.backtrace = 1; options.externals_hash = &externals; jlcf->vm = njs_vm_create(&options); diff -r 5637024772aa -r 779156b4b930 nginx/ngx_stream_js_module.c --- a/nginx/ngx_stream_js_module.c Fri Nov 17 18:55:07 2017 +0300 +++ b/nginx/ngx_stream_js_module.c Fri Nov 17 18:55:07 2017 +0300 @@ -1032,6 +1032,7 @@ ngx_stream_js_include(ngx_conf_t *cf, ng ngx_memzero(&options, sizeof(njs_vm_opt_t)); options.mcp = mcp; + options.backtrace = 1; options.externals_hash = &externals; jscf->vm = njs_vm_create(&options); From xeioex at nginx.com Fri Nov 17 16:06:05 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 17 Nov 2017 16:06:05 +0000 Subject: [njs] Nodejs style file methods. Message-ID: details: http://hg.nginx.org/njs/rev/5c6aa60224cb branches: changeset: 426:5c6aa60224cb user: Dmitry Volyntsev date: Fri Nov 17 18:55:07 2017 +0300 description: Nodejs style file methods. 
diffstat: Makefile | 27 + njs/njs_builtin.c | 84 ++- njs/njs_builtin.h | 1 + njs/njs_fs.c | 1095 +++++++++++++++++++++++++++++++++++++++ njs/njs_fs.h | 13 + njs/njs_generator.c | 1 + njs/njs_lexer_keyword.c | 1 + njs/njs_module.c | 85 +++ njs/njs_module.h | 22 + njs/njs_object_hash.h | 56 + njs/njs_parser.c | 1 + njs/njs_parser.h | 1 + njs/njs_string.c | 46 + njs/njs_string.h | 1 + njs/njs_vm.h | 20 +- njs/njscript.c | 3 + njs/test/njs_expect_test.exp | 221 +++++++ njs/test/njs_interactive_test.c | 5 + njs/test/njs_unit_test.c | 136 ++++ 19 files changed, 1817 insertions(+), 2 deletions(-) diffs (truncated from 2098 to 1000 lines): diff -r 17909969892f -r 5c6aa60224cb Makefile --- a/Makefile Fri Nov 17 18:55:07 2017 +0300 +++ b/Makefile Fri Nov 17 18:55:07 2017 +0300 @@ -22,6 +22,8 @@ NXT_BUILDDIR = build $(NXT_BUILDDIR)/njs_date.o \ $(NXT_BUILDDIR)/njs_error.o \ $(NXT_BUILDDIR)/njs_math.o \ + $(NXT_BUILDDIR)/njs_module.o \ + $(NXT_BUILDDIR)/njs_fs.o \ $(NXT_BUILDDIR)/njs_extern.o \ $(NXT_BUILDDIR)/njs_variable.o \ $(NXT_BUILDDIR)/njs_builtin.o \ @@ -56,6 +58,8 @@ NXT_BUILDDIR = build $(NXT_BUILDDIR)/njs_date.o \ $(NXT_BUILDDIR)/njs_error.o \ $(NXT_BUILDDIR)/njs_math.o \ + $(NXT_BUILDDIR)/njs_module.o \ + $(NXT_BUILDDIR)/njs_fs.o \ $(NXT_BUILDDIR)/njs_extern.o \ $(NXT_BUILDDIR)/njs_variable.o \ $(NXT_BUILDDIR)/njs_builtin.o \ @@ -299,6 +303,28 @@ dist: -I$(NXT_LIB) -Injs \ njs/njs_math.c +$(NXT_BUILDDIR)/njs_module.o: \ + $(NXT_BUILDDIR)/libnxt.a \ + njs/njscript.h \ + njs/njs_vm.h \ + njs/njs_module.h \ + njs/njs_module.c \ + + $(NXT_CC) -c -o $(NXT_BUILDDIR)/njs_module.o $(NXT_CFLAGS) \ + -I$(NXT_LIB) -Injs \ + njs/njs_module.c + +$(NXT_BUILDDIR)/njs_fs.o: \ + $(NXT_BUILDDIR)/libnxt.a \ + njs/njscript.h \ + njs/njs_vm.h \ + njs/njs_fs.h \ + njs/njs_fs.c \ + + $(NXT_CC) -c -o $(NXT_BUILDDIR)/njs_fs.o $(NXT_CFLAGS) \ + -I$(NXT_LIB) -Injs \ + njs/njs_fs.c + $(NXT_BUILDDIR)/njs_extern.o: \ $(NXT_BUILDDIR)/libnxt.a \ njs/njscript.h \ @@ -332,6 +358,7 @@ dist: njs/njs_string.h \ njs/njs_object.h \ njs/njs_array.h \ + njs/njs_module.h \ njs/njs_function.h \ njs/njs_regexp.h \ njs/njs_parser.h \ diff -r 17909969892f -r 5c6aa60224cb njs/njs_builtin.c --- a/njs/njs_builtin.c Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_builtin.c Fri Nov 17 18:55:07 2017 +0300 @@ -30,6 +30,8 @@ #include #include #include +#include +#include #include #include @@ -54,6 +56,11 @@ const njs_object_init_t *njs_object_i }; +const njs_object_init_t *njs_module_init[] = { + &njs_fs_object_init /* fs */ +}; + + const njs_object_init_t *njs_prototype_init[] = { &njs_object_prototype_init, &njs_array_prototype_init, @@ -111,8 +118,10 @@ njs_builtin_objects_create(njs_vm_t *vm) { nxt_int_t ret; nxt_uint_t i; + njs_module_t *module; njs_object_t *objects; njs_function_t *functions, *constructors; + nxt_lvlhsh_query_t lhq; njs_object_prototype_t *prototypes; static const njs_object_prototype_t prototype_values[] = { @@ -193,6 +202,7 @@ njs_builtin_objects_create(njs_vm_t *vm) NULL, /* encodeURIComponent */ NULL, /* decodeURI */ NULL, /* decodeURIComponent */ + NULL, /* require */ }; static const njs_function_init_t native_functions[] = { @@ -208,6 +218,7 @@ njs_builtin_objects_create(njs_vm_t *vm) { njs_string_encode_uri_component, { NJS_SKIP_ARG, NJS_STRING_ARG } }, { njs_string_decode_uri, { NJS_SKIP_ARG, NJS_STRING_ARG } }, { njs_string_decode_uri_component, { NJS_SKIP_ARG, NJS_STRING_ARG } }, + { njs_module_require, { NJS_SKIP_ARG, NJS_STRING_ARG } }, }; static const njs_object_prop_t null_proto_property = { @@ -249,6 
+260,37 @@ njs_builtin_objects_create(njs_vm_t *vm) objects[i].shared = 1; } + lhq.replace = 0; + lhq.proto = &njs_modules_hash_proto; + lhq.pool = vm->mem_cache_pool; + + for (i = NJS_MODULE_FS; i < NJS_MODULE_MAX; i++) { + module = nxt_mem_cache_zalloc(vm->mem_cache_pool, sizeof(njs_module_t)); + if (nxt_slow_path(module == NULL)) { + return NJS_ERROR; + } + + module->name = njs_module_init[i]->name; + + ret = njs_object_hash_create(vm, &module->object.shared_hash, + njs_module_init[i]->properties, + njs_module_init[i]->items); + if (nxt_slow_path(ret != NXT_OK)) { + return NXT_ERROR; + } + + module->object.shared = 1; + + lhq.key = module->name; + lhq.key_hash = nxt_djb_hash(lhq.key.start, lhq.key.length); + lhq.value = module; + + ret = nxt_lvlhsh_insert(&vm->modules_hash, &lhq); + if (nxt_fast_path(ret != NXT_OK)) { + return NXT_ERROR; + } + } + functions = vm->shared->functions; for (i = NJS_FUNCTION_EVAL; i < NJS_FUNCTION_MAX; i++) { @@ -857,10 +899,11 @@ njs_builtin_match_native_function(njs_vm size_t len; nxt_str_t string; nxt_uint_t i; + njs_module_t *module; njs_object_t *objects; njs_function_t *constructors; njs_object_prop_t *prop; - nxt_lvlhsh_each_t lhe; + nxt_lvlhsh_each_t lhe, lhe_prop; njs_object_prototype_t *prototypes; objects = vm->shared->objects; @@ -978,5 +1021,44 @@ njs_builtin_match_native_function(njs_vm } } + nxt_lvlhsh_each_init(&lhe, &njs_modules_hash_proto); + + for ( ;; ) { + module = nxt_lvlhsh_each(&vm->modules_hash, &lhe); + if (module == NULL) { + break; + } + + nxt_lvlhsh_each_init(&lhe_prop, &njs_object_hash_proto); + + for ( ;; ) { + prop = nxt_lvlhsh_each(&module->object.shared_hash, &lhe_prop); + if (prop == NULL) { + break; + } + + if (!njs_is_function(&prop->value)) { + continue; + } + + if (function == prop->value.data.u.function) { + njs_string_get(&prop->name, &string); + len = module->name.length + string.length + sizeof("."); + + buf = nxt_mem_cache_zalloc(vm->mem_cache_pool, len); + if (buf == NULL) { + return NXT_ERROR; + } + + snprintf(buf, len, "%s.%s", module->name.start, string.start); + + name->length = len; + name->start = (u_char *) buf; + + return NXT_OK; + } + } + } + return NXT_DECLINED; } diff -r 17909969892f -r 5c6aa60224cb njs/njs_builtin.h --- a/njs/njs_builtin.h Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/njs_builtin.h Fri Nov 17 18:55:07 2017 +0300 @@ -9,6 +9,7 @@ extern const njs_object_init_t *njs_object_init[]; +extern const njs_object_init_t *njs_module_init[]; extern const njs_object_init_t *njs_prototype_init[]; extern const njs_object_init_t *njs_constructor_init[]; diff -r 17909969892f -r 5c6aa60224cb njs/njs_fs.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/njs/njs_fs.c Fri Nov 17 18:55:07 2017 +0300 @@ -0,0 +1,1095 @@ + +/* + * Copyright (C) Dmitry Volyntsev + * Copyright (C) NGINX, Inc. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +typedef struct { + union { + njs_continuation_t cont; + u_char padding[NJS_CONTINUATION_SIZE]; + } u; + + nxt_bool_t done; +} njs_fs_cont_t; + + +typedef struct { + nxt_str_t name; + int value; +} njs_fs_entry_t; + + +static njs_ret_t njs_fs_read_file(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused); +static njs_ret_t njs_fs_read_file_sync(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused); +static njs_ret_t njs_fs_append_file(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused); +static njs_ret_t njs_fs_write_file(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused); +static njs_ret_t njs_fs_append_file_sync(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused); +static njs_ret_t njs_fs_write_file_sync(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused); +static njs_ret_t njs_fs_write_file_internal(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, int default_flags); +static njs_ret_t njs_fs_write_file_sync_internal(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, int default_flags); +static njs_ret_t njs_fs_done(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused); + +static njs_ret_t njs_fs_error(njs_vm_t *vm, const char *syscall, + const char *description, njs_value_t *path, int errn, njs_value_t *retval); +static int njs_fs_flags(nxt_str_t *value); +static mode_t njs_fs_mode(njs_value_t *value); + + +static const njs_value_t njs_fs_errno_string = njs_string("errno"); +static const njs_value_t njs_fs_path_string = njs_string("path"); +static const njs_value_t njs_fs_syscall_string = njs_string("syscall"); + + +static njs_fs_entry_t njs_flags_table[] = { + { nxt_string("r"), O_RDONLY }, + { nxt_string("r+"), O_RDWR }, + { nxt_string("w"), O_TRUNC | O_CREAT | O_WRONLY }, + { nxt_string("w+"), O_TRUNC | O_CREAT | O_RDWR }, + { nxt_string("a"), O_APPEND | O_CREAT | O_WRONLY }, + { nxt_string("a+"), O_APPEND | O_CREAT | O_RDWR }, + { nxt_string("rs"), O_SYNC | O_RDONLY }, + { nxt_string("sr"), O_SYNC | O_RDONLY }, + { nxt_string("wx"), O_TRUNC | O_CREAT | O_EXCL | O_WRONLY }, + { nxt_string("xw"), O_TRUNC | O_CREAT | O_EXCL | O_WRONLY }, + { nxt_string("ax"), O_APPEND | O_CREAT | O_EXCL | O_WRONLY }, + { nxt_string("xa"), O_APPEND | O_CREAT | O_EXCL | O_WRONLY }, + { nxt_string("rs+"), O_SYNC | O_RDWR }, + { nxt_string("sr+"), O_SYNC | O_RDWR }, + { nxt_string("wx+"), O_TRUNC | O_CREAT | O_EXCL | O_RDWR }, + { nxt_string("xw+"), O_TRUNC | O_CREAT | O_EXCL | O_RDWR }, + { nxt_string("ax+"), O_APPEND | O_CREAT | O_EXCL | O_RDWR }, + { nxt_string("xa+"), O_APPEND | O_CREAT | O_EXCL | O_RDWR }, + { nxt_null_string, 0 } +}; + + +static njs_ret_t +njs_fs_read_file(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + int fd, errn, flags; + u_char *p, *start, *end; + ssize_t n, length; + nxt_str_t flag, encoding; + njs_ret_t ret; + const char *path, *syscall, *description; + struct stat sb; + njs_value_t *callback, arguments[3]; + njs_fs_cont_t *cont; + njs_object_prop_t *prop; + nxt_lvlhsh_query_t lhq; + + if (nxt_slow_path(nargs < 3)) { + njs_exception_type_error(vm, "too few arguments", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(!njs_is_string(&args[1]))) { + 
njs_exception_type_error(vm, "path must be a string", NULL); + return NJS_ERROR; + } + + flag.start = NULL; + encoding.length = 0; + encoding.start = NULL; + + if (!njs_is_function(&args[2])) { + if (njs_is_string(&args[2])) { + njs_string_get(&args[2], &encoding); + + } else if (njs_is_object(&args[2])) { + lhq.key_hash = NJS_FLAG_HASH; + lhq.key = nxt_string_value("flag"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[2].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + njs_string_get(&prop->value, &flag); + } + + lhq.key_hash = NJS_ENCODING_HASH; + lhq.key = nxt_string_value("encoding"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[2].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + njs_string_get(&prop->value, &encoding); + } + + } else { + njs_exception_type_error(vm, "Unknown options type " + "(a string or object required)", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(nargs < 4 || !njs_is_function(&args[3]))) { + njs_exception_type_error(vm, "callback must be a function", NULL); + return NJS_ERROR; + } + + callback = &args[3]; + + } else { + if (nxt_slow_path(!njs_is_function(&args[2]))) { + njs_exception_type_error(vm, "callback must be a function", NULL); + return NJS_ERROR; + } + + callback = &args[2]; + } + + if (flag.start == NULL) { + flag = nxt_string_value("r"); + } + + flags = njs_fs_flags(&flag); + if (nxt_slow_path(flags == -1)) { + njs_exception_type_error(vm, "Unknown file open flags: '%.*s'", + (int) flag.length, flag.start); + return NJS_ERROR; + } + + path = (char *) njs_string_to_c_string(vm, &args[1]); + if (nxt_slow_path(path == NULL)) { + return NJS_ERROR; + } + + if (encoding.length != 0 + && (encoding.length != 4 || memcmp(encoding.start, "utf8", 4) != 0)) + { + njs_exception_type_error(vm, "Unknown encoding: '%.*s'", + (int) encoding.length, encoding.start); + return NJS_ERROR; + } + + description = NULL; + + /* GCC 4 complains about uninitialized errn and syscall. 
*/ + errn = 0; + syscall = NULL; + + fd = open(path, flags); + if (nxt_slow_path(fd < 0)) { + errn = errno; + description = strerror(errno); + syscall = "open"; + goto done; + } + + ret = fstat(fd, &sb); + if (nxt_slow_path(ret == -1)) { + errn = errno; + description = strerror(errno); + syscall = "stat"; + goto done; + } + + if (nxt_slow_path(!S_ISREG(sb.st_mode))) { + errn = 0; + description = "File is not regular"; + syscall = "stat"; + goto done; + } + + if (encoding.length != 0) { + length = sb.st_size; + + } else { + length = 0; + } + + start = njs_string_alloc(vm, &arguments[2], sb.st_size, length); + if (nxt_slow_path(start == NULL)) { + goto memory_error; + } + + p = start; + end = p + sb.st_size; + + while (p < end) { + n = read(fd, p, end - p); + if (nxt_slow_path(n == -1)) { + if (errno == EINTR) { + continue; + } + + errn = errno; + description = strerror(errno); + syscall = "read"; + goto done; + } + + p += n; + } + + if (encoding.length != 0) { + length = nxt_utf8_length(start, sb.st_size); + + if (length >= 0) { + njs_string_offset_map_init(start, sb.st_size); + njs_string_length_set(&arguments[2], length); + + } else { + errn = 0; + description = "Non-UTF8 file, convertion is not implemented"; + syscall = NULL; + goto done; + } + } + +done: + + if (fd > 0) { + close(fd); + } + + if (description != 0) { + ret = njs_fs_error(vm, syscall, description, &args[1], errn, + &arguments[1]); + + if (nxt_slow_path(ret != NJS_OK)) { + return NJS_ERROR; + } + + arguments[2] = njs_value_void; + + } else { + arguments[1] = njs_value_void; + } + + arguments[0] = njs_value_void; + + cont = njs_vm_continuation(vm); + cont->u.cont.function = njs_fs_done; + + return njs_function_apply(vm, callback->data.u.function, + arguments, 3, (njs_index_t) &vm->retval); + +memory_error: + + if (fd > 0) { + close(fd); + } + + njs_exception_memory_error(vm); + + return NJS_ERROR; +} + + +static njs_ret_t +njs_fs_read_file_sync(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + int fd, errn, flags; + u_char *p, *start, *end; + ssize_t n, length; + nxt_str_t flag, encoding; + njs_ret_t ret; + const char *path, *syscall, *description; + struct stat sb; + njs_object_prop_t *prop; + nxt_lvlhsh_query_t lhq; + + if (nxt_slow_path(nargs < 2)) { + njs_exception_type_error(vm, "too few arguments", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(!njs_is_string(&args[1]))) { + njs_exception_type_error(vm, "path must be a string", NULL); + return NJS_ERROR; + } + + flag.start = NULL; + encoding.length = 0; + encoding.start = NULL; + + if (nargs == 3) { + if (njs_is_string(&args[2])) { + njs_string_get(&args[2], &encoding); + + } else if (njs_is_object(&args[2])) { + lhq.key_hash = NJS_FLAG_HASH; + lhq.key = nxt_string_value("flag"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[2].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + njs_string_get(&prop->value, &flag); + } + + lhq.key_hash = NJS_ENCODING_HASH; + lhq.key = nxt_string_value("encoding"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[2].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + njs_string_get(&prop->value, &encoding); + } + + } else { + njs_exception_type_error(vm, "Unknown options type " + "(a string or object required)", NULL); + return NJS_ERROR; + } + } + + if (flag.start == NULL) { + flag = nxt_string_value("r"); + } + + flags = njs_fs_flags(&flag); + if (nxt_slow_path(flags == -1)) { + njs_exception_type_error(vm, 
"Unknown file open flags: '%.*s'", + (int) flag.length, flag.start); + return NJS_ERROR; + } + + path = (char *) njs_string_to_c_string(vm, &args[1]); + if (nxt_slow_path(path == NULL)) { + return NJS_ERROR; + } + + if (encoding.length != 0 + && (encoding.length != 4 || memcmp(encoding.start, "utf8", 4) != 0)) + { + njs_exception_type_error(vm, "Unknown encoding: '%.*s'", + (int) encoding.length, encoding.start); + return NJS_ERROR; + } + + description = NULL; + + /* GCC 4 complains about uninitialized errn and syscall. */ + errn = 0; + syscall = NULL; + + fd = open(path, flags); + if (nxt_slow_path(fd < 0)) { + errn = errno; + description = strerror(errno); + syscall = "open"; + goto done; + } + + ret = fstat(fd, &sb); + if (nxt_slow_path(ret == -1)) { + errn = errno; + description = strerror(errno); + syscall = "stat"; + goto done; + } + + if (nxt_slow_path(!S_ISREG(sb.st_mode))) { + errn = 0; + description = "File is not regular"; + syscall = "stat"; + goto done; + } + + if (encoding.length != 0) { + length = sb.st_size; + + } else { + length = 0; + } + + start = njs_string_alloc(vm, &vm->retval, sb.st_size, length); + if (nxt_slow_path(start == NULL)) { + goto memory_error; + } + + p = start; + end = p + sb.st_size; + + while (p < end) { + n = read(fd, p, end - p); + if (nxt_slow_path(n == -1)) { + if (errno == EINTR) { + continue; + } + + errn = errno; + description = strerror(errno); + syscall = "read"; + goto done; + } + + p += n; + } + + if (encoding.length != 0) { + length = nxt_utf8_length(start, sb.st_size); + + if (length >= 0) { + njs_string_offset_map_init(start, sb.st_size); + njs_string_length_set(&vm->retval, length); + + } else { + errn = 0; + description = "Non-UTF8 file, convertion is not implemented"; + syscall = NULL; + goto done; + } + } + +done: + + if (fd > 0) { + close(fd); + } + + if (description != 0) { + (void) njs_fs_error(vm, syscall, description, &args[1], errn, + &vm->retval); + + return NJS_ERROR; + } + + return NJS_OK; + +memory_error: + + if (fd > 0) { + close(fd); + } + + njs_exception_memory_error(vm); + + return NJS_ERROR; +} + + +static njs_ret_t +njs_fs_append_file(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + return njs_fs_write_file_internal(vm, args, nargs, + O_APPEND | O_CREAT | O_WRONLY); +} + + +static njs_ret_t +njs_fs_write_file(njs_vm_t *vm, njs_value_t *args, nxt_uint_t nargs, + njs_index_t unused) +{ + return njs_fs_write_file_internal(vm, args, nargs, + O_TRUNC | O_CREAT | O_WRONLY); +} + + +static njs_ret_t njs_fs_append_file_sync(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_fs_write_file_sync_internal(vm, args, nargs, + O_APPEND | O_CREAT | O_WRONLY); +} + + +static njs_ret_t njs_fs_write_file_sync(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + return njs_fs_write_file_sync_internal(vm, args, nargs, + O_TRUNC | O_CREAT | O_WRONLY); +} + + +static njs_ret_t njs_fs_write_file_internal(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, int default_flags) +{ + int fd, errn, flags; + u_char *p, *end; + mode_t md; + ssize_t n; + nxt_str_t data, flag, encoding; + njs_ret_t ret; + const char *path, *syscall, *description; + njs_value_t *callback, *mode, arguments[2]; + njs_fs_cont_t *cont; + njs_object_prop_t *prop; + nxt_lvlhsh_query_t lhq; + + if (nxt_slow_path(nargs < 4)) { + njs_exception_type_error(vm, "too few arguments", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(!njs_is_string(&args[1]))) { + 
njs_exception_type_error(vm, "path must be a string", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(!njs_is_string(&args[2]))) { + njs_exception_type_error(vm, "data must be a string", NULL); + return NJS_ERROR; + } + + mode = NULL; + flag.start = NULL; + encoding.length = 0; + encoding.start = NULL; + + if (!njs_is_function(&args[3])) { + if (njs_is_string(&args[3])) { + njs_string_get(&args[3], &encoding); + + } else if (njs_is_object(&args[3])) { + lhq.key_hash = NJS_FLAG_HASH; + lhq.key = nxt_string_value("flag"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[3].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + njs_string_get(&prop->value, &flag); + } + + lhq.key_hash = NJS_ENCODING_HASH; + lhq.key = nxt_string_value("encoding"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[3].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + njs_string_get(&prop->value, &encoding); + } + + lhq.key_hash = NJS_MODE_HASH; + lhq.key = nxt_string_value("mode"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[3].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + mode = &prop->value; + } + + } else { + njs_exception_type_error(vm, "Unknown options type " + "(a string or object required)", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(nargs < 5 || !njs_is_function(&args[4]))) { + njs_exception_type_error(vm, "callback must be a function", NULL); + return NJS_ERROR; + } + + callback = &args[4]; + + } else { + if (nxt_slow_path(!njs_is_function(&args[3]))) { + njs_exception_type_error(vm, "callback must be a function", NULL); + return NJS_ERROR; + } + + callback = &args[3]; + } + + if (flag.start != NULL) { + flags = njs_fs_flags(&flag); + if (nxt_slow_path(flags == -1)) { + njs_exception_type_error(vm, "Unknown file open flags: '%.*s'", + (int) flag.length, flag.start); + return NJS_ERROR; + } + + } else { + flags = default_flags; + } + + if (mode != NULL) { + md = njs_fs_mode(mode); + + } else { + md = 0666; + } + + path = (char *) njs_string_to_c_string(vm, &args[1]); + if (nxt_slow_path(path == NULL)) { + return NJS_ERROR; + } + + if (encoding.length != 0 + && (encoding.length != 4 || memcmp(encoding.start, "utf8", 4) != 0)) + { + njs_exception_type_error(vm, "Unknown encoding: '%.*s'", + (int) encoding.length, encoding.start); + return NJS_ERROR; + } + + description = NULL; + + /* GCC 4 complains about uninitialized errn and syscall. 
*/ + errn = 0; + syscall = NULL; + + fd = open(path, flags, md); + if (nxt_slow_path(fd < 0)) { + errn = errno; + description = strerror(errno); + syscall = "open"; + goto done; + } + + njs_string_get(&args[2], &data); + + p = data.start; + end = p + data.length; + + while (p < end) { + n = write(fd, p, end - p); + if (nxt_slow_path(n == -1)) { + if (errno == EINTR) { + continue; + } + + errn = errno; + description = strerror(errno); + syscall = "write"; + goto done; + } + + p += n; + } + +done: + + if (fd > 0) { + close(fd); + } + + if (description != 0) { + ret = njs_fs_error(vm, syscall, description, &args[1], errn, + &arguments[1]); + + if (nxt_slow_path(ret != NJS_OK)) { + return NJS_ERROR; + } + + } else { + arguments[1] = njs_value_void; + } + + arguments[0] = njs_value_void; + + cont = njs_vm_continuation(vm); + cont->u.cont.function = njs_fs_done; + + return njs_function_apply(vm, callback->data.u.function, + arguments, 2, (njs_index_t) &vm->retval); +} + + +static njs_ret_t +njs_fs_write_file_sync_internal(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, int default_flags) +{ + int fd, errn, flags; + u_char *p, *end; + mode_t md; + ssize_t n; + nxt_str_t data, flag, encoding; + njs_ret_t ret; + const char *path, *syscall, *description; + njs_value_t *mode; + njs_object_prop_t *prop; + nxt_lvlhsh_query_t lhq; + + if (nxt_slow_path(nargs < 3)) { + njs_exception_type_error(vm, "too few arguments", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(!njs_is_string(&args[1]))) { + njs_exception_type_error(vm, "path must be a string", NULL); + return NJS_ERROR; + } + + if (nxt_slow_path(!njs_is_string(&args[2]))) { + njs_exception_type_error(vm, "data must be a string", NULL); + return NJS_ERROR; + } + + mode = NULL; + flag.start = NULL; + encoding.length = 0; + encoding.start = NULL; + + if (nargs == 4) { + if (njs_is_string(&args[3])) { + njs_string_get(&args[3], &encoding); + + } else if (njs_is_object(&args[3])) { + lhq.key_hash = NJS_FLAG_HASH; + lhq.key = nxt_string_value("flag"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[3].data.u.object->hash, &lhq); + if (ret == NXT_OK) { + prop = lhq.value; + njs_string_get(&prop->value, &flag); + } + + lhq.key_hash = NJS_ENCODING_HASH; + lhq.key = nxt_string_value("encoding"); + lhq.proto = &njs_object_hash_proto; + + ret = nxt_lvlhsh_find(&args[3].data.u.object->hash, &lhq); + if (ret == NXT_OK) { From mdounin at mdounin.ru Sun Nov 19 13:41:40 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Nov 2017 13:41:40 +0000 Subject: [nginx] Gzip: support for a zlib variant from Intel. Message-ID: details: http://hg.nginx.org/nginx/rev/29e9571b1989 branches: changeset: 7155:29e9571b1989 user: Maxim Dounin date: Sat Nov 18 04:03:27 2017 +0300 description: Gzip: support for a zlib variant from Intel. A zlib variant from Intel as available from https://github.com/jtkukunas/zlib uses 64K hash instead of scaling it from the specified memory level, and also uses 16-byte padding in one of the window-sized memory buffers, and can force window bits to 13 if compression level is set to 1 and appropriate compile options are used. As a result, nginx complained with "gzip filter failed to use preallocated memory" alerts. This change improves deflate_state allocation detection by testing that items is 1 (deflate_state is the only allocation where items is 1). 
Additionally, on first failure to use preallocated memory we now assume that we are working with the Intel's modified zlib, and switch to using appropriate preallocations. If this does not help, we complain with the usual alerts. Previous version of this patch was published at http://mailman.nginx.org/pipermail/nginx/2014-July/044568.html. The zlib variant in question is used by default in ClearLinux from Intel, see http://mailman.nginx.org/pipermail/nginx-ru/2017-October/060421.html, http://mailman.nginx.org/pipermail/nginx-ru/2017-November/060544.html. diffstat: src/http/modules/ngx_http_gzip_filter_module.c | 38 ++++++++++++++++++++++--- 1 files changed, 33 insertions(+), 5 deletions(-) diffs (76 lines): diff --git a/src/http/modules/ngx_http_gzip_filter_module.c b/src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c +++ b/src/http/modules/ngx_http_gzip_filter_module.c @@ -57,6 +57,7 @@ typedef struct { unsigned nomem:1; unsigned gzheader:1; unsigned buffering:1; + unsigned intel:1; size_t zin; size_t zout; @@ -233,6 +234,8 @@ static ngx_str_t ngx_http_gzip_ratio = static ngx_http_output_header_filter_pt ngx_http_next_header_filter; static ngx_http_output_body_filter_pt ngx_http_next_body_filter; +static ngx_uint_t ngx_http_gzip_assume_intel; + static ngx_int_t ngx_http_gzip_header_filter(ngx_http_request_t *r) @@ -527,7 +530,27 @@ ngx_http_gzip_filter_memory(ngx_http_req * *) 5920 bytes on amd64 and sparc64 */ - ctx->allocated = 8192 + (1 << (wbits + 2)) + (1 << (memlevel + 9)); + if (!ngx_http_gzip_assume_intel) { + ctx->allocated = 8192 + (1 << (wbits + 2)) + (1 << (memlevel + 9)); + + } else { + /* + * A zlib variant from Intel, https://github.com/jtkukunas/zlib. + * It can force window bits to 13 for fast compression level, + * on processors with SSE 4.2 it uses 64K hash instead of scaling + * it from the specified memory level, and also introduces + * 16-byte padding in one out of the two window-sized buffers. + */ + + if (conf->level == 1) { + wbits = ngx_max(wbits, 13); + } + + ctx->allocated = 8192 + 16 + (1 << (wbits + 2)) + + (1 << (ngx_max(memlevel, 8) + 8)) + + (1 << (memlevel + 8)); + ctx->intel = 1; + } } @@ -1003,7 +1026,7 @@ ngx_http_gzip_filter_alloc(void *opaque, alloc = items * size; - if (alloc % 512 != 0 && alloc < 8192) { + if (items == 1 && alloc % 512 != 0 && alloc < 8192) { /* * The zlib deflate_state allocation, it takes about 6K, @@ -1025,9 +1048,14 @@ ngx_http_gzip_filter_alloc(void *opaque, return p; } - ngx_log_error(NGX_LOG_ALERT, ctx->request->connection->log, 0, - "gzip filter failed to use preallocated memory: %ud of %ui", - items * size, ctx->allocated); + if (ctx->intel) { + ngx_log_error(NGX_LOG_ALERT, ctx->request->connection->log, 0, + "gzip filter failed to use preallocated memory: " + "%ud of %ui", items * size, ctx->allocated); + + } else { + ngx_http_gzip_assume_intel = 1; + } p = ngx_palloc(ctx->request->pool, items * size); From mdounin at mdounin.ru Mon Nov 20 14:37:36 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Nov 2017 14:37:36 +0000 Subject: [nginx] Fixed worker_shutdown_timeout in various cases. Message-ID: details: http://hg.nginx.org/nginx/rev/9c29644f6d03 branches: changeset: 7156:9c29644f6d03 user: Maxim Dounin date: Mon Nov 20 16:31:07 2017 +0300 description: Fixed worker_shutdown_timeout in various cases. The ngx_http_upstream_process_upgraded() did not handle c->close request, and upgraded connections do not use the write filter. 
As a result, worker_shutdown_timeout did not affect upgraded connections (ticket #1419). Fix is to handle c->close in the ngx_http_request_handler() function, thus covering most of the possible cases in http handling. Additionally, mail proxying did not handle neither c->close nor c->error, and thus worker_shutdown_timeout did not work for mail connections. Fix is to add c->close handling to ngx_mail_proxy_handler(). Also, added explicit handling of c->close to stream proxy, ngx_stream_proxy_process_connection(). This improves worker_shutdown_timeout handling in stream, it will no longer wait for some data being transferred in a connection before closing it, and will also provide appropriate logging at the "info" level. diffstat: src/http/ngx_http_request.c | 7 +++++++ src/mail/ngx_mail_proxy_module.c | 7 +++++-- src/stream/ngx_stream_proxy_module.c | 6 ++++++ 3 files changed, 18 insertions(+), 2 deletions(-) diffs (52 lines): diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2225,6 +2225,13 @@ ngx_http_request_handler(ngx_event_t *ev ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, "http run request: \"%V?%V\"", &r->uri, &r->args); + if (c->close) { + r->main->count++; + ngx_http_terminate_request(r, 0); + ngx_http_run_posted_requests(c); + return; + } + if (ev->delayed && ev->timedout) { ev->delayed = 0; ev->timedout = 0; diff --git a/src/mail/ngx_mail_proxy_module.c b/src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c +++ b/src/mail/ngx_mail_proxy_module.c @@ -882,10 +882,13 @@ ngx_mail_proxy_handler(ngx_event_t *ev) c = ev->data; s = c->data; - if (ev->timedout) { + if (ev->timedout || c->close) { c->log->action = "proxying"; - if (c == s->connection) { + if (c->close) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, "shutdown timeout"); + + } else if (c == s->connection) { ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out"); c->timedout = 1; diff --git a/src/stream/ngx_stream_proxy_module.c b/src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c +++ b/src/stream/ngx_stream_proxy_module.c @@ -1290,6 +1290,12 @@ ngx_stream_proxy_process_connection(ngx_ s = c->data; u = s->upstream; + if (c->close) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, "shutdown timeout"); + ngx_stream_proxy_finalize(s, NGX_STREAM_OK); + return; + } + c = s->connection; pc = u->peer.connection; From xeioex at nginx.com Mon Nov 20 16:25:31 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 20 Nov 2017 16:25:31 +0000 Subject: [njs] Fixing Coverity warnings related to close(). Message-ID: details: http://hg.nginx.org/njs/rev/e51a848edba3 branches: changeset: 429:e51a848edba3 user: Dmitry Volyntsev date: Mon Nov 20 19:24:56 2017 +0300 description: Fixing Coverity warnings related to close(). Coverity assumes that open() can normally return 0. 
diffstat: njs/njs_fs.c | 24 ++++++++++++------------ 1 files changed, 12 insertions(+), 12 deletions(-) diffs (69 lines): diff -r 7ada5170b7bb -r e51a848edba3 njs/njs_fs.c --- a/njs/njs_fs.c Mon Nov 20 19:24:56 2017 +0300 +++ b/njs/njs_fs.c Mon Nov 20 19:24:56 2017 +0300 @@ -277,8 +277,8 @@ njs_fs_read_file(njs_vm_t *vm, njs_value done: - if (fd > 0) { - close(fd); + if (fd != -1) { + (void) close(fd); } if (description != 0) { @@ -305,8 +305,8 @@ done: memory_error: - if (fd > 0) { - close(fd); + if (fd != -1) { + (void) close(fd); } njs_exception_memory_error(vm); @@ -476,8 +476,8 @@ njs_fs_read_file_sync(njs_vm_t *vm, njs_ done: - if (fd > 0) { - close(fd); + if (fd != -1) { + (void) close(fd); } if (description != 0) { @@ -491,8 +491,8 @@ done: memory_error: - if (fd > 0) { - close(fd); + if (fd != -1) { + (void) close(fd); } njs_exception_memory_error(vm); @@ -696,8 +696,8 @@ static njs_ret_t njs_fs_write_file_inter done: - if (fd > 0) { - close(fd); + if (fd != -1) { + (void) close(fd); } if (description != 0) { @@ -868,8 +868,8 @@ njs_fs_write_file_sync_internal(njs_vm_t done: - if (fd > 0) { - close(fd); + if (fd != -1) { + (void) close(fd); } if (description != 0) { From xeioex at nginx.com Mon Nov 20 16:25:31 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 20 Nov 2017 16:25:31 +0000 Subject: [njs] Fixed a typo in njs interactive test. Message-ID: details: http://hg.nginx.org/njs/rev/7ada5170b7bb branches: changeset: 428:7ada5170b7bb user: Dmitry Volyntsev date: Mon Nov 20 19:24:56 2017 +0300 description: Fixed a typo in njs interactive test. diffstat: njs/test/njs_interactive_test.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 4fc65a23bcfc -r 7ada5170b7bb njs/test/njs_interactive_test.c --- a/njs/test/njs_interactive_test.c Mon Nov 20 19:24:55 2017 +0300 +++ b/njs/test/njs_interactive_test.c Mon Nov 20 19:24:56 2017 +0300 @@ -188,7 +188,7 @@ static njs_interactive_test_t njs_test[ { nxt_string("var o = { toString: function() { return [1] } }" ENTER "o" ENTER), nxt_string("TypeError\n" - "at main\n") }, + " at main (native)\n") }, }; From xeioex at nginx.com Mon Nov 20 16:25:31 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 20 Nov 2017 16:25:31 +0000 Subject: [njs] MemoryError reimplemented without its own prototype. Message-ID: details: http://hg.nginx.org/njs/rev/5f619bcb0e7d branches: changeset: 430:5f619bcb0e7d user: Dmitry Volyntsev date: Mon Nov 20 19:24:58 2017 +0300 description: MemoryError reimplemented without its own prototype. MemoryError is a special preallocated immutable object. Its value type is NJS_OBJECT_INTERNAL_ERROR. Initially the object had its own prototype object. It introduced inconsistency between value types and prototype types, because some routines (for example, njs_object_prototype_to_string()) expect them to be pairwise aligned. 
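The user-visible part of this change, as reflected in the unit test update below: a MemoryError object still stringifies as "MemoryError" via the non-extensible check in njs_internal_error_prototype_to_string(), but the prototype it references is now the shared InternalError prototype. A one-line illustration (expected values taken from the unit tests):

    MemoryError.prototype.name    // now "InternalError", previously "MemoryError"
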
diffstat: njs/njs_builtin.c | 1 - njs/njs_error.c | 103 ++++++++++++++++++++++++---------------------- njs/njs_error.h | 1 - njs/njs_vm.h | 9 +-- njs/test/njs_unit_test.c | 5 +- 5 files changed, 58 insertions(+), 61 deletions(-) diffs (222 lines): diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_builtin.c --- a/njs/njs_builtin.c Mon Nov 20 19:24:56 2017 +0300 +++ b/njs/njs_builtin.c Mon Nov 20 19:24:58 2017 +0300 @@ -78,7 +78,6 @@ const njs_object_init_t *njs_prototype_ &njs_syntax_error_prototype_init, &njs_type_error_prototype_init, &njs_uri_error_prototype_init, - &njs_memory_error_prototype_init, }; diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_error.c --- a/njs/njs_error.c Mon Nov 20 19:24:56 2017 +0300 +++ b/njs/njs_error.c Mon Nov 20 19:24:58 2017 +0300 @@ -498,7 +498,7 @@ njs_set_memory_error(njs_vm_t *vm) nxt_lvlhsh_init(&object->hash); nxt_lvlhsh_init(&object->shared_hash); - object->__proto__ = &prototypes[NJS_PROTOTYPE_MEMORY_ERROR].object; + object->__proto__ = &prototypes[NJS_PROTOTYPE_INTERNAL_ERROR].object; object->type = NJS_OBJECT_INTERNAL_ERROR; object->shared = 1; @@ -532,6 +532,30 @@ njs_memory_error_constructor(njs_vm_t *v } +static njs_ret_t +njs_memory_error_prototype_create(njs_vm_t *vm, njs_value_t *value) +{ + int32_t index; + njs_value_t *proto; + njs_function_t *function; + + /* MemoryError has no its own prototype. */ + + index = NJS_PROTOTYPE_INTERNAL_ERROR; + + function = value->data.u.function; + proto = njs_property_prototype_create(vm, &function->object.hash, + &vm->prototypes[index].object); + if (proto == NULL) { + proto = (njs_value_t *) &njs_value_void; + } + + vm->retval = *proto; + + return NXT_OK; +} + + static const njs_object_prop_t njs_memory_error_constructor_properties[] = { /* MemoryError.name == "MemoryError". */ @@ -552,7 +576,7 @@ static const njs_object_prop_t njs_memo { .type = NJS_NATIVE_GETTER, .name = njs_string("prototype"), - .value = njs_native_getter(njs_object_prototype_create), + .value = njs_native_getter(njs_memory_error_prototype_create), }, }; @@ -701,6 +725,26 @@ const njs_object_init_t njs_eval_error_ }; +static njs_ret_t +njs_internal_error_prototype_to_string(njs_vm_t *vm, njs_value_t *args, + nxt_uint_t nargs, njs_index_t unused) +{ + if (nargs >= 1 && njs_is_object(&args[0])) { + + /* MemoryError is a nonextensible internal error. 
*/ + if (!args[0].data.u.object->extensible) { + static const njs_value_t name = njs_string("MemoryError"); + + vm->retval = name; + + return NJS_OK; + } + } + + return njs_error_prototype_to_string(vm, args, nargs, unused); +} + + static const njs_object_prop_t njs_internal_error_prototype_properties[] = { { @@ -708,6 +752,13 @@ static const njs_object_prop_t njs_inte .name = njs_string("name"), .value = njs_string("InternalError"), }, + + { + .type = NJS_METHOD, + .name = njs_string("toString"), + .value = njs_native_function(njs_internal_error_prototype_to_string, + 0, 0), + }, }; @@ -801,51 +852,3 @@ const njs_object_init_t njs_uri_error_p njs_uri_error_prototype_properties, nxt_nitems(njs_uri_error_prototype_properties), }; - - -static njs_ret_t -njs_memory_error_prototype_to_string(njs_vm_t *vm, njs_value_t *args, - nxt_uint_t nargs, njs_index_t unused) -{ - static const njs_value_t name = njs_string("MemoryError"); - - vm->retval = name; - - return NJS_OK; -} - - -static const njs_object_prop_t njs_memory_error_prototype_properties[] = -{ - { - .type = NJS_PROPERTY, - .name = njs_string("name"), - .value = njs_string("MemoryError"), - }, - - { - .type = NJS_PROPERTY, - .name = njs_string("message"), - .value = njs_string(""), - }, - - { - .type = NJS_METHOD, - .name = njs_string("valueOf"), - .value = njs_native_function(njs_error_prototype_value_of, 0, 0), - }, - - { - .type = NJS_METHOD, - .name = njs_string("toString"), - .value = njs_native_function(njs_memory_error_prototype_to_string, - 0, 0), - }, -}; - - -const njs_object_init_t njs_memory_error_prototype_init = { - nxt_string("MemoryError"), - njs_memory_error_prototype_properties, - nxt_nitems(njs_memory_error_prototype_properties), -}; diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_error.h --- a/njs/njs_error.h Mon Nov 20 19:24:56 2017 +0300 +++ b/njs/njs_error.h Mon Nov 20 19:24:58 2017 +0300 @@ -71,7 +71,6 @@ extern const njs_object_init_t njs_ref_ extern const njs_object_init_t njs_syntax_error_prototype_init; extern const njs_object_init_t njs_type_error_prototype_init; extern const njs_object_init_t njs_uri_error_prototype_init; -extern const njs_object_init_t njs_memory_error_prototype_init; #endif /* _NJS_BOOLEAN_H_INCLUDED_ */ diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_vm.h --- a/njs/njs_vm.h Mon Nov 20 19:24:56 2017 +0300 +++ b/njs/njs_vm.h Mon Nov 20 19:24:58 2017 +0300 @@ -799,8 +799,7 @@ enum njs_prototypes_e { NJS_PROTOTYPE_SYNTAX_ERROR, NJS_PROTOTYPE_TYPE_ERROR, NJS_PROTOTYPE_URI_ERROR, - NJS_PROTOTYPE_MEMORY_ERROR, -#define NJS_PROTOTYPE_MAX (NJS_PROTOTYPE_MEMORY_ERROR + 1) +#define NJS_PROTOTYPE_MAX (NJS_PROTOTYPE_URI_ERROR + 1) }; @@ -833,7 +832,8 @@ enum njs_constructor_e { NJS_CONSTRUCTOR_SYNTAX_ERROR = NJS_PROTOTYPE_SYNTAX_ERROR, NJS_CONSTRUCTOR_TYPE_ERROR = NJS_PROTOTYPE_TYPE_ERROR, NJS_CONSTRUCTOR_URI_ERROR = NJS_PROTOTYPE_URI_ERROR, - NJS_CONSTRUCTOR_MEMORY_ERROR = NJS_PROTOTYPE_MEMORY_ERROR, + /* MemoryError has no its own prototype. */ + NJS_CONSTRUCTOR_MEMORY_ERROR, #define NJS_CONSTRUCTOR_MAX (NJS_CONSTRUCTOR_MEMORY_ERROR + 1) }; @@ -975,8 +975,7 @@ struct njs_vm_s { /* * MemoryError is statically allocated immutable Error object - * with the generic type NJS_OBJECT_INTERNAL_ERROR but its own prototype - * object NJS_PROTOTYPE_MEMORY_ERROR. + * with the generic type NJS_OBJECT_INTERNAL_ERROR. 
*/ njs_object_t memory_error_object; diff -r e51a848edba3 -r 5f619bcb0e7d njs/test/njs_unit_test.c --- a/njs/test/njs_unit_test.c Mon Nov 20 19:24:56 2017 +0300 +++ b/njs/test/njs_unit_test.c Mon Nov 20 19:24:58 2017 +0300 @@ -5291,9 +5291,6 @@ static njs_unit_test_t njs_test[] = { nxt_string("URIError('e').name + ': ' + URIError('e').message"), nxt_string("URIError: e") }, - { nxt_string("MemoryError('e').name + ': ' + MemoryError('e').message"), - nxt_string("MemoryError: ") }, - { nxt_string("var e = EvalError('e'); e.name = 'E'; e"), nxt_string("E: e") }, @@ -5342,7 +5339,7 @@ static njs_unit_test_t njs_test[] = nxt_string("URIError") }, { nxt_string("MemoryError.prototype.name"), - nxt_string("MemoryError") }, + nxt_string("InternalError") }, { nxt_string("EvalError.prototype.message"), nxt_string("") }, From xeioex at nginx.com Mon Nov 20 16:25:31 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 20 Nov 2017 16:25:31 +0000 Subject: [njs] Fixed expect file tests. Message-ID: details: http://hg.nginx.org/njs/rev/4fc65a23bcfc branches: changeset: 427:4fc65a23bcfc user: Dmitry Volyntsev date: Mon Nov 20 19:24:55 2017 +0300 description: Fixed expect file tests. Using current directory for temporary files because /tmp is not available for writing in BB environment. diffstat: njs/test/njs_expect_test.exp | 96 ++++++++++++++++++++++--------------------- 1 files changed, 49 insertions(+), 47 deletions(-) diffs (262 lines): diff -r 5c6aa60224cb -r 4fc65a23bcfc njs/test/njs_expect_test.exp --- a/njs/test/njs_expect_test.exp Fri Nov 17 18:55:07 2017 +0300 +++ b/njs/test/njs_expect_test.exp Mon Nov 20 19:24:55 2017 +0300 @@ -185,11 +185,11 @@ njs_test { # require('fs') -set file [open /tmp/njs_test_file w] +set file [open njs_test_file w] puts -nonewline $file "??Z?" 
flush $file -exec /bin/echo -ne {\x80\x80} > /tmp/njs_test_file_non_utf8 +exec /bin/echo -ne {\x80\x80} > njs_test_file_non_utf8 njs_test { {"var fs = require('fs')\r\n" @@ -203,35 +203,37 @@ njs_test { njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFile('/tmp/njs_test_file', 'utf8', function (e, data) {console.log(data[2]+data.length)})\r\n" + {"fs.readFile('njs_test_file', 'utf8', function (e, data) {console.log(data[2]+data.length)})\r\n" "Z4\r\nundefined\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFile('/tmp/njs_test_file', function (e, data) {console.log(data[4]+data.length)})\r\n" + {"fs.readFile('njs_test_file', function (e, data) {console.log(data[4]+data.length)})\r\n" "Z7\r\nundefined\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFile('/tmp/njs_test_file', {encoding:'utf8',flag:'r+'}, function (e, data) {console.log(data)})\r\n" + {"fs.readFile('njs_test_file', {encoding:'utf8',flag:'r+'}, function (e, data) {console.log(data)})\r\n" "??Z?\r\nundefined\r\n>> "} } +exec rm -fr njs_unknown_path + +njs_test { + {"var fs = require('fs'); \r\n" + "undefined\r\n>> "} + {"fs.readFile('njs_unknown_path', 'utf8', function (e) {console.log(JSON.stringify(e))})\r\n" + "{\"errno\":2,\"path\":\"njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "} +} + njs_test { {"var fs = require('fs'); \r\n" "undefined\r\n>> "} - {"fs.readFile('/tmp/njs_unknown_path', 'utf8', function (e) {console.log(JSON.stringify(e))})\r\n" - "{\"errno\":2,\"path\":\"/tmp/njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "} -} - -njs_test { - {"var fs = require('fs'); \r\n" - "undefined\r\n>> "} - {"fs.readFile('/tmp/njs_unknown_path', {encoding:'utf8', flag:'r+'}, function (e) {console.log(e)})\r\n" + {"fs.readFile('njs_unknown_path', {encoding:'utf8', flag:'r+'}, function (e) {console.log(e)})\r\n" "Error: No such file or directory\r\nundefined\r\n>> "} } @@ -240,79 +242,79 @@ njs_test { njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file', 'utf8')[2]\r\n" + {"fs.readFileSync('njs_test_file', 'utf8')[2]\r\n" "Z\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file')[4]\r\n" + {"fs.readFileSync('njs_test_file')[4]\r\n" "Z\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file', {encoding:'utf8',flag:'r+'})\r\n" + {"fs.readFileSync('njs_test_file', {encoding:'utf8',flag:'r+'})\r\n" "??Z?\r\n>> "} } njs_test { {"var fs = require('fs'); \r\n" "undefined\r\n>> "} - {"try { fs.readFileSync('/tmp/njs_unknown_path')} catch (e) {console.log(JSON.stringify(e))}\r\n" - "{\"errno\":2,\"path\":\"/tmp/njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "} + {"try { fs.readFileSync('njs_unknown_path')} catch (e) {console.log(JSON.stringify(e))}\r\n" + "{\"errno\":2,\"path\":\"njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file_non_utf8').charCodeAt(1)\r\n" + {"fs.readFileSync('njs_test_file_non_utf8').charCodeAt(1)\r\n" "128"} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file_non_utf8', 'utf8')\r\n" + {"fs.readFileSync('njs_test_file_non_utf8', 'utf8')\r\n" "Error: Non-UTF8 file, convertion is not implemented"} } # require('fs').writeFile() -exec rm -fr /tmp/njs_test_file2 +exec rm 
-fr njs_test_file2 njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"function h1(e) {if (e) {throw e}; console.log(fs.readFileSync('/tmp/njs_test_file2'))}\r\n" + {"function h1(e) {if (e) {throw e}; console.log(fs.readFileSync('njs_test_file2'))}\r\n" "undefined\r\n>> "} - {"fs.writeFile('/tmp/njs_test_file2', 'ABC', h1)\r\n" + {"fs.writeFile('njs_test_file2', 'ABC', h1)\r\n" "ABC\r\nundefined\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFile('/tmp/njs_test_file2', 'ABC', 'utf8', function (e) { if (e) {throw e}; console.log(fs.readFileSync('/tmp/njs_test_file2'))})\r\n" + {"fs.writeFile('njs_test_file2', 'ABC', 'utf8', function (e) { if (e) {throw e}; console.log(fs.readFileSync('njs_test_file2'))})\r\n" "ABC\r\nundefined\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFile('/tmp/njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666}, function (e) { if (e) {throw e}; console.log(fs.readFileSync('/tmp/njs_test_file2'))})\r\n" + {"fs.writeFile('njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666}, function (e) { if (e) {throw e}; console.log(fs.readFileSync('njs_test_file2'))})\r\n" "ABC\r\nundefined\r\n>> "} } -exec rm -fr /tmp/njs_wo_file +exec rm -fr njs_wo_file njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFile('/tmp/njs_wo_file', 'ABC', {mode:0o222}, function (e) {console.log(fs.readFileSync('/tmp/njs_wo_file'))})\r\n" + {"fs.writeFile('njs_wo_file', 'ABC', {mode:0o222}, function (e) {console.log(fs.readFileSync('njs_wo_file'))})\r\n" "Error: Permission denied"} } @@ -325,81 +327,81 @@ njs_test { # require('fs').writeFileSync() -exec rm -fr /tmp/njs_test_file2 +exec rm -fr njs_test_file2 njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC')\r\n" + {"fs.writeFileSync('njs_test_file2', 'ABC')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file2')\r\n" + {"fs.readFileSync('njs_test_file2')\r\n" "ABC\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC', 'utf8')\r\n" + {"fs.writeFileSync('njs_test_file2', 'ABC', 'utf8')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file2')\r\n" + {"fs.readFileSync('njs_test_file2')\r\n" "ABC\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC')\r\n" + {"fs.writeFileSync('njs_test_file2', 'ABC')\r\n" "undefined\r\n>> "} - {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC')\r\n" + {"fs.writeFileSync('njs_test_file2', 'ABC')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file2')\r\n" + {"fs.readFileSync('njs_test_file2')\r\n" "ABC\r\n>> "} } njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666})\r\n" + {"fs.writeFileSync('njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666})\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file2')\r\n" + {"fs.readFileSync('njs_test_file2')\r\n" "ABC\r\n>> "} } -exec rm -fr /tmp/njs_wo_file +exec rm -fr njs_wo_file njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.writeFileSync('/tmp/njs_wo_file', 'ABC', {mode:0o222}); fs.readFileSync('/tmp/njs_wo_file')\r\n" + {"fs.writeFileSync('njs_wo_file', 'ABC', {mode:0o222}); fs.readFileSync('njs_wo_file')\r\n" "Error: Permission denied"} } # require('fs').appendFile() -exec rm -fr /tmp/njs_test_file2 +exec rm -fr njs_test_file2 
njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"function h1(e) {console.log(fs.readFileSync('/tmp/njs_test_file2'))}\r\n" + {"function h1(e) {console.log(fs.readFileSync('njs_test_file2'))}\r\n" "undefined\r\n>> "} - {"function h2(e) {fs.appendFile('/tmp/njs_test_file2', 'ABC', h1)}\r\n" + {"function h2(e) {fs.appendFile('njs_test_file2', 'ABC', h1)}\r\n" "undefined\r\n>> "} - {"fs.appendFile('/tmp/njs_test_file2', 'ABC', h2)\r\n" + {"fs.appendFile('njs_test_file2', 'ABC', h2)\r\n" "ABCABC\r\nundefined\r\n>> "} } # require('fs').appendFileSync() -exec rm -fr /tmp/njs_test_file2 +exec rm -fr njs_test_file2 njs_test { {"var fs = require('fs')\r\n" "undefined\r\n>> "} - {"fs.appendFileSync('/tmp/njs_test_file2', 'ABC')\r\n" + {"fs.appendFileSync('njs_test_file2', 'ABC')\r\n" "undefined\r\n>> "} - {"fs.appendFileSync('/tmp/njs_test_file2', 'ABC')\r\n" + {"fs.appendFileSync('njs_test_file2', 'ABC')\r\n" "undefined\r\n>> "} - {"fs.readFileSync('/tmp/njs_test_file2')\r\n" + {"fs.readFileSync('njs_test_file2')\r\n" "ABCABC\r\n>> "} } From xeioex at nginx.com Mon Nov 20 17:09:30 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 20 Nov 2017 17:09:30 +0000 Subject: [njs] Added tag 0.1.15 for changeset 215ca47b9167 Message-ID: details: http://hg.nginx.org/njs/rev/5eb2620a9bec branches: changeset: 432:5eb2620a9bec user: Dmitry Volyntsev date: Mon Nov 20 20:08:56 2017 +0300 description: Added tag 0.1.15 for changeset 215ca47b9167 diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 215ca47b9167 -r 5eb2620a9bec .hgtags --- a/.hgtags Mon Nov 20 20:07:15 2017 +0300 +++ b/.hgtags Mon Nov 20 20:08:56 2017 +0300 @@ -13,3 +13,4 @@ fc5df33f4e6b02a673daf3728ff690fb1e09b95e c07b060396be3622ca97b037a86076b61b850847 0.1.12 d548b78eb881ca799aa6fc8ba459d076f7db5ac8 0.1.13 d89d06dc638e78f8635c0bfbcd02469ac1a08748 0.1.14 +215ca47b9167d513fd58ac88de97659377e45275 0.1.15 From xeioex at nginx.com Mon Nov 20 17:09:30 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 20 Nov 2017 17:09:30 +0000 Subject: [njs] Version 0.1.15. Message-ID: details: http://hg.nginx.org/njs/rev/215ca47b9167 branches: changeset: 431:215ca47b9167 user: Dmitry Volyntsev date: Mon Nov 20 20:07:15 2017 +0300 description: Version 0.1.15. diffstat: CHANGES | 15 +++++++++++++++ Makefile | 2 +- 2 files changed, 16 insertions(+), 1 deletions(-) diffs (32 lines): diff -r 5f619bcb0e7d -r 215ca47b9167 CHANGES --- a/CHANGES Mon Nov 20 19:24:58 2017 +0300 +++ b/CHANGES Mon Nov 20 20:07:15 2017 +0300 @@ -1,3 +1,18 @@ + +Changes with nJScript 0.1.15 20 Nov 2017 + + *) Feature: Error, EvalError, InternalError, RangeError, + ReferenceError, SyntaxError, TypeError, URIError objects. + + *) Feature: octal literals support. + + *) Feature: File system access fs.readFile(), fs.readFileSync(), + fs.appendFile(), fs.appendFileSync(), fs.writeFile(), + fs.writeFileSync() methods. + + *) Feature: nginx modules print backtrace on exception. + + *) Bugfix: miscellaneous bugs have been fixed. Changes with nJScript 0.1.14 09 Oct 2017 diff -r 5f619bcb0e7d -r 215ca47b9167 Makefile --- a/Makefile Mon Nov 20 19:24:58 2017 +0300 +++ b/Makefile Mon Nov 20 20:07:15 2017 +0300 @@ -1,5 +1,5 @@ -NJS_VER = 0.1.14 +NJS_VER = 0.1.15 NXT_LIB = nxt From mdounin at mdounin.ru Tue Nov 21 15:15:57 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Nov 2017 15:15:57 +0000 Subject: [nginx] Updated OpenSSL used for win32 builds. 
Message-ID: details: http://hg.nginx.org/nginx/rev/1af00446f23e branches: changeset: 7157:1af00446f23e user: Maxim Dounin date: Tue Nov 21 17:32:12 2017 +0300 description: Updated OpenSSL used for win32 builds. diffstat: misc/GNUmakefile | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -6,7 +6,7 @@ TEMP = tmp CC = cl OBJS = objs.msvc8 -OPENSSL = openssl-1.0.2l +OPENSSL = openssl-1.0.2m ZLIB = zlib-1.2.11 PCRE = pcre-8.41 From mdounin at mdounin.ru Tue Nov 21 15:15:58 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Nov 2017 15:15:58 +0000 Subject: [nginx] nginx-1.13.7-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/47cca243d0ed branches: changeset: 7158:47cca243d0ed user: Maxim Dounin date: Tue Nov 21 18:09:43 2017 +0300 description: nginx-1.13.7-RELEASE diffstat: docs/xml/nginx/changes.xml | 83 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 83 insertions(+), 0 deletions(-) diffs (93 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,89 @@ + + + + +? ?????????? $upstream_status. + + +in the $upstream_status variable. + + + + + +? ??????? ???????? ??? ????????? segmentation fault, +???? ?????? ????????? ????? "101 Switching Protocols" ?? ?????????. + + +a segmentation fault might occur in a worker process +if a backend returned a "101 Switching Protocols" response to a subrequest. + + + + + +???? ??? ???????????????? ????????? ?????? ???? ??????????? ?????? +? ???????????????? ??????????? ????????, +?? ? ??????? ???????? ?????????? segmentation fault. + + +a segmentation fault occurred in a master process +if a shared memory zone size was changed during a reconfiguration +and the reconfiguration failed. + + + + + +? ?????? ngx_http_fastcgi_module. + + +in the ngx_http_fastcgi_module. + + + + + +nginx ????????? ?????? 500, +???? ? ????????? xslt_stylesheet +???? ?????? ????????? ??? ????????????? ??????????. + + +nginx returned the 500 error +if parameters without variables were specified +in the "xslt_stylesheet" directive. + + + + + +??? ????????????? ???????? ?????????? zlib ?? Intel +? ??? ???????? ????????? "gzip filter failed to use preallocated memory". + + +"gzip filter failed to use preallocated memory" alerts appeared in logs +when using a zlib library variant from Intel. + + + + + +????????? worker_shutdown_timeout ?? ???????? +??? ????????????? ????????? ??????-??????? +? ??? ????????????? WebSocket-??????????. + + +the "worker_shutdown_timeout" directive did not work +when using mail proxy and when proxying WebSocket connections. 
+ + + + + + From mdounin at mdounin.ru Tue Nov 21 15:16:00 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Nov 2017 15:16:00 +0000 Subject: [nginx] release-1.13.7 tag Message-ID: details: http://hg.nginx.org/nginx/rev/679ea950eae9 branches: changeset: 7159:679ea950eae9 user: Maxim Dounin date: Tue Nov 21 18:09:44 2017 +0300 description: release-1.13.7 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -419,3 +419,4 @@ 8457ce87640f9bfe6221c4ac4466ced20e03bebe bbc642c813c829963ce8197c0ca237ab7601f3d4 release-1.13.4 0d45b4cf7c2e4e626a5a16e1fe604402ace1cea5 release-1.13.5 f87da7d9ca02b8ced4caa6c5eb9013ccd47b0117 release-1.13.6 +47cca243d0ed39bf5dcb9859184affc958b79b6f release-1.13.7 From hucong.c at foxmail.com Tue Nov 21 15:54:48 2017 From: hucong.c at foxmail.com (=?utf-8?B?6IOh6IGqIChodWNjKQ==?=) Date: Tue, 21 Nov 2017 23:54:48 +0800 Subject: [patch-1] Range filter: support multiple ranges. In-Reply-To: <20171114165703.GX26836@mdounin.ru> References: <20171114165703.GX26836@mdounin.ru> Message-ID: Hi, After some attempts, I found it is still too hard for me if the request ranges in no particular order. Looking forward to your code. Anyway, under your guidance, my changes are as follows: # HG changeset patch # User hucongcong # Date 1510309868 -28800 # Fri Nov 10 18:31:08 2017 +0800 # Node ID 5c327973a284849a18c042fa6e7e191268b94bac # Parent 32f83fe5747b55ef341595b18069bee3891874d0 Range filter: better support for multipart ranges. Introducing support for multipart ranges if the response body is not in the single buffer as long as requested ranges do not overlap and properly ordered. diff -r 32f83fe5747b -r 5c327973a284 src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800 @@ -54,6 +54,7 @@ typedef struct { typedef struct { off_t offset; + ngx_uint_t index; ngx_str_t boundary_header; ngx_array_t ranges; } ngx_http_range_filter_ctx_t; @@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx); static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); @@ -222,6 +225,7 @@ parse: return NGX_ERROR; } + ctx->index = (ngx_uint_t) -1; ctx->offset = r->headers_out.content_offset; ranges = r->single_range ? 
1 : clcf->max_ranges; @@ -270,9 +274,8 @@ ngx_http_range_parse(ngx_http_request_t ngx_uint_t ranges) { u_char *p; - off_t start, end, size, content_length, cutoff, - cutlim; - ngx_uint_t suffix; + off_t start, end, content_length, cutoff, cutlim; + ngx_uint_t suffix, descending; ngx_http_range_t *range; ngx_http_range_filter_ctx_t *mctx; @@ -281,6 +284,7 @@ ngx_http_range_parse(ngx_http_request_t ngx_http_range_body_filter_module); if (mctx) { ctx->ranges = mctx->ranges; + ctx->boundary_header = mctx->boundary_header; return NGX_OK; } } @@ -292,7 +296,8 @@ ngx_http_range_parse(ngx_http_request_t } p = r->headers_in.range->value.data + 6; - size = 0; + range = NULL; + descending = 0; content_length = r->headers_out.content_length_n; cutoff = NGX_MAX_OFF_T_VALUE / 10; @@ -369,6 +374,11 @@ ngx_http_range_parse(ngx_http_request_t found: if (start < end) { + + if (range && start < range->end) { + descending++; + } + range = ngx_array_push(&ctx->ranges); if (range == NULL) { return NGX_ERROR; @@ -377,16 +387,6 @@ ngx_http_range_parse(ngx_http_request_t range->start = start; range->end = end; - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { - return NGX_HTTP_RANGE_NOT_SATISFIABLE; - } - - size += end - start; - - if (ranges-- == 0) { - return NGX_DECLINED; - } - } else if (start == 0) { return NGX_DECLINED; } @@ -400,7 +400,7 @@ ngx_http_range_parse(ngx_http_request_t return NGX_HTTP_RANGE_NOT_SATISFIABLE; } - if (size > content_length) { + if (ctx->ranges.nelts > ranges || descending) { return NGX_DECLINED; } @@ -469,6 +469,22 @@ ngx_http_range_multipart_header(ngx_http ngx_http_range_t *range; ngx_atomic_uint_t boundary; + if (ctx->index == (ngx_uint_t) -1) { + ctx->index = 0; + range = ctx->ranges.elts; + + for (i = 0; i < ctx->ranges.nelts; i++) { + if (ctx->offset < range[i].end) { + ctx->index = i; + break; + } + } + } + + if (r != r->main) { + return ngx_http_next_header_filter(r); + } + size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof(CRLF "Content-Type: ") - 1 + r->headers_out.content_type.len @@ -574,6 +590,7 @@ ngx_http_range_multipart_header(ngx_http } r->headers_out.content_length_n = len; + r->headers_out.content_offset = range[0].start; if (r->headers_out.content_length) { r->headers_out.content_length->hash = 0; @@ -639,63 +656,11 @@ ngx_http_range_body_filter(ngx_http_requ return ngx_http_range_singlepart_body(r, ctx, in); } - /* - * multipart ranges are supported only if whole body is in a single buffer - */ - - if (ngx_buf_special(in->buf)) { - return ngx_http_next_body_filter(r, in); - } - - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { - return NGX_ERROR; - } - return ngx_http_range_multipart_body(r, ctx, in); } static ngx_int_t -ngx_http_range_test_overlapped(ngx_http_request_t *r, - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) -{ - off_t start, last; - ngx_buf_t *buf; - ngx_uint_t i; - ngx_http_range_t *range; - - if (ctx->offset) { - goto overlapped; - } - - buf = in->buf; - - if (!buf->last_buf) { - start = ctx->offset; - last = ctx->offset + ngx_buf_size(buf); - - range = ctx->ranges.elts; - for (i = 0; i < ctx->ranges.nelts; i++) { - if (start > range[i].start || last < range[i].end) { - goto overlapped; - } - } - } - - ctx->offset = ngx_buf_size(buf); - - return NGX_OK; - -overlapped: - - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "range in overlapped buffers"); - - return NGX_ERROR; -} - - -static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { @@ -786,96 
+751,206 @@ static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) { - ngx_buf_t *b, *buf; - ngx_uint_t i; - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; - ngx_http_range_t *range; + off_t start, last; + ngx_buf_t *buf, *b; + ngx_chain_t *out, *cl, *tl, **ll; + ngx_http_range_t *range, *tail; + out = NULL; ll = &out; - buf = in->buf; + range = ctx->ranges.elts; - - for (i = 0; i < ctx->ranges.nelts; i++) { + tail = range + ctx->ranges.nelts; + range += ctx->index; - /* - * The boundary header of the range: - * CRLF - * "--0123456789" CRLF - * "Content-Type: image/jpeg" CRLF - * "Content-Range: bytes " - */ + for (cl = in; cl; cl = cl->next) { + + buf = cl->buf; - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + start = ctx->offset; + last = ctx->offset + ngx_buf_size(buf); + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body buf: %O-%O", start, last); + + if (ngx_buf_special(buf)) { + continue; } - b->memory = 1; - b->pos = ctx->boundary_header.data; - b->last = ctx->boundary_header.data + ctx->boundary_header.len; + if (range->end <= start || range->start >= last) { + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http range multipart body skip"); - hcl = ngx_alloc_chain_link(r->pool); - if (hcl == NULL) { - return NGX_ERROR; + if (buf->in_file) { + buf->file_pos = buf->file_last; + } + + buf->pos = buf->last; + buf->sync = 1; + + ctx->offset = last; + continue; } - hcl->buf = b; + if (range->start >= start) { + if (ngx_http_range_link_boundary_header(r, ctx, ll) != NGX_OK) { + return NGX_ERROR; + } + + ll = &(*ll)->next->next; - /* "SSSS-EEEE/TTTT" CRLF CRLF */ + if (buf->in_file) { + buf->file_pos += range->start - start; + } - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (ngx_buf_in_memory(buf)) { + buf->pos += (size_t) (range->start - start); + } + + start = range->start; } - b->temporary = 1; - b->pos = range[i].content_range.data; - b->last = range[i].content_range.data + range[i].content_range.len; + ctx->offset = last; + + if (range->end <= last) { + + if (range + 1 < tail && range[1].start < last) { + + ctx->offset = range->end; + + b = ngx_alloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } - rcl = ngx_alloc_chain_link(r->pool); - if (rcl == NULL) { - return NGX_ERROR; - } + tl = ngx_alloc_chain_link(r->pool); + if (tl == NULL) { + return NGX_ERROR; + } + + tl->buf = b; + tl->next = cl; + + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); + b->last_in_chain = 0; + b->last_buf = 0; + + if (buf->in_file) { + buf->file_pos += range->end - start; + } - rcl->buf = b; + if (ngx_buf_in_memory(buf)) { + buf->pos += (size_t) (range->end - start); + } + cl = tl; + buf = cl->buf; + } + + if (buf->in_file) { + buf->file_last -= last - range->end; + } - /* the range data */ + if (ngx_buf_in_memory(buf)) { + buf->last -= (size_t) (last - range->end); + } + + ctx->index++; + range++; - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; + if (range == tail) { + *ll = cl; + ll = &cl->next; + + if (ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) { + return NGX_ERROR; + } + + break; + } } - b->in_file = buf->in_file; - b->temporary = buf->temporary; - b->memory = buf->memory; - b->mmap = buf->mmap; - b->file = buf->file; + *ll = cl; + ll = &cl->next; + } + + if (out == NULL) { + return NGX_OK; + } + + return ngx_http_next_body_filter(r, out); +} + - if (buf->in_file) { - b->file_pos = buf->file_pos + 
range[i].start; - b->file_last = buf->file_pos + range[i].end; - } +static ngx_int_t +ngx_http_range_link_boundary_header(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl, *rcl; + ngx_http_range_t *range; + + /* + * The boundary header of the range: + * CRLF + * "--0123456789" CRLF + * "Content-Type: image/jpeg" CRLF + * "Content-Range: bytes " + */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } + + b->memory = 1; + b->pos = ctx->boundary_header.data; + b->last = ctx->boundary_header.data + ctx->boundary_header.len; - if (ngx_buf_in_memory(buf)) { - b->pos = buf->pos + (size_t) range[i].start; - b->last = buf->pos + (size_t) range[i].end; - } + hcl = ngx_alloc_chain_link(r->pool); + if (hcl == NULL) { + return NGX_ERROR; + } + + hcl->buf = b; + + + /* "SSSS-EEEE/TTTT" CRLF CRLF */ + + b = ngx_calloc_buf(r->pool); + if (b == NULL) { + return NGX_ERROR; + } + + range = ctx->ranges.elts; + b->temporary = 1; + b->pos = range[ctx->index].content_range.data; + b->last = range[ctx->index].content_range.data + + range[ctx->index].content_range.len; - dcl = ngx_alloc_chain_link(r->pool); - if (dcl == NULL) { - return NGX_ERROR; - } + rcl = ngx_alloc_chain_link(r->pool); + if (rcl == NULL) { + return NGX_ERROR; + } + + rcl->buf = b; + + rcl->next = NULL; + hcl->next = rcl; + *ll = hcl; - dcl->buf = b; + return NGX_OK; +} + - *ll = hcl; - hcl->next = rcl; - rcl->next = dcl; - ll = &dcl->next; - } +static ngx_int_t +ngx_http_range_link_last_boundary(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) +{ + ngx_buf_t *b; + ngx_chain_t *hcl; /* the last boundary CRLF "--0123456789--" CRLF */ @@ -885,7 +960,8 @@ ngx_http_range_multipart_body(ngx_http_r } b->temporary = 1; - b->last_buf = 1; + b->last_in_chain = 1; + b->last_buf = (r == r->main) ? 1 : 0; b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1); @@ -904,11 +980,11 @@ ngx_http_range_multipart_body(ngx_http_r } hcl->buf = b; + hcl->next = NULL; - *ll = hcl; - return ngx_http_next_body_filter(r, out); + return NGX_OK; } ------------------ Original ------------------ From: "Maxim Dounin";; Send time: Wednesday, Nov 15, 2017 0:57 AM To: "nginx-devel"; Subject: Re: [patch-1] Range filter: support multiple ranges. Hello! On Fri, Nov 10, 2017 at 07:03:01PM +0800, ?? (hucc) wrote: > Hi, > > How about this as the first patch? > > # HG changeset patch > # User hucongcong > # Date 1510309868 -28800 > # Fri Nov 10 18:31:08 2017 +0800 > # Node ID c32fddd15a26b00f8f293f6b0d8762cd9f2bfbdb > # Parent 32f83fe5747b55ef341595b18069bee3891874d0 > Range filter: support for multipart response in wider range. > > Before the patch multipart ranges are supported only if whole body > is in a single buffer. Now, the limit is canceled. If there are no > overlapping ranges and all ranges list in ascending order, nginx > will return 206 with multipart response, otherwise return 200 (OK). Introducing support for multipart ranges if the response body is not in the single buffer as long as requested ranges do not overlap and properly ordered looks like a much better idea to me. That's basically what I have in mind as possible futher enhancement of the range filter if we'll ever need better support for multipart ranges. There are various questions about the patch itself though, see below. 
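For reference, the response that the multipart code path has to assemble looks roughly like this on the wire (schematic only; the boundary, content type and byte positions are the same illustrative values used in the module's own comments):

    HTTP/1.1 206 Partial Content
    Content-Type: multipart/byteranges; boundary=0123456789

    --0123456789
    Content-Type: image/jpeg
    Content-Range: bytes 0-99/10000

    (bytes 0..99 of the body)
    --0123456789
    Content-Type: image/jpeg
    Content-Range: bytes 500-599/10000

    (bytes 500..599 of the body)
    --0123456789--

When the whole body sits in a single buffer the filter can simply slice that buffer once per range; handling a body spread over several buffers means remembering the current offset and range index between calls, which is what ctx->offset and ctx->index in the patch above are for.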
> diff -r 32f83fe5747b -r c32fddd15a26 src/http/modules/ngx_http_range_filter_module.c > --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800 > @@ -54,6 +54,7 @@ typedef struct { > > typedef struct { > off_t offset; > + ngx_uint_t index; /* start with 1 */ > ngx_str_t boundary_header; > ngx_array_t ranges; > } ngx_http_range_filter_ctx_t; > @@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa > static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx); > static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); > -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); > +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); > > static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); > static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); > @@ -270,9 +273,8 @@ ngx_http_range_parse(ngx_http_request_t > ngx_uint_t ranges) > { > u_char *p; > - off_t start, end, size, content_length, cutoff, > - cutlim; > - ngx_uint_t suffix; > + off_t start, end, content_length, cutoff, cutlim; > + ngx_uint_t suffix, descending; > ngx_http_range_t *range; > ngx_http_range_filter_ctx_t *mctx; > > @@ -281,6 +283,7 @@ ngx_http_range_parse(ngx_http_request_t > ngx_http_range_body_filter_module); > if (mctx) { > ctx->ranges = mctx->ranges; > + ctx->boundary_header = ctx->boundary_header; > return NGX_OK; > } > } > @@ -292,7 +295,8 @@ ngx_http_range_parse(ngx_http_request_t > } > > p = r->headers_in.range->value.data + 6; > - size = 0; > + range = NULL; > + descending = 0; > content_length = r->headers_out.content_length_n; > > cutoff = NGX_MAX_OFF_T_VALUE / 10; > @@ -369,6 +373,11 @@ ngx_http_range_parse(ngx_http_request_t > found: > > if (start < end) { > + > + if (range && start < range->end) { > + descending++; > + } > + > range = ngx_array_push(&ctx->ranges); > if (range == NULL) { > return NGX_ERROR; > @@ -377,16 +386,6 @@ ngx_http_range_parse(ngx_http_request_t > range->start = start; > range->end = end; > > - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { > - return NGX_HTTP_RANGE_NOT_SATISFIABLE; > - } > - > - size += end - start; > - > - if (ranges-- == 0) { > - return NGX_DECLINED; > - } > - > } else if (start == 0) { > return NGX_DECLINED; > } > @@ -400,7 +399,7 @@ ngx_http_range_parse(ngx_http_request_t > return NGX_HTTP_RANGE_NOT_SATISFIABLE; > } > > - if (size > content_length) { > + if (ctx->ranges.nelts > ranges || descending) { > return NGX_DECLINED; > } This change basically disables support for non-ascending ranges. As previously suggested, this will break various legitimate use cases, and certainly this is not something we should do. 
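One concrete illustration (byte values invented): a client that wants the index stored at the tail of a file before the rest of it may legitimately send

    Range: bytes=9000000-9999999,0-8999999

that is, ranges in descending order; with the descending check above, such a request would no longer be answered with a multipart 206 and would instead fall back to a plain 200 carrying the full body.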
> > @@ -469,6 +468,10 @@ ngx_http_range_multipart_header(ngx_http > ngx_http_range_t *range; > ngx_atomic_uint_t boundary; > > + if (r != r->main) { > + return ngx_http_next_header_filter(r); > + } > + > size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof(CRLF "Content-Type: ") - 1 > + r->headers_out.content_type.len > @@ -570,10 +573,11 @@ ngx_http_range_multipart_header(ngx_http > - range[i].content_range.data; > > len += ctx->boundary_header.len + range[i].content_range.len > - + (range[i].end - range[i].start); > + + (range[i].end - range[i].start); This looks like an unrelated whitespace change. > } > > r->headers_out.content_length_n = len; > + r->headers_out.content_offset = range[0].start; > > if (r->headers_out.content_length) { > r->headers_out.content_length->hash = 0; > @@ -639,63 +643,15 @@ ngx_http_range_body_filter(ngx_http_requ > return ngx_http_range_singlepart_body(r, ctx, in); > } > > - /* > - * multipart ranges are supported only if whole body is in a single buffer > - */ > - > if (ngx_buf_special(in->buf)) { > return ngx_http_next_body_filter(r, in); > } The ngx_buf_special() check should not be needed here as long as ngx_http_range_multipart_body() is modified to properly support multiple buffers. > > - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { > - return NGX_ERROR; > - } > - > return ngx_http_range_multipart_body(r, ctx, in); > } > > > static ngx_int_t > -ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > -{ > - off_t start, last; > - ngx_buf_t *buf; > - ngx_uint_t i; > - ngx_http_range_t *range; > - > - if (ctx->offset) { > - goto overlapped; > - } > - > - buf = in->buf; > - > - if (!buf->last_buf) { > - start = ctx->offset; > - last = ctx->offset + ngx_buf_size(buf); > - > - range = ctx->ranges.elts; > - for (i = 0; i < ctx->ranges.nelts; i++) { > - if (start > range[i].start || last < range[i].end) { > - goto overlapped; > - } > - } > - } > - > - ctx->offset = ngx_buf_size(buf); > - > - return NGX_OK; > - > -overlapped: > - > - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, > - "range in overlapped buffers"); > - > - return NGX_ERROR; > -} > - > - > -static ngx_int_t > ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > @@ -786,96 +742,227 @@ static ngx_int_t > ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > - ngx_buf_t *b, *buf; > - ngx_uint_t i; > - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; > - ngx_http_range_t *range; > + off_t start, last, back; > + ngx_buf_t *buf, *b; > + ngx_uint_t i, finished; > + ngx_chain_t *out, *cl, *ncl, **ll; > + ngx_http_range_t *range, *tail; > > - ll = &out; > - buf = in->buf; > range = ctx->ranges.elts; > > - for (i = 0; i < ctx->ranges.nelts; i++) { > + if (!ctx->index) { > + for (i = 0; i < ctx->ranges.nelts; i++) { > + if (ctx->offset < range[i].end) { > + ctx->index = i + 1; > + break; > + } > + } > + } All this logic with using ctx->index as range index plus 1 looks counter-intuitive and unneeded. A much better options would be (in no particular order): - use a special value to mean "uninitialized", like -1; - always initialize ctx->index to 0 and move it futher to the next range once we see that ctx->offset is larger than range[i].end; - do proper initialization somewhere in ngx_http_range_header_filter() or ngx_http_range_multipart_header(). 
> + > + tail = range + ctx->ranges.nelts - 1; > + range += ctx->index - 1; > + > + out = NULL; > + ll = &out; > + finished = 0; > > - /* > - * The boundary header of the range: > - * CRLF > - * "--0123456789" CRLF > - * "Content-Type: image/jpeg" CRLF > - * "Content-Range: bytes " > - */ > + for (cl = in; cl; cl = cl->next) { > + > + buf = cl->buf; > + > + start = ctx->offset; > + last = ctx->offset + ngx_buf_size(buf); > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + ctx->offset = last; > + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body buf: %O-%O", start, last); > + > + if (ngx_buf_special(buf)) { > + *ll = cl; > + ll = &cl->next; > + continue; > } > > - b->memory = 1; > - b->pos = ctx->boundary_header.data; > - b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + if (range->end <= start || range->start >= last) { > + > + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body skip"); > > - hcl = ngx_alloc_chain_link(r->pool); > - if (hcl == NULL) { > - return NGX_ERROR; > + if (buf->in_file) { > + buf->file_pos = buf->file_last; > + } > + > + buf->pos = buf->last; > + buf->sync = 1; > + > + continue; Looking at this code I tend to think that our existing ngx_http_range_singlepart_body() implementation you've used as a reference is incorrect. It removes buffers from the original chain as passed to the filter - this can result in a buffer being lost from tracking by the module who owns the buffer, and a request hang if/when all available buffers will be lost. Instead, it should either preserve all existing chain links, or create a new chain. I'll take a look how to properly fix this. > } > > - hcl->buf = b; > + if (range->start >= start) { > > + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { > + return NGX_ERROR; > + } > > - /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + if (buf->in_file) { > + buf->file_pos += range->start - start; > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (ngx_buf_in_memory(buf)) { > + buf->pos += (size_t) (range->start - start); > + } > } > > - b->temporary = 1; > - b->pos = range[i].content_range.data; > - b->last = range[i].content_range.data + range[i].content_range.len; > + if (range->end <= last) { > + > + if (range < tail && range[1].start < last) { The "tail" name is not immediately obvious, and it might be better idea to name it differently. Also, range[1] looks strange when we are using range as a pointer and not array. Hopefully this test will be unneeded when code will be cleaned up to avoid moving ctx->offset backwards, see below. > + > + b = ngx_alloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > + > + ncl = ngx_alloc_chain_link(r->pool); > + if (ncl == NULL) { > + return NGX_ERROR; > + } Note: usual names for temporary chain links are "ln" and "tl". > > - rcl = ngx_alloc_chain_link(r->pool); > - if (rcl == NULL) { > - return NGX_ERROR; > - } > + ncl->buf = b; > + ncl->next = cl; > + > + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); > + b->last_in_chain = 0; > + b->last_buf = 0; > + > + back = last - range->end; > + ctx->offset -= back; This looks like a hack, there should be no need to adjust ctx->offset backwards. Instead, we should move ctx->offset only when we've done with a buffer. 
> + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body reuse buf: %O-%O", > + ctx->offset, ctx->offset + back); > > - rcl->buf = b; > + if (buf->in_file) { > + buf->file_pos = buf->file_last - back; > + } > + > + if (ngx_buf_in_memory(buf)) { > + buf->pos = buf->last - back; > + } > > + cl = ncl; > + buf = cl->buf; > + } > + > + if (buf->in_file) { > + buf->file_last -= last - range->end; > + } > > - /* the range data */ > + if (ngx_buf_in_memory(buf)) { > + buf->last -= (size_t) (last - range->end); > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (range == tail) { > + buf->last_buf = (r == r->main) ? 1 : 0; > + buf->last_in_chain = 1; > + *ll = cl; > + ll = &cl->next; > + > + finished = 1; It is not clear why to use the "finished" flag instead of adding the boundary here. > + break; > + } > + > + range++; > + ctx->index++; > } > > - b->in_file = buf->in_file; > - b->temporary = buf->temporary; > - b->memory = buf->memory; > - b->mmap = buf->mmap; > - b->file = buf->file; > + *ll = cl; > + ll = &cl->next; > + } > + > + if (out == NULL) { > + return NGX_OK; > + } > + > + *ll = NULL; > + > + if (finished > + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) > + { > + return NGX_ERROR; > + } > + > + return ngx_http_next_body_filter(r, out); > +} > + > > - if (buf->in_file) { > - b->file_pos = buf->file_pos + range[i].start; > - b->file_last = buf->file_pos + range[i].end; > - } > +static ngx_int_t > +ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) The "ngx_chain_t ***lll" argument suggests it might be a good idea to somehow improve the interface. > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl, *rcl; > + ngx_http_range_t *range; > + > + /* > + * The boundary header of the range: > + * CRLF > + * "--0123456789" CRLF > + * "Content-Type: image/jpeg" CRLF > + * "Content-Range: bytes " > + */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - if (ngx_buf_in_memory(buf)) { > - b->pos = buf->pos + (size_t) range[i].start; > - b->last = buf->pos + (size_t) range[i].end; > - } > + b->memory = 1; > + b->pos = ctx->boundary_header.data; > + b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + > + hcl = ngx_alloc_chain_link(r->pool); > + if (hcl == NULL) { > + return NGX_ERROR; > + } > + > + hcl->buf = b; > + > + > + /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - dcl = ngx_alloc_chain_link(r->pool); > - if (dcl == NULL) { > - return NGX_ERROR; > - } > + range = ctx->ranges.elts; > + b->temporary = 1; > + b->pos = range[ctx->index - 1].content_range.data; > + b->last = range[ctx->index - 1].content_range.data > + + range[ctx->index - 1].content_range.len; > + > + rcl = ngx_alloc_chain_link(r->pool); > + if (rcl == NULL) { > + return NGX_ERROR; > + } > + > + rcl->buf = b; > > - dcl->buf = b; > + **lll = hcl; > + hcl->next = rcl; > + *lll = &rcl->next; > + > + return NGX_OK; > +} > > - *ll = hcl; > - hcl->next = rcl; > - rcl->next = dcl; > - ll = &dcl->next; > - } > + > +static ngx_int_t > +ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl; > > /* the last boundary CRLF "--0123456789--" CRLF */ > > @@ -885,7 +972,8 @@ ngx_http_range_multipart_body(ngx_http_r > } > > b->temporary = 1; > - b->last_buf = 1; > + 
b->last_in_chain = 1; > + b->last_buf = (r == r->main) ? 1 : 0; > > b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof("--" CRLF) - 1); > @@ -908,7 +996,7 @@ ngx_http_range_multipart_body(ngx_http_r > > *ll = hcl; > > - return ngx_http_next_body_filter(r, out); > + return NGX_OK; > } > > > ------------------ Original ------------------ > From: "?? (hucc)";; > Send time: Friday, Nov 10, 2017 4:41 AM > To: "nginx-devel"; > Subject: Re: [patch-1] Range filter: support multiple ranges. > > Hi, > > Please ignore the previous reply. The updated patch is placed at the end. > > On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote: > > >On Fri, Oct 27, 2017 at 06:50:32PM +0800, ?? (hucc) wrote: > > > >> # HG changeset patch > >> # User hucongcong > >> # Date 1509099940 -28800 > >> # Fri Oct 27 18:25:40 2017 +0800 > >> # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217 > >> # Parent b9850d3deb277bd433a689712c40a84401443520 > >> Range filter: support multiple ranges. > > > >This summary line is at least misleading. > > Ok, maybe the summary line is support multiple ranges when body is > in multiple buffers. > > >> When multiple ranges are requested, nginx will coalesce any of the ranges > >> that overlap, or that are separated by a gap that is smaller than the > >> NGX_HTTP_RANGE_MULTIPART_GAP macro. > > > >(Note that the patch also does reordering of ranges. For some > >reason this is not mentioned in the commit log. There are also > >other changes not mentioned in the commit log - for example, I see > >ngx_http_range_t was moved to ngx_http_request.h. These are > >probably do not belong to the patch at all.) > > I actually wait for you to give better advice. I tried my best to > make the changes easier and more readable and I will split it into > multiple patches based on your suggestions if these changes will be > accepted. > > >Reordering and/or coalescing ranges is not something that clients > >usually expect to happen. This was widely discussed at the time > >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233 > >introduced the "MAY coalesce" clause. But this doesn't make > >clients, especially old ones, magically prepared for this. > > I did not know the CVE-2011-3192. If multiple ranges list in > ascending order and there are no overlapping ranges, the code will > be much simpler. This is what I think. > > >Moreover, this will certainly break some use cases like "request > >some metadata first, and then rest of the file". So this is > >certainly not a good idea to always reorder / coalesce ranges > >unless this is really needed for some reason. (Or even at all, > >as just returning 200 might be much more compatible with various > >clients, as outlined above.) > > > >It is also not clear what you are trying to achieve with this > >patch. You may want to elaborate more on what problem you are > >trying to solve, may be there are better solutions. > > I am trying to support multiple ranges when proxy_buffering is off > and sometimes slice is enabled. The data is always cached in the > backend which is not nginx. As far as I know, similar architecture > is widely used in CDN. So the implementation of multiple ranges in > the architecture I mentioned above is required and inevitable. > Besides, P2P clients desire for this feature to gather data-pieces. > Hope I already made it clear. > > All these changes have been tested. Hope it helps! 
Temporarily, > the changes are as follows: > > diff -r 32f83fe5747b src/http/modules/ngx_http_range_filter_module.c > --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 04:31:52 2017 +0800 > @@ -46,16 +46,10 @@ > > > typedef struct { > - off_t start; > - off_t end; > - ngx_str_t content_range; > -} ngx_http_range_t; > + off_t offset; > + ngx_uint_t index; /* start with 1 */ > > - > -typedef struct { > - off_t offset; > - ngx_str_t boundary_header; > - ngx_array_t ranges; > + ngx_str_t boundary_header; > } ngx_http_range_filter_ctx_t; > > > @@ -66,12 +60,14 @@ static ngx_int_t ngx_http_range_singlepa > static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx); > static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); > -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll); > +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll); > > static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); > static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); > @@ -234,7 +230,7 @@ parse: > r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; > r->headers_out.status_line.len = 0; > > - if (ctx->ranges.nelts == 1) { > + if (r->headers_out.ranges->nelts == 1) { > return ngx_http_range_singlepart_header(r, ctx); > } > > @@ -270,9 +266,9 @@ ngx_http_range_parse(ngx_http_request_t > ngx_uint_t ranges) > { > u_char *p; > - off_t start, end, size, content_length, cutoff, > - cutlim; > - ngx_uint_t suffix; > + off_t start, end, content_length, > + cutoff, cutlim; > + ngx_uint_t suffix, descending; > ngx_http_range_t *range; > ngx_http_range_filter_ctx_t *mctx; > > @@ -280,19 +276,21 @@ ngx_http_range_parse(ngx_http_request_t > mctx = ngx_http_get_module_ctx(r->main, > ngx_http_range_body_filter_module); > if (mctx) { > - ctx->ranges = mctx->ranges; > + r->headers_out.ranges = r->main->headers_out.ranges; > + ctx->boundary_header = mctx->boundary_header; > return NGX_OK; > } > } > > - if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t)) > - != NGX_OK) > - { > + r->headers_out.ranges = ngx_array_create(r->pool, 1, > + sizeof(ngx_http_range_t)); > + if (r->headers_out.ranges == NULL) { > return NGX_ERROR; > } > > p = r->headers_in.range->value.data + 6; > - size = 0; > + range = NULL; > + descending = 0; > content_length = r->headers_out.content_length_n; > > cutoff = NGX_MAX_OFF_T_VALUE / 10; > @@ -369,7 +367,12 @@ ngx_http_range_parse(ngx_http_request_t > found: > > if (start < end) { > - range = ngx_array_push(&ctx->ranges); > + > + if (range && start < range->end) { > + descending++; > + } > + > + range = ngx_array_push(r->headers_out.ranges); > if (range == NULL) { > return NGX_ERROR; > } > @@ -377,16 +380,6 @@ ngx_http_range_parse(ngx_http_request_t > range->start = start; > range->end = end; > > - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) { > - return 
NGX_HTTP_RANGE_NOT_SATISFIABLE; > - } > - > - size += end - start; > - > - if (ranges-- == 0) { > - return NGX_DECLINED; > - } > - > } else if (start == 0) { > return NGX_DECLINED; > } > @@ -396,11 +389,15 @@ ngx_http_range_parse(ngx_http_request_t > } > } > > - if (ctx->ranges.nelts == 0) { > + if (r->headers_out.ranges->nelts == 0) { > return NGX_HTTP_RANGE_NOT_SATISFIABLE; > } > > - if (size > content_length) { > + if (r->headers_out.ranges->nelts > ranges) { > + r->headers_out.ranges->nelts = ranges; > + } > + > + if (descending) { > return NGX_DECLINED; > } > > @@ -439,7 +436,7 @@ ngx_http_range_singlepart_header(ngx_htt > > /* "Content-Range: bytes SSSS-EEEE/TTTT" header */ > > - range = ctx->ranges.elts; > + range = r->headers_out.ranges->elts; > > content_range->value.len = ngx_sprintf(content_range->value.data, > "bytes %O-%O/%O", > @@ -469,6 +466,10 @@ ngx_http_range_multipart_header(ngx_http > ngx_http_range_t *range; > ngx_atomic_uint_t boundary; > > + if (r != r->main) { > + return ngx_http_next_header_filter(r); > + } > + > size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof(CRLF "Content-Type: ") - 1 > + r->headers_out.content_type.len > @@ -551,8 +552,8 @@ ngx_http_range_multipart_header(ngx_http > > len = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1; > > - range = ctx->ranges.elts; > - for (i = 0; i < ctx->ranges.nelts; i++) { > + range = r->headers_out.ranges->elts; > + for (i = 0; i < r->headers_out.ranges->nelts; i++) { > > /* the size of the range: "SSSS-EEEE/TTTT" CRLF CRLF */ > > @@ -570,10 +571,11 @@ ngx_http_range_multipart_header(ngx_http > - range[i].content_range.data; > > len += ctx->boundary_header.len + range[i].content_range.len > - + (range[i].end - range[i].start); > + + (range[i].end - range[i].start); > } > > r->headers_out.content_length_n = len; > + r->headers_out.content_offset = range[0].start; > > if (r->headers_out.content_length) { > r->headers_out.content_length->hash = 0; > @@ -635,67 +637,19 @@ ngx_http_range_body_filter(ngx_http_requ > return ngx_http_next_body_filter(r, in); > } > > - if (ctx->ranges.nelts == 1) { > + if (r->headers_out.ranges->nelts == 1) { > return ngx_http_range_singlepart_body(r, ctx, in); > } > > - /* > - * multipart ranges are supported only if whole body is in a single buffer > - */ > - > if (ngx_buf_special(in->buf)) { > return ngx_http_next_body_filter(r, in); > } > > - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) { > - return NGX_ERROR; > - } > - > return ngx_http_range_multipart_body(r, ctx, in); > } > > > static ngx_int_t > -ngx_http_range_test_overlapped(ngx_http_request_t *r, > - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > -{ > - off_t start, last; > - ngx_buf_t *buf; > - ngx_uint_t i; > - ngx_http_range_t *range; > - > - if (ctx->offset) { > - goto overlapped; > - } > - > - buf = in->buf; > - > - if (!buf->last_buf) { > - start = ctx->offset; > - last = ctx->offset + ngx_buf_size(buf); > - > - range = ctx->ranges.elts; > - for (i = 0; i < ctx->ranges.nelts; i++) { > - if (start > range[i].start || last < range[i].end) { > - goto overlapped; > - } > - } > - } > - > - ctx->offset = ngx_buf_size(buf); > - > - return NGX_OK; > - > -overlapped: > - > - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, > - "range in overlapped buffers"); > - > - return NGX_ERROR; > -} > - > - > -static ngx_int_t > ngx_http_range_singlepart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > @@ -706,7 +660,7 @@ 
ngx_http_range_singlepart_body(ngx_http_ > > out = NULL; > ll = &out; > - range = ctx->ranges.elts; > + range = r->headers_out.ranges->elts; > > for (cl = in; cl; cl = cl->next) { > > @@ -786,96 +740,227 @@ static ngx_int_t > ngx_http_range_multipart_body(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in) > { > - ngx_buf_t *b, *buf; > - ngx_uint_t i; > - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll; > - ngx_http_range_t *range; > + off_t start, last, back; > + ngx_buf_t *buf, *b; > + ngx_uint_t i, finished; > + ngx_chain_t *out, *cl, *ncl, **ll; > + ngx_http_range_t *range, *tail; > + > + range = r->headers_out.ranges->elts; > > - ll = &out; > - buf = in->buf; > - range = ctx->ranges.elts; > + if (!ctx->index) { > + for (i = 0; i < r->headers_out.ranges->nelts; i++) { > + if (ctx->offset < range[i].end) { > + ctx->index = i + 1; > + break; > + } > + } > + } > > - for (i = 0; i < ctx->ranges.nelts; i++) { > + tail = range + r->headers_out.ranges->nelts - 1; > + range += ctx->index - 1; > > - /* > - * The boundary header of the range: > - * CRLF > - * "--0123456789" CRLF > - * "Content-Type: image/jpeg" CRLF > - * "Content-Range: bytes " > - */ > + out = NULL; > + ll = &out; > + finished = 0; > + > + for (cl = in; cl; cl = cl->next) { > + > + buf = cl->buf; > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + start = ctx->offset; > + last = ctx->offset + ngx_buf_size(buf); > + > + ctx->offset = last; > + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body buf: %O-%O", start, last); > + > + if (ngx_buf_special(buf)) { > + *ll = cl; > + ll = &cl->next; > + continue; > } > > - b->memory = 1; > - b->pos = ctx->boundary_header.data; > - b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + if (range->end <= start || range->start >= last) { > + > + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body skip"); > > - hcl = ngx_alloc_chain_link(r->pool); > - if (hcl == NULL) { > - return NGX_ERROR; > + if (buf->in_file) { > + buf->file_pos = buf->file_last; > + } > + > + buf->pos = buf->last; > + buf->sync = 1; > + > + continue; > } > > - hcl->buf = b; > + if (range->start >= start) { > > + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) { > + return NGX_ERROR; > + } > > - /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + if (buf->in_file) { > + buf->file_pos += range->start - start; > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (ngx_buf_in_memory(buf)) { > + buf->pos += (size_t) (range->start - start); > + } > } > > - b->temporary = 1; > - b->pos = range[i].content_range.data; > - b->last = range[i].content_range.data + range[i].content_range.len; > + if (range->end <= last) { > + > + if (range < tail && range[1].start < last) { > + > + b = ngx_alloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > + > + ncl = ngx_alloc_chain_link(r->pool); > + if (ncl == NULL) { > + return NGX_ERROR; > + } > > - rcl = ngx_alloc_chain_link(r->pool); > - if (rcl == NULL) { > - return NGX_ERROR; > - } > + ncl->buf = b; > + ncl->next = cl; > + > + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); > + b->last_in_chain = 0; > + b->last_buf = 0; > + > + back = last - range->end; > + ctx->offset -= back; > + > + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "http range multipart body reuse buf: %O-%O", > + ctx->offset, ctx->offset + back); > > - rcl->buf = b; > + if (buf->in_file) { > + buf->file_pos = buf->file_last 
- back; > + } > + > + if (ngx_buf_in_memory(buf)) { > + buf->pos = buf->last - back; > + } > > + cl = ncl; > + buf = cl->buf; > + } > + > + if (buf->in_file) { > + buf->file_last -= last - range->end; > + } > > - /* the range data */ > + if (ngx_buf_in_memory(buf)) { > + buf->last -= (size_t) (last - range->end); > + } > > - b = ngx_calloc_buf(r->pool); > - if (b == NULL) { > - return NGX_ERROR; > + if (range == tail) { > + buf->last_buf = (r == r->main) ? 1 : 0; > + buf->last_in_chain = 1; > + *ll = cl; > + ll = &cl->next; > + > + finished = 1; > + break; > + } > + > + range++; > + ctx->index++; > } > > - b->in_file = buf->in_file; > - b->temporary = buf->temporary; > - b->memory = buf->memory; > - b->mmap = buf->mmap; > - b->file = buf->file; > + *ll = cl; > + ll = &cl->next; > + } > + > + if (out == NULL) { > + return NGX_OK; > + } > + > + *ll = NULL; > + > + if (finished > + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) > + { > + return NGX_ERROR; > + } > + > + return ngx_http_next_body_filter(r, out); > +} > + > > - if (buf->in_file) { > - b->file_pos = buf->file_pos + range[i].start; > - b->file_last = buf->file_pos + range[i].end; > - } > +static ngx_int_t > +ngx_http_range_link_boundary_header(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll) > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl, *rcl; > + ngx_http_range_t *range; > + > + /* > + * The boundary header of the range: > + * CRLF > + * "--0123456789" CRLF > + * "Content-Type: image/jpeg" CRLF > + * "Content-Range: bytes " > + */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - if (ngx_buf_in_memory(buf)) { > - b->pos = buf->pos + (size_t) range[i].start; > - b->last = buf->pos + (size_t) range[i].end; > - } > + b->memory = 1; > + b->pos = ctx->boundary_header.data; > + b->last = ctx->boundary_header.data + ctx->boundary_header.len; > + > + hcl = ngx_alloc_chain_link(r->pool); > + if (hcl == NULL) { > + return NGX_ERROR; > + } > + > + hcl->buf = b; > + > + > + /* "SSSS-EEEE/TTTT" CRLF CRLF */ > + > + b = ngx_calloc_buf(r->pool); > + if (b == NULL) { > + return NGX_ERROR; > + } > > - dcl = ngx_alloc_chain_link(r->pool); > - if (dcl == NULL) { > - return NGX_ERROR; > - } > + range = r->headers_out.ranges->elts; > + b->temporary = 1; > + b->pos = range[ctx->index - 1].content_range.data; > + b->last = range[ctx->index - 1].content_range.data > + + range[ctx->index - 1].content_range.len; > + > + rcl = ngx_alloc_chain_link(r->pool); > + if (rcl == NULL) { > + return NGX_ERROR; > + } > + > + rcl->buf = b; > > - dcl->buf = b; > + **lll = hcl; > + hcl->next = rcl; > + *lll = &rcl->next; > + > + return NGX_OK; > +} > > - *ll = hcl; > - hcl->next = rcl; > - rcl->next = dcl; > - ll = &dcl->next; > - } > + > +static ngx_int_t > +ngx_http_range_link_last_boundary(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll) > +{ > + ngx_buf_t *b; > + ngx_chain_t *hcl; > > /* the last boundary CRLF "--0123456789--" CRLF */ > > @@ -885,7 +970,8 @@ ngx_http_range_multipart_body(ngx_http_r > } > > b->temporary = 1; > - b->last_buf = 1; > + b->last_in_chain = 1; > + b->last_buf = (r == r->main) ? 
1 : 0; > > b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN > + sizeof("--" CRLF) - 1); > @@ -908,7 +994,7 @@ ngx_http_range_multipart_body(ngx_http_r > > *ll = hcl; > > - return ngx_http_next_body_filter(r, out); > + return NGX_OK; > } > > > diff -r 32f83fe5747b src/http/modules/ngx_http_slice_filter_module.c > --- a/src/http/modules/ngx_http_slice_filter_module.c Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/modules/ngx_http_slice_filter_module.c Fri Nov 10 04:31:52 2017 +0800 > @@ -22,6 +22,8 @@ typedef struct { > ngx_str_t etag; > unsigned last:1; > unsigned active:1; > + unsigned multipart:1; > + ngx_uint_t index; > ngx_http_request_t *sr; > } ngx_http_slice_ctx_t; > > @@ -103,7 +105,9 @@ ngx_http_slice_header_filter(ngx_http_re > { > off_t end; > ngx_int_t rc; > + ngx_uint_t i; > ngx_table_elt_t *h; > + ngx_http_range_t *range; > ngx_http_slice_ctx_t *ctx; > ngx_http_slice_loc_conf_t *slcf; > ngx_http_slice_content_range_t cr; > @@ -182,27 +186,48 @@ ngx_http_slice_header_filter(ngx_http_re > > r->allow_ranges = 1; > r->subrequest_ranges = 1; > - r->single_range = 1; > > rc = ngx_http_next_header_filter(r); > > - if (r != r->main) { > - return rc; > + if (r == r->main) { > + r->preserve_body = 1; > + > + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { > + ctx->multipart = (r->headers_out.ranges->nelts != 1); > + range = r->headers_out.ranges->elts; > + > + if (ctx->start + (off_t) slcf->size <= range[0].start) { > + ctx->start = slcf->size * (range[0].start / slcf->size); > + } > + > + ctx->end = range[r->headers_out.ranges->nelts - 1].end; > + > + } else { > + ctx->end = cr.complete_length; > + } > } > > - r->preserve_body = 1; > + if (ctx->multipart) { > + range = r->headers_out.ranges->elts; > + > + for (i = ctx->index; i < r->headers_out.ranges->nelts - 1; i++) { > + > + if (ctx->start < range[i].end) { > + ctx->index = i; > + break; > + } > > - if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { > - if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { > - ctx->start = slcf->size > - * (r->headers_out.content_offset / slcf->size); > + if (ctx->start + (off_t) slcf->size <= range[i + 1].start) { > + i++; > + ctx->index = i; > + ctx->start = slcf->size * (range[i].start / slcf->size); > + > + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "range multipart so fast forward to %O-%O @%O", > + range[i].start, range[i].end, ctx->start); > + break; > + } > } > - > - ctx->end = r->headers_out.content_offset > - + r->headers_out.content_length_n; > - > - } else { > - ctx->end = cr.complete_length; > } > > return rc; > diff -r 32f83fe5747b src/http/ngx_http_request.h > --- a/src/http/ngx_http_request.h Fri Oct 27 00:30:38 2017 +0800 > +++ b/src/http/ngx_http_request.h Fri Nov 10 04:31:52 2017 +0800 > @@ -251,6 +251,13 @@ typedef struct { > > > typedef struct { > + off_t start; > + off_t end; > + ngx_str_t content_range; > +} ngx_http_range_t; > + > + > +typedef struct { > ngx_list_t headers; > ngx_list_t trailers; > > @@ -278,6 +285,7 @@ typedef struct { > u_char *content_type_lowcase; > ngx_uint_t content_type_hash; > > + ngx_array_t *ranges; /* ngx_http_range_t */ > ngx_array_t cache_control; > > off_t content_length_n; > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > 
http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From danieleggert at me.com Tue Nov 21 20:51:35 2017 From: danieleggert at me.com (Daniel Eggert) Date: Tue, 21 Nov 2017 21:51:35 +0100 Subject: Asynchronous Handler & Callbacks Message-ID: <9FC7E2E4-3A3F-45B6-A537-B8F25F1A5B18@me.com> I'm writing my own Module and Handler for nginx, but I can't figure out how to do asynchronous work. From my handler, I'm calling an existing library, that'll run a callback on a thread of its own when it is done. I can't figure out how to switch back to the nginx event loop thread. It seems like it's ok for me to run r->blocked++; return NGX_AGAIN; inside my callback handler the first time it's called, but once the "external" callback runs, it'd need to tell nginx to run my handler again. I'm assuming that I'd do that my posting an event to the request's event loop and then call ngx_http_core_run_phases(r) on the request. But I'm a bit lost as to whether that's the right approach. Thanks in advance. /Daniel From weixu365 at gmail.com Wed Nov 22 06:31:25 2017 From: weixu365 at gmail.com (Wei Xu) Date: Wed, 22 Nov 2017 17:31:25 +1100 Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter In-Reply-To: References: <20171109170702.GH26836@mdounin.ru> <20171113194946.GR26836@mdounin.ru> Message-ID: Hi, Is there any place to view the status of current proposed patches? I'm not sure if this patch had been accepted, still waiting or rejected? In order to avoid errors in production, I'm running the patched version now. But I think it would be better to run the official one, and also I can introduce this solution for 'Connection reset by peer errors' to other teams. On Tue, Nov 14, 2017 at 2:03 PM, Wei Xu wrote: > Hi, > > Really nice, much simpler than my patch. It's great to have a default > timeout value. thanks for you time. > > > > On Tue, Nov 14, 2017 at 6:49 AM, Maxim Dounin wrote: > >> Hello! >> >> On Sun, Nov 12, 2017 at 11:25:20PM +1100, Wei Xu wrote: >> >> > We are running Nginx and upstream on the same machine using docker, so >> > there's no firewall. >> >> Note that this isn't usually true. Docker uses iptables >> implicitly, and unless you specifically checked your iptables >> configuration - likely you are using firewall. >> >> > I did a test locally and captured the network packages. >> > >> > For the normal requests, upstream send a [FIN, ACK] to nginx after >> > keep-alive timeout (500 ms), and nginx also send a [FIN, ACK] back, then >> > upstream send a [ACK] to close the connection completely. >> >> [...] >> >> > For more detailed description of the test process, you can reference my >> > post at: >> > https://theantway.com/2017/11/analyze-connection-reset-error >> -in-nginx-upstream-with-keep-alive-enabled/ >> >> The test demonstrates that it is indeed possible to trigger the >> problem in question. Unfortunately, it doesn't provide any proof >> that what you observed in production is the same issue though. >> >> While it is more or less clear that the race condition in question >> is real, it seems to be very unlikely with typical workloads. And >> even when triggered, in most cases nginx handles this good enough, >> re-trying the request per proxy_next_upstream. >> >> Nevertheless, thank you for detailed testing. 
A simple test case >> that reliably demonstrates the race is appreciated, and I was able >> to reduce it to your client script and nginx with the following >> trivial configuration: >> >> upstream u { >> server 127.0.0.1:8082; >> keepalive 10; >> } >> >> server { >> listen 8080; >> >> location / { >> proxy_pass http://u; >> proxy_http_version 1.1; >> proxy_set_header Connection ""; >> } >> } >> >> server { >> listen 8082; >> >> keepalive_timeout 500ms; >> >> location / { >> return 200 ok\n; >> } >> } >> >> > To Fix the issue, I tried to add a timeout for keep-alived upstream, and >> > you can check the patch at: >> > https://github.com/weixu365/nginx/blob/docker-1.13.6/docker/ >> stretch/patches/01-http-upstream-keepalive-timeout.patch >> > >> > The patch is for my current testing, and I can create a different >> format if >> > you need. >> >> The patch looks good enough for testing, though there are various >> minor issues - notably testing timeout for NGX_CONF_UNSET_MSEC at >> runtime, using wrong type for timeout during parsing (time_t >> instead of ngx_msec_t). >> >> Also I tend to think that using a separate keepalive_timeout >> directive should be easier, and we probably want to introduce some >> default value for it. >> >> Please take a look if the following patch works for you: >> >> # HG changeset patch >> # User Maxim Dounin >> # Date 1510601341 -10800 >> # Mon Nov 13 22:29:01 2017 +0300 >> # Node ID 9ba0a577601b7c1b714eb088bc0b0d21c6354699 >> # Parent 6f592a42570898e1539d2e0b86017f32bbf665c8 >> Upstream keepalive: keepalive_timeout directive. >> >> The directive configures maximum time a connection can be kept in the >> cache. By configuring a time which is smaller than the corresponding >> timeout on the backend side one can avoid the race between closing >> a connection by the backend and nginx trying to use the same connection >> to send a request at the same time. 
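To illustrate the intended usage (values here are made up), with the patch below an upstream block would keep the cached-connection lifetime under the backend's own idle timeout, for example:

    upstream backend {
        server 127.0.0.1:8082;
        keepalive 10;

        # keep this below the backend's keep-alive timeout
        keepalive_timeout 30s;
    }

so that nginx drops an idle cached connection before the backend closes it, which is what avoids the race described above.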
>> >> diff --git a/src/http/modules/ngx_http_upstream_keepalive_module.c >> b/src/http/modules/ngx_http_upstream_keepalive_module.c >> --- a/src/http/modules/ngx_http_upstream_keepalive_module.c >> +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c >> @@ -12,6 +12,7 @@ >> >> typedef struct { >> ngx_uint_t max_cached; >> + ngx_msec_t timeout; >> >> ngx_queue_t cache; >> ngx_queue_t free; >> @@ -84,6 +85,13 @@ static ngx_command_t ngx_http_upstream_ >> 0, >> NULL }, >> >> + { ngx_string("keepalive_timeout"), >> + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, >> + ngx_conf_set_msec_slot, >> + NGX_HTTP_SRV_CONF_OFFSET, >> + offsetof(ngx_http_upstream_keepalive_srv_conf_t, timeout), >> + NULL }, >> + >> ngx_null_command >> }; >> >> @@ -141,6 +149,8 @@ ngx_http_upstream_init_keepalive(ngx_con >> >> us->peer.init = ngx_http_upstream_init_keepalive_peer; >> >> + ngx_conf_init_msec_value(kcf->timeout, 60000); >> + >> /* allocate cache items and add to free queue */ >> >> cached = ngx_pcalloc(cf->pool, >> @@ -261,6 +271,10 @@ found: >> c->write->log = pc->log; >> c->pool->log = pc->log; >> >> + if (c->read->timer_set) { >> + ngx_del_timer(c->read); >> + } >> + >> pc->connection = c; >> pc->cached = 1; >> >> @@ -339,9 +353,8 @@ ngx_http_upstream_free_keepalive_peer(ng >> >> pc->connection = NULL; >> >> - if (c->read->timer_set) { >> - ngx_del_timer(c->read); >> - } >> + ngx_add_timer(c->read, kp->conf->timeout); >> + >> if (c->write->timer_set) { >> ngx_del_timer(c->write); >> } >> @@ -392,7 +405,7 @@ ngx_http_upstream_keepalive_close_handle >> >> c = ev->data; >> >> - if (c->close) { >> + if (c->close || c->read->timedout) { >> goto close; >> } >> >> @@ -485,6 +498,8 @@ ngx_http_upstream_keepalive_create_conf( >> * conf->max_cached = 0; >> */ >> >> + conf->timeout = NGX_CONF_UNSET_MSEC; >> + >> return conf; >> } >> >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Wed Nov 22 15:56:32 2017 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 22 Nov 2017 15:56:32 +0000 Subject: [njs] Fixed building by GCC with -O3. Message-ID: details: http://hg.nginx.org/njs/rev/22cc52416e84 branches: changeset: 433:22cc52416e84 user: Dmitry Volyntsev date: Wed Nov 22 18:55:57 2017 +0300 description: Fixed building by GCC with -O3. diffstat: njs/njs_fs.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (21 lines): diff -r 5eb2620a9bec -r 22cc52416e84 njs/njs_fs.c --- a/njs/njs_fs.c Mon Nov 20 20:08:56 2017 +0300 +++ b/njs/njs_fs.c Wed Nov 22 18:55:57 2017 +0300 @@ -566,6 +566,8 @@ static njs_ret_t njs_fs_write_file_inter } mode = NULL; + /* GCC complains about uninitialized flag.length. */ + flag.length = 0; flag.start = NULL; encoding.length = 0; encoding.start = NULL; @@ -753,6 +755,8 @@ njs_fs_write_file_sync_internal(njs_vm_t } mode = NULL; + /* GCC complains about uninitialized flag.length. */ + flag.length = 0; flag.start = NULL; encoding.length = 0; encoding.start = NULL; From mdounin at mdounin.ru Wed Nov 22 16:00:17 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Nov 2017 19:00:17 +0300 Subject: Fwd: [ module ] Add http upstream keep alive timeout parameter In-Reply-To: References: <20171109170702.GH26836@mdounin.ru> <20171113194946.GR26836@mdounin.ru> Message-ID: <20171122160016.GB78325@mdounin.ru> Hello! 
On Wed, Nov 22, 2017 at 05:31:25PM +1100, Wei Xu wrote:

> Hi,
>
> Is there any place to view the status of currently proposed patches?
> I'm not sure whether this patch has been accepted, is still waiting,
> or has been rejected.
>
> In order to avoid errors in production, I'm running the patched version
> now. But I think it would be better to run the official one, and I could
> also introduce this solution for 'Connection reset by peer' errors to
> other teams.

The patch in question is sitting in my patch queue waiting for further
work - I'm considering introducing keepalive_requests at the same time,
and probably the $upstream_connection and $upstream_connection_requests
variables.

--
Maxim Dounin
http://mdounin.ru/

From xeioex at nginx.com Wed Nov 22 17:40:18 2017
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Wed, 22 Nov 2017 17:40:18 +0000
Subject: [njs] Fixed unit tests for NetBSD 7.
Message-ID:

details: http://hg.nginx.org/njs/rev/c69b48375b90
branches:
changeset: 434:c69b48375b90
user: Dmitry Volyntsev
date: Wed Nov 22 20:38:10 2017 +0300
description:
Fixed unit tests for NetBSD 7.

diffstat:

 njs/test/njs_unit_test.c | 12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diffs (69 lines):

diff -r 22cc52416e84 -r c69b48375b90 njs/test/njs_unit_test.c
--- a/njs/test/njs_unit_test.c  Wed Nov 22 18:55:57 2017 +0300
+++ b/njs/test/njs_unit_test.c  Wed Nov 22 20:38:10 2017 +0300
@@ -538,8 +538,10 @@ static njs_unit_test_t  njs_test[] =
     { nxt_string("'0' ** 0.1"),
       nxt_string("0") },

+#ifndef __NetBSD__ /* NetBSD 7: pow(0, negative) == -Infinity. */
     { nxt_string("0 ** '-0.1'"),
       nxt_string("Infinity") },
+#endif

     { nxt_string("(-0) ** 3"),
       nxt_string("-0") },
@@ -550,8 +552,10 @@ static njs_unit_test_t  njs_test[] =
     { nxt_string("(-0) ** '-3'"),
       nxt_string("-Infinity") },

+#ifndef __NetBSD__ /* NetBSD 7: pow(0, negative) == -Infinity. */
     { nxt_string("'-0' ** -2"),
       nxt_string("Infinity") },
+#endif

     { nxt_string("(-3) ** 0.1"),
       nxt_string("NaN") },
@@ -604,8 +608,10 @@ static njs_unit_test_t  njs_test[] =
     { nxt_string("var a = '0'; a **= 0.1"),
       nxt_string("0") },

+#ifndef __NetBSD__ /* NetBSD 7: pow(0, negative) == -Infinity. */
     { nxt_string("var a = 0; a **= '-0.1'"),
       nxt_string("Infinity") },
+#endif

     { nxt_string("var a = -0; a **= 3"),
       nxt_string("-0") },
@@ -616,8 +622,10 @@ static njs_unit_test_t  njs_test[] =
     { nxt_string("var a = -0; a **= '-3'"),
       nxt_string("-Infinity") },

+#ifndef __NetBSD__ /* NetBSD 7: pow(0, negative) == -Infinity. */
     { nxt_string("var a = '-0'; a **= -2"),
       nxt_string("Infinity") },
+#endif

     { nxt_string("var a = -3; a **= 0.1"),
       nxt_string("NaN") },
@@ -7799,8 +7807,10 @@ static njs_unit_test_t  njs_test[] =
     { nxt_string("Math.pow('0', 0.1)"),
       nxt_string("0") },

+#ifndef __NetBSD__ /* NetBSD 7: pow(0, negative) == -Infinity. */
     { nxt_string("Math.pow(0, '-0.1')"),
       nxt_string("Infinity") },
+#endif

     { nxt_string("Math.pow(-0, 3)"),
       nxt_string("-0") },
@@ -7811,8 +7821,10 @@ static njs_unit_test_t  njs_test[] =
     { nxt_string("Math.pow(-0, '-3')"),
       nxt_string("-Infinity") },

+#ifndef __NetBSD__ /* NetBSD 7: pow(0, negative) == -Infinity. */
     { nxt_string("Math.pow('-0', -2)"),
       nxt_string("Infinity") },
+#endif

     { nxt_string("Math.pow(-3, 0.1)"),
       nxt_string("NaN") },

From mdounin at mdounin.ru Thu Nov 23 13:35:12 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 23 Nov 2017 13:35:12 +0000
Subject: [nginx] Version bump.
Message-ID:

details: http://hg.nginx.org/nginx/rev/0a5e3d893a0c
branches:
changeset: 7160:0a5e3d893a0c
user: Maxim Dounin
date: Thu Nov 23 16:32:58 2017 +0300
description:
Version bump.

diffstat:

 src/core/nginx.h | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (14 lines):

diff --git a/src/core/nginx.h b/src/core/nginx.h
--- a/src/core/nginx.h
+++ b/src/core/nginx.h
@@ -9,8 +9,8 @@
 #define _NGINX_H_INCLUDED_


-#define nginx_version      1013007
-#define NGINX_VERSION      "1.13.7"
+#define nginx_version      1013008
+#define NGINX_VERSION      "1.13.8"
 #define NGINX_VER          "nginx/" NGINX_VERSION

 #ifdef NGX_BUILD

From mdounin at mdounin.ru Thu Nov 23 13:35:14 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 23 Nov 2017 13:35:14 +0000
Subject: [nginx] Configure: fixed clang detection on MINIX.
Message-ID:

details: http://hg.nginx.org/nginx/rev/325b3042edd6
branches:
changeset: 7161:325b3042edd6
user: Maxim Dounin
date: Thu Nov 23 16:33:40 2017 +0300
description:
Configure: fixed clang detection on MINIX.

As per POSIX, basic regular expressions have no alternations, and the
interpretation of the "\|" construct is undefined. At least on MINIX and
Solaris, grep interprets "\|" as the literal "|", and not as an alternation
as GNU grep does. Removed such constructs introduced in f1daa0356a1d.
This fixes clang detection on MINIX.

diffstat:

 auto/cc/clang | 2 +-
 auto/cc/name  | 6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diffs (28 lines):

diff --git a/auto/cc/clang b/auto/cc/clang
--- a/auto/cc/clang
+++ b/auto/cc/clang
@@ -5,7 +5,7 @@

 # clang

-NGX_CLANG_VER=`$CC -v 2>&1 | grep '\(clang\|LLVM\) version' 2>&1 \
+NGX_CLANG_VER=`$CC -v 2>&1 | grep 'version' 2>&1 \
                            | sed -e 's/^.* version \(.*\)/\1/'`

 echo " + clang version: $NGX_CLANG_VER"

diff --git a/auto/cc/name b/auto/cc/name
--- a/auto/cc/name
+++ b/auto/cc/name
@@ -44,7 +44,11 @@ elif `$CC -v 2>&1 | grep 'gcc version' >
     NGX_CC_NAME=gcc
     echo " + using GNU C compiler"

-elif `$CC -v 2>&1 | grep '\(clang\|LLVM\) version' >/dev/null 2>&1`; then
+elif `$CC -v 2>&1 | grep 'clang version' >/dev/null 2>&1`; then
+    NGX_CC_NAME=clang
+    echo " + using Clang C compiler"
+
+elif `$CC -v 2>&1 | grep 'LLVM version' >/dev/null 2>&1`; then
     NGX_CC_NAME=clang
     echo " + using Clang C compiler"

From ru at nginx.com Tue Nov 28 09:59:15 2017
From: ru at nginx.com (Ruslan Ermilov)
Date: Tue, 28 Nov 2017 09:59:15 +0000
Subject: [nginx] Fixed "changing binary" when reaper is not init.
Message-ID:

details: http://hg.nginx.org/nginx/rev/8b84d60ef13d
branches:
changeset: 7162:8b84d60ef13d
user: Ruslan Ermilov
date: Tue Nov 28 12:00:24 2017 +0300
description:
Fixed "changing binary" when reaper is not init.

On some systems, it's possible that the reaper of orphaned processes is
set to something other than the "init" process. On such systems, the
changing binary procedure did not work. The fix is to check whether the
PPID has changed, instead of assuming it's always 1 for orphaned
processes.
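For context on "reaper is not init": a minimal, Linux-specific sketch, not
from the nginx sources and relying on PR_SET_CHILD_SUBREAPER (Linux 3.4+),
showing how an orphaned process can be reparented to something other than
PID 1. This is why the change below compares getppid() against a saved
parent PID instead of testing getppid() > 1.

    /*
     * Illustration only: any process may register itself as a subreaper,
     * so an orphan's new parent is the subreaper, not init (PID 1).
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t  child;

        /* mark this process as the reaper for its orphaned descendants */
        if (prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) == -1) {
            perror("prctl");
            return 1;
        }

        printf("subreaper pid: %ld\n", (long) getpid());

        child = fork();

        if (child == 0) {
            if (fork() == 0) {
                sleep(1);   /* wait until the intermediate parent is gone */
                /* prints the subreaper's pid, not 1 */
                printf("orphan's getppid(): %ld\n", (long) getppid());
                exit(0);
            }
            exit(0);        /* orphan the grandchild */
        }

        waitpid(child, NULL, 0);
        sleep(2);           /* keep the subreaper alive while the orphan prints */
        return 0;
    }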
diffstat:

 src/core/nginx.c                 |  1 +
 src/os/unix/ngx_daemon.c         |  1 +
 src/os/unix/ngx_process.c        |  7 ++++---
 src/os/unix/ngx_process.h        |  2 ++
 src/os/unix/ngx_process_cycle.c  |  1 +
 src/os/win32/ngx_process.h       |  2 ++
 src/os/win32/ngx_process_cycle.c |  1 +
 7 files changed, 12 insertions(+), 3 deletions(-)

diffs (109 lines):

diff -r 325b3042edd6 -r 8b84d60ef13d src/core/nginx.c
--- a/src/core/nginx.c  Thu Nov 23 16:33:40 2017 +0300
+++ b/src/core/nginx.c  Tue Nov 28 12:00:24 2017 +0300
@@ -228,6 +228,7 @@ main(int argc, char *const *argv)
 #endif

     ngx_pid = ngx_getpid();
+    ngx_parent = ngx_getppid();

     log = ngx_log_init(ngx_prefix);
     if (log == NULL) {

diff -r 325b3042edd6 -r 8b84d60ef13d src/os/unix/ngx_daemon.c
--- a/src/os/unix/ngx_daemon.c  Thu Nov 23 16:33:40 2017 +0300
+++ b/src/os/unix/ngx_daemon.c  Tue Nov 28 12:00:24 2017 +0300
@@ -26,6 +26,7 @@ ngx_daemon(ngx_log_t *log)
         exit(0);
     }

+    ngx_parent = ngx_pid;
     ngx_pid = ngx_getpid();

     if (setsid() == -1) {

diff -r 325b3042edd6 -r 8b84d60ef13d src/os/unix/ngx_process.c
--- a/src/os/unix/ngx_process.c Thu Nov 23 16:33:40 2017 +0300
+++ b/src/os/unix/ngx_process.c Tue Nov 28 12:00:24 2017 +0300
@@ -194,6 +194,7 @@ ngx_spawn_process(ngx_cycle_t *cycle, ng
         return NGX_INVALID_PID;

     case 0:
+        ngx_parent = ngx_pid;
         ngx_pid = ngx_getpid();
         proc(cycle, data);
         break;
@@ -371,12 +372,12 @@ ngx_signal_handler(int signo, siginfo_t
             break;

         case ngx_signal_value(NGX_CHANGEBIN_SIGNAL):
-            if (getppid() > 1 || ngx_new_binary > 0) {
+            if (ngx_getppid() == ngx_parent || ngx_new_binary > 0) {

                 /*
                  * Ignore the signal in the new binary if its parent is
-                 * not the init process, i.e. the old binary's process
-                 * is still running. Or ignore the signal in the old binary's
+                 * not changed, i.e. the old binary's process is still
+                 * running. Or ignore the signal in the old binary's
                  * process if the new binary's process is already running.
                 */

diff -r 325b3042edd6 -r 8b84d60ef13d src/os/unix/ngx_process.h
--- a/src/os/unix/ngx_process.h Thu Nov 23 16:33:40 2017 +0300
+++ b/src/os/unix/ngx_process.h Tue Nov 28 12:00:24 2017 +0300
@@ -54,6 +54,7 @@ typedef struct {

 #define ngx_getpid   getpid
+#define ngx_getppid  getppid

 #ifndef ngx_log_pid
 #define ngx_log_pid  ngx_pid
@@ -79,6 +80,7 @@ extern char **ngx_argv;
 extern char         **ngx_os_argv;

 extern ngx_pid_t      ngx_pid;
+extern ngx_pid_t      ngx_parent;
 extern ngx_socket_t   ngx_channel;
 extern ngx_int_t      ngx_process_slot;
 extern ngx_int_t      ngx_last_process;

diff -r 325b3042edd6 -r 8b84d60ef13d src/os/unix/ngx_process_cycle.c
--- a/src/os/unix/ngx_process_cycle.c   Thu Nov 23 16:33:40 2017 +0300
+++ b/src/os/unix/ngx_process_cycle.c   Tue Nov 28 12:00:24 2017 +0300
@@ -31,6 +31,7 @@ static void ngx_cache_loader_process_han

 ngx_uint_t    ngx_process;
 ngx_uint_t    ngx_worker;
 ngx_pid_t     ngx_pid;
+ngx_pid_t     ngx_parent;

 sig_atomic_t  ngx_reap;
 sig_atomic_t  ngx_sigio;

diff -r 325b3042edd6 -r 8b84d60ef13d src/os/win32/ngx_process.h
--- a/src/os/win32/ngx_process.h        Thu Nov 23 16:33:40 2017 +0300
+++ b/src/os/win32/ngx_process.h        Tue Nov 28 12:00:24 2017 +0300
@@ -14,6 +14,7 @@ typedef DWORD ngx_pid_t;

 #define ngx_getpid       GetCurrentProcessId
+#define ngx_getppid()    0

 #define ngx_log_pid      ngx_pid
@@ -73,6 +74,7 @@ extern ngx_int_t ngx_last_pro
 extern ngx_process_t  ngx_processes[NGX_MAX_PROCESSES];

 extern ngx_pid_t      ngx_pid;
+extern ngx_pid_t      ngx_parent;


 #endif /* _NGX_PROCESS_H_INCLUDED_ */

diff -r 325b3042edd6 -r 8b84d60ef13d src/os/win32/ngx_process_cycle.c
--- a/src/os/win32/ngx_process_cycle.c  Thu Nov 23 16:33:40 2017 +0300
+++ b/src/os/win32/ngx_process_cycle.c  Tue Nov 28 12:00:24 2017 +0300
@@ -31,6 +31,7 @@ static ngx_thread_value_t __stdcall ngx_

 ngx_uint_t    ngx_process;
 ngx_uint_t    ngx_worker;
 ngx_pid_t     ngx_pid;
+ngx_pid_t     ngx_parent;

 ngx_uint_t    ngx_inherited;
 ngx_pid_t     ngx_new_binary;

From pluknet at nginx.com Tue Nov 28 11:17:47 2017
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Tue, 28 Nov 2017 11:17:47 +0000
Subject: [nginx] Removed unused FreeBSD-specific definitions in ngx_posix_config.h.
Message-ID:

details: http://hg.nginx.org/nginx/rev/fc0d06224eda
branches:
changeset: 7163:fc0d06224eda
user: Sergey Kandaurov
date: Tue Nov 28 13:09:54 2017 +0300
description:
Removed unused FreeBSD-specific definitions in ngx_posix_config.h.

diffstat:

 src/os/unix/ngx_posix_config.h | 20 --------------------
 1 files changed, 0 insertions(+), 20 deletions(-)

diffs (30 lines):

diff -r 8b84d60ef13d -r fc0d06224eda src/os/unix/ngx_posix_config.h
--- a/src/os/unix/ngx_posix_config.h    Tue Nov 28 12:00:24 2017 +0300
+++ b/src/os/unix/ngx_posix_config.h    Tue Nov 28 13:09:54 2017 +0300
@@ -145,26 +145,6 @@ typedef struct aiocb  ngx_aiocb_t;
 #define ngx_debug_init()


-#if (__FreeBSD__) && (__FreeBSD_version < 400017)
-
-#include    /* ALIGN() */
-
-/*
- * FreeBSD 3.x has no CMSG_SPACE() and CMSG_LEN() and has the broken CMSG_DATA()
- */
-
-#undef  CMSG_SPACE
-#define CMSG_SPACE(l)   (ALIGN(sizeof(struct cmsghdr)) + ALIGN(l))
-
-#undef  CMSG_LEN
-#define CMSG_LEN(l)     (ALIGN(sizeof(struct cmsghdr)) + (l))
-
-#undef  CMSG_DATA
-#define CMSG_DATA(cmsg) ((u_char *)(cmsg) + ALIGN(sizeof(struct cmsghdr)))
-
-#endif
-
-
 extern char **environ;

From gmm at csdoc.com Wed Nov 29 10:33:28 2017
From: gmm at csdoc.com (Gena Makhomed)
Date: Wed, 29 Nov 2017 12:33:28 +0200
Subject: [PATCH] Workaround for systemd error messages about nginx pid file.
Message-ID: <9179e652-ccb8-8bbb-2934-1f0ca0414e7b@csdoc.com>

# HG changeset patch
# User Gena Makhomed
# Date 1511951401 -7200
#      Wed Nov 29 12:30:01 2017 +0200
# Node ID b529ea784244e13d8a5e58a12c8b639351652057
# Parent  fc0d06224edac2c7cfbfd9a4def478f285d9957b
Workaround for systemd error messages about nginx pid file.

A race condition exists between nginx writing and systemd reading the
pid file. Sometimes systemd can produce error messages about the nginx
pid file:

systemd: Failed to read PID from file /var/run/nginx.pid: Invalid argument
systemd: PID file /var/run/nginx.pid not readable (yet?) after start.

This patch adds a small delay before the original nginx process exits,
to eliminate the race condition between nginx and systemd.

diff -r fc0d06224eda -r b529ea784244 src/os/unix/ngx_daemon.c
--- a/src/os/unix/ngx_daemon.c  Tue Nov 28 13:09:54 2017 +0300
+++ b/src/os/unix/ngx_daemon.c  Wed Nov 29 12:30:01 2017 +0200
@@ -23,6 +23,7 @@
         break;

     default:
+        ngx_msleep(100);
         exit(0);
     }

From mdounin at mdounin.ru Wed Nov 29 12:30:54 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 29 Nov 2017 15:30:54 +0300
Subject: [PATCH] Workaround for systemd error messages about nginx pid file.
In-Reply-To: <9179e652-ccb8-8bbb-2934-1f0ca0414e7b@csdoc.com>
References: <9179e652-ccb8-8bbb-2934-1f0ca0414e7b@csdoc.com>
Message-ID: <20171129123054.GH78325@mdounin.ru>

Hello!

On Wed, Nov 29, 2017 at 12:33:28PM +0200, Gena Makhomed wrote:

> # HG changeset patch
> # User Gena Makhomed
> # Date 1511951401 -7200
> #      Wed Nov 29 12:30:01 2017 +0200
> # Node ID b529ea784244e13d8a5e58a12c8b639351652057
> # Parent  fc0d06224edac2c7cfbfd9a4def478f285d9957b
> Workaround for systemd error messages about nginx pid file.
>
> A race condition exists between nginx writing and systemd reading the
> pid file. Sometimes systemd can produce error messages about the nginx
> pid file:
>
> systemd: Failed to read PID from file /var/run/nginx.pid: Invalid argument
> systemd: PID file /var/run/nginx.pid not readable (yet?) after start.
>
> This patch adds a small delay before the original nginx process exits,
> to eliminate the race condition between nginx and systemd.
>
> diff -r fc0d06224eda -r b529ea784244 src/os/unix/ngx_daemon.c
> --- a/src/os/unix/ngx_daemon.c  Tue Nov 28 13:09:54 2017 +0300
> +++ b/src/os/unix/ngx_daemon.c  Wed Nov 29 12:30:01 2017 +0200
> @@ -23,6 +23,7 @@
>          break;
>
>      default:
> +        ngx_msleep(100);
>          exit(0);
>      }

No, thanks. As a systemd-specific workaround this definitely should be
in the service unit file instead, if at all. Moreover, as previously
explained, the message in question is harmless and the workaround is
not needed for anything but silencing the message.

--
Maxim Dounin
http://mdounin.ru/

From gmm at csdoc.com Wed Nov 29 13:30:04 2017
From: gmm at csdoc.com (Gena Makhomed)
Date: Wed, 29 Nov 2017 15:30:04 +0200
Subject: [PATCH] Workaround for systemd error messages about nginx pid file.
In-Reply-To: <20171129123054.GH78325@mdounin.ru>
References: <9179e652-ccb8-8bbb-2934-1f0ca0414e7b@csdoc.com>
 <20171129123054.GH78325@mdounin.ru>
Message-ID: <1ab37c16-e904-93d9-6d36-61f8be6c87c8@csdoc.com>

On 29.11.2017 14:30, Maxim Dounin wrote:

> No, thanks. As a systemd-specific workaround this definitely should be
> in the service unit file instead, if at all. Moreover, as previously
> explained, the message in question is harmless and the workaround is
> not needed for anything but silencing the message.

These messages are harmless only if they are caused by the race condition.
There are cases when these systemd messages indicate real problems, for
example, if the user adds a pid directive to the config that points to a
location different from the pid file location in the unit file, or if
nginx was compiled with a different --pid-path= configure argument.

For example:

In the unit file:

PIDFile=/var/run/nginx.pid

In the nginx config:

pid /different/path/to/nginx.pid;

Or in the binary:

--pid-path=/usr/local/nginx/logs/nginx.pid

The main purpose of my patch is to suppress the harmless systemd messages
and leave alone all systemd messages which indicate real problems.

The nginx service unit file is not a very good location for this
workaround, because there are many different Linux distros and unit file
sources, and not all nginx users use the official packages from nginx.org.

--
Best regards,
 Gena
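As an illustration of the unit-file approach suggested above: a sketch of
how such a workaround could be carried in the service unit rather than in
nginx itself. The unit contents, paths and the sleep value are only
examples, and whether systemd reads PIDFile before or after ExecStartPost
completes may vary between systemd versions, so this is not guaranteed to
silence the message:

    [Unit]
    Description=nginx - high performance web server
    After=network.target

    [Service]
    Type=forking
    PIDFile=/var/run/nginx.pid
    ExecStart=/usr/sbin/nginx
    # Give the forking master a moment to write the pid file before systemd
    # looks for it; purely a workaround for the startup race discussed above.
    ExecStartPost=/bin/sleep 0.1
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target

A distribution that ships its own unit file can add or drop such a line
without patching the nginx sources, which is the point of keeping the
workaround out of ngx_daemon.c.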