From mdounin at mdounin.ru Tue Apr 2 12:34:22 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 2 Apr 2013 12:34:22 +0000 Subject: [nginx] svn commit: r5165 - branches/stable-1.2/docs/xml/nginx Message-ID: <20130402123422.D639C3F9C0D@mail.nginx.com> Author: mdounin Date: 2013-04-02 12:34:21 +0000 (Tue, 02 Apr 2013) New Revision: 5165 URL: http://trac.nginx.org/nginx/changeset/5165/nginx Log: nginx-1.2.8-RELEASE Modified: branches/stable-1.2/docs/xml/nginx/changes.xml Modified: branches/stable-1.2/docs/xml/nginx/changes.xml =================================================================== --- branches/stable-1.2/docs/xml/nginx/changes.xml 2013-03-29 18:18:42 UTC (rev 5164) +++ branches/stable-1.2/docs/xml/nginx/changes.xml 2013-04-02 12:34:21 UTC (rev 5165) @@ -5,6 +5,61 @@ + + + + +при использовании директивы "ssl_session_cache shared" +новые сессии могли не сохраняться, +если заканчивалось место в разделяемой памяти.
+Спасибо Piotr Sikora. +
+ +new sessions were not always stored +if the "ssl_session_cache shared" directive was used +and there was no free space in shared memory.
+Thanks to Piotr Sikora. +
+
+ + + +ответы могли зависать, +если использовались подзапросы +и при обработке подзапроса происходила DNS-ошибка.
+Спасибо Lanshun Zhou. +
+ +responses might hang +if subrequests were used +and a DNS error happened during subrequest processing.
+Thanks to Lanshun Zhou. +
+
+ + + +в модуле ngx_http_mp4_module.
+Спасибо Gernot Vormayr. +
+ +in the ngx_http_mp4_module.
+Thanks to Gernot Vormayr. +
+
+ + + +? ????????? ????? ????????????? ????????. + + +in backend usage accounting. + + + +
+ + From mdounin at mdounin.ru Tue Apr 2 12:34:39 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 2 Apr 2013 12:34:39 +0000 Subject: [nginx] svn commit: r5166 - in tags: . release-1.2.8 Message-ID: <20130402123439.8888B3F9FD0@mail.nginx.com> Author: mdounin Date: 2013-04-02 12:34:39 +0000 (Tue, 02 Apr 2013) New Revision: 5166 URL: http://trac.nginx.org/nginx/changeset/5166/nginx Log: release-1.2.8 tag Added: tags/release-1.2.8/ Index: tags/release-1.2.8 =================================================================== --- branches/stable-1.2 2013-04-02 12:34:21 UTC (rev 5165) +++ tags/release-1.2.8 2013-04-02 12:34:39 UTC (rev 5166) Property changes on: tags/release-1.2.8 ___________________________________________________________________ Added: svn:ignore ## -0,0 +1,14 ## +access.log +client_body_temp +fastcgi_temp +proxy_temp +scgi_temp +uwsgi_temp +GNUmakefile +Makefile +makefile +nginx +nginx.conf +nginx-*.tar.gz +objs* +tmp Added: svn:mergeinfo ## -0,0 +1 ## +/trunk:4611-4632,4636-4657,4671-4672,4674-4676,4682,4684-4699,4704-4706,4713,4736-4741,4754,4756-4771,4775,4777-4780,4782-4785,4795,4811-4820,4822-4824,4828-4835,4840-4844,4865-4872,4885-4887,4890-4896,4913-4925,4933-4934,4939,4944-4949,4961-4969,4973-4974,4976-4994,4997,4999-5005,5011-5025,5027-5031,5066,5070-5071,5078,5082-5083,5098,5109,5113-5114,5117,5123,5127-5134,5138 \ No newline at end of property From daniel.black at openquery.com Wed Apr 3 00:21:12 2013 From: daniel.black at openquery.com (Daniel Black) Date: Wed, 3 Apr 2013 10:21:12 +1000 (EST) Subject: wiki page for spdy Message-ID: <387544041.4192.1364948472202.JavaMail.root@zimbra.lentz.com.au> I see the spdy module sponsored by Automattic hasn't got a wiki page yet. Ok to transfer the following into a page? 
http://nginx.org/patches/spdy/README.txt Daniel From piotr at cloudflare.com Wed Apr 3 01:06:02 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 2 Apr 2013 18:06:02 -0700 Subject: SSL: reject unsupported protocols "negotiated" during handshake Message-ID: Hey, OpenSSL doesn't do anything to verify that "negotiated" protocol was actually advertised to the client, so we have to do it ourselves. Note: I dislike the way NGX_HTTP_NPN_NEGOTIATED is defined here, but it kind of matches the way NGX_HTTP_NPN_ADVERTISE is defined and I didn't see a better place to add it. Feel free to move it elsewhere. Best regards, Piotr Sikora diff -r 4bcd35e7a0f0 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Fri Mar 29 08:47:37 2013 +0000 +++ b/src/http/ngx_http_request.c Tue Apr 02 17:54:05 2013 -0700 @@ -712,6 +712,10 @@ } +#ifdef TLSEXT_TYPE_next_proto_neg +#define NGX_HTTP_NPN_NEGOTIATED "http/1.1" +#endif + static void ngx_http_ssl_handshake_handler(ngx_connection_t *c) { @@ -727,17 +731,34 @@ c->ssl->no_wait_shutdown = 1; -#if (NGX_HTTP_SPDY && defined TLSEXT_TYPE_next_proto_neg) +#ifdef TLSEXT_TYPE_next_proto_neg { unsigned int len; const unsigned char *data; + static const ngx_str_t http = ngx_string(NGX_HTTP_NPN_NEGOTIATED); +#if (NGX_HTTP_SPDY) static const ngx_str_t spdy = ngx_string(NGX_SPDY_NPN_NEGOTIATED); +#endif SSL_get0_next_proto_negotiated(c->ssl->connection, &data, &len); - if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) { - ngx_http_spdy_init(c->read); - return; + if (len) { +#if (NGX_HTTP_SPDY) + if (len == spdy.len && ngx_strncmp(data, spdy.data, spdy.len) == 0) + { + ngx_http_spdy_init(c->read); + return; + } +#endif + + if (len != http.len || ngx_strncmp(data, http.data, http.len)) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "client negotiated unsupported protocol \"%*s\"", + len, data); + + ngx_http_close_connection(c); + return; + } } } #endif From daniel.black at openquery.com Wed Apr 3 01:28:33 2013 From: 
daniel.black at openquery.com (Daniel Black) Date: Wed, 3 Apr 2013 11:28:33 +1000 (EST) Subject: wiki page for spdy In-Reply-To: <387544041.4192.1364948472202.JavaMail.root@zimbra.lentz.com.au> Message-ID: <1806368498.4194.1364952513204.JavaMail.root@zimbra.lentz.com.au> Oops. Found http://nginx.org/en/docs/http/ngx_http_spdy_module.html that contains some of the directives that I see in the code. ----- Original Message ----- > I see the spdy module sponsored by Automattic hasn't got a wiki page > yet. > > Ok to transfer the following into a page? > > http://nginx.org/patches/spdy/README.txt > > Daniel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- -- Daniel Black, Engineer @ Open Query (http://openquery.com) Remote expertise & maintenance for MySQL/MariaDB server environments. From yaoweibin at gmail.com Wed Apr 3 04:09:25 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Wed, 3 Apr 2013 12:09:25 +0800 Subject: About the return http status 203 Message-ID: Hi, Today in our test box I noticed that nginx sent a bad response when returning status 203. You can reproduce the problem with the return directive: location / { return 203; } And the response looks like: HTTP/1.1 Server: nginx/1.3.14 Date: Wed, 03 Apr 2013 03:54:38 GMT Content-Type: application/octet-stream Content-Length: 0 Connection: keep-alive This is actually an illegal HTTP/1.1 response. I noticed the related code in the ngx_http_header_filter_module.c: 53 static ngx_str_t ngx_http_status_lines[] = { 54 55 ngx_string("200 OK"), 56 ngx_string("201 Created"), 57 ngx_string("202 Accepted"), 58 ngx_null_string, /* "203 Non-Authoritative Information" */ 59 ngx_string("204 No Content"), 60 ngx_null_string, /* "205 Reset Content" */ 61 ngx_string("206 Partial Content"), It seems this behaviour is intentional, but it does not follow RFC 2616. 
If we replace it with a meaningful status line, is there any problem? Thanks. -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 3 11:02:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Apr 2013 15:02:46 +0400 Subject: SSL: reject unsupported protocols "negotiated" during handshake In-Reply-To: References: Message-ID: <20130403110246.GX62550@mdounin.ru> Hello! On Tue, Apr 02, 2013 at 06:06:02PM -0700, Piotr Sikora wrote: > Hey, > OpenSSL doesn't do anything to verify that "negotiated" protocol > was actually advertised to the client, so we have to do it ourselves. Do we care? I think it's ok to assume HTTP by default, even if a client sent something different from what we've advertised. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Apr 3 11:22:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Apr 2013 15:22:16 +0400 Subject: About the return http status 203 In-Reply-To: References: Message-ID: <20130403112216.GY62550@mdounin.ru> Hello! On Wed, Apr 03, 2013 at 12:09:25PM +0800, Weibin Yao wrote: > Hi, > > Today in our test box I noticed that nginx sent a bad response when > returning status 203. You can reproduce the problem with the return directive: > > location / { > return 203; > } > > And the response looks like: > > HTTP/1.1 > Server: nginx/1.3.14 > Date: Wed, 03 Apr 2013 03:54:38 GMT > Content-Type: application/octet-stream > Content-Length: 0 > Connection: keep-alive > > This is actually an illegal HTTP/1.1 response. 
> > I noticed the related code in the ngx_http_header_filter_module.c: > > 53 static ngx_str_t ngx_http_status_lines[] = { > 54 > 55 ngx_string("200 OK"), > 56 ngx_string("201 Created"), > 57 ngx_string("202 Accepted"), > 58 ngx_null_string, /* "203 Non-Authoritative Information" */ > 59 ngx_string("204 No Content"), > 60 ngx_null_string, /* "205 Reset Content" */ > 61 ngx_string("206 Partial Content"), > > It seems this behaviour is expected intentionally. It does not follow the > RFC 2616. If we replace it to be the meaningful status code. Is there any > problem? It think this is a bug. The ngx_http_status_lines[] array doesn't contain lines for status codes nginx doesn't generate itself, but the code doesn't expect such a hole may be actually hit. On the other hand, for unknown codes outside of the known range it correctly returns just a number in a status line. Quick and dirty patch follows: --- a/src/http/ngx_http_header_filter_module.c +++ b/src/http/ngx_http_header_filter_module.c @@ -270,6 +270,12 @@ ngx_http_header_filter(ngx_http_request_ len += NGX_INT_T_LEN; status_line = NULL; } + + if (status_line && status_line->len == 0) { + status = r->headers_out.status; + len += NGX_INT_T_LEN; + status_line = NULL; + } } clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Wed Apr 3 14:13:36 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Wed, 3 Apr 2013 14:13:36 +0000 Subject: [nginx] svn commit: r5167 - trunk/src/http/modules Message-ID: <20130403141336.941E83F9C18@mail.nginx.com> Author: vbart Date: 2013-04-03 14:13:35 +0000 (Wed, 03 Apr 2013) New Revision: 5167 URL: http://trac.nginx.org/nginx/changeset/5167/nginx Log: Limit req: rate should be non-zero. Specifying zero rate caused division by zero when calculating delays. 
Modified: trunk/src/http/modules/ngx_http_limit_req_module.c Modified: trunk/src/http/modules/ngx_http_limit_req_module.c =================================================================== --- trunk/src/http/modules/ngx_http_limit_req_module.c 2013-04-02 12:34:39 UTC (rev 5166) +++ trunk/src/http/modules/ngx_http_limit_req_module.c 2013-04-03 14:13:35 UTC (rev 5167) @@ -795,7 +795,7 @@ } rate = ngx_atoi(value[i].data + 5, len - 5); - if (rate <= NGX_ERROR) { + if (rate <= 0) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid rate \"%V\"", &value[i]); return NGX_CONF_ERROR; From piotr at cloudflare.com Wed Apr 3 22:16:14 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 3 Apr 2013 15:16:14 -0700 Subject: SSL: reject unsupported protocols "negotiated" during handshake In-Reply-To: <20130403110246.GX62550@mdounin.ru> References: <20130403110246.GX62550@mdounin.ru> Message-ID: Hey Maxim, > Do we care? I think it's ok to assume HTTP by default, even if a > client sent something different from what we've advertised. I'm not sure about you, but I do. I don't see a point in trying to process something that is known to fail down the line... Especially, if it produces noise in the logs. 
Right now, forced SPDY/3 request is logged like that: access.log: 127.0.0.1 - - [03/Apr/2013:14:05:10 -0700] "\x80\x03\x00\x01\x01\x00\x00\xDB\x00\x00\x00\x01\x00\x00\x00\x00`\x0080\xE3\xC6\xA7\xC2\x00\xC1\x00>\xFF\x00\x00\x00\x08\x00\x00\x00\x05:host\x00\x00\x00\x10example.net:7070\x00\x00\x00\x07:method\x00\x00\x00\x03GET\x00\x00\x00\x05:path\x00\x00\x00\x01/\x00\x00\x00\x07:scheme\x00\x00\x00\x05https\x00\x00\x00\x08:version\x00\x00\x00\x08HTTP/1.1\x00\x00\x00\x06accept\x00\x00\x00\x03*/*\x00\x00\x00\x0Faccept-encoding\x00\x00\x00" 400 189 "-" "-" error.log: 2013/04/03 14:05:10 [info] 54833#0: *4 client sent invalid method while reading client request line, client: 127.0.0.1, server: _, request: "?`80??>:hostexample.net:7070:methodGET:path/:schemehttp:versioHTTP/1.1accept*/*accept-encoding" vs patched: error.log: 2013/04/03 14:08:59 [error] 55828#0: *1 client negotiated unsupported protocol "spdy/3" while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:7070 Best regards, Piotr Sikora From vbart at nginx.com Thu Apr 4 14:19:08 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Thu, 4 Apr 2013 14:19:08 +0000 Subject: [nginx] svn commit: r5168 - trunk/src/http Message-ID: <20130404141908.2E8E73F9F53@mail.nginx.com> Author: vbart Date: 2013-04-04 14:19:06 +0000 (Thu, 04 Apr 2013) New Revision: 5168 URL: http://trac.nginx.org/nginx/changeset/5168/nginx Log: Upstream: removed surplus ngx_resolve_name_done() call. It will be called in ngx_http_upstream_finalize_request(). 
Modified: trunk/src/http/ngx_http_upstream.c Modified: trunk/src/http/ngx_http_upstream.c =================================================================== --- trunk/src/http/ngx_http_upstream.c 2013-04-03 14:13:35 UTC (rev 5167) +++ trunk/src/http/ngx_http_upstream.c 2013-04-04 14:19:06 UTC (rev 5168) @@ -3276,19 +3276,10 @@ { ngx_http_request_t *r = data; - ngx_http_upstream_t *u; - ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "cleanup http upstream request: \"%V\"", &r->uri); - u = r->upstream; - - if (u->resolved && u->resolved->ctx) { - ngx_resolve_name_done(u->resolved->ctx); - u->resolved->ctx = NULL; - } - - ngx_http_upstream_finalize_request(r, u, NGX_DONE); + ngx_http_upstream_finalize_request(r, r->upstream, NGX_DONE); } From mdounin at mdounin.ru Thu Apr 4 14:40:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Apr 2013 18:40:10 +0400 Subject: SSL: reject unsupported protocols "negotiated" during handshake In-Reply-To: References: <20130403110246.GX62550@mdounin.ru> Message-ID: <20130404144010.GH62550@mdounin.ru> Hello! On Wed, Apr 03, 2013 at 03:16:14PM -0700, Piotr Sikora wrote: > Hey Maxim, > > > Do we care? I think it's ok to assume HTTP by default, even if a > > client sent something different from what we've advertised. > > I'm not sure about you, but I do. I don't see a point in trying to > process something that is known to fail down the line... Especially, > if it produces noise in the logs. 
> > Right now, forced SPDY/3 request is logged like that: > > access.log: > 127.0.0.1 - - [03/Apr/2013:14:05:10 -0700] > "\x80\x03\x00\x01\x01\x00\x00\xDB\x00\x00\x00\x01\x00\x00\x00\x00`\x0080\xE3\xC6\xA7\xC2\x00\xC1\x00>\xFF\x00\x00\x00\x08\x00\x00\x00\x05:host\x00\x00\x00\x10example.net:7070\x00\x00\x00\x07:method\x00\x00\x00\x03GET\x00\x00\x00\x05:path\x00\x00\x00\x01/\x00\x00\x00\x07:scheme\x00\x00\x00\x05https\x00\x00\x00\x08:version\x00\x00\x00\x08HTTP/1.1\x00\x00\x00\x06accept\x00\x00\x00\x03*/*\x00\x00\x00\x0Faccept-encoding\x00\x00\x00" > 400 189 "-" "-" > > error.log: > 2013/04/03 14:05:10 [info] 54833#0: *4 client sent invalid method > while reading client request line, client: 127.0.0.1, server: _, > request: "?`80??>:hostexample.net:7070:methodGET:path/:schemehttp:versioHTTP/1.1accept*/*accept-encoding" > > vs patched: > > error.log: > 2013/04/03 14:08:59 [error] 55828#0: *1 client negotiated unsupported > protocol "spdy/3" while SSL handshaking, client: 127.0.0.1, server: > 0.0.0.0:7070 As long as this is something _forced_ and doesn't happen as normal behaviour of some clients, I would rather preserve current behaviour. For me it looks better to assume HTTP for something which is not HTTP rather than reject HTTP which e.g. happened to be hardcoded to claim HTTP/1.0 instead of HTTP/1.1 we advertise. If "spdy/3" happens to generate too much noise in logs as observed in real life - we may consider blocking it specifically. -- Maxim Dounin http://nginx.org/en/donation.html From agentzh at gmail.com Sat Apr 6 01:39:08 2013 From: agentzh at gmail.com (agentzh) Date: Fri, 5 Apr 2013 18:39:08 -0700 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: <20120911002335.GW40452@mdounin.ru> References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> Message-ID: Hello! Here attaches V2 of my patch to make ngx_http_upstream detect truncated responses. 
Changes since V1 are as follows: 1. No longer change r->headers_out.status. 2. Just skip sending the last buf (i.e., setting the last_buf or last_in_chain flags) for errors. 3. Use u->length and u->pipe->length to test good "eof" instead of testing u->headers_in.content_length_n. Things that I'm not sure are 1. Whether to emit an error message when truncation happens. (This is not implemented in my patch.) 2. Whether to use NGX_HTTP_BAD_GATEWAY to finalize the upstream instead of NGX_ERROR. Comments are highly appreciated as always :) Thanks! -agentzh --- nginx-1.2.7/src/http/ngx_http_upstream.c 2013-02-11 06:39:49.000000000 -0800 +++ nginx-1.2.7-patched/src/http/ngx_http_upstream.c 2013-04-05 12:24:34.108742922 -0700 @@ -2399,7 +2399,7 @@ ngx_http_upstream_process_non_buffered_u if (c->read->timedout) { ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_GATEWAY_TIME_OUT); return; } @@ -2446,13 +2446,20 @@ ngx_http_upstream_process_non_buffered_r if (u->busy_bufs == NULL) { if (u->length == 0 - || upstream->read->eof - || upstream->read->error) + || (upstream->read->eof + && u->length == -1 + && u->pipe + && u->pipe->length == -1)) { ngx_http_upstream_finalize_request(r, u, 0); return; } + if (upstream->read->eof || upstream->read->error) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); + return; + } + b->pos = b->start; b->last = b->start; } @@ -2720,7 +2727,9 @@ ngx_http_upstream_process_request(ngx_ht #endif - if (p->upstream_done || p->upstream_eof || p->upstream_error) { + if (p->upstream_done + || (p->upstream_eof && u->length == -1 && p->length == -1)) + { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream exit: %p", p->out); #if 0 @@ -2729,6 +2738,14 @@ ngx_http_upstream_process_request(ngx_ht ngx_http_upstream_finalize_request(r, u, 0); return; } + + if (p->upstream_eof || p->upstream_error) { + 
ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http upstream exit: %p", p->out); + + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); + return; + } } if (p->downstream_error) { @@ -3087,7 +3104,8 @@ ngx_http_upstream_finalize_request(ngx_h if (u->header_sent && rc != NGX_HTTP_REQUEST_TIME_OUT - && (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE)) + && rc != NGX_HTTP_GATEWAY_TIME_OUT + && rc >= NGX_HTTP_SPECIAL_RESPONSE) { rc = 0; } -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.2.7-upstream_truncation.patch Type: application/octet-stream Size: 2527 bytes Desc: not available URL: From agentzh at gmail.com Sun Apr 7 01:13:07 2013 From: agentzh at gmail.com (agentzh) Date: Sat, 6 Apr 2013 18:13:07 -0700 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> Message-ID: Hello! Below attaches V3 of my patch to make ngx_http_upstream detect truncated responses. Changes since V2 are as follows: 1. Use 502 to call ngx_http_upstream_finalize_request when data truncation happens. 2. When the header is already sent and a special response error code is specified in ngx_http_upstream_finalize_request, use NGX_ERROR instead of 0 to finalize the request. Thanks! 
-agentzh --- nginx-1.2.7/src/http/ngx_http_upstream.c 2013-02-11 06:39:49.000000000 -0800 +++ nginx-1.2.7-patched/src/http/ngx_http_upstream.c 2013-04-06 17:16:54.444520038 -0700 @@ -2399,7 +2399,7 @@ ngx_http_upstream_process_non_buffered_u if (c->read->timedout) { ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_GATEWAY_TIME_OUT); return; } @@ -2446,13 +2446,20 @@ ngx_http_upstream_process_non_buffered_r if (u->busy_bufs == NULL) { if (u->length == 0 - || upstream->read->eof - || upstream->read->error) + || (upstream->read->eof + && u->length == -1 + && u->pipe + && u->pipe->length == -1)) { ngx_http_upstream_finalize_request(r, u, 0); return; } + if (upstream->read->eof || upstream->read->error) { + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY); + return; + } + b->pos = b->start; b->last = b->start; } @@ -2720,7 +2727,9 @@ ngx_http_upstream_process_request(ngx_ht #endif - if (p->upstream_done || p->upstream_eof || p->upstream_error) { + if (p->upstream_done + || (p->upstream_eof && u->length == -1 && p->length == -1)) + { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream exit: %p", p->out); #if 0 @@ -2729,6 +2738,14 @@ ngx_http_upstream_process_request(ngx_ht ngx_http_upstream_finalize_request(r, u, 0); return; } + + if (p->upstream_eof || p->upstream_error) { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http upstream exit: %p", p->out); + + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY); + return; + } } if (p->downstream_error) { @@ -3087,9 +3104,9 @@ ngx_http_upstream_finalize_request(ngx_h if (u->header_sent && rc != NGX_HTTP_REQUEST_TIME_OUT - && (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE)) + && rc >= NGX_HTTP_SPECIAL_RESPONSE) { - rc = 0; + rc = NGX_ERROR; } if (rc == NGX_DECLINED) { -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx-1.2.7-upstream_truncation_v3.patch Type: application/octet-stream Size: 2563 bytes Desc: not available URL: From agentzh at gmail.com Sun Apr 7 23:25:11 2013 From: agentzh at gmail.com (agentzh) Date: Sun, 7 Apr 2013 16:25:11 -0700 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> Message-ID: Hello! Here attaches the upstream_truncation V4 patch. Changes since V3 are * set u->length to -1 in u->input_filter_init in ngx_uwsgi and ngx_scgi because they do not set u->pipe->length (like ngx_proxy) but set u->length (via ngx_http_upstream) which causes false positive for response data truncation. Maybe a better fix is to copy most of the logic in ngx_http_proxy_filter_init and ngx_http_proxy_copy_filter to ngx_uwsgi and ngx_scgi but I'm afraid it may be out of the scope of this patch. Thanks! -agentzh diff --exclude '*~' --exclude '*.swp' -urp nginx-1.2.7/src/http/modules/ngx_http_scgi_module.c nginx-1.2.7-patched/src/http/modules/ngx_http_scgi_module.c --- nginx-1.2.7/src/http/modules/ngx_http_scgi_module.c 2013-02-09 19:08:42.000000000 -0800 +++ nginx-1.2.7-patched/src/http/modules/ngx_http_scgi_module.c 2013-04-07 12:09:55.900492634 -0700 @@ -39,6 +39,7 @@ static ngx_int_t ngx_http_scgi_process_s static ngx_int_t ngx_http_scgi_process_header(ngx_http_request_t *r); static void ngx_http_scgi_abort_request(ngx_http_request_t *r); static void ngx_http_scgi_finalize_request(ngx_http_request_t *r, ngx_int_t rc); +static ngx_int_t ngx_http_scgi_input_filter_init(void *data); static void *ngx_http_scgi_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_scgi_merge_loc_conf(ngx_conf_t *cf, void *parent, @@ -446,6 +447,8 @@ ngx_http_scgi_handler(ngx_http_request_t u->pipe->input_filter = ngx_event_pipe_copy_input_filter; u->pipe->input_ctx = r; + u->input_filter_init = ngx_http_scgi_input_filter_init; + rc = 
ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -1046,6 +1049,17 @@ ngx_http_scgi_finalize_request(ngx_http_ } +static ngx_int_t +ngx_http_scgi_input_filter_init(void *data) +{ + ngx_http_request_t *r = data; + + r->upstream->length = -1; + + return NGX_OK; +} + + static void * ngx_http_scgi_create_loc_conf(ngx_conf_t *cf) { diff --exclude '*~' --exclude '*.swp' -urp nginx-1.2.7/src/http/modules/ngx_http_uwsgi_module.c nginx-1.2.7-patched/src/http/modules/ngx_http_uwsgi_module.c --- nginx-1.2.7/src/http/modules/ngx_http_uwsgi_module.c 2013-02-09 19:08:42.000000000 -0800 +++ nginx-1.2.7-patched/src/http/modules/ngx_http_uwsgi_module.c 2013-04-07 11:58:24.546915778 -0700 @@ -46,6 +46,7 @@ static ngx_int_t ngx_http_uwsgi_process_ static void ngx_http_uwsgi_abort_request(ngx_http_request_t *r); static void ngx_http_uwsgi_finalize_request(ngx_http_request_t *r, ngx_int_t rc); +static ngx_int_t ngx_http_uwsgi_input_filter_init(void *data); static void *ngx_http_uwsgi_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_uwsgi_merge_loc_conf(ngx_conf_t *cf, void *parent, @@ -479,6 +480,8 @@ ngx_http_uwsgi_handler(ngx_http_request_ u->pipe->input_filter = ngx_event_pipe_copy_input_filter; u->pipe->input_ctx = r; + u->input_filter_init = ngx_http_uwsgi_input_filter_init; + rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -1086,6 +1089,17 @@ ngx_http_uwsgi_finalize_request(ngx_http } +static ngx_int_t +ngx_http_uwsgi_input_filter_init(void *data) +{ + ngx_http_request_t *r = data; + + r->upstream->length = -1; + + return NGX_OK; +} + + static void * ngx_http_uwsgi_create_loc_conf(ngx_conf_t *cf) { diff --exclude '*~' --exclude '*.swp' -urp nginx-1.2.7/src/http/ngx_http_upstream.c nginx-1.2.7-patched/src/http/ngx_http_upstream.c --- nginx-1.2.7/src/http/ngx_http_upstream.c 2013-02-11 06:39:49.000000000 -0800 +++ nginx-1.2.7-patched/src/http/ngx_http_upstream.c 
2013-04-06 17:16:54.444520038 -0700 @@ -2399,7 +2399,7 @@ ngx_http_upstream_process_non_buffered_u if (c->read->timedout) { ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_GATEWAY_TIME_OUT); return; } @@ -2446,13 +2446,20 @@ ngx_http_upstream_process_non_buffered_r if (u->busy_bufs == NULL) { if (u->length == 0 - || upstream->read->eof - || upstream->read->error) + || (upstream->read->eof + && u->length == -1 + && u->pipe + && u->pipe->length == -1)) { ngx_http_upstream_finalize_request(r, u, 0); return; } + if (upstream->read->eof || upstream->read->error) { + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY); + return; + } + b->pos = b->start; b->last = b->start; } @@ -2720,7 +2727,9 @@ ngx_http_upstream_process_request(ngx_ht #endif - if (p->upstream_done || p->upstream_eof || p->upstream_error) { + if (p->upstream_done + || (p->upstream_eof && u->length == -1 && p->length == -1)) + { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream exit: %p", p->out); #if 0 @@ -2729,6 +2738,14 @@ ngx_http_upstream_process_request(ngx_ht ngx_http_upstream_finalize_request(r, u, 0); return; } + + if (p->upstream_eof || p->upstream_error) { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http upstream exit: %p", p->out); + + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY); + return; + } } if (p->downstream_error) { @@ -3087,9 +3104,9 @@ ngx_http_upstream_finalize_request(ngx_h if (u->header_sent && rc != NGX_HTTP_REQUEST_TIME_OUT - && (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE)) + && rc >= NGX_HTTP_SPECIAL_RESPONSE) { - rc = 0; + rc = NGX_ERROR; } if (rc == NGX_DECLINED) { -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx-1.2.7-upstream_truncation_v4.patch Type: application/octet-stream Size: 5564 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Apr 8 00:23:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Apr 2013 04:23:18 +0400 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> Message-ID: <20130408002317.GK62550@mdounin.ru> Hello! On Sun, Apr 07, 2013 at 04:25:11PM -0700, agentzh wrote: > Here attaches the upstream_truncation V4 patch. Changes since V3 are > > * set u->length to -1 in u->input_filter_init in ngx_uwsgi and > ngx_scgi because they do not set u->pipe->length (like ngx_proxy) but > set u->length (via ngx_http_upstream) which causes false positive for > response data truncation. This looks wrong. The u->length should be used/checked in case of non-buffered processing only, u->pipe->length - in case of buffered. The patch seems to check both, and this is probably what causes your problems. -- Maxim Dounin http://nginx.org/en/donation.html From jefftk at google.com Mon Apr 8 18:39:07 2013 From: jefftk at google.com (Jeff Kaufman) Date: Mon, 8 Apr 2013 14:39:07 -0400 Subject: no content length on 204s; hangs wget Message-ID: I've written a content handler for nginx that does: return NGX_HTTP_NO_CONTENT; This produces output like: $ curl -D- 'http://localhost:8050/no_content_test' HTTP/1.1 204 No Content Server: nginx/1.2.7 Connection: keep-alive Date: Mon, 08 Apr 2013 18:28:02 GMT Testing this in curl it's fine, but wget waits for a content header, timing out after 60s. I tried doing the opposite of ngx_http_clear_content_length to set a content length, but it doesn't get sent. Is there a way to add a Content-Length on a 204 response or is this a wget bug? 
Jeff Full source: https://github.com/pagespeed/ngx_pagespeed/blob/3ae84a3c17d78046477eae68e98a5f50405a292a/src/ngx_pagespeed.cc#L2053 From mdounin at mdounin.ru Mon Apr 8 18:54:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Apr 2013 22:54:25 +0400 Subject: no content length on 204s; hangs wget In-Reply-To: References: Message-ID: <20130408185424.GX62550@mdounin.ru> Hello! On Mon, Apr 08, 2013 at 02:39:07PM -0400, Jeff Kaufman wrote: > I've written a content handler for nginx that does: > > return NGX_HTTP_NO_CONTENT; > > This produces output like: > > $ curl -D- 'http://localhost:8050/no_content_test' > HTTP/1.1 204 No Content > Server: nginx/1.2.7 > Connection: keep-alive > Date: Mon, 08 Apr 2013 18:28:02 GMT > > Testing this in curl it's fine, but wget waits for a content header, > timing out after 60s. I tried doing the opposite of > ngx_http_clear_content_length to set a content length, but it doesn't > get sent. > > Is there a way to add a Content-Length on a 204 response or is this a wget bug? Adding a Content-Length will be inconsistent with RFC2616, http://tools.ietf.org/html/rfc2616#section-10.2.5: The 204 response MUST NOT include a message-body, and thus is always terminated by the first empty line after the header fields. So it looks like a wget bug. -- Maxim Dounin http://nginx.org/en/donation.html From agentzh at gmail.com Mon Apr 8 19:05:31 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 8 Apr 2013 12:05:31 -0700 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: <20130408002317.GK62550@mdounin.ru> References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> <20130408002317.GK62550@mdounin.ru> Message-ID: Hello! On Sun, Apr 7, 2013 at 5:23 PM, Maxim Dounin wrote: > > This looks wrong. > > The u->length should be used/checked in case of non-buffered processing > only, u->pipe->length - in case of buffered. 
The patch seems to > check both, and this is probably what causes your problems. > Thanks for the quick comment! But the problem is that for HTTP 1.0 responses without a Content-Length response header, the end of the body is indicated by closing the connection. And for HTTP 1.0 responses without Content-Length in non-buffered mode, u->length is always -1. Also, for chunked HTTP 1.1 response in non-buffered mode, u->length is also always -1. Simply checking u->length == -1 for "good eof" for these two cases are not enough because for the latter, data truncation cannot be detected at all. And that's also why I check p->length == -1 at the same time. Do you have any better way to do this? Thanks! -agentzh From mdounin at mdounin.ru Mon Apr 8 22:36:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Apr 2013 02:36:28 +0400 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> <20130408002317.GK62550@mdounin.ru> Message-ID: <20130408223628.GY62550@mdounin.ru> Hello! On Mon, Apr 08, 2013 at 12:05:31PM -0700, agentzh wrote: > Hello! > > On Sun, Apr 7, 2013 at 5:23 PM, Maxim Dounin wrote: > > > > This looks wrong. > > > > The u->length should be used/checked in case of non-buffered processing > > only, u->pipe->length - in case of buffered. The patch seems to > > check both, and this is probably what causes your problems. > > > > Thanks for the quick comment! > > But the problem is that for HTTP 1.0 responses without a > Content-Length response header, the end of the body is indicated by > closing the connection. > > And for HTTP 1.0 responses without Content-Length in non-buffered > mode, u->length is always -1. Also, for chunked HTTP 1.1 response in > non-buffered mode, u->length is also always -1. 
Simply checking > u->length == -1 for "good eof" for these two cases are not enough > because for the latter, data truncation cannot be detected at all. And > that's also why I check p->length == -1 at the same time. > > Do you have any better way to do this? I think the information that u->length == -1 isn't ok at eof should be explicitly exposed by protocol modules (or, rather, module - as of now, it's not ok only in case of HTTP/1.1 with chunked). -- Maxim Dounin http://nginx.org/en/donation.html From agentzh at gmail.com Mon Apr 8 23:12:18 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 8 Apr 2013 16:12:18 -0700 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: <20130408223628.GY62550@mdounin.ru> References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> <20130408002317.GK62550@mdounin.ru> <20130408223628.GY62550@mdounin.ru> Message-ID: Hello! On Mon, Apr 8, 2013 at 3:36 PM, Maxim Dounin wrote: > > I think the information that u->length == -1 isn't ok at eof > should be explicitly exposed by protocol modules (or, rather, > module - as of now, it's not ok only in case of HTTP/1.1 with > chunked). > Will you work on the patch directly? This issue keeps bothering me (and of my users) for long. Guessing your mind is no easy task for me and I've ended up tweaking my patches over and over again without real gains ;) Thanks! -agentzh From mdounin at mdounin.ru Tue Apr 9 00:30:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Apr 2013 04:30:30 +0400 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: References: <20120910103540.GN40452@mdounin.ru> <20120911002335.GW40452@mdounin.ru> <20130408002317.GK62550@mdounin.ru> <20130408223628.GY62550@mdounin.ru> Message-ID: <20130409003030.GZ62550@mdounin.ru> Hello! On Mon, Apr 08, 2013 at 04:12:18PM -0700, agentzh wrote: > Hello! 
> > On Mon, Apr 8, 2013 at 3:36 PM, Maxim Dounin wrote: > > > > I think the information that u->length == -1 isn't ok at eof > > should be explicitly exposed by protocol modules (or, rather, > > module - as of now, it's not ok only in case of HTTP/1.1 with > > chunked). > > > > Will you work on the patch directly? This issue keeps bothering me > (and of my users) for long. > > Guessing your mind is no easy task for me and I've ended up tweaking > my patches over and over again without real gains ;) I have plans to start working on upstream error handling cleanup, and on this problem in particular, in about two weeks. -- Maxim Dounin http://nginx.org/en/donation.html From thierry.magnien at sfr.com Wed Apr 10 13:26:15 2013 From: thierry.magnien at sfr.com (MAGNIEN, Thierry) Date: Wed, 10 Apr 2013 13:26:15 +0000 Subject: How to avoid blocking Nginx with long request Message-ID: <5D103CE839D50E4CBC62C9FD7B83287C26C032@EXCN015.encara.local.ads> Hi, I'm writing an Nginx module that uses information stored in memory to redirect requests to other servers. Basically when a GET requests arrives, it makes some checks and decides to which Location the requests shall be redirected. In order to have Nginx update the information it holds in memory, I send him a specific POST request to trigger it. However, reloading information takes quite a lot of time and I have some questions related with this: - while the POST request is handled in my module, the worker that took the request is blocked until it has finished processing, but if GET requests come in, are they handled by other workers or can I have some GET requests getting blocked ? - if I want the processing not to block, can I use an event timer, in order to release the worker quickly and have the processing take place "in background" ? Or will it block a worker anyway ? 
Thanks, Thierry From mdounin at mdounin.ru Wed Apr 10 13:40:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Apr 2013 17:40:55 +0400 Subject: How to avoid blocking Nginx with long request In-Reply-To: <5D103CE839D50E4CBC62C9FD7B83287C26C032@EXCN015.encara.local.ads> References: <5D103CE839D50E4CBC62C9FD7B83287C26C032@EXCN015.encara.local.ads> Message-ID: <20130410134054.GG62550@mdounin.ru> Hello! On Wed, Apr 10, 2013 at 01:26:15PM +0000, MAGNIEN, Thierry wrote: > Hi, > > I'm writing an Nginx module that uses information stored in > memory to redirect requests to other servers. Basically when a > GET requests arrives, it makes some checks and decides to which > Location the requests shall be redirected. In order to have > Nginx update the information it holds in memory, I send him a > specific POST request to trigger it. > > However, reloading information takes quite a lot of time and I > have some questions related with this: > - while the POST request is handled in my module, the worker > that took the request is blocked until it has finished > processing, but if GET requests come in, are they handled by > other workers or can I have some GET requests getting blocked ? If a worker process is blocked, it can't accept new connections. But connections already accepted, e.g. just before the POST request in question, are bound to the worker and will be blocked till you return to the event loop. > - if I want the processing not to block, can I use an event > timer, in order to release the worker quickly and have the > processing take place "in background" ? Or will it block a > worker anyway ? There is no such thing as "in background". As nginx is event driven, everything happens in event handlers, and timers are events as well. That is, just running a task in a timer handler will equally block the worker process. What might help is splitting the task into smaller ones and returning to the event loop inbetween, to let other requests time to be served. 
This may be done e.g. via 1ms timer. Alternatively, you may consider doing a reload with standard nginx configuration reload mechanism. This way all blocking operations are done in master process, then new workers with updated configuration are spawned to handle requests. -- Maxim Dounin http://nginx.org/en/donation.html From jefftk at google.com Wed Apr 10 13:56:40 2013 From: jefftk at google.com (Jeff Kaufman) Date: Wed, 10 Apr 2013 09:56:40 -0400 Subject: How to avoid blocking Nginx with long request In-Reply-To: <20130410134054.GG62550@mdounin.ru> References: <5D103CE839D50E4CBC62C9FD7B83287C26C032@EXCN015.encara.local.ads> <20130410134054.GG62550@mdounin.ru> Message-ID: Why is your module taking a long time? Is it doing heavy computation or is it blocked on IO? If it's blocking IO, can you rewrite it to use asynchronous IO and never block? Another option would be to put your code in a separate process and reverse proxy to it. Or you could be crazy and do what ngx_pagespeed does: use threads. This is very tricky to get right. On Wed, Apr 10, 2013 at 9:40 AM, Maxim Dounin wrote: > Hello! > > On Wed, Apr 10, 2013 at 01:26:15PM +0000, MAGNIEN, Thierry wrote: > >> Hi, >> >> I'm writing an Nginx module that uses information stored in >> memory to redirect requests to other servers. Basically when a >> GET requests arrives, it makes some checks and decides to which >> Location the requests shall be redirected. In order to have >> Nginx update the information it holds in memory, I send him a >> specific POST request to trigger it. >> >> However, reloading information takes quite a lot of time and I >> have some questions related with this: >> - while the POST request is handled in my module, the worker >> that took the request is blocked until it has finished >> processing, but if GET requests come in, are they handled by >> other workers or can I have some GET requests getting blocked ? > > If a worker process is blocked, it can't accept new connections. 
> But connections already accepted, e.g. just before the POST > request in question, are bound to the worker and will be blocked > till you return to the event loop. > >> - if I want the processing not to block, can I use an event >> timer, in order to release the worker quickly and have the >> processing take place "in background" ? Or will it block a >> worker anyway ? > > There is no such thing as "in background". As nginx is event > driven, everything happens in event handlers, and timers are > events as well. That is, just running a task in a timer handler > will equally block the worker process. > > What might help is splitting the task into smaller ones and > returning to the event loop inbetween, to let other requests time > to be served. This may be done e.g. via 1ms timer. > > Alternatively, you may consider doing a reload with standard nginx > configuration reload mechanism. This way all blocking operations > are done in master process, then new workers with updated > configuration are spawned to handle requests. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From thierry.magnien at sfr.com Wed Apr 10 14:48:41 2013 From: thierry.magnien at sfr.com (MAGNIEN, Thierry) Date: Wed, 10 Apr 2013 14:48:41 +0000 Subject: How to avoid blocking Nginx with long request In-Reply-To: References: <5D103CE839D50E4CBC62C9FD7B83287C26C032@EXCN015.encara.local.ads> <20130410134054.GG62550@mdounin.ru> Message-ID: <5D103CE839D50E4CBC62C9FD7B83287C26C1F9@EXCN015.encara.local.ads> Hi, Thanks for your answers. The module is blocked by strong computation. I tried to reduce tasks to separate them but that represents a large amount of work so I wanted to be sure there was no alternative ;-) However, the reload mechanism seems a good way to solve my problem, I'll search in that direction. 
Thanks to both, Thierry -----Original Message----- From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Jeff Kaufman Sent: Wednesday, April 10, 2013 15:57 To: nginx-devel at nginx.org Subject: Re: How to avoid blocking Nginx with long request Why is your module taking a long time? Is it doing heavy computation or is it blocked on IO? If it's blocking IO, can you rewrite it to use asynchronous IO and never block? Another option would be to put your code in a separate process and reverse proxy to it. Or you could be crazy and do what ngx_pagespeed does: use threads. This is very tricky to get right. On Wed, Apr 10, 2013 at 9:40 AM, Maxim Dounin wrote: > Hello! > > On Wed, Apr 10, 2013 at 01:26:15PM +0000, MAGNIEN, Thierry wrote: > >> Hi, >> >> I'm writing an Nginx module that uses information stored in >> memory to redirect requests to other servers. Basically when a >> GET requests arrives, it makes some checks and decides to which >> Location the requests shall be redirected. In order to have >> Nginx update the information it holds in memory, I send him a >> specific POST request to trigger it. >> >> However, reloading information takes quite a lot of time and I >> have some questions related with this: >> - while the POST request is handled in my module, the worker >> that took the request is blocked until it has finished >> processing, but if GET requests come in, are they handled by >> other workers or can I have some GET requests getting blocked ? > > If a worker process is blocked, it can't accept new connections. > But connections already accepted, e.g. just before the POST > request in question, are bound to the worker and will be blocked > till you return to the event loop. > >> - if I want the processing not to block, can I use an event >> timer, in order to release the worker quickly and have the >> processing take place "in background" ? Or will it block a >> worker anyway ?
> > There is no such thing as "in background". As nginx is event > driven, everything happens in event handlers, and timers are > events as well. That is, just running a task in a timer handler > will equally block the worker process. > > What might help is splitting the task into smaller ones and > returning to the event loop inbetween, to let other requests time > to be served. This may be done e.g. via 1ms timer. > > Alternatively, you may consider doing a reload with standard nginx > configuration reload mechanism. This way all blocking operations > are done in master process, then new workers with updated > configuration are spawned to handle requests. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Wed Apr 10 17:07:46 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Wed, 10 Apr 2013 17:07:46 +0000 Subject: [nginx] svn commit: r5169 - in trunk: auto/lib/perl src/http/modules/perl Message-ID: <20130410170746.4C6623F9EB7@mail.nginx.com> Author: mdounin Date: 2013-04-10 17:07:44 +0000 (Wed, 10 Apr 2013) New Revision: 5169 URL: http://trac.nginx.org/nginx/changeset/5169/nginx Log: Configure: fixed nginx.so rebuild (broken by r5145). To avoid further breaks it's now done properly, all the dependencies are now passed to Makefile.PL. While here, fixed include list passed to Makefile.PL to use Makefile variables rather than a list expanded during configure. 
Modified: trunk/auto/lib/perl/make trunk/src/http/modules/perl/Makefile.PL Modified: trunk/auto/lib/perl/make =================================================================== --- trunk/auto/lib/perl/make 2013-04-04 14:19:06 UTC (rev 5168) +++ trunk/auto/lib/perl/make 2013-04-10 17:07:44 UTC (rev 5169) @@ -31,7 +31,8 @@ cd $NGX_OBJS/src/http/modules/perl \\ && NGX_PM_CFLAGS="\$(NGX_PM_CFLAGS) -g $NGX_CC_OPT" \\ - NGX_INCS="$CORE_INCS $NGX_OBJS $HTTP_INCS" \\ + NGX_INCS="\$(CORE_INCS) \$(HTTP_INCS)" \\ + NGX_DEPS="\$(CORE_DEPS) \$(HTTP_DEPS)" \\ $NGX_PERL Makefile.PL \\ LIB=$NGX_PERL_MODULES \\ INSTALLSITEMAN3DIR=$NGX_PERL_MODULES_MAN Modified: trunk/src/http/modules/perl/Makefile.PL =================================================================== --- trunk/src/http/modules/perl/Makefile.PL 2013-04-04 14:19:06 UTC (rev 5168) +++ trunk/src/http/modules/perl/Makefile.PL 2013-04-10 17:07:44 UTC (rev 5169) @@ -21,8 +21,10 @@ } (split /\s+/, $ENV{NGX_INCS})), depend => { - 'nginx.c' => - "../../../../../src/http/modules/perl/ngx_http_perl_module.h" + 'nginx.c' => join(" ", map { + "../../../../../$_" + } (split(/\s+/, $ENV{NGX_DEPS}), + "src/http/modules/perl/ngx_http_perl_module.h")) }, PM => { From piotr at cloudflare.com Thu Apr 11 04:38:35 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 10 Apr 2013 21:38:35 -0700 Subject: [PATCH] Fix "$upstream_response_length" for upstream requests with buffering off Message-ID: Hey, the value of "$upstream_response_length" variable is being incorrectly reported as "0" for upstream requests with buffering off. Attached patch fixes this. 
Best regards, Piotr Sikora diff -r 482fda984556 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Apr 10 17:07:44 2013 +0000 +++ b/src/http/ngx_http_upstream.c Wed Apr 10 21:29:59 2013 -0700 @@ -3307,7 +3307,7 @@ u->state->response_sec = tp->sec - u->state->response_sec; u->state->response_msec = tp->msec - u->state->response_msec; - if (u->pipe) { + if (u->pipe->read_length) { u->state->response_length = u->pipe->read_length; } } From mdounin at mdounin.ru Thu Apr 11 11:28:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Apr 2013 15:28:29 +0400 Subject: [PATCH] Fix "$upstream_response_length" for upstream requests with buffering off In-Reply-To: References: Message-ID: <20130411112829.GM62550@mdounin.ru> Hello! On Wed, Apr 10, 2013 at 09:38:35PM -0700, Piotr Sikora wrote: > Hey, > the value of "$upstream_response_length" variable is being incorrectly > reported as "0" for upstream requests with buffering off. Looks like valid problem, thanks for reporting this. > diff -r 482fda984556 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Wed Apr 10 17:07:44 2013 +0000 > +++ b/src/http/ngx_http_upstream.c Wed Apr 10 21:29:59 2013 -0700 > @@ -3307,7 +3307,7 @@ > u->state->response_sec = tp->sec - u->state->response_sec; > u->state->response_msec = tp->msec - u->state->response_msec; > > - if (u->pipe) { > + if (u->pipe->read_length) { > u->state->response_length = u->pipe->read_length; > } > } The patch is wrong as u->pipe might not exists at all, and the code will result in null pointer dereference. Correct patch should be: --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3307,7 +3307,7 @@ ngx_http_upstream_finalize_request(ngx_h u->state->response_sec = tp->sec - u->state->response_sec; u->state->response_msec = tp->msec - u->state->response_msec; - if (u->pipe) { + if (u->pipe && u->pipe->read_length) { u->state->response_length = u->pipe->read_length; } } Is it looks good for you? 
-- Maxim Dounin http://nginx.org/en/donation.html From pluknet at nginx.com Thu Apr 11 13:49:14 2013 From: pluknet at nginx.com (pluknet at nginx.com) Date: Thu, 11 Apr 2013 13:49:14 +0000 Subject: [nginx] svn commit: r5170 - trunk/src/http Message-ID: <20130411134914.BCD253F9E74@mail.nginx.com> Author: pluknet Date: 2013-04-11 13:49:13 +0000 (Thu, 11 Apr 2013) New Revision: 5170 URL: http://trac.nginx.org/nginx/changeset/5170/nginx Log: Upstream: fixed $upstream_response_length without buffering. Reported by Piotr Sikora. Modified: trunk/src/http/ngx_http_upstream.c Modified: trunk/src/http/ngx_http_upstream.c =================================================================== --- trunk/src/http/ngx_http_upstream.c 2013-04-10 17:07:44 UTC (rev 5169) +++ trunk/src/http/ngx_http_upstream.c 2013-04-11 13:49:13 UTC (rev 5170) @@ -3307,7 +3307,7 @@ u->state->response_sec = tp->sec - u->state->response_sec; u->state->response_msec = tp->msec - u->state->response_msec; - if (u->pipe) { + if (u->pipe && u->pipe->read_length) { u->state->response_length = u->pipe->read_length; } } From piotr at cloudflare.com Thu Apr 11 18:41:19 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Thu, 11 Apr 2013 11:41:19 -0700 Subject: [PATCH] Fix "$upstream_response_length" for upstream requests with buffering off In-Reply-To: <20130411112829.GM62550@mdounin.ru> References: <20130411112829.GM62550@mdounin.ru> Message-ID: Hey Maxim, > The patch is wrong as u->pipe might not exists at all, and the > code will result in null pointer dereference. Argh, I was actually looking yesterday whether this might be the case, but I didn't find a code path that would result in u->pipe not being allocated... It seems that I totally forgot about memcached and my own upstream modules... Good catch, thanks! 
> Correct patch should be: > > --- a/src/http/ngx_http_upstream.c > +++ b/src/http/ngx_http_upstream.c > @@ -3307,7 +3307,7 @@ ngx_http_upstream_finalize_request(ngx_h > u->state->response_sec = tp->sec - u->state->response_sec; > u->state->response_msec = tp->msec - u->state->response_msec; > > - if (u->pipe) { > + if (u->pipe && u->pipe->read_length) { > u->state->response_length = u->pipe->read_length; > } > } > > Is it looks good for you? Yeah, thanks! Best regards, Piotr Sikora From pengqian at ruijie.com.cn Fri Apr 12 07:38:46 2013 From: pengqian at ruijie.com.cn (=?gb2312?B?xe3HqyjR0MH5ILij1t0p?=) Date: Fri, 12 Apr 2013 07:38:46 +0000 Subject: DNS bug report Message-ID: <0E2381B3873AD44F9C4B3D0EC0BC9A1460DB9859@fzex.ruijie.com.cn> Hi all, Recently, we have tested the NGX reverse proxy by TestCenter and found a segmentation fault in DNS module. BUG condition: 1. The rn link two(or more) ctxs, As we know the end ctx get a timeout event. 2. When rn recive a CNAME type response, it will create a new rn node. 3. The new rn link the same ctxs and send a query. Although the first ctx->name point the cname, the end ctx->name remain to point the original name. 4. The end ctx timeout occours, but it can't del from the new rn link for ctx->name point the original name. 5. The new rn recvice the response(code 2), it will call all ctx->handle. Unfortunately the end ctx has been freed, then the segmentation fault occurs. svn diff Index: ngx_resolver.c =================================================================== --- ngx_resolver.c (revision 5170) +++ ngx_resolver.c (working copy) @@ -607,6 +607,7 @@ rn->waiting = ctx; ctx->state = NGX_AGAIN; + ctx->next = NULL; return NGX_AGAIN; Thanks Pengqian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ru at nginx.com Fri Apr 12 10:50:23 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 12 Apr 2013 14:50:23 +0400 Subject: DNS bug report In-Reply-To: <0E2381B3873AD44F9C4B3D0EC0BC9A1460DB9859@fzex.ruijie.com.cn> References: <0E2381B3873AD44F9C4B3D0EC0BC9A1460DB9859@fzex.ruijie.com.cn> Message-ID: <20130412105023.GF80793@lo0.su> Hi, On Fri, Apr 12, 2013 at 07:38:46AM +0000, ??(?? ??) wrote: > Hi all, > > Recently, we have tested the NGX reverse proxy by TestCenter and found a segmentation fault in DNS module. > > BUG condition: > 1. The rn link two(or more) ctxs, As we know the end ctx get a timeout event. > 2. When rn recive a CNAME type response, it will create a new rn node. > 3. The new rn link the same ctxs and send a query. Although the first ctx->name point the cname, the end ctx->name remain to point the original name. > 4. The end ctx timeout occours, but it can't del from the new rn link for ctx->name point the original name. > 5. The new rn recvice the response(code 2), it will call all ctx->handle. Unfortunately the end ctx has been freed, then the segmentation fault occurs. > > svn diff > Index: ngx_resolver.c > =================================================================== > --- ngx_resolver.c (revision 5170) > +++ ngx_resolver.c (working copy) > @@ -607,6 +607,7 @@ > rn->waiting = ctx; > > ctx->state = NGX_AGAIN; > + ctx->next = NULL; > > return NGX_AGAIN; > > > > Thanks > Pengqian Thanks for your report. However, we have difficulty trying to understand your description. Could you please provide steps on how to reproduce the problem without going to ngx_resolver.c internals? What name nginx tries to resolve, what it gets in a reply from the DNS server, what happens next, etc. 
From pengqian at ruijie.com.cn Fri Apr 12 14:38:05 2013 From: pengqian at ruijie.com.cn (=?gb2312?B?xe3HqyjR0MH5ILij1t0p?=) Date: Fri, 12 Apr 2013 14:38:05 +0000 Subject: =?UTF-8?B?562U5aSNOiBETlMgYnVnIHJlcG9ydA==?= In-Reply-To: <20130412105023.GF80793@lo0.su> Message-ID: <0E2381B3873AD44F9C4B3D0EC0BC9A1460DB9879@fzex.ruijie.com.cn> For example: Two(or more) requests visit A.com at the same time. DNS module: Send query A.com Recv response A.com(CNAME:b.com) Send query B.com [a little while] --> event timeout Recv response B.com(code:2 Server fail)--->segmentation fault -----????----- ???: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] ?? Ruslan Ermilov ????: 2013?4?12? 18:50 ???: nginx-devel at nginx.org ??: Re: DNS bug report Hi, On Fri, Apr 12, 2013 at 07:38:46AM +0000, ??(?? ??) wrote: > Hi all, > > Recently, we have tested the NGX reverse proxy by TestCenter and found a segmentation fault in DNS module. > > BUG condition: > 1. The rn link two(or more) ctxs, As we know the end ctx get a timeout event. > 2. When rn recive a CNAME type response, it will create a new rn node. > 3. The new rn link the same ctxs and send a query. Although the first ctx->name point the cname, the end ctx->name remain to point the original name. > 4. The end ctx timeout occours, but it can't del from the new rn link for ctx->name point the original name. > 5. The new rn recvice the response(code 2), it will call all ctx->handle. Unfortunately the end ctx has been freed, then the segmentation fault occurs. > > svn diff > Index: ngx_resolver.c > =================================================================== > --- ngx_resolver.c (revision 5170) > +++ ngx_resolver.c (working copy) > @@ -607,6 +607,7 @@ > rn->waiting = ctx; > > ctx->state = NGX_AGAIN; > + ctx->next = NULL; > > return NGX_AGAIN; > > > > Thanks > Pengqian Thanks for your report. However, we have difficulty trying to understand your description. 
Could you please provide steps on how to reproduce the problem without going to ngx_resolver.c internals? What name nginx tries to resolve, what it gets in a reply from the DNS server, what happens next, etc. _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From vbart at nginx.com Fri Apr 12 15:02:33 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Fri, 12 Apr 2013 15:02:33 +0000 Subject: [nginx] svn commit: r5171 - trunk/src/event/modules Message-ID: <20130412150233.B900A3F9C13@mail.nginx.com> Author: vbart Date: 2013-04-12 15:02:33 +0000 (Fri, 12 Apr 2013) New Revision: 5171 URL: http://trac.nginx.org/nginx/changeset/5171/nginx Log: Events: protection from stale events in eventport and devpoll. Stale write event may happen if read and write events was reported both, and processing of the read event closed descriptor. In practice this might result in "sendfilev() failed (134: ..." or "writev() failed (134: ..." errors when switching to next upstream server. 
See report here: http://mailman.nginx.org/pipermail/nginx/2013-April/038421.html Modified: trunk/src/event/modules/ngx_devpoll_module.c trunk/src/event/modules/ngx_eventport_module.c Modified: trunk/src/event/modules/ngx_devpoll_module.c =================================================================== --- trunk/src/event/modules/ngx_devpoll_module.c 2013-04-11 13:49:13 UTC (rev 5170) +++ trunk/src/event/modules/ngx_devpoll_module.c 2013-04-12 15:02:33 UTC (rev 5171) @@ -343,7 +343,7 @@ ngx_fd_t fd; ngx_err_t err; ngx_int_t i; - ngx_uint_t level; + ngx_uint_t level, instance; ngx_event_t *rev, *wev, **queue; ngx_connection_t *c; struct pollfd pfd; @@ -510,7 +510,13 @@ ngx_locked_post_event(rev, queue); } else { + instance = rev->instance; + rev->handler(rev); + + if (c->fd == -1 || wev->instance != instance) { + continue; + } } } Modified: trunk/src/event/modules/ngx_eventport_module.c =================================================================== --- trunk/src/event/modules/ngx_eventport_module.c 2013-04-11 13:49:13 UTC (rev 5170) +++ trunk/src/event/modules/ngx_eventport_module.c 2013-04-12 15:02:33 UTC (rev 5171) @@ -551,7 +551,7 @@ } else { rev->handler(rev); - if (ev->closed) { + if (ev->closed || ev->instance != instance) { continue; } } From vbart at nginx.com Fri Apr 12 15:04:24 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Fri, 12 Apr 2013 15:04:24 +0000 Subject: [nginx] svn commit: r5172 - trunk/src/event/modules Message-ID: <20130412150424.6B0B03F9C3E@mail.nginx.com> Author: vbart Date: 2013-04-12 15:04:23 +0000 (Fri, 12 Apr 2013) New Revision: 5172 URL: http://trac.nginx.org/nginx/changeset/5172/nginx Log: Events: handle only active events in eventport. We generate both read and write events if an error event was returned by port_getn() without POLLIN/POLLOUT, but we should not try to handle inactive events, they may even have no handler. 
Modified: trunk/src/event/modules/ngx_eventport_module.c Modified: trunk/src/event/modules/ngx_eventport_module.c =================================================================== --- trunk/src/event/modules/ngx_eventport_module.c 2013-04-12 15:02:33 UTC (rev 5171) +++ trunk/src/event/modules/ngx_eventport_module.c 2013-04-12 15:04:23 UTC (rev 5172) @@ -530,6 +530,14 @@ rev = c->read; wev = c->write; + if (!rev->active) { + revents &= ~POLLIN; + } + + if (!wew->active) { + revents &= ~POLLOUT; + } + rev->active = 0; wev->active = 0; From vbart at nginx.com Fri Apr 12 17:31:08 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Fri, 12 Apr 2013 17:31:08 +0000 Subject: [nginx] svn commit: r5173 - trunk/src/event/modules Message-ID: <20130412173108.A66BE3F9FF3@mail.nginx.com> Author: vbart Date: 2013-04-12 17:31:08 +0000 (Fri, 12 Apr 2013) New Revision: 5173 URL: http://trac.nginx.org/nginx/changeset/5173/nginx Log: Events: fixed typos in two previous commits. Modified: trunk/src/event/modules/ngx_devpoll_module.c trunk/src/event/modules/ngx_eventport_module.c Modified: trunk/src/event/modules/ngx_devpoll_module.c =================================================================== --- trunk/src/event/modules/ngx_devpoll_module.c 2013-04-12 15:04:23 UTC (rev 5172) +++ trunk/src/event/modules/ngx_devpoll_module.c 2013-04-12 17:31:08 UTC (rev 5173) @@ -514,7 +514,7 @@ rev->handler(rev); - if (c->fd == -1 || wev->instance != instance) { + if (c->fd == -1 || rev->instance != instance) { continue; } } Modified: trunk/src/event/modules/ngx_eventport_module.c =================================================================== --- trunk/src/event/modules/ngx_eventport_module.c 2013-04-12 15:04:23 UTC (rev 5172) +++ trunk/src/event/modules/ngx_eventport_module.c 2013-04-12 17:31:08 UTC (rev 5173) @@ -534,7 +534,7 @@ revents &= ~POLLIN; } - if (!wew->active) { + if (!wev->active) { revents &= ~POLLOUT; } From ru at nginx.com Fri Apr 12 19:12:14 2013 From: ru at nginx.com 
(ru at nginx.com) Date: Fri, 12 Apr 2013 19:12:14 +0000 Subject: [nginx] svn commit: r5174 - trunk/src/http/modules Message-ID: <20130412191214.5E0D13F9F46@mail.nginx.com> Author: ru Date: 2013-04-12 19:12:13 +0000 (Fri, 12 Apr 2013) New Revision: 5174 URL: http://trac.nginx.org/nginx/changeset/5174/nginx Log: Upstream: warn if multiple non-stackable balancers are installed. Modified: trunk/src/http/modules/ngx_http_upstream_ip_hash_module.c trunk/src/http/modules/ngx_http_upstream_least_conn_module.c Modified: trunk/src/http/modules/ngx_http_upstream_ip_hash_module.c =================================================================== --- trunk/src/http/modules/ngx_http_upstream_ip_hash_module.c 2013-04-12 17:31:08 UTC (rev 5173) +++ trunk/src/http/modules/ngx_http_upstream_ip_hash_module.c 2013-04-12 19:12:13 UTC (rev 5174) @@ -252,6 +252,11 @@ uscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_upstream_module); + if (uscf->peer.init_upstream) { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "load balancing method redefined"); + } + uscf->peer.init_upstream = ngx_http_upstream_init_ip_hash; uscf->flags = NGX_HTTP_UPSTREAM_CREATE Modified: trunk/src/http/modules/ngx_http_upstream_least_conn_module.c =================================================================== --- trunk/src/http/modules/ngx_http_upstream_least_conn_module.c 2013-04-12 17:31:08 UTC (rev 5173) +++ trunk/src/http/modules/ngx_http_upstream_least_conn_module.c 2013-04-12 19:12:13 UTC (rev 5174) @@ -387,6 +387,11 @@ uscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_upstream_module); + if (uscf->peer.init_upstream) { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "load balancing method redefined"); + } + uscf->peer.init_upstream = ngx_http_upstream_init_least_conn; uscf->flags = NGX_HTTP_UPSTREAM_CREATE From mdounin at mdounin.ru Tue Apr 16 10:15:00 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 16 Apr 2013 10:15:00 +0000 Subject: [nginx] svn commit: r5175 - trunk/src/http 
Message-ID: <20130416101500.E39543F9F35@mail.nginx.com> Author: mdounin Date: 2013-04-16 10:14:59 +0000 (Tue, 16 Apr 2013) New Revision: 5175 URL: http://trac.nginx.org/nginx/changeset/5175/nginx Log: Request body: only read body in main request (ticket #330). Before 1.3.9 an attempt to read body in a subrequest only caused problems if body wasn't already read or discarded in a main request. Starting with 1.3.9 it might also cause problems if body was discarded by a main request before subrequest start. Fix is to just ignore attempts to read request body in a subrequest, which looks like the right thing to do anyway. Modified: trunk/src/http/ngx_http_request_body.c Modified: trunk/src/http/ngx_http_request_body.c =================================================================== --- trunk/src/http/ngx_http_request_body.c 2013-04-12 19:12:13 UTC (rev 5174) +++ trunk/src/http/ngx_http_request_body.c 2013-04-16 10:14:59 UTC (rev 5175) @@ -49,7 +49,7 @@ } #endif - if (r->request_body || r->discard_body) { + if (r != r->main || r->request_body || r->discard_body) { post_handler(r); return NGX_OK; } From mdounin at mdounin.ru Tue Apr 16 12:58:04 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 16 Apr 2013 12:58:04 +0000 Subject: [nginx] svn commit: r5176 - trunk/src/event/modules Message-ID: <20130416125804.174B73F9C0C@mail.nginx.com> Author: mdounin Date: 2013-04-16 12:58:03 +0000 (Tue, 16 Apr 2013) New Revision: 5176 URL: http://trac.nginx.org/nginx/changeset/5176/nginx Log: Events: backout eventport changes (r5172) for now. The eventport method needs more work. The changes in r5172, while correct, introduce various new regressions with the current code.
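[Editor's aside: the eventport hunk being backed out here masked off reported events whose handlers were no longer active. The bit-masking idea can be sketched in Python with the select module's poll flags; the *_active booleans below merely stand in for nginx's rev->active / wev->active state, this is not nginx code.]

```python
import select

# Reported events for one connection, as a poll-style bitmask.
revents = select.POLLIN | select.POLLOUT

# Stand-ins for nginx's rev->active / wev->active flags.
rev_active = False   # read handler no longer registered
wev_active = True

# The (later reverted) hunk stripped events nobody is waiting for
# before dispatching handlers:
if not rev_active:
    revents &= ~select.POLLIN
if not wev_active:
    revents &= ~select.POLLOUT

print(revents == select.POLLOUT)  # True: only the write event survives
```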
Modified: trunk/src/event/modules/ngx_eventport_module.c Modified: trunk/src/event/modules/ngx_eventport_module.c =================================================================== --- trunk/src/event/modules/ngx_eventport_module.c 2013-04-16 10:14:59 UTC (rev 5175) +++ trunk/src/event/modules/ngx_eventport_module.c 2013-04-16 12:58:03 UTC (rev 5176) @@ -530,14 +530,6 @@ rev = c->read; wev = c->write; - if (!rev->active) { - revents &= ~POLLIN; - } - - if (!wev->active) { - revents &= ~POLLOUT; - } - rev->active = 0; wev->active = 0; From mdounin at mdounin.ru Tue Apr 16 14:05:12 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 16 Apr 2013 14:05:12 +0000 Subject: [nginx] svn commit: r5177 - trunk/docs/xml/nginx Message-ID: <20130416140512.2B5713F9C10@mail.nginx.com> Author: mdounin Date: 2013-04-16 14:05:11 +0000 (Tue, 16 Apr 2013) New Revision: 5177 URL: http://trac.nginx.org/nginx/changeset/5177/nginx Log: nginx-1.3.16-RELEASE Modified: trunk/docs/xml/nginx/changes.xml Modified: trunk/docs/xml/nginx/changes.xml =================================================================== --- trunk/docs/xml/nginx/changes.xml 2013-04-16 12:58:03 UTC (rev 5176) +++ trunk/docs/xml/nginx/changes.xml 2013-04-16 14:05:11 UTC (rev 5177) @@ -5,6 +5,57 @@ + + + + +? ??????? ???????? ??? ????????? segmentation fault, +???? ?????????????? ??????????; +?????? ????????? ? 1.3.9. + + +a segmentation fault might occur in a worker process +if subrequests were used; +the bug had appeared in 1.3.9. + + + + + +????????? tcp_nodelay ???????? ?????? +??? ????????????? WebSocket-?????????? ? unix domain ?????. + + +the "tcp_nodelay" directive caused an error +if a WebSocket connection was proxied into a unix domain socket. + + + + + +?????????? $upstream_response_length ?????????? ???????? "0", +???? ?? ?????????????? ???????????.
+??????? Piotr Sikora. +
+ +the $upstream_response_length variable has an incorrect value "0" +if buffering was not used.
+Thanks to Piotr Sikora. +
+
+ + + +? ??????? ????????? ?????????? eventport ? /dev/poll. + + +in the eventport and /dev/poll methods. + + + +
+ + From mdounin at mdounin.ru Tue Apr 16 14:05:22 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 16 Apr 2013 14:05:22 +0000 Subject: [nginx] svn commit: r5178 - tags Message-ID: <20130416140522.E00A83FA17E@mail.nginx.com> Author: mdounin Date: 2013-04-16 14:05:22 +0000 (Tue, 16 Apr 2013) New Revision: 5178 URL: http://trac.nginx.org/nginx/changeset/5178/nginx Log: release-1.3.16 tag Added: tags/release-1.3.16/ From pluknet at nginx.com Wed Apr 17 11:53:38 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 17 Apr 2013 15:53:38 +0400 Subject: DNS bug report In-Reply-To: <0E2381B3873AD44F9C4B3D0EC0BC9A1460DB9879@fzex.ruijie.com.cn> References: <0E2381B3873AD44F9C4B3D0EC0BC9A1460DB9879@fzex.ruijie.com.cn> Message-ID: <55B5440F-EB64-4D62-BBDB-C770A8D20F2B@nginx.com> On Apr 12, 2013, at 6:38 PM, ??(?? ??) wrote: > For example: > Two(or more) requests visit A.com at the same time. > DNS module: > Send query A.com > Recv response A.com(CNAME:b.com) > Send query B.com > [a little while] --> event timeout > Recv response B.com(code:2 Server fail)--->segmentation fault Hello. I tried to reproduce it with the described conditions but to no avail. Can you please provide what "nginx -V" shows? Thanks in advance. -- Sergey Kandaurov pluknet at nginx.com From fantasyvideo at 126.com Thu Apr 18 08:55:58 2013 From: fantasyvideo at 126.com (=?gb2312?B?w867w7mk1/fK0g==?=) Date: Thu, 18 Apr 2013 16:55:58 +0800 Subject: regarding the error "can't open auto/options" Message-ID: <000001ce3c12$8f207530$ad615f90$@126.com> I got the nginx source code by svn. Then tried to compile it with ?./configure?. ?can?t open auto/options? is reported. I find that there is no options folder or file in auto folder. How should I solve it? Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fantasyvideo at 126.com Thu Apr 18 09:00:14 2013 From: fantasyvideo at 126.com (Tony) Date: Thu, 18 Apr 2013 17:00:14 +0800 (CST) Subject: regarding the error "can't open auto/options" Message-ID: <1c05cff.ac6f.13e1c5f65d5.Coremail.fantasyvideo@126.com> I got the nginx source code by svn. Then tried to compile it with "./configure"; "can't open auto/options" is reported. I find that there is no options folder or file in the auto folder. How should I solve it? Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Apr 18 09:36:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Apr 2013 13:36:54 +0400 Subject: regarding the error "can't open auto/options" In-Reply-To: <1c05cff.ac6f.13e1c5f65d5.Coremail.fantasyvideo@126.com> References: <1c05cff.ac6f.13e1c5f65d5.Coremail.fantasyvideo@126.com> Message-ID: <20130418093654.GP92338@mdounin.ru> Hello! On Thu, Apr 18, 2013 at 05:00:14PM +0800, Tony wrote: > I got the nginx source code by svn. > > Then tried to compile it with "./configure"; "can't open > auto/options" is reported. I find that there is no options > folder or file in the auto folder. > > How should I solve it? The "auto/options" file is there: http://trac.nginx.org/nginx/browser/nginx/trunk/auto/options If there is no such file in your svn checkout - it probably means you did something wrong. On the other hand, "./configure" won't work in an svn checkout, as the "configure" script is under the "auto/" directory. If you need to run configure in an svn checkout, you should run "auto/configure" from the checkout root.
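[Editor's aside: the layout difference Maxim describes can be demonstrated with a small simulation. The directory layout and stub script below are fabricated for illustration; a real checkout's auto/configure is a large shell script.]

```python
import os
import subprocess
import tempfile

# Simulate an svn checkout: configure lives at auto/configure and
# there is no top-level ./configure wrapper (that one is generated
# only for release tarballs).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "auto"))
with open(os.path.join(root, "auto", "configure"), "w") as f:
    f.write("echo configuring\n")   # stub, not the real script

# No top-level configure in a checkout:
print(os.path.exists(os.path.join(root, "configure")))  # False

# Run it the way Maxim suggests: from the checkout root.
out = subprocess.check_output(["sh", "auto/configure"], cwd=root)
print(out.decode().strip())  # configuring
```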
-- Maxim Dounin http://nginx.org/en/donation.html From ru at nginx.com Thu Apr 18 14:16:45 2013 From: ru at nginx.com (ru at nginx.com) Date: Thu, 18 Apr 2013 14:16:45 +0000 Subject: [nginx] svn commit: r5179 - trunk/src/core Message-ID: <20130418141645.3DE563F9FD1@mail.nginx.com> Author: ru Date: 2013-04-18 14:16:44 +0000 (Thu, 18 Apr 2013) New Revision: 5179 URL: http://trac.nginx.org/nginx/changeset/5179/nginx Log: Version bump. Modified: trunk/src/core/nginx.h Modified: trunk/src/core/nginx.h =================================================================== --- trunk/src/core/nginx.h 2013-04-16 14:05:22 UTC (rev 5178) +++ trunk/src/core/nginx.h 2013-04-18 14:16:44 UTC (rev 5179) @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1003016 -#define NGINX_VERSION "1.3.16" +#define nginx_version 1003017 +#define NGINX_VERSION "1.3.17" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From ru at nginx.com Thu Apr 18 14:26:08 2013 From: ru at nginx.com (ru at nginx.com) Date: Thu, 18 Apr 2013 14:26:08 +0000 Subject: [nginx] svn commit: r5180 - in trunk/auto/lib: md5 sha1 Message-ID: <20130418142608.8E2223F9C1B@mail.nginx.com> Author: ru Date: 2013-04-18 14:26:08 +0000 (Thu, 18 Apr 2013) New Revision: 5180 URL: http://trac.nginx.org/nginx/changeset/5180/nginx Log: Configure: uniformly refer to libs when searching for md5 and sha1. Modified: trunk/auto/lib/md5/conf trunk/auto/lib/sha1/conf Modified: trunk/auto/lib/md5/conf =================================================================== --- trunk/auto/lib/md5/conf 2013-04-18 14:16:44 UTC (rev 5179) +++ trunk/auto/lib/md5/conf 2013-04-18 14:26:08 UTC (rev 5180) @@ -52,7 +52,7 @@ # FreeBSD, Solaris 10 - ngx_feature="system md library" + ngx_feature="md5 in system md library" ngx_feature_name=NGX_HAVE_MD5 ngx_feature_run=no ngx_feature_incs="#include " @@ -67,7 +67,7 @@ # Solaris 8/9 - ngx_feature="system md5 library" + ngx_feature="md5 in system md5 library" ngx_feature_libs="-lmd5" . 
auto/feature @@ -78,7 +78,7 @@ # OpenSSL crypto library - ngx_feature="OpenSSL md5 crypto library" + ngx_feature="md5 in system OpenSSL crypto library" ngx_feature_name="NGX_OPENSSL_MD5" ngx_feature_incs="#include <openssl/md5.h>" ngx_feature_libs="-lcrypto" Modified: trunk/auto/lib/sha1/conf =================================================================== --- trunk/auto/lib/sha1/conf 2013-04-18 14:16:44 UTC (rev 5179) +++ trunk/auto/lib/sha1/conf 2013-04-18 14:26:08 UTC (rev 5180) @@ -57,7 +57,7 @@ # OpenSSL crypto library - ngx_feature="OpenSSL sha1 crypto library" + ngx_feature="sha1 in system OpenSSL crypto library" ngx_feature_incs="#include <openssl/sha.h>" ngx_feature_libs="-lcrypto" . auto/feature From mdounin at mdounin.ru Fri Apr 19 12:19:57 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Fri, 19 Apr 2013 12:19:57 +0000 Subject: [nginx] svn commit: r5181 - in trunk: auto/lib/perl src/http/modules/perl Message-ID: <20130419121957.C61483F9FD1@mail.nginx.com> Author: mdounin Date: 2013-04-19 12:19:57 +0000 (Fri, 19 Apr 2013) New Revision: 5181 URL: http://trac.nginx.org/nginx/changeset/5181/nginx Log: Configure: fixed perl Makefile generation (ticket #334). Dependency tracking introduced in r5169 did not handle absolute path names properly. Absolute names might appear in CORE_DEPS if --with-openssl or --with-pcre configure arguments are used to build OpenSSL/PCRE libraries. Additionally, the part of r5169 that set NGX_INCS from Makefile variables is reverted: Makefile variables have $ngx_include_opt in them, which might result in wrong include paths being used. As a side effect, this also restores build with --with-http_perl_module and --without-http at the same time.
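[Editor's aside: the absolute-path handling this log describes, implemented below as the Perl expression m#^/# ? $_ : "../../../../../$_", can be paraphrased in Python. The dependency names are invented for illustration.]

```python
def to_build_relative(dep, up="../../../../../"):
    # Absolute dependencies (e.g. a locally built OpenSSL header named
    # in CORE_DEPS) are left untouched; everything else is made relative
    # to the objs/src/http/modules/perl build directory.
    return dep if dep.startswith("/") else up + dep

deps = [
    "src/core/nginx.h",                            # relative: gets prefixed
    "/opt/build/openssl/include/openssl/ssl.h",    # absolute: kept as-is
]
print([to_build_relative(d) for d in deps])
```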
Modified: trunk/auto/lib/perl/make trunk/src/http/modules/perl/Makefile.PL Modified: trunk/auto/lib/perl/make =================================================================== --- trunk/auto/lib/perl/make 2013-04-18 14:26:08 UTC (rev 5180) +++ trunk/auto/lib/perl/make 2013-04-19 12:19:57 UTC (rev 5181) @@ -31,7 +31,7 @@ cd $NGX_OBJS/src/http/modules/perl \\ && NGX_PM_CFLAGS="\$(NGX_PM_CFLAGS) -g $NGX_CC_OPT" \\ - NGX_INCS="\$(CORE_INCS) \$(HTTP_INCS)" \\ + NGX_INCS="$CORE_INCS $NGX_OBJS $HTTP_INCS" \\ NGX_DEPS="\$(CORE_DEPS) \$(HTTP_DEPS)" \\ $NGX_PERL Makefile.PL \\ LIB=$NGX_PERL_MODULES \\ Modified: trunk/src/http/modules/perl/Makefile.PL =================================================================== --- trunk/src/http/modules/perl/Makefile.PL 2013-04-18 14:26:08 UTC (rev 5180) +++ trunk/src/http/modules/perl/Makefile.PL 2013-04-19 12:19:57 UTC (rev 5181) @@ -22,7 +22,7 @@ depend => { 'nginx.c' => join(" ", map { - "../../../../../$_" + m#^/# ? $_ : "../../../../../$_" } (split(/\s+/, $ENV{NGX_DEPS}), "src/http/modules/perl/ngx_http_perl_module.h")) }, From dave at daveb.net Mon Apr 22 15:46:36 2013 From: dave at daveb.net (Dave Bailey) Date: Mon, 22 Apr 2013 08:46:36 -0700 Subject: protobuf-nginx (nginx code generator for protocol buffers messages) Message-ID: Hi, I've written a Google Protocol Buffers code generator for nginx module developers interested in using protobuf messages within nginx natively. The project is on Github here: https://github.com/dbcode/protobuf-nginx/ The README.md gives more details, but the intent is to generate code that follows the "nginx way", i.e. using nginx data types such as ngx_str_t, ngx_array_t, and ngx_rbtree_t; memory pools; and overall programming style. I have some more work to do on it, but at this point the generated code is fully functional. I'd be interested to hear any feedback from module developers who have considered using protobuf in nginx. 
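[Editor's aside: for readers who have not used protocol buffers, a generator like protobuf-nginx consumes message definitions of roughly the following shape. The message and field names here are invented for illustration; see the project's README for its actual examples.]

```proto
syntax = "proto2";

// Hypothetical message; the generator's stated goal is to map string
// fields to ngx_str_t and repeated fields to ngx_array_t in the
// generated nginx-style C code.
message BackendStatus {
  optional string name  = 1;
  optional uint64 bytes = 2;
  repeated string peers = 3;
}
```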
-dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at akins.org Mon Apr 22 20:44:49 2013 From: brian at akins.org (Brian Akins) Date: Mon, 22 Apr 2013 16:44:49 -0400 Subject: protobuf-nginx (nginx code generator for protocol buffers messages) In-Reply-To: References: Message-ID: I use protobufs in nginx via Lua. An example: https://github.com/bakins/lua-resty-riak Easy and speedy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 23 10:04:13 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Tue, 23 Apr 2013 10:04:13 +0000 Subject: [nginx] svn commit: r5182 - trunk/src/http/modules/perl Message-ID: <20130423100413.E0CFC3F9C45@mail.nginx.com> Author: mdounin Date: 2013-04-23 10:04:12 +0000 (Tue, 23 Apr 2013) New Revision: 5182 URL: http://trac.nginx.org/nginx/changeset/5182/nginx Log: Perl: request body handling fixed. As of 1.3.9, chunked request body may be available with r->headers_in.content_length_n <= 0. Additionally, request body may be in multiple buffers even if r->request_body_in_single_buf was requested. 
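[Editor's aside: the multi-buffer handling this log describes can be modeled in Python. Bytes objects stand in for the ngx_buf_t chain; the real code sizes the chain, copies it into a pool allocation, and keeps a no-copy fast path for a single buffer.]

```python
def request_body(chain):
    if not chain:
        return None
    if len(chain) == 1:
        return chain[0]          # fast path: use the buffer in place
    size = sum(len(buf) for buf in chain)   # first pass: total length
    out = bytearray()                        # "pool" allocation
    for buf in chain:                        # second pass: copy
        out += buf
    assert len(out) == size
    return bytes(out)

print(request_body([b"foo", b"bar"]))  # b'foobar'
```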
Modified: trunk/src/http/modules/perl/nginx.xs Modified: trunk/src/http/modules/perl/nginx.xs =================================================================== --- trunk/src/http/modules/perl/nginx.xs 2013-04-19 12:19:57 UTC (rev 5181) +++ trunk/src/http/modules/perl/nginx.xs 2013-04-23 10:04:12 UTC (rev 5182) @@ -357,7 +357,7 @@ ngx_http_perl_set_request(r); - if (r->headers_in.content_length_n <= 0) { + if (r->headers_in.content_length_n <= 0 && !r->headers_in.chunked) { XSRETURN_UNDEF; } @@ -386,7 +386,10 @@ dXSTARG; ngx_http_request_t *r; + u_char *p, *data; size_t len; + ngx_buf_t *buf; + ngx_chain_t *cl; ngx_http_perl_set_request(r); @@ -397,13 +400,43 @@ XSRETURN_UNDEF; } - len = r->request_body->bufs->buf->last - r->request_body->bufs->buf->pos; + cl = r->request_body->bufs; + buf = cl->buf; + if (cl->next == NULL) { + len = buf->last - buf->pos; + data = buf->pos; + goto done; + } + + len = buf->last - buf->pos; + cl = cl->next; + + for ( /* void */ ; cl; cl = cl->next) { + buf = cl->buf; + len += buf->last - buf->pos; + } + + p = ngx_pnalloc(r->pool, len); + if (p == NULL) { + return XSRETURN_UNDEF; + } + + data = p; + cl = r->request_body->bufs; + + for ( /* void */ ; cl; cl = cl->next) { + buf = cl->buf; + p = ngx_cpymem(p, buf->pos, buf->last - buf->pos); + } + + done: + if (len == 0) { XSRETURN_UNDEF; } - ngx_http_perl_set_targ(r->request_body->bufs->buf->pos, len); + ngx_http_perl_set_targ(data, len); ST(0) = TARG; From vbart at nginx.com Tue Apr 23 10:15:50 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Tue, 23 Apr 2013 10:15:50 +0000 Subject: [nginx] svn commit: r5183 - trunk/src/http Message-ID: <20130423101550.3D35E3F9C0F@mail.nginx.com> Author: vbart Date: 2013-04-23 10:15:49 +0000 (Tue, 23 Apr 2013) New Revision: 5183 URL: http://trac.nginx.org/nginx/changeset/5183/nginx Log: SPDY: set NGX_TCP_NODELAY_DISABLED for fake connections. This is to avoid setting the TCP_NODELAY flag on SPDY socket in ngx_http_upstream_send_response(). 
The latter works per request, but in SPDY case it might affect other streams in connection. Modified: trunk/src/http/ngx_http_spdy.c Modified: trunk/src/http/ngx_http_spdy.c =================================================================== --- trunk/src/http/ngx_http_spdy.c 2013-04-23 10:04:12 UTC (rev 5182) +++ trunk/src/http/ngx_http_spdy.c 2013-04-23 10:15:49 UTC (rev 5183) @@ -1830,6 +1830,7 @@ fc->log = log; fc->buffered = 0; fc->sndlowat = 1; + fc->tcp_nodelay = NGX_TCP_NODELAY_DISABLED; r = ngx_http_create_request(fc); if (r == NULL) { From mdounin at mdounin.ru Wed Apr 24 13:03:43 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Wed, 24 Apr 2013 13:03:43 +0000 Subject: [nginx] svn commit: r5184 - trunk/src/core Message-ID: <20130424130343.F0B263F9C0F@mail.nginx.com> Author: mdounin Date: 2013-04-24 13:03:43 +0000 (Wed, 24 Apr 2013) New Revision: 5184 URL: http://trac.nginx.org/nginx/changeset/5184/nginx Log: Version bump. Modified: trunk/src/core/nginx.h Modified: trunk/src/core/nginx.h =================================================================== --- trunk/src/core/nginx.h 2013-04-23 10:15:49 UTC (rev 5183) +++ trunk/src/core/nginx.h 2013-04-24 13:03:43 UTC (rev 5184) @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1003017 -#define NGINX_VERSION "1.3.17" +#define nginx_version 1004000 +#define NGINX_VERSION "1.4.0" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From shai.duvdevani at gmail.com Wed Apr 24 13:32:20 2013 From: shai.duvdevani at gmail.com (Shai Duvdevani) Date: Wed, 24 Apr 2013 16:32:20 +0300 Subject: new directive: "proxy_next_tries N" Message-ID: >> diff -ur /old/src/http/ngx_http_upstream.c /new/src/http/ngx_http_upstream.c >> --- /old/src/http/ngx_http_upstream.c 2013-04-21 18:25:09.619437856 +0000 >> +++ /new/src/http/ngx_http_upstream.c 2013-04-23 21:29:06.106568703 +0000 >> @@ -2904,6 +2904,11 @@ >> if (status) { >> u->state->status = status; >> >> + if 
(u->conf->next_upstream_tries != NGX_CONF_UNSET_UINT && ++r->us_tries >= u->conf->next_upstream_tries) { >> + ngx_http_upstream_finalize_request(r, u, status); >> + return; >> + } >> + >> if (u->peer.tries == 0 || !(u->conf->next_upstream & ft_type)) { >> >> #if (NGX_HTTP_CACHE) > >Introducing r->us_tries for this looks wrong, there is no need for >such counter at request level. Instead, probably u->peer.tries >should be set accordingly. > >The test against NGX_CONF_UNSET_UINT looks wrong, too, and >suggests that configuration inheritance isn't handled properly. [Gist: https://gist.github.com/shai-d/5446961 ] Maxim, thank you for your review! :) I agree about comparing to NGX_CONF_UNSET_UINT. It should be set to 0 (endless tries) by default. I avoided u->peer.tries because we wanted N retries per request and not N retries per upstream. As I understand it, all requests share the same instance of peers. If this is the case, In a high concurrency system with some percentage of errors, peers will statistically always have tries > N and many requests will be lost. Am I wrong? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 24 13:59:34 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Wed, 24 Apr 2013 13:59:34 +0000 Subject: [nginx] svn commit: r5185 - trunk/docs/xml/nginx Message-ID: <20130424135934.719A93FAA7E@mail.nginx.com> Author: mdounin Date: 2013-04-24 13:59:34 +0000 (Wed, 24 Apr 2013) New Revision: 5185 URL: http://trac.nginx.org/nginx/changeset/5185/nginx Log: nginx-1.4.0-RELEASE Modified: trunk/docs/xml/nginx/changes.xml Modified: trunk/docs/xml/nginx/changes.xml =================================================================== --- trunk/docs/xml/nginx/changes.xml 2013-04-24 13:03:43 UTC (rev 5184) +++ trunk/docs/xml/nginx/changes.xml 2013-04-24 13:59:34 UTC (rev 5185) @@ -5,6 +5,35 @@ + + + + +nginx ?? ????????? ? ??????? ngx_http_perl_module, +???? ????????????? ???????? 
--with-openssl; +?????? ????????? ? 1.3.16. + + +nginx could not be built with the ngx_http_perl_module +if the --with-openssl option was used; +the bug had appeared in 1.3.16. + + + + + +? ?????? ? ????? ??????? ?? ?????? ngx_http_perl_module; +?????? ????????? ? 1.3.9. + + +in a request body handling in the ngx_http_perl_module; +the bug had appeared in 1.3.9. + + + + + + From mdounin at mdounin.ru Wed Apr 24 13:59:45 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Wed, 24 Apr 2013 13:59:45 +0000 Subject: [nginx] svn commit: r5186 - tags Message-ID: <20130424135945.8CE6A3F9EB7@mail.nginx.com> Author: mdounin Date: 2013-04-24 13:59:45 +0000 (Wed, 24 Apr 2013) New Revision: 5186 URL: http://trac.nginx.org/nginx/changeset/5186/nginx Log: release-1.4.0 tag Added: tags/release-1.4.0/ From mdounin at mdounin.ru Wed Apr 24 14:50:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Apr 2013 18:50:24 +0400 Subject: new directive: "proxy_next_tries N" In-Reply-To: References: Message-ID: <20130424145024.GN10443@mdounin.ru> Hello! On Wed, Apr 24, 2013 at 04:32:20PM +0300, Shai Duvdevani wrote: > >> diff -ur /old/src/http/ngx_http_upstream.c > /new/src/http/ngx_http_upstream.c > >> --- /old/src/http/ngx_http_upstream.c 2013-04-21 18:25:09.619437856 > +0000 > >> +++ /new/src/http/ngx_http_upstream.c 2013-04-23 21:29:06.106568703 > +0000 > >> @@ -2904,6 +2904,11 @@ > >> if (status) { > >> u->state->status = status; > >> > >> + if (u->conf->next_upstream_tries != NGX_CONF_UNSET_UINT && > ++r->us_tries >= u->conf->next_upstream_tries) { > >> + ngx_http_upstream_finalize_request(r, u, status); > >> + return; > >> + } > >> + > >> if (u->peer.tries == 0 || !(u->conf->next_upstream & ft_type)) { > >> > >> #if (NGX_HTTP_CACHE) > > > >Introducing r->us_tries for this looks wrong, there is no need for > >such counter at request level. Instead, probably u->peer.tries > >should be set accordingly. 
> > > >The test against NGX_CONF_UNSET_UINT looks wrong, too, and > >suggests that configuration inheritance isn't handled properly. > > [Gist: https://gist.github.com/shai-d/5446961 ] > > Maxim, thank you for your review! :) > > I agree about comparing to NGX_CONF_UNSET_UINT. It should be set to 0 > (endless tries) by default. > > I avoided u->peer.tries because we wanted N retries per request and not N > retries per upstream. > As I understand it, all requests share the same instance of peers. > If this is the case, In a high concurrency system with some percentage of > errors, peers will statistically always have tries > N and many requests > will be lost. > Am I wrong? The u->peer structure is allocated per request (it is actually a ngx_peer_connection_t sturcture within the ngx_http_upstream_t stucture). It is initialized once request enters upstream module and nginx starts to talk to a group of upstream servers, see ngx_http_upstream_init_request() function. -- Maxim Dounin http://nginx.org/en/donation.html From ru at nginx.com Wed Apr 24 14:57:35 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 24 Apr 2013 18:57:35 +0400 Subject: new directive: "proxy_next_tries N" In-Reply-To: References: Message-ID: <20130424145735.GD22410@lo0.su> On Wed, Apr 24, 2013 at 04:32:20PM +0300, Shai Duvdevani wrote: > >> diff -ur /old/src/http/ngx_http_upstream.c > /new/src/http/ngx_http_upstream.c > >> --- /old/src/http/ngx_http_upstream.c 2013-04-21 18:25:09.619437856 > +0000 > >> +++ /new/src/http/ngx_http_upstream.c 2013-04-23 21:29:06.106568703 > +0000 > >> @@ -2904,6 +2904,11 @@ > >> if (status) { > >> u->state->status = status; > >> > >> + if (u->conf->next_upstream_tries != NGX_CONF_UNSET_UINT && > ++r->us_tries >= u->conf->next_upstream_tries) { > >> + ngx_http_upstream_finalize_request(r, u, status); > >> + return; > >> + } > >> + > >> if (u->peer.tries == 0 || !(u->conf->next_upstream & ft_type)) { > >> > >> #if (NGX_HTTP_CACHE) > > > >Introducing r->us_tries 
for this looks wrong, there is no need for > >such counter at request level. Instead, probably u->peer.tries > >should be set accordingly. > > > >The test against NGX_CONF_UNSET_UINT looks wrong, too, and > >suggests that configuration inheritance isn't handled properly. > > [Gist: https://gist.github.com/shai-d/5446961 ] > > Maxim, thank you for your review! :) > > I agree about comparing to NGX_CONF_UNSET_UINT. It should be set to 0 > (endless tries) by default. > > I avoided u->peer.tries because we wanted N retries per request and not N > retries per upstream. You store the limit per upstream{}, but want it to affect tries per request? That's kinda strange. > As I understand it, all requests share the same instance of peers. > If this is the case, In a high concurrency system with some percentage of > errors, peers will statistically always have tries > N and many requests > will be lost. > Am I wrong? From shai.duvdevani at gmail.com Wed Apr 24 15:42:34 2013 From: shai.duvdevani at gmail.com (Shai Duvdevani) Date: Wed, 24 Apr 2013 18:42:34 +0300 Subject: new directive: "proxy_next_tries N" In-Reply-To: <20130424145735.GD22410@lo0.su> References: <20130424145735.GD22410@lo0.su> Message-ID: Hi Ruslan, I agree, I think it should be in the request too. There's only 1 "request retry" per request, there's no point in putting it anywhere else. 
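[Editor's aside: for context, the directive under discussion would appear in a configuration roughly like this. The proxy_next_tries syntax is hypothetical, taken from the proposed patch; stock nginx later added a similar proxy_next_upstream_tries directive instead.]

```nginx
upstream backend {
    server 192.0.2.1;
    server 192.0.2.2;
    server 192.0.2.3;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout;
        # hypothetical: stop after two attempts for a single request,
        # even if more peers remain untried
        proxy_next_tries 2;
    }
}
```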
On Wed, Apr 24, 2013 at 5:57 PM, Ruslan Ermilov wrote: > On Wed, Apr 24, 2013 at 04:32:20PM +0300, Shai Duvdevani wrote: > > >> diff -ur /old/src/http/ngx_http_upstream.c > > /new/src/http/ngx_http_upstream.c > > >> --- /old/src/http/ngx_http_upstream.c 2013-04-21 18:25:09.619437856 > > +0000 > > >> +++ /new/src/http/ngx_http_upstream.c 2013-04-23 21:29:06.106568703 > > +0000 > > >> @@ -2904,6 +2904,11 @@ > > >> if (status) { > > >> u->state->status = status; > > >> > > >> + if (u->conf->next_upstream_tries != NGX_CONF_UNSET_UINT && > > ++r->us_tries >= u->conf->next_upstream_tries) { > > >> + ngx_http_upstream_finalize_request(r, u, status); > > >> + return; > > >> + } > > >> + > > >> if (u->peer.tries == 0 || !(u->conf->next_upstream & > ft_type)) { > > >> > > >> #if (NGX_HTTP_CACHE) > > > > > >Introducing r->us_tries for this looks wrong, there is no need for > > >such counter at request level. Instead, probably u->peer.tries > > >should be set accordingly. > > > > > >The test against NGX_CONF_UNSET_UINT looks wrong, too, and > > >suggests that configuration inheritance isn't handled properly. > > > > [Gist: https://gist.github.com/shai-d/5446961 ] > > > > Maxim, thank you for your review! :) > > > > I agree about comparing to NGX_CONF_UNSET_UINT. It should be set to 0 > > (endless tries) by default. > > > > I avoided u->peer.tries because we wanted N retries per request and not N > > retries per upstream. > > You store the limit per upstream{}, but want it to affect tries > per request? That's kinda strange. > > > As I understand it, all requests share the same instance of peers. > > If this is the case, In a high concurrency system with some percentage of > > errors, peers will statistically always have tries > N and many requests > > will be lost. > > Am I wrong? 
> > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Apr 25 11:34:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Apr 2013 15:34:20 +0400 Subject: source code repository switch to Mercurial Message-ID: <20130425113420.GC10443@mdounin.ru> Hello! Following 1.4.0 release we are finally switching nginx source code repository to Mercurial (http://mercurial.selenic.com/). The repository is now available at http://hg.nginx.org/nginx/. If you are working with nginx svn repository - please consider switching to Mercurial as svn repository will not be updated anymore. Submitting further patches as Mercurial changesets is also appreciated. :) -- Maxim Dounin http://nginx.org/en/donation.html From ja.nginx at mailnull.com Thu Apr 25 19:59:15 2013 From: ja.nginx at mailnull.com (SamB) Date: Thu, 25 Apr 2013 21:59:15 +0200 Subject: [PATCH] Return http status code from XSLT In-Reply-To: <20130311182048.GF15378@mdounin.ru> References: <51391D88.4060906@mailnull.com> <20130311182048.GF15378@mdounin.ru> Message-ID: <51798B13.2000805@mailnull.com> On 11. 3. 2013 19:20, Maxim Dounin wrote: > Hello! > > On Fri, Mar 08, 2013 at 12:06:48AM +0100, SamB wrote: > >> Hi, >> >> this patch provides simple possibility to return http error code >> from within XSLT transformation result. >> This is simple way to quickly and correctly return i.e. 404 error >> codes instead of producing dummy soft-404 pages. >> >> Sample XSLT: >> >> >> >> >> > While an ability to alter status code returned is intresting, I > don't think it should be done this way, abusing output attributes. > I would rather think of something like an XSLT variable with a > predefined name queried after a transformation. > Hi, I've modified my previous patch to use variables as you proposed. 
Global variable HTTP_STATUS_CODE is queried to get new status code - maybe it should be bound to some namespace also (?) - I will modify patch if you mind so. Example XSLT : NOT FOUND ! Best regards Sam diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c index a6ae1ce..b69f130 100644 --- a/src/http/modules/ngx_http_xslt_filter_module.c +++ b/src/http/modules/ngx_http_xslt_filter_module.c @@ -477,8 +477,10 @@ ngx_http_xslt_apply_stylesheet(ngx_http_request_t *r, ngx_uint_t i; xmlChar *buf; xmlDocPtr doc, res; + xmlXPathObjectPtr xpathObj; ngx_http_xslt_sheet_t *sheet; ngx_http_xslt_filter_loc_conf_t *conf; conf = ngx_http_get_module_loc_conf(r, ngx_http_xslt_filter_module); sheet = conf->sheets.elts; @@ -519,6 +521,25 @@ ngx_http_xslt_apply_stylesheet(ngx_http_request_t *r, ctx->params.elts, NULL, NULL, ctx->transform); + xpathObj = xsltVariableLookup(ctx->transform, (const xmlChar *)"HTTP_STATUS_CODE", NULL); + if (xpathObj) + { + ngx_uint_t status = 0; + xmlChar *statusStr; + + statusStr = xmlXPathCastToString(xpathObj); + status = strtoul((const char *)statusStr, NULL, 10); + + if (status > 0 + && r->headers_out.status != status) + { + r->headers_out.status = status; + r->headers_out.status_line.len = 0; + } + + xmlXPathFreeObject(xpathObj); + } + xsltFreeTransformContext(ctx->transform); xmlFreeDoc(doc); From zls.sogou at gmail.com Sat Apr 27 04:43:21 2013 From: zls.sogou at gmail.com (lanshun zhou) Date: Sat, 27 Apr 2013 12:43:21 +0800 Subject: [PATCH] use signed value when comparing timer with 0 and check lingering_time setting Message-ID: In ngx_http_lingering_close_handler and ngx_http_discarded_request_body_handler, there's risk that r->lingering_time is smaller than ngx_time(), then comparing timer which is a unsigned value with zero will never return true. 
This can cause long-lived connections for some kinds of requests (for example, when lingering_time is set smaller than lingering_timeout). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lingering.patch Type: application/octet-stream Size: 1777 bytes Desc: not available URL:
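[Editor's aside: the unsigned-comparison hazard described above can be reproduced in miniature. The numbers are arbitrary; in nginx the timer involved is an unsigned value, which is the whole problem.]

```python
import ctypes

lingering_time = 1000   # absolute deadline, epoch seconds (made-up values)
now = 1005              # ngx_time(): the deadline has already passed

signed_timer = lingering_time - now
# What the same difference looks like when stored in an unsigned variable:
unsigned_timer = ctypes.c_uint64(signed_timer).value

print(signed_timer)    # -5: a signed check (timer <= 0) closes the connection
print(unsigned_timer)  # 18446744073709551611: an unsigned check never fires
```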