From allegrox at gmail.com Sat Sep 3 02:43:26 2011
From: allegrox at gmail.com (x x)
Date: Sat, 3 Sep 2011 10:43:26 +0800
Subject: How to take full advantage of system memory?
Message-ID:

Hi, all:

My host has a very large amount of physical memory. How can I get nginx
to take full advantage of that memory as a file system cache?

From mdounin at mdounin.ru Sat Sep 3 08:44:08 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 3 Sep 2011 12:44:08 +0400
Subject: How to take full advantage of system memory?
In-Reply-To:
References:
Message-ID: <20110903084408.GM1137@mdounin.ru>

Hello!

On Sat, Sep 03, 2011 at 10:43:26AM +0800, x x wrote:

> Hi, all:
>
> My host has a very large amount of physical memory. How can I get nginx
> to take full advantage of that memory as a file system cache?

Please do not pollute the developers list with user questions, thanks.

Maxim Dounin

From mdounin at mdounin.ru Sun Sep 4 11:33:47 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 04 Sep 2011 15:33:47 +0400
Subject: [PATCH 00 of 15] upstream keepalive patch queue
Message-ID:

Hello!

Here is the keepalive patch queue; I'm posting it here for further review
and testing.

Note that this series is for nginx 1.1.1; the first 2 patches have already
been committed into trunk (just skip them if you are working with svn
trunk).

This series includes multiple fixes for problems found during testing
since the last post:

- https connection caching support;
- better detection of connections which can't be cached;
- cpu hog in the round-robin balancer;
- segmentation fault when used with proxy_cache/fastcgi_cache.

FastCGI keepalive support now requires "fastcgi_keep_conn on;" in the
config.  Without this directive the previous behaviour is preserved, to
make the patches less intrusive.

The upstream keepalive module is included as the last patch; it is
compiled in by default.

All patches may be found here:

http://nginx.org/patches/patch-nginx-keepalive-full-4.txt

Maxim Dounin

From mdounin at mdounin.ru Sun Sep 4 11:33:48 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 04 Sep 2011 15:33:48 +0400
Subject: [PATCH 01 of 15] Correct SSL shutdown handling
In-Reply-To:
References:
Message-ID: <18293703cbf48c934f8f.1315136028@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin
# Date 1314880281 -14400
# Node ID 18293703cbf48c934f8f601c235b7d9e06e93be5
# Parent 5d94f8b3e01d74ec6bd5bdcae176a8d3b998237d
Correct SSL shutdown handling.

If a connection has unsent alerts, SSL_shutdown() tries to send them even
if SSL_set_shutdown(SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN) was used.
This can be prevented by calling SSL_set_quiet_shutdown().
SSL_set_shutdown() is nevertheless still required to preserve the session.
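For reference, the OpenSSL behaviour the description relies on can be sketched roughly as follows; this is a minimal illustration against the plain OpenSSL API (ssl being an SSL * handle), not code from the patch itself — the actual change is in the diff below:

    /* mark the session as cleanly closed on both sides so it remains
       valid in the session cache and can be reused */
    SSL_set_shutdown(ssl, SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN);

    /* without this, SSL_shutdown() may still try to flush pending
       alerts over a connection we no longer want to write to */
    SSL_set_quiet_shutdown(ssl, 1);

    /* with quiet shutdown enabled this returns 1 immediately, without
       sending close_notify or waiting for the peer's close_notify */
    SSL_shutdown(ssl);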
diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1205,6 +1205,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) if (c->timedout) { mode = SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN; + SSL_set_quiet_shutdown(c->ssl->connection, 1); } else { mode = SSL_get_shutdown(c->ssl->connection); @@ -1216,6 +1217,10 @@ ngx_ssl_shutdown(ngx_connection_t *c) if (c->ssl->no_send_shutdown) { mode |= SSL_SENT_SHUTDOWN; } + + if (c->ssl->no_wait_shutdown && c->ssl->no_send_shutdown) { + SSL_set_quiet_shutdown(c->ssl->connection, 1); + } } SSL_set_shutdown(c->ssl->connection, mode); From mdounin at mdounin.ru Sun Sep 4 11:33:49 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:49 +0400 Subject: [PATCH 02 of 15] Proper setting of read->eof in pipe code In-Reply-To: References: Message-ID: <028614c84148775551b6.1315136029@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1314887926 -14400 # Node ID 028614c84148775551b669785a1b4d637b831678 # Parent 18293703cbf48c934f8f601c235b7d9e06e93be5 Proper setting of read->eof in pipe code. Setting read->eof to 0 seems to be just a typo. It appeared in nginx-0.0.1-2003-10-28-18:45:41 import (r164), while identical code in ngx_recv.c introduced in the same import do actually set read->eof to 1. Failure to set read->eof to 1 results in EOF not being generally detectable from connection flags. On the other hand, kqueue won't report any read events on such a connection since we use EV_CLEAR. This resulted in read timeouts if such connection was cached and used for another request. diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c --- a/src/event/ngx_event_pipe.c +++ b/src/event/ngx_event_pipe.c @@ -149,7 +149,7 @@ ngx_event_pipe_read_upstream(ngx_event_p && p->upstream->read->pending_eof) { p->upstream->read->ready = 0; - p->upstream->read->eof = 0; + p->upstream->read->eof = 1; p->upstream_eof = 1; p->read = 1; From mdounin at mdounin.ru Sun Sep 4 11:33:50 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:50 +0400 Subject: [PATCH 03 of 15] Workaround for cpu hog on errors with cached connections In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1314894208 -14400 # Node ID aab344fc305b4447f03461e04c1de291cce6caf9 # Parent 028614c84148775551b669785a1b4d637b831678 Workaround for cpu hog on errors with cached connections. Just doing another connect isn't safe as peer.get() may expect peer.tries to be strictly positive (this is the case e.g. with round robin with multiple upstream servers). Increment peer.tries to at least avoid cpu hog in round robin balancer (with the patch alert will be seen instead). This is not enough to fully address the problem though, hence TODO. We should be able to inform balancer that the error wasn't considered fatal and it may make sense to retry the same peer. 
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2812,6 +2812,10 @@ ngx_http_upstream_next(ngx_http_request_ if (u->peer.cached && ft_type == NGX_HTTP_UPSTREAM_FT_ERROR) { status = 0; + /* TODO: inform balancer instead */ + + u->peer.tries++; + } else { switch(ft_type) { From mdounin at mdounin.ru Sun Sep 4 11:33:51 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:51 +0400 Subject: [PATCH 04 of 15] Upstream: separate pool for peer connections In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1314894208 -14400 # Node ID db8b34c0e7391efc6a68f43ebeace8407f22e19c # Parent aab344fc305b4447f03461e04c1de291cce6caf9 Upstream: separate pool for peer connections. This is required to support persistant https connections as various ssl structures are allocated from connection's pool. diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1136,8 +1136,20 @@ ngx_http_upstream_connect(ngx_http_reque c->sendfile &= r->connection->sendfile; u->output.sendfile = c->sendfile; - c->pool = r->pool; + if (c->pool == NULL) { + + /* we need separate pool here to be able to cache SSL connections */ + + c->pool = ngx_create_pool(128, r->connection->log); + if (c->pool == NULL) { + ngx_http_upstream_finalize_request(r, u, + NGX_HTTP_INTERNAL_SERVER_ERROR); + return; + } + } + c->log = r->connection->log; + c->pool->log = c->log; c->read->log = c->log; c->write->log = c->log; @@ -2890,6 +2902,7 @@ ngx_http_upstream_next(ngx_http_request_ } #endif + ngx_destroy_pool(u->peer.connection->pool); ngx_close_connection(u->peer.connection); } @@ -2984,6 +2997,7 @@ ngx_http_upstream_finalize_request(ngx_h "close http upstream connection: %d", u->peer.connection->fd); + ngx_destroy_pool(u->peer.connection->pool); ngx_close_connection(u->peer.connection); } From mdounin at mdounin.ru Sun Sep 4 11:33:52 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:52 +0400 Subject: [PATCH 05 of 15] Upstream: content_length_n API change In-Reply-To: References: Message-ID: <7383d1e59b73aff371fd.1315136032@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1314889887 -14400 # Node ID 7383d1e59b73aff371fd7cdc2f0fd800bce0b84e # Parent db8b34c0e7391efc6a68f43ebeace8407f22e19c Upstream: content_length_n API change. We no longer use r->headers_out.content_length_n as a primary source of backend's response length. Instead we parse response length to u->headers_in.content_length_n and copy to r->headers_out.content_length_n when needed. 
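For out-of-tree protocol handlers, the practical consequence is that the response length is now reported via u->headers_in instead of r->headers_out. A minimal sketch of a process_header() callback under the new API (len_start/len_end are made-up names for the parsed length field; the memcached hunk below does the equivalent):

    ngx_http_upstream_t  *u = r->upstream;

    /* after locating the backend's length field in the buffer */
    u->headers_in.content_length_n = ngx_atoof(len_start,
                                               len_end - len_start);

    if (u->headers_in.content_length_n == NGX_ERROR) {
        return NGX_HTTP_UPSTREAM_INVALID_HEADER;
    }

    /* the upstream module copies the value into
       r->headers_out.content_length_n when it sends the response */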
diff --git a/src/http/modules/ngx_http_memcached_module.c b/src/http/modules/ngx_http_memcached_module.c --- a/src/http/modules/ngx_http_memcached_module.c +++ b/src/http/modules/ngx_http_memcached_module.c @@ -344,8 +344,8 @@ found: while (*p && *p++ != CR) { /* void */ } - r->headers_out.content_length_n = ngx_atoof(len, p - len - 1); - if (r->headers_out.content_length_n == -1) { + u->headers_in.content_length_n = ngx_atoof(len, p - len - 1); + if (u->headers_in.content_length_n == -1) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "memcached sent invalid length in response \"%V\" " "for key \"%V\"", diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -72,6 +72,8 @@ static void ngx_http_upstream_finalize_r static ngx_int_t ngx_http_upstream_process_header_line(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); +static ngx_int_t ngx_http_upstream_process_content_length(ngx_http_request_t *r, + ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_process_set_cookie(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t @@ -96,8 +98,6 @@ static ngx_int_t ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_copy_content_type(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); -static ngx_int_t ngx_http_upstream_copy_content_length(ngx_http_request_t *r, - ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_copy_last_modified(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_rewrite_location(ngx_http_request_t *r, @@ -149,9 +149,9 @@ ngx_http_upstream_header_t ngx_http_ups ngx_http_upstream_copy_content_type, 0, 1 }, { ngx_string("Content-Length"), - ngx_http_upstream_process_header_line, + ngx_http_upstream_process_content_length, offsetof(ngx_http_upstream_headers_in_t, content_length), - ngx_http_upstream_copy_content_length, 0, 0 }, + ngx_http_upstream_ignore_header_line, 0, 0 }, { ngx_string("Date"), ngx_http_upstream_process_header_line, @@ -396,6 +396,8 @@ ngx_http_upstream_create(ngx_http_reques r->cache = NULL; #endif + u->headers_in.content_length_n = -1; + return NGX_OK; } @@ -800,6 +802,7 @@ ngx_http_upstream_cache_send(ngx_http_re u->buffer.pos += c->header_start; ngx_memzero(&u->headers_in, sizeof(ngx_http_upstream_headers_in_t)); + u->headers_in.content_length_n = -1; if (ngx_list_init(&u->headers_in.headers, r->pool, 8, sizeof(ngx_table_elt_t)) @@ -1295,6 +1298,7 @@ ngx_http_upstream_reinit(ngx_http_reques } ngx_memzero(&u->headers_in, sizeof(ngx_http_upstream_headers_in_t)); + u->headers_in.content_length_n = -1; if (ngx_list_init(&u->headers_in.headers, r->pool, 8, sizeof(ngx_table_elt_t)) @@ -1936,10 +1940,10 @@ ngx_http_upstream_process_headers(ngx_ht r->headers_out.status = u->headers_in.status_n; r->headers_out.status_line = u->headers_in.status_line; - u->headers_in.content_length_n = r->headers_out.content_length_n; - - if (r->headers_out.content_length_n != -1) { - u->length = (size_t) r->headers_out.content_length_n; + r->headers_out.content_length_n = u->headers_in.content_length_n; + + if (u->headers_in.content_length_n != -1) { + u->length = (size_t) u->headers_in.content_length_n; } else { u->length = NGX_MAX_SIZE_T_VALUE; @@ -3078,6 +3082,21 @@ ngx_http_upstream_ignore_header_line(ngx static ngx_int_t +ngx_http_upstream_process_content_length(ngx_http_request_t *r, + ngx_table_elt_t *h, ngx_uint_t 
offset) +{ + ngx_http_upstream_t *u; + + u = r->upstream; + + u->headers_in.content_length = h; + u->headers_in.content_length_n = ngx_atoof(h->value.data, h->value.len); + + return NGX_OK; +} + + +static ngx_int_t ngx_http_upstream_process_set_cookie(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { @@ -3446,26 +3465,6 @@ ngx_http_upstream_copy_content_type(ngx_ static ngx_int_t -ngx_http_upstream_copy_content_length(ngx_http_request_t *r, ngx_table_elt_t *h, - ngx_uint_t offset) -{ - ngx_table_elt_t *ho; - - ho = ngx_list_push(&r->headers_out.headers); - if (ho == NULL) { - return NGX_ERROR; - } - - *ho = *h; - - r->headers_out.content_length = ho; - r->headers_out.content_length_n = ngx_atoof(h->value.data, h->value.len); - - return NGX_OK; -} - - -static ngx_int_t ngx_http_upstream_copy_last_modified(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { From mdounin at mdounin.ru Sun Sep 4 11:33:53 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:53 +0400 Subject: [PATCH 06 of 15] Upstream: r->upstream->length type change to off_t In-Reply-To: References: Message-ID: <3e730e31d8eb54db2d79.1315136033@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1314890513 -14400 # Node ID 3e730e31d8eb54db2d79079b3eadbcc8bba2e914 # Parent 7383d1e59b73aff371fd7cdc2f0fd800bce0b84e Upstream: r->upstream->length type change to off_t. Previous use of size_t may cause wierd effects on 32bit platforms with certain big responses transferred in unbuffered mode. Nuke "if (size > u->length)" check as it's not usefull anyway (preread body data isn't subject to this check) and now requires additional check for u->length being positive. diff --git a/src/http/modules/ngx_http_memcached_module.c b/src/http/modules/ngx_http_memcached_module.c --- a/src/http/modules/ngx_http_memcached_module.c +++ b/src/http/modules/ngx_http_memcached_module.c @@ -407,7 +407,7 @@ ngx_http_memcached_filter(void *data, ss u = ctx->request->upstream; b = &u->buffer; - if (u->length == ctx->rest) { + if (u->length == (ssize_t) ctx->rest) { if (ngx_strncmp(b->last, ngx_http_memcached_end + NGX_HTTP_MEMCACHED_END - ctx->rest, diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1942,12 +1942,7 @@ ngx_http_upstream_process_headers(ngx_ht r->headers_out.content_length_n = u->headers_in.content_length_n; - if (u->headers_in.content_length_n != -1) { - u->length = (size_t) u->headers_in.content_length_n; - - } else { - u->length = NGX_MAX_SIZE_T_VALUE; - } + u->length = u->headers_in.content_length_n; return NGX_OK; } @@ -2419,10 +2414,6 @@ ngx_http_upstream_process_non_buffered_r size = b->end - b->last; - if (size > u->length) { - size = u->length; - } - if (size && upstream->read->ready) { n = upstream->recv(upstream, b->last, size); @@ -2519,7 +2510,7 @@ ngx_http_upstream_non_buffered_filter(vo cl->buf->last = b->last; cl->buf->tag = u->output.tag; - if (u->length == NGX_MAX_SIZE_T_VALUE) { + if (u->length == -1) { return NGX_OK; } diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -267,7 +267,7 @@ struct ngx_http_upstream_s { ngx_http_upstream_resolved_t *resolved; ngx_buf_t buffer; - size_t length; + off_t length; ngx_chain_t *out_bufs; ngx_chain_t *busy_bufs; From mdounin at mdounin.ru Sun Sep 4 11:33:54 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:54 
+0400 Subject: [PATCH 07 of 15] Upstream: pipe length and input_filter_init in buffered mode In-Reply-To: References: Message-ID: <657ec1945f55f25b5fa4.1315136034@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1297818715 -10800 # Node ID 657ec1945f55f25b5fa4913b1acd2f8f310f9283 # Parent 3e730e31d8eb54db2d79079b3eadbcc8bba2e914 Upstream: pipe length and input_filter_init in buffered mode. As long as ngx_event_pipe() has more data read from upstream than specified in p->length it's passed to input filter even if buffer isn't yet full. This allows to process data with known length without relying on connection close to signal data end. By default p->length is set to -1 in upstream module, i.e. end of data is indicated by connection close. To set it from per-protocol handlers upstream input_filter_init() now called in buffered mode (as well as in unbuffered mode). diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c --- a/src/event/ngx_event_pipe.c +++ b/src/event/ngx_event_pipe.c @@ -392,8 +392,31 @@ ngx_event_pipe_read_upstream(ngx_event_p cl->buf->file_last - cl->buf->file_pos); } + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, p->log, 0, + "pipe length: %O", p->length); + #endif + if (p->free_raw_bufs && p->length != -1) { + cl = p->free_raw_bufs; + + if (cl->buf->last - cl->buf->pos >= p->length) { + + /* STUB */ cl->buf->num = p->num++; + + if (p->input_filter(p, cl->buf) == NGX_ERROR) { + return NGX_ABORT; + } + + p->free_raw_bufs = cl->next; + } + } + + if (p->length == 0) { + p->upstream_done = 1; + p->read = 1; + } + if ((p->upstream_eof || p->upstream_error) && p->free_raw_bufs) { /* STUB */ p->free_raw_bufs->buf->num = p->num++; @@ -848,6 +871,12 @@ ngx_event_pipe_copy_input_filter(ngx_eve } p->last_in = &cl->next; + if (p->length == -1) { + return NGX_OK; + } + + p->length -= b->last - b->pos; + return NGX_OK; } diff --git a/src/event/ngx_event_pipe.h b/src/event/ngx_event_pipe.h --- a/src/event/ngx_event_pipe.h +++ b/src/event/ngx_event_pipe.h @@ -65,6 +65,7 @@ struct ngx_event_pipe_s { ssize_t busy_size; off_t read_length; + off_t length; off_t max_temp_file_size; ssize_t temp_file_write_size; diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2304,6 +2304,15 @@ ngx_http_upstream_send_response(ngx_http p->send_timeout = clcf->send_timeout; p->send_lowat = clcf->send_lowat; + p->length = -1; + + if (u->input_filter_init + && u->input_filter_init(p->input_ctx) != NGX_OK) + { + ngx_http_upstream_finalize_request(r, u, 0); + return; + } + u->read_event_handler = ngx_http_upstream_process_upstream; r->write_event_handler = ngx_http_upstream_process_downstream; From mdounin at mdounin.ru Sun Sep 4 11:33:55 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:55 +0400 Subject: [PATCH 08 of 15] Upstream: keepalive flag In-Reply-To: References: Message-ID: <541f6cb00f1f5d5a93f5.1315136035@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1297818715 -10800 # Node ID 541f6cb00f1f5d5a93f5e272b4121585c2364417 # Parent 657ec1945f55f25b5fa4913b1acd2f8f310f9283 Upstream: keepalive flag. This patch introduces r->upstream->keepalive flag, which is set by protocol handlers if connection to upstream is in good state and can be kept alive. 
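The implied contract is that a protocol handler sets the flag only once the whole response has been consumed and the connection is known to be in a clean state. The following patches in the series use essentially this pattern in their filters, shown here in isolation:

    /* in a protocol-specific filter, after accounting for the
       bytes just received */
    u->length -= bytes;

    if (u->length == 0) {
        /* announced body fully read, nothing extra left on the wire:
           the connection may be cached and reused */
        u->keepalive = 1;
    }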
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1297,6 +1297,8 @@ ngx_http_upstream_reinit(ngx_http_reques return NGX_ERROR; } + u->keepalive = 0; + ngx_memzero(&u->headers_in, sizeof(ngx_http_upstream_headers_in_t)); u->headers_in.content_length_n = -1; @@ -2006,6 +2008,11 @@ ngx_http_upstream_process_body_in_memory } } + if (u->length == 0) { + ngx_http_upstream_finalize_request(r, u, 0); + return; + } + if (ngx_handle_read_event(rev, 0) != NGX_OK) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -308,6 +308,7 @@ struct ngx_http_upstream_s { #endif unsigned buffering:1; + unsigned keepalive:1; unsigned request_sent:1; unsigned header_sent:1; From mdounin at mdounin.ru Sun Sep 4 11:33:56 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:56 +0400 Subject: [PATCH 09 of 15] Keepalive support in memcached In-Reply-To: References: Message-ID: <85fd18d013f15b975f7d.1315136036@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315040808 -14400 # Node ID 85fd18d013f15b975f7dd9cabbc5acbe02d12153 # Parent 541f6cb00f1f5d5a93f5e272b4121585c2364417 Keepalive support in memcached. diff --git a/src/http/modules/ngx_http_memcached_module.c b/src/http/modules/ngx_http_memcached_module.c --- a/src/http/modules/ngx_http_memcached_module.c +++ b/src/http/modules/ngx_http_memcached_module.c @@ -366,6 +366,7 @@ found: u->headers_in.status_n = 404; u->state->status = 404; + u->keepalive = 1; return NGX_OK; } @@ -426,6 +427,10 @@ ngx_http_memcached_filter(void *data, ss u->length -= bytes; ctx->rest -= bytes; + if (u->length == 0) { + u->keepalive = 1; + } + return NGX_OK; } @@ -463,6 +468,13 @@ ngx_http_memcached_filter(void *data, ss if (ngx_strncmp(last, ngx_http_memcached_end, b->last - last) != 0) { ngx_log_error(NGX_LOG_ERR, ctx->request->connection->log, 0, "memcached sent invalid trailer"); + + b->last = last; + cl->buf->last = last; + u->length = 0; + ctx->rest = 0; + + return NGX_OK; } ctx->rest -= b->last - last; @@ -470,6 +482,10 @@ ngx_http_memcached_filter(void *data, ss cl->buf->last = last; u->length = ctx->rest; + if (u->length == 0) { + u->keepalive = 1; + } + return NGX_OK; } From mdounin at mdounin.ru Sun Sep 4 11:33:57 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:57 +0400 Subject: [PATCH 10 of 15] Keepalive support in fastcgi In-Reply-To: References: Message-ID: <3c397bb4808ed5a8ec21.1315136037@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315040808 -14400 # Node ID 3c397bb4808ed5a8ec21e07eb9604b1766f723b8 # Parent 85fd18d013f15b975f7dd9cabbc5acbe02d12153 Keepalive support in fastcgi. By default follow the old behaviour, i.e. FASTCGI_KEEP_CONN flag isn't set in request and application is responsible for closing connection once request is done. To keep connections alive fastcgi_keep_conn must be activated. 
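Combined with the keepalive module from the last patch of this series, enabling it would look roughly like this (a configuration sketch; it assumes the keepalive module is compiled in, and the upstream name, address and listen port are examples only):

    upstream fastcgi_backend {
        server 127.0.0.1:9000;

        keepalive 8;
    }

    server {
        listen 8080;

        location ~ \.php$ {
            include           fastcgi_params;
            fastcgi_pass      fastcgi_backend;
            fastcgi_keep_conn on;
        }
    }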
diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -26,6 +26,8 @@ typedef struct { ngx_hash_t headers_hash; ngx_uint_t header_params; + ngx_flag_t keep_conn; + #if (NGX_HTTP_CACHE) ngx_http_complex_value_t cache_key; #endif @@ -77,6 +79,8 @@ typedef struct { #define NGX_HTTP_FASTCGI_RESPONDER 1 +#define NGX_HTTP_FASTCGI_KEEP_CONN 1 + #define NGX_HTTP_FASTCGI_BEGIN_REQUEST 1 #define NGX_HTTP_FASTCGI_ABORT_REQUEST 2 #define NGX_HTTP_FASTCGI_END_REQUEST 3 @@ -130,6 +134,7 @@ static ngx_int_t ngx_http_fastcgi_create static ngx_int_t ngx_http_fastcgi_create_request(ngx_http_request_t *r); static ngx_int_t ngx_http_fastcgi_reinit_request(ngx_http_request_t *r); static ngx_int_t ngx_http_fastcgi_process_header(ngx_http_request_t *r); +static ngx_int_t ngx_http_fastcgi_input_filter_init(void *data); static ngx_int_t ngx_http_fastcgi_input_filter(ngx_event_pipe_t *p, ngx_buf_t *buf); static ngx_int_t ngx_http_fastcgi_process_record(ngx_http_request_t *r, @@ -437,6 +442,13 @@ static ngx_command_t ngx_http_fastcgi_c offsetof(ngx_http_fastcgi_loc_conf_t, catch_stderr), NULL }, + { ngx_string("fastcgi_keep_conn"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, keep_conn), + NULL }, + ngx_null_command }; @@ -600,6 +612,8 @@ ngx_http_fastcgi_handler(ngx_http_reques u->pipe->input_filter = ngx_http_fastcgi_input_filter; u->pipe->input_ctx = r; + u->input_filter_init = ngx_http_fastcgi_input_filter_init; + rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -841,6 +855,9 @@ ngx_http_fastcgi_create_request(ngx_http cl->buf = b; + ngx_http_fastcgi_request_start.br.flags = + flcf->keep_conn ? NGX_HTTP_FASTCGI_KEEP_CONN : 0; + ngx_memcpy(b->pos, &ngx_http_fastcgi_request_start, sizeof(ngx_http_fastcgi_request_start_t)); @@ -1574,14 +1591,30 @@ ngx_http_fastcgi_process_header(ngx_http static ngx_int_t +ngx_http_fastcgi_input_filter_init(void *data) +{ + ngx_http_request_t *r = data; + ngx_http_fastcgi_loc_conf_t *flcf; + + flcf = ngx_http_get_module_loc_conf(r, ngx_http_fastcgi_module); + + r->upstream->pipe->length = flcf->keep_conn ? 
+ (off_t) sizeof(ngx_http_fastcgi_header_t) : -1; + + return NGX_OK; +} + + +static ngx_int_t ngx_http_fastcgi_input_filter(ngx_event_pipe_t *p, ngx_buf_t *buf) { - u_char *m, *msg; - ngx_int_t rc; - ngx_buf_t *b, **prev; - ngx_chain_t *cl; - ngx_http_request_t *r; - ngx_http_fastcgi_ctx_t *f; + u_char *m, *msg; + ngx_int_t rc; + ngx_buf_t *b, **prev; + ngx_chain_t *cl; + ngx_http_request_t *r; + ngx_http_fastcgi_ctx_t *f; + ngx_http_fastcgi_loc_conf_t *flcf; if (buf->pos == buf->last) { return NGX_OK; @@ -1589,6 +1622,7 @@ ngx_http_fastcgi_input_filter(ngx_event_ r = p->input_ctx; f = ngx_http_get_module_ctx(r, ngx_http_fastcgi_module); + flcf = ngx_http_get_module_loc_conf(r, ngx_http_fastcgi_module); b = NULL; prev = &buf->shadow; @@ -1611,7 +1645,10 @@ ngx_http_fastcgi_input_filter(ngx_event_ if (f->type == NGX_HTTP_FASTCGI_STDOUT && f->length == 0) { f->state = ngx_http_fastcgi_st_version; - p->upstream_done = 1; + + if (!flcf->keep_conn) { + p->upstream_done = 1; + } ngx_log_debug0(NGX_LOG_DEBUG_HTTP, p->log, 0, "http fastcgi closed stdout"); @@ -1623,6 +1660,10 @@ ngx_http_fastcgi_input_filter(ngx_event_ f->state = ngx_http_fastcgi_st_version; p->upstream_done = 1; + if (flcf->keep_conn) { + r->upstream->keepalive = 1; + } + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, p->log, 0, "http fastcgi sent end request"); @@ -1781,6 +1822,23 @@ ngx_http_fastcgi_input_filter(ngx_event_ } + if (flcf->keep_conn) { + + /* set p->length, minimal amount of data we want to see */ + + if (f->state < ngx_http_fastcgi_st_data) { + p->length = 1; + + } else if (f->state == ngx_http_fastcgi_st_padding) { + p->length = f->padding; + + } else { + /* ngx_http_fastcgi_st_data */ + + p->length = f->length; + } + } + if (b) { b->shadow = buf; b->last_shadow = 1; @@ -2011,6 +2069,8 @@ ngx_http_fastcgi_create_loc_conf(ngx_con conf->catch_stderr = NGX_CONF_UNSET_PTR; + conf->keep_conn = NGX_CONF_UNSET; + ngx_str_set(&conf->upstream.module, "fastcgi"); return conf; @@ -2254,6 +2314,8 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf ngx_conf_merge_ptr_value(conf->catch_stderr, prev->catch_stderr, NULL); + ngx_conf_merge_value(conf->keep_conn, prev->keep_conn, 0); + ngx_conf_merge_str_value(conf->index, prev->index, ""); From mdounin at mdounin.ru Sun Sep 4 11:33:58 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:58 +0400 Subject: [PATCH 11 of 15] Upstream: process Transfer-Encoding header and detect chunked one In-Reply-To: References: Message-ID: <326308f7e409d9d5e922.1315136038@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1297818715 -10800 # Node ID 326308f7e409d9d5e92265ee98be2b3bbd4ba39e # Parent 3c397bb4808ed5a8ec21e07eb9604b1766f723b8 Upstream: process Transfer-Encoding header and detect chunked one. 
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -91,6 +91,9 @@ static ngx_int_t ngx_http_upstream_proce ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_process_charset(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); +static ngx_int_t + ngx_http_upstream_process_transfer_encoding(ngx_http_request_t *r, + ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_copy_header_line(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t @@ -247,6 +250,10 @@ ngx_http_upstream_header_t ngx_http_ups ngx_http_upstream_process_charset, 0, ngx_http_upstream_copy_header_line, 0, 0 }, + { ngx_string("Transfer-Encoding"), + ngx_http_upstream_process_transfer_encoding, 0, + ngx_http_upstream_ignore_header_line, 0, 0 }, + #if (NGX_HTTP_GZIP) { ngx_string("Content-Encoding"), ngx_http_upstream_process_header_line, @@ -3365,6 +3372,23 @@ ngx_http_upstream_process_charset(ngx_ht static ngx_int_t +ngx_http_upstream_process_transfer_encoding(ngx_http_request_t *r, + ngx_table_elt_t *h, ngx_uint_t offset) +{ + r->upstream->headers_in.transfer_encoding = h; + + if (ngx_strlcasestrn(h->value.data, h->value.data + h->value.len, + (u_char *) "chunked", 7 - 1) + != NULL) + { + r->upstream->headers_in.chunked = 1; + } + + return NGX_OK; +} + + +static ngx_int_t ngx_http_upstream_copy_header_line(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -217,6 +217,7 @@ typedef struct { ngx_table_elt_t *location; ngx_table_elt_t *accept_ranges; ngx_table_elt_t *www_authenticate; + ngx_table_elt_t *transfer_encoding; #if (NGX_HTTP_GZIP) ngx_table_elt_t *content_encoding; @@ -225,6 +226,8 @@ typedef struct { off_t content_length_n; ngx_array_t cache_control; + + unsigned chunked:1; } ngx_http_upstream_headers_in_t; From mdounin at mdounin.ru Sun Sep 4 11:33:59 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:33:59 +0400 Subject: [PATCH 12 of 15] Upstream: process Connection header and detect close token In-Reply-To: References: Message-ID: <0e57952574064b354ee3.1315136039@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315040808 -14400 # Node ID 0e57952574064b354ee3084b6c8b3cac80941c33 # Parent 326308f7e409d9d5e92265ee98be2b3bbd4ba39e Upstream: process Connection header and detect close token. 
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -91,6 +91,8 @@ static ngx_int_t ngx_http_upstream_proce ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_process_charset(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); +static ngx_int_t ngx_http_upstream_process_connection(ngx_http_request_t *r, + ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_process_transfer_encoding(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); @@ -218,7 +220,7 @@ ngx_http_upstream_header_t ngx_http_ups offsetof(ngx_http_headers_out_t, accept_ranges), 1 }, { ngx_string("Connection"), - ngx_http_upstream_ignore_header_line, 0, + ngx_http_upstream_process_connection, 0, ngx_http_upstream_ignore_header_line, 0, 0 }, { ngx_string("Keep-Alive"), @@ -3372,6 +3374,23 @@ ngx_http_upstream_process_charset(ngx_ht static ngx_int_t +ngx_http_upstream_process_connection(ngx_http_request_t *r, ngx_table_elt_t *h, + ngx_uint_t offset) +{ + r->upstream->headers_in.connection = h; + + if (ngx_strlcasestrn(h->value.data, h->value.data + h->value.len, + (u_char *) "close", 5 - 1) + != NULL) + { + r->upstream->headers_in.connection_close = 1; + } + + return NGX_OK; +} + + +static ngx_int_t ngx_http_upstream_process_transfer_encoding(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -227,6 +227,7 @@ typedef struct { ngx_array_t cache_control; + unsigned connection_close:1; unsigned chunked:1; } ngx_http_upstream_headers_in_t; From mdounin at mdounin.ru Sun Sep 4 11:34:00 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:34:00 +0400 Subject: [PATCH 13 of 15] Protocol version parsing in ngx_http_parse_status_line() In-Reply-To: References: Message-ID: <9b75a0e6fc95d5fbd827.1315136040@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315040808 -14400 # Node ID 9b75a0e6fc95d5fbd827e4589826e9be488eefc4 # Parent 0e57952574064b354ee3084b6c8b3cac80941c33 Protocol version parsing in ngx_http_parse_status_line(). Once we know protocol version, set u->headers_in.connection_close to indicate implicitly assumed connection close with HTTP before 1.1. 
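To illustrate the encoding used here (assuming the usual NGX_HTTP_VERSION_* values, i.e. 1000 for HTTP/1.0 and 1001 for HTTP/1.1):

    "HTTP/1.0 200 OK"  ->  http_major = 1, http_minor = 0
                           status->http_version = 1 * 1000 + 0 = 1000
                           1000 < NGX_HTTP_VERSION_11, so connection_close = 1

    "HTTP/1.1 200 OK"  ->  status->http_version = 1 * 1000 + 1 = 1001
                           not less than NGX_HTTP_VERSION_11,
                           connection_close is left untouched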
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -1210,6 +1210,7 @@ ngx_http_proxy_process_status_line(ngx_h r->http_version = NGX_HTTP_VERSION_9; u->state->status = NGX_HTTP_OK; + u->headers_in.connection_close = 1; return NGX_OK; } @@ -1234,6 +1235,10 @@ ngx_http_proxy_process_status_line(ngx_h "http proxy status %ui \"%V\"", u->headers_in.status_n, &u->headers_in.status_line); + if (ctx->status.http_version < NGX_HTTP_VERSION_11) { + u->headers_in.connection_close = 1; + } + u->process_header = ngx_http_proxy_process_header; return ngx_http_proxy_process_header(r); diff --git a/src/http/ngx_http.h b/src/http/ngx_http.h --- a/src/http/ngx_http.h +++ b/src/http/ngx_http.h @@ -52,6 +52,7 @@ struct ngx_http_log_ctx_s { typedef struct { + ngx_uint_t http_version; ngx_uint_t code; ngx_uint_t count; u_char *start; diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c --- a/src/http/ngx_http_parse.c +++ b/src/http/ngx_http_parse.c @@ -1403,6 +1403,7 @@ ngx_http_parse_status_line(ngx_http_requ return NGX_ERROR; } + r->http_major = ch - '0'; state = sw_major_digit; break; @@ -1417,6 +1418,7 @@ ngx_http_parse_status_line(ngx_http_requ return NGX_ERROR; } + r->http_major = r->http_major * 10 + ch - '0'; break; /* the first digit of minor HTTP version */ @@ -1425,6 +1427,7 @@ ngx_http_parse_status_line(ngx_http_requ return NGX_ERROR; } + r->http_minor = ch - '0'; state = sw_minor_digit; break; @@ -1439,6 +1442,7 @@ ngx_http_parse_status_line(ngx_http_requ return NGX_ERROR; } + r->http_minor = r->http_minor * 10 + ch - '0'; break; /* HTTP status code */ @@ -1516,6 +1520,7 @@ done: status->end = p; } + status->http_version = r->http_major * 1000 + r->http_minor; r->state = sw_start; return NGX_OK; From mdounin at mdounin.ru Sun Sep 4 11:34:01 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:34:01 +0400 Subject: [PATCH 14 of 15] Proxy: basic HTTP/1.1 support (including keepalive) In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315040808 -14400 # Node ID ec4be59d7b9c579474dd79fd5a3270e8fb9eb70b # Parent 9b75a0e6fc95d5fbd827e4589826e9be488eefc4 Proxy: basic HTTP/1.1 support (including keepalive). By default we still send requests using HTTP/1.0. This may be changed with new proxy_http_version directive. 
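Together with the keepalive module from the last patch, this allows persistent connections to proxied HTTP backends. A configuration sketch (names, address and listen port are examples; the empty Connection header overrides the "Connection: close" request header the proxy module adds by default):

    upstream backend {
        server 127.0.0.1:8080;

        keepalive 16;
    }

    server {
        listen 80;

        location / {
            proxy_http_version 1.1;
            proxy_set_header   Connection "";
            proxy_pass         http://backend;
        }
    }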
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -71,6 +71,8 @@ typedef struct { ngx_flag_t redirect; + ngx_uint_t http_version; + ngx_uint_t headers_hash_max_size; ngx_uint_t headers_hash_bucket_size; } ngx_http_proxy_loc_conf_t; @@ -80,6 +82,12 @@ typedef struct { ngx_http_status_t status; ngx_http_proxy_vars_t vars; size_t internal_body_length; + + ngx_uint_t state; + off_t size; + off_t length; + + ngx_uint_t head; /* unsigned head:1 */ } ngx_http_proxy_ctx_t; @@ -92,6 +100,15 @@ static ngx_int_t ngx_http_proxy_create_r static ngx_int_t ngx_http_proxy_reinit_request(ngx_http_request_t *r); static ngx_int_t ngx_http_proxy_process_status_line(ngx_http_request_t *r); static ngx_int_t ngx_http_proxy_process_header(ngx_http_request_t *r); +static ngx_int_t ngx_http_proxy_input_filter_init(void *data); +static ngx_int_t ngx_http_proxy_copy_filter(ngx_event_pipe_t *p, + ngx_buf_t *buf); +static ngx_int_t ngx_http_proxy_chunked_filter(ngx_event_pipe_t *p, + ngx_buf_t *buf); +static ngx_int_t ngx_http_proxy_non_buffered_copy_filter(void *data, + ssize_t bytes); +static ngx_int_t ngx_http_proxy_non_buffered_chunked_filter(void *data, + ssize_t bytes); static void ngx_http_proxy_abort_request(ngx_http_request_t *r); static void ngx_http_proxy_finalize_request(ngx_http_request_t *r, ngx_int_t rc); @@ -157,6 +174,13 @@ static ngx_conf_bitmask_t ngx_http_prox }; +static ngx_conf_enum_t ngx_http_proxy_http_version[] = { + { ngx_string("1.0"), NGX_HTTP_VERSION_10 }, + { ngx_string("1.1"), NGX_HTTP_VERSION_11 }, + { ngx_null_string, 0 } +}; + + ngx_module_t ngx_http_proxy_module; @@ -432,6 +456,13 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, upstream.ignore_headers), &ngx_http_upstream_ignore_headers_masks }, + { ngx_string("proxy_http_version"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_enum_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, http_version), + &ngx_http_proxy_http_version }, + #if (NGX_HTTP_SSL) { ngx_string("proxy_ssl_session_reuse"), @@ -479,6 +510,7 @@ ngx_module_t ngx_http_proxy_module = { static char ngx_http_proxy_version[] = " HTTP/1.0" CRLF; +static char ngx_http_proxy_version_11[] = " HTTP/1.1" CRLF; static ngx_keyval_t ngx_http_proxy_headers[] = { @@ -486,6 +518,7 @@ static ngx_keyval_t ngx_http_proxy_head { ngx_string("Connection"), ngx_string("close") }, { ngx_string("Keep-Alive"), ngx_string("") }, { ngx_string("Expect"), ngx_string("") }, + { ngx_string("Upgrade"), ngx_string("") }, { ngx_null_string, ngx_null_string } }; @@ -610,7 +643,12 @@ ngx_http_proxy_handler(ngx_http_request_ return NGX_HTTP_INTERNAL_SERVER_ERROR; } - u->pipe->input_filter = ngx_event_pipe_copy_input_filter; + u->pipe->input_filter = ngx_http_proxy_copy_filter; + u->pipe->input_ctx = r; + + u->input_filter_init = ngx_http_proxy_input_filter_init; + u->input_filter = ngx_http_proxy_non_buffered_copy_filter; + u->input_filter_ctx = r; u->accel = 1; @@ -866,14 +904,20 @@ ngx_http_proxy_create_request(ngx_http_r method.len++; } + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + + if (method.len == 5 + && ngx_strncasecmp(method.data, (u_char *) "HEAD ", 5) == 0) + { + ctx->head = 1; + } + len = method.len + sizeof(ngx_http_proxy_version) - 1 + sizeof(CRLF) - 1; escape = 0; loc_len = 0; unparsed_uri = 0; - ctx = ngx_http_get_module_ctx(r, 
ngx_http_proxy_module); - if (plcf->proxy_lengths) { uri_len = ctx->vars.uri.len; @@ -1009,8 +1053,14 @@ ngx_http_proxy_create_request(ngx_http_r u->uri.len = b->last - u->uri.data; - b->last = ngx_cpymem(b->last, ngx_http_proxy_version, - sizeof(ngx_http_proxy_version) - 1); + if (plcf->http_version == NGX_HTTP_VERSION_11) { + b->last = ngx_cpymem(b->last, ngx_http_proxy_version_11, + sizeof(ngx_http_proxy_version_11) - 1); + + } else { + b->last = ngx_cpymem(b->last, ngx_http_proxy_version, + sizeof(ngx_http_proxy_version) - 1); + } ngx_memzero(&e, sizeof(ngx_http_script_engine_t)); @@ -1158,8 +1208,11 @@ ngx_http_proxy_reinit_request(ngx_http_r ctx->status.count = 0; ctx->status.start = NULL; ctx->status.end = NULL; + ctx->state = 0; r->upstream->process_header = ngx_http_proxy_process_status_line; + r->upstream->pipe->input_filter = ngx_http_proxy_copy_filter; + r->upstream->input_filter = ngx_http_proxy_non_buffered_copy_filter; r->state = 0; return NGX_OK; @@ -1250,6 +1303,8 @@ ngx_http_proxy_process_header(ngx_http_r { ngx_int_t rc; ngx_table_elt_t *h; + ngx_http_upstream_t *u; + ngx_http_proxy_ctx_t *ctx; ngx_http_upstream_header_t *hh; ngx_http_upstream_main_conf_t *umcf; @@ -1345,6 +1400,30 @@ ngx_http_proxy_process_header(ngx_http_r h->lowcase_key = (u_char *) "date"; } + /* clear content length if response is chunked */ + + u = r->upstream; + + if (u->headers_in.chunked) { + u->headers_in.content_length_n = -1; + } + + /* + * set u->keepalive if response has no body; this allows to keep + * connections alive in case of r->header_only or X-Accel-Redirect + */ + + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + + if (u->headers_in.status_n == NGX_HTTP_NO_CONTENT + || u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED + || ctx->head + || (!u->headers_in.chunked + && u->headers_in.content_length_n == 0)) + { + u->keepalive = !u->headers_in.connection_close; + } + return NGX_OK; } @@ -1362,6 +1441,690 @@ ngx_http_proxy_process_header(ngx_http_r } +static ngx_int_t +ngx_http_proxy_input_filter_init(void *data) +{ + ngx_http_request_t *r = data; + ngx_http_upstream_t *u; + ngx_http_proxy_ctx_t *ctx; + + u = r->upstream; + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + + ngx_log_debug4(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http proxy filter init s:%d h:%d c:%d l:%O", + u->headers_in.status_n, ctx->head, u->headers_in.chunked, + u->headers_in.content_length_n); + + /* as per RFC2616, 4.4 Message Length */ + + if (u->headers_in.status_n == NGX_HTTP_NO_CONTENT + || u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED + || ctx->head) + { + /* 1xx, 204, and 304 and replies to HEAD requests */ + /* no 1xx since we don't send Expect and Upgrade */ + + u->pipe->length = 0; + u->length = 0; + u->keepalive = !u->headers_in.connection_close; + + } else if (u->headers_in.chunked) { + /* chunked */ + + u->pipe->input_filter = ngx_http_proxy_chunked_filter; + u->pipe->length = 3; /* "0" LF LF */ + + u->input_filter = ngx_http_proxy_non_buffered_chunked_filter; + u->length = -1; + + } else if (u->headers_in.content_length_n == 0) { + /* empty body: special case as filter won't be called */ + + u->pipe->length = 0; + u->length = 0; + u->keepalive = !u->headers_in.connection_close; + + } else { + /* content length or connection close */ + + u->pipe->length = u->headers_in.content_length_n; + u->length = u->headers_in.content_length_n; + } + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_proxy_copy_filter(ngx_event_pipe_t *p, ngx_buf_t *buf) +{ + ngx_buf_t *b; + ngx_chain_t 
*cl; + ngx_http_request_t *r; + + if (buf->pos == buf->last) { + return NGX_OK; + } + + if (p->free) { + cl = p->free; + b = cl->buf; + p->free = cl->next; + ngx_free_chain(p->pool, cl); + + } else { + b = ngx_alloc_buf(p->pool); + if (b == NULL) { + return NGX_ERROR; + } + } + + ngx_memcpy(b, buf, sizeof(ngx_buf_t)); + b->shadow = buf; + b->tag = p->tag; + b->last_shadow = 1; + b->recycled = 1; + buf->shadow = b; + + cl = ngx_alloc_chain_link(p->pool); + if (cl == NULL) { + return NGX_ERROR; + } + + cl->buf = b; + cl->next = NULL; + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, p->log, 0, "input buf #%d", b->num); + + if (p->in) { + *p->last_in = cl; + } else { + p->in = cl; + } + p->last_in = &cl->next; + + if (p->length == -1) { + return NGX_OK; + } + + p->length -= b->last - b->pos; + + if (p->length == 0) { + r = p->input_ctx; + p->upstream_done = 1; + r->upstream->keepalive = !r->upstream->headers_in.connection_close; + + } else if (p->length < 0) { + r = p->input_ctx; + p->upstream_done = 1; + + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "upstream sent too many data"); + } + + return NGX_OK; +} + + +static ngx_inline ngx_int_t +ngx_http_proxy_parse_chunked(ngx_http_request_t *r, ngx_buf_t *buf) +{ + u_char *pos, ch, c; + ngx_int_t rc; + ngx_http_proxy_ctx_t *ctx; + enum { + sw_chunk_start = 0, + sw_chunk_size, + sw_chunk_extension, + sw_chunk_extension_almost_done, + sw_chunk_data, + sw_after_data, + sw_after_data_almost_done, + sw_last_chunk_extension, + sw_last_chunk_extension_almost_done, + sw_trailer, + sw_trailer_almost_done, + sw_trailer_header, + sw_trailer_header_almost_done + } state; + + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + state = ctx->state; + + if (state == sw_chunk_data && ctx->size == 0) { + state = sw_after_data; + } + + rc = NGX_AGAIN; + + for (pos = buf->pos; pos < buf->last; pos++) { + + ch = *pos; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http proxy chunked byte: %02Xd s:%d", ch, state); + + switch (state) { + + case sw_chunk_start: + if (ch >= '0' && ch <= '9') { + state = sw_chunk_size; + ctx->size = ch - '0'; + break; + } + + c = (u_char) (ch | 0x20); + + if (c >= 'a' && c <= 'f') { + state = sw_chunk_size; + ctx->size = c - 'a' + 10; + break; + } + + goto invalid; + + case sw_chunk_size: + if (ch >= '0' && ch <= '9') { + ctx->size = ctx->size * 16 + (ch - '0'); + break; + } + + c = (u_char) (ch | 0x20); + + if (c >= 'a' && c <= 'f') { + ctx->size = ctx->size * 16 + (c - 'a' + 10); + break; + } + + if (ctx->size == 0) { + + switch (ch) { + case CR: + state = sw_last_chunk_extension_almost_done; + break; + case LF: + state = sw_trailer; + break; + case ';': + state = sw_last_chunk_extension; + break; + default: + goto invalid; + } + + break; + } + + switch (ch) { + case CR: + state = sw_chunk_extension_almost_done; + break; + case LF: + state = sw_chunk_data; + break; + case ';': + state = sw_chunk_extension; + break; + default: + goto invalid; + } + + break; + + case sw_chunk_extension: + switch (ch) { + case CR: + state = sw_chunk_extension_almost_done; + break; + case LF: + state = sw_chunk_data; + } + break; + + case sw_chunk_extension_almost_done: + if (ch == LF) { + state = sw_chunk_data; + break; + } + goto invalid; + + case sw_chunk_data: + rc = NGX_OK; + goto data; + + case sw_after_data: + switch (ch) { + case CR: + state = sw_after_data_almost_done; + break; + case LF: + state = sw_chunk_start; + } + break; + + case sw_after_data_almost_done: + if (ch == LF) { + state = sw_chunk_start; + break; + } + goto 
invalid; + + case sw_last_chunk_extension: + switch (ch) { + case CR: + state = sw_last_chunk_extension_almost_done; + break; + case LF: + state = sw_trailer; + } + break; + + case sw_last_chunk_extension_almost_done: + if (ch == LF) { + state = sw_trailer; + break; + } + goto invalid; + + case sw_trailer: + switch (ch) { + case CR: + state = sw_trailer_almost_done; + break; + case LF: + goto done; + default: + state = sw_trailer_header; + } + break; + + case sw_trailer_almost_done: + if (ch == LF) { + goto done; + } + goto invalid; + + case sw_trailer_header: + switch (ch) { + case CR: + state = sw_trailer_header_almost_done; + break; + case LF: + state = sw_trailer; + } + break; + + case sw_trailer_header_almost_done: + if (ch == LF) { + state = sw_trailer; + break; + } + goto invalid; + + } + } + +data: + + ctx->state = state; + buf->pos = pos; + + switch (state) { + + case sw_chunk_start: + ctx->length = 3 /* "0" LF LF */; + break; + case sw_chunk_size: + ctx->length = 2 /* LF LF */ + + (ctx->size ? ctx->size + 4 /* LF "0" LF LF */ : 0); + break; + case sw_chunk_extension: + case sw_chunk_extension_almost_done: + ctx->length = 1 /* LF */ + ctx->size + 4 /* LF "0" LF LF */; + break; + case sw_chunk_data: + ctx->length = ctx->size + 4 /* LF "0" LF LF */; + break; + case sw_after_data: + case sw_after_data_almost_done: + ctx->length = 4 /* LF "0" LF LF */; + break; + case sw_last_chunk_extension: + case sw_last_chunk_extension_almost_done: + ctx->length = 2 /* LF LF */; + break; + case sw_trailer: + case sw_trailer_almost_done: + ctx->length = 1 /* LF */; + break; + case sw_trailer_header: + case sw_trailer_header_almost_done: + ctx->length = 2 /* LF LF */; + break; + + } + + return rc; + +done: + + return NGX_DONE; + +invalid: + + ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, + "upstream sent invalid chunked response"); + + return NGX_ERROR; +} + + +static ngx_int_t +ngx_http_proxy_chunked_filter(ngx_event_pipe_t *p, ngx_buf_t *buf) +{ + ngx_int_t rc; + ngx_buf_t *b, **prev; + ngx_chain_t *cl; + ngx_http_request_t *r; + ngx_http_proxy_ctx_t *ctx; + + if (buf->pos == buf->last) { + return NGX_OK; + } + + r = p->input_ctx; + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + + b = NULL; + prev = &buf->shadow; + + for ( ;; ) { + + rc = ngx_http_proxy_parse_chunked(r, buf); + + if (rc == NGX_OK) { + + /* a chunk has been parsed successfully */ + + if (p->free) { + cl = p->free; + b = cl->buf; + p->free = cl->next; + ngx_free_chain(p->pool, cl); + + } else { + b = ngx_alloc_buf(p->pool); + if (b == NULL) { + return NGX_ERROR; + } + } + + ngx_memzero(b, sizeof(ngx_buf_t)); + + b->pos = buf->pos; + b->start = buf->start; + b->end = buf->end; + b->tag = p->tag; + b->temporary = 1; + b->recycled = 1; + + *prev = b; + prev = &b->shadow; + + cl = ngx_alloc_chain_link(p->pool); + if (cl == NULL) { + return NGX_ERROR; + } + + cl->buf = b; + cl->next = NULL; + + if (p->in) { + *p->last_in = cl; + } else { + p->in = cl; + } + p->last_in = &cl->next; + + /* STUB */ b->num = buf->num; + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, p->log, 0, + "input buf #%d %p", b->num, b->pos); + + if (buf->last - buf->pos >= ctx->size) { + + buf->pos += ctx->size; + b->last = buf->pos; + ctx->size = 0; + + continue; + } + + ctx->size -= buf->last - buf->pos; + buf->pos = buf->last; + b->last = buf->last; + + continue; + } + + if (rc == NGX_DONE) { + + /* a whole response has been parsed successfully */ + + p->upstream_done = 1; + r->upstream->keepalive = !r->upstream->headers_in.connection_close; + + break; + } 
+ + if (rc == NGX_AGAIN) { + + /* set p->length, minimal amount of data we want to see */ + + p->length = ctx->length; + + break; + } + + /* invalid response */ + + ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, + "upstream sent invalid chunked response"); + + return NGX_ERROR; + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http proxy chunked state %d, length %d", + ctx->state, p->length); + + if (b) { + b->shadow = buf; + b->last_shadow = 1; + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, p->log, 0, + "input buf %p %z", b->pos, b->last - b->pos); + + return NGX_OK; + } + + /* there is no data record in the buf, add it to free chain */ + + if (ngx_event_pipe_add_free_buf(p, buf) != NGX_OK) { + return NGX_ERROR; + } + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_proxy_non_buffered_copy_filter(void *data, ssize_t bytes) +{ + ngx_http_request_t *r = data; + + ngx_buf_t *b; + ngx_chain_t *cl, **ll; + ngx_http_upstream_t *u; + + u = r->upstream; + + for (cl = u->out_bufs, ll = &u->out_bufs; cl; cl = cl->next) { + ll = &cl->next; + } + + cl = ngx_chain_get_free_buf(r->pool, &u->free_bufs); + if (cl == NULL) { + return NGX_ERROR; + } + + *ll = cl; + + cl->buf->flush = 1; + cl->buf->memory = 1; + + b = &u->buffer; + + cl->buf->pos = b->last; + b->last += bytes; + cl->buf->last = b->last; + cl->buf->tag = u->output.tag; + + if (u->length == -1) { + return NGX_OK; + } + + u->length -= bytes; + + if (u->length == 0) { + u->keepalive = !u->headers_in.connection_close; + } + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_proxy_non_buffered_chunked_filter(void *data, ssize_t bytes) +{ + ngx_http_request_t *r = data; + + ngx_int_t rc; + ngx_buf_t *b, *buf; + ngx_chain_t *cl, **ll; + ngx_http_upstream_t *u; + ngx_http_proxy_ctx_t *ctx; + + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + u = r->upstream; + buf = &u->buffer; + + buf->pos = buf->last; + buf->last += bytes; + + for (cl = u->out_bufs, ll = &u->out_bufs; cl; cl = cl->next) { + ll = &cl->next; + } + + for ( ;; ) { + + rc = ngx_http_proxy_parse_chunked(r, buf); + + if (rc == NGX_OK) { + + /* a chunk has been parsed successfully */ + + cl = ngx_chain_get_free_buf(r->pool, &u->free_bufs); + if (cl == NULL) { + return NGX_ERROR; + } + + *ll = cl; + ll = &cl->next; + + b = cl->buf; + + b->flush = 1; + b->memory = 1; + + b->pos = buf->pos; + b->tag = u->output.tag; + + if (buf->last - buf->pos >= ctx->size) { + buf->pos += ctx->size; + b->last = buf->pos; + ctx->size = 0; + + } else { + ctx->size -= buf->last - buf->pos; + buf->pos = buf->last; + b->last = buf->last; + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http proxy out buf %p %z", + b->pos, b->last - b->pos); + + continue; + } + + if (rc == NGX_DONE) { + + /* a whole response has been parsed successfully */ + + u->keepalive = !u->headers_in.connection_close; + u->length = 0; + + break; + } + + if (rc == NGX_AGAIN) { + break; + } + + /* invalid response */ + + ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, + "upstream sent invalid chunked response"); + + return NGX_ERROR; + } + + /* provide continuous buffer for subrequests in memory */ + + if (r->subrequest_in_memory) { + + cl = u->out_bufs; + + if (cl) { + buf->pos = cl->buf->pos; + } + + buf->last = buf->pos; + + for (cl = u->out_bufs; cl; cl = cl->next) { + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http proxy in memory %p-%p %uz", + cl->buf->pos, cl->buf->last, ngx_buf_size(cl->buf)); + + if (buf->last == cl->buf->pos) { + buf->last = cl->buf->last; + 
continue; + } + + buf->last = ngx_movemem(buf->last, cl->buf->pos, + cl->buf->last - cl->buf->pos); + + cl->buf->pos = buf->last - (cl->buf->last - cl->buf->pos); + cl->buf->last = buf->last; + } + } + + return NGX_OK; +} + + static void ngx_http_proxy_abort_request(ngx_http_request_t *r) { @@ -1710,6 +2473,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->redirect = NGX_CONF_UNSET; conf->upstream.change_buffering = 1; + conf->http_version = NGX_CONF_UNSET_UINT; + conf->headers_hash_max_size = NGX_CONF_UNSET_UINT; conf->headers_hash_bucket_size = NGX_CONF_UNSET_UINT; @@ -2013,6 +2778,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t } #endif + ngx_conf_merge_uint_value(conf->http_version, prev->http_version, + NGX_HTTP_VERSION_10); + ngx_conf_merge_uint_value(conf->headers_hash_max_size, prev->headers_hash_max_size, 512); From mdounin at mdounin.ru Sun Sep 4 11:34:02 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 04 Sep 2011 15:34:02 +0400 Subject: [PATCH 15 of 15] Upstream keepalive module In-Reply-To: References: Message-ID: <4e00f01d22602ad43f77.1315136042@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315134383 -14400 # Node ID 4e00f01d22602ad43f7768dc10a9f431673acc09 # Parent ec4be59d7b9c579474dd79fd5a3270e8fb9eb70b Upstream keepalive module. diff --git a/auto/modules b/auto/modules --- a/auto/modules +++ b/auto/modules @@ -339,6 +339,11 @@ if [ $HTTP_UPSTREAM_IP_HASH = YES ]; the HTTP_SRCS="$HTTP_SRCS $HTTP_UPSTREAM_IP_HASH_SRCS" fi +if [ $HTTP_UPSTREAM_KEEPALIVE = YES ]; then + HTTP_MODULES="$HTTP_MODULES $HTTP_UPSTREAM_KEEPALIVE_MODULE" + HTTP_SRCS="$HTTP_SRCS $HTTP_UPSTREAM_KEEPALIVE_SRCS" +fi + if [ $HTTP_STUB_STATUS = YES ]; then have=NGX_STAT_STUB . auto/have HTTP_MODULES="$HTTP_MODULES ngx_http_stub_status_module" diff --git a/auto/options b/auto/options --- a/auto/options +++ b/auto/options @@ -94,6 +94,7 @@ HTTP_DEGRADATION=NO HTTP_FLV=NO HTTP_GZIP_STATIC=NO HTTP_UPSTREAM_IP_HASH=YES +HTTP_UPSTREAM_KEEPALIVE=YES # STUB HTTP_STUB_STATUS=NO @@ -229,6 +230,7 @@ do --without-http_empty_gif_module) HTTP_EMPTY_GIF=NO ;; --without-http_browser_module) HTTP_BROWSER=NO ;; --without-http_upstream_ip_hash_module) HTTP_UPSTREAM_IP_HASH=NO ;; + --without-http_upstream_keepalive_module) HTTP_UPSTREAM_KEEPALIVE=NO ;; --with-http_perl_module) HTTP_PERL=YES ;; --with-perl_modules_path=*) NGX_PERL_MODULES="$value" ;; diff --git a/auto/sources b/auto/sources --- a/auto/sources +++ b/auto/sources @@ -471,6 +471,11 @@ HTTP_UPSTREAM_IP_HASH_MODULE=ngx_http_up HTTP_UPSTREAM_IP_HASH_SRCS=src/http/modules/ngx_http_upstream_ip_hash_module.c +HTTP_UPSTREAM_KEEPALIVE_MODULE=ngx_http_upstream_keepalive_module +HTTP_UPSTREAM_KEEPALIVE_SRCS=" \ + src/http/modules/ngx_http_upstream_keepalive_module.c" + + MAIL_INCS="src/mail" MAIL_DEPS="src/mail/ngx_mail.h" diff --git a/src/http/modules/ngx_http_upstream_keepalive_module.c b/src/http/modules/ngx_http_upstream_keepalive_module.c new file mode 100644 --- /dev/null +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c @@ -0,0 +1,566 @@ + +/* + * Copyright (C) Maxim Dounin + */ + + +#include +#include +#include + + +typedef struct { + ngx_uint_t max_cached; + ngx_uint_t single; /* unsigned:1 */ + + ngx_queue_t cache; + ngx_queue_t free; + + ngx_http_upstream_init_pt original_init_upstream; + ngx_http_upstream_init_peer_pt original_init_peer; + +} ngx_http_upstream_keepalive_srv_conf_t; + + +typedef struct { + ngx_http_upstream_keepalive_srv_conf_t *conf; + + ngx_http_upstream_t *upstream; + + void *data; + + 
ngx_event_get_peer_pt original_get_peer; + ngx_event_free_peer_pt original_free_peer; + +#if (NGX_HTTP_SSL) + ngx_event_set_peer_session_pt original_set_session; + ngx_event_save_peer_session_pt original_save_session; +#endif + + ngx_uint_t failed; /* unsigned:1 */ + +} ngx_http_upstream_keepalive_peer_data_t; + + +typedef struct { + ngx_http_upstream_keepalive_srv_conf_t *conf; + + ngx_queue_t queue; + ngx_connection_t *connection; + + socklen_t socklen; + u_char sockaddr[NGX_SOCKADDRLEN]; + +} ngx_http_upstream_keepalive_cache_t; + + +static ngx_int_t ngx_http_upstream_init_keepalive_peer(ngx_http_request_t *r, + ngx_http_upstream_srv_conf_t *us); +static ngx_int_t ngx_http_upstream_get_keepalive_peer(ngx_peer_connection_t *pc, + void *data); +static void ngx_http_upstream_free_keepalive_peer(ngx_peer_connection_t *pc, + void *data, ngx_uint_t state); + +static void ngx_http_upstream_keepalive_dummy_handler(ngx_event_t *ev); +static void ngx_http_upstream_keepalive_close_handler(ngx_event_t *ev); +static void ngx_http_upstream_keepalive_close(ngx_connection_t *c); + + +#if (NGX_HTTP_SSL) +static ngx_int_t ngx_http_upstream_keepalive_set_session( + ngx_peer_connection_t *pc, void *data); +static void ngx_http_upstream_keepalive_save_session(ngx_peer_connection_t *pc, + void *data); +#endif + +static void *ngx_http_upstream_keepalive_create_conf(ngx_conf_t *cf); +static char *ngx_http_upstream_keepalive(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); + + +static ngx_command_t ngx_http_upstream_keepalive_commands[] = { + + { ngx_string("keepalive"), + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE12, + ngx_http_upstream_keepalive, + 0, + 0, + NULL }, + + ngx_null_command +}; + + +static ngx_http_module_t ngx_http_upstream_keepalive_module_ctx = { + NULL, /* preconfiguration */ + NULL, /* postconfiguration */ + + NULL, /* create main configuration */ + NULL, /* init main configuration */ + + ngx_http_upstream_keepalive_create_conf, /* create server configuration */ + NULL, /* merge server configuration */ + + NULL, /* create location configuration */ + NULL /* merge location configuration */ +}; + + +ngx_module_t ngx_http_upstream_keepalive_module = { + NGX_MODULE_V1, + &ngx_http_upstream_keepalive_module_ctx, /* module context */ + ngx_http_upstream_keepalive_commands, /* module directives */ + NGX_HTTP_MODULE, /* module type */ + NULL, /* init master */ + NULL, /* init module */ + NULL, /* init process */ + NULL, /* init thread */ + NULL, /* exit thread */ + NULL, /* exit process */ + NULL, /* exit master */ + NGX_MODULE_V1_PADDING +}; + + +static ngx_int_t +ngx_http_upstream_init_keepalive(ngx_conf_t *cf, + ngx_http_upstream_srv_conf_t *us) +{ + ngx_uint_t i; + ngx_http_upstream_keepalive_srv_conf_t *kcf; + ngx_http_upstream_keepalive_cache_t *cached; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, cf->log, 0, + "init keepalive"); + + kcf = ngx_http_conf_upstream_srv_conf(us, + ngx_http_upstream_keepalive_module); + + if (kcf->original_init_upstream(cf, us) != NGX_OK) { + return NGX_ERROR; + } + + kcf->original_init_peer = us->peer.init; + + us->peer.init = ngx_http_upstream_init_keepalive_peer; + + /* allocate cache items and add to free queue */ + + cached = ngx_pcalloc(cf->pool, + sizeof(ngx_http_upstream_keepalive_cache_t) * kcf->max_cached); + if (cached == NULL) { + return NGX_ERROR; + } + + ngx_queue_init(&kcf->cache); + ngx_queue_init(&kcf->free); + + for (i = 0; i < kcf->max_cached; i++) { + ngx_queue_insert_head(&kcf->free, &cached[i].queue); + cached[i].conf = kcf; + } + + return NGX_OK; +} + + 
+static ngx_int_t +ngx_http_upstream_init_keepalive_peer(ngx_http_request_t *r, + ngx_http_upstream_srv_conf_t *us) +{ + ngx_http_upstream_keepalive_peer_data_t *kp; + ngx_http_upstream_keepalive_srv_conf_t *kcf; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "init keepalive peer"); + + kcf = ngx_http_conf_upstream_srv_conf(us, + ngx_http_upstream_keepalive_module); + + kp = ngx_palloc(r->pool, sizeof(ngx_http_upstream_keepalive_peer_data_t)); + if (kp == NULL) { + return NGX_ERROR; + } + + if (kcf->original_init_peer(r, us) != NGX_OK) { + return NGX_ERROR; + } + + kp->conf = kcf; + kp->upstream = r->upstream; + kp->data = r->upstream->peer.data; + kp->original_get_peer = r->upstream->peer.get; + kp->original_free_peer = r->upstream->peer.free; + + r->upstream->peer.data = kp; + r->upstream->peer.get = ngx_http_upstream_get_keepalive_peer; + r->upstream->peer.free = ngx_http_upstream_free_keepalive_peer; + +#if (NGX_HTTP_SSL) + kp->original_set_session = r->upstream->peer.set_session; + kp->original_save_session = r->upstream->peer.save_session; + r->upstream->peer.set_session = ngx_http_upstream_keepalive_set_session; + r->upstream->peer.save_session = ngx_http_upstream_keepalive_save_session; +#endif + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_upstream_get_keepalive_peer(ngx_peer_connection_t *pc, void *data) +{ + ngx_http_upstream_keepalive_peer_data_t *kp = data; + ngx_http_upstream_keepalive_cache_t *item; + + ngx_int_t rc; + ngx_queue_t *q, *cache; + ngx_connection_t *c; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "get keepalive peer"); + + kp->failed = 0; + + /* single pool of cached connections */ + + if (kp->conf->single && !ngx_queue_empty(&kp->conf->cache)) { + + q = ngx_queue_head(&kp->conf->cache); + + item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t, queue); + c = item->connection; + + ngx_queue_remove(q); + ngx_queue_insert_head(&kp->conf->free, q); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "get keepalive peer: using connection %p", c); + + c->idle = 0; + c->log = pc->log; + c->read->log = pc->log; + c->write->log = pc->log; + c->pool->log = pc->log; + + pc->connection = c; + pc->cached = 1; + + return NGX_DONE; + } + + rc = kp->original_get_peer(pc, kp->data); + + if (kp->conf->single || rc != NGX_OK) { + return rc; + } + + /* search cache for suitable connection */ + + cache = &kp->conf->cache; + + for (q = ngx_queue_head(cache); + q != ngx_queue_sentinel(cache); + q = ngx_queue_next(q)) + { + item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t, queue); + c = item->connection; + + if (ngx_memn2cmp((u_char *) &item->sockaddr, (u_char *) pc->sockaddr, + item->socklen, pc->socklen) + == 0) + { + ngx_queue_remove(q); + ngx_queue_insert_head(&kp->conf->free, q); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "get keepalive peer: using connection %p", c); + + c->idle = 0; + c->log = pc->log; + c->read->log = pc->log; + c->write->log = pc->log; + c->pool->log = pc->log; + + pc->connection = c; + pc->cached = 1; + + return NGX_DONE; + } + } + + return NGX_OK; +} + + +static void +ngx_http_upstream_free_keepalive_peer(ngx_peer_connection_t *pc, void *data, + ngx_uint_t state) +{ + ngx_http_upstream_keepalive_peer_data_t *kp = data; + ngx_http_upstream_keepalive_cache_t *item; + + ngx_queue_t *q; + ngx_connection_t *c; + ngx_http_upstream_t *u; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "free keepalive peer"); + + /* remember failed state - peer.free() may be called more than once */ + + if (state & 
NGX_PEER_FAILED) { + kp->failed = 1; + } + + /* cache valid connections */ + + u = kp->upstream; + c = pc->connection; + + if (kp->failed + || c == NULL + || c->read->eof + || c->read->ready + || c->read->error + || c->read->timedout + || c->write->error + || c->write->timedout) + { + goto invalid; + } + + if (!u->keepalive) { + goto invalid; + } + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + goto invalid; + } + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "free keepalive peer: saving connection %p", c); + + if (ngx_queue_empty(&kp->conf->free)) { + + q = ngx_queue_last(&kp->conf->cache); + ngx_queue_remove(q); + + item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t, queue); + + ngx_http_upstream_keepalive_close(item->connection); + + } else { + q = ngx_queue_head(&kp->conf->free); + ngx_queue_remove(q); + + item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t, queue); + } + + item->connection = c; + ngx_queue_insert_head(&kp->conf->cache, q); + + pc->connection = NULL; + + if (c->read->timer_set) { + ngx_del_timer(c->read); + } + if (c->write->timer_set) { + ngx_del_timer(c->write); + } + + c->write->handler = ngx_http_upstream_keepalive_dummy_handler; + c->read->handler = ngx_http_upstream_keepalive_close_handler; + + c->data = item; + c->idle = 1; + c->log = ngx_cycle->log; + c->read->log = ngx_cycle->log; + c->write->log = ngx_cycle->log; + c->pool->log = ngx_cycle->log; + + item->socklen = pc->socklen; + ngx_memcpy(&item->sockaddr, pc->sockaddr, pc->socklen); + +invalid: + + kp->original_free_peer(pc, kp->data, state); +} + + +static void +ngx_http_upstream_keepalive_dummy_handler(ngx_event_t *ev) +{ + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ev->log, 0, + "keepalive dummy handler"); +} + + +static void +ngx_http_upstream_keepalive_close_handler(ngx_event_t *ev) +{ + ngx_http_upstream_keepalive_srv_conf_t *conf; + ngx_http_upstream_keepalive_cache_t *item; + + int n; + char buf[1]; + ngx_connection_t *c; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ev->log, 0, + "keepalive close handler"); + + c = ev->data; + + if (c->close) { + goto close; + } + + n = recv(c->fd, buf, 1, MSG_PEEK); + + if (n == -1 && ngx_socket_errno == NGX_EAGAIN) { + /* stale event */ + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + goto close; + } + + return; + } + +close: + + item = c->data; + conf = item->conf; + + ngx_http_upstream_keepalive_close(c); + + ngx_queue_remove(&item->queue); + ngx_queue_insert_head(&conf->free, &item->queue); +} + + +static void +ngx_http_upstream_keepalive_close(ngx_connection_t *c) +{ + +#if (NGX_HTTP_SSL) + + if (c->ssl) { + c->ssl->no_wait_shutdown = 1; + c->ssl->no_send_shutdown = 1; + + if (ngx_ssl_shutdown(c) == NGX_AGAIN) { + c->ssl->handler = ngx_http_upstream_keepalive_close; + return; + } + } + +#endif + + ngx_destroy_pool(c->pool); + ngx_close_connection(c); +} + + +#if (NGX_HTTP_SSL) + +static ngx_int_t +ngx_http_upstream_keepalive_set_session(ngx_peer_connection_t *pc, void *data) +{ + ngx_http_upstream_keepalive_peer_data_t *kp = data; + + return kp->original_set_session(pc, kp->data); +} + + +static void +ngx_http_upstream_keepalive_save_session(ngx_peer_connection_t *pc, void *data) +{ + ngx_http_upstream_keepalive_peer_data_t *kp = data; + + kp->original_save_session(pc, kp->data); + return; +} + +#endif + + +static void * +ngx_http_upstream_keepalive_create_conf(ngx_conf_t *cf) +{ + ngx_http_upstream_keepalive_srv_conf_t *conf; + + conf = ngx_pcalloc(cf->pool, + sizeof(ngx_http_upstream_keepalive_srv_conf_t)); + if (conf == NULL) { + 
return NULL; + } + + /* + * set by ngx_pcalloc(): + * + * conf->original_init_upstream = NULL; + * conf->original_init_peer = NULL; + */ + + conf->max_cached = 1; + + return conf; +} + + +static char * +ngx_http_upstream_keepalive(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_upstream_srv_conf_t *uscf; + ngx_http_upstream_keepalive_srv_conf_t *kcf; + + ngx_int_t n; + ngx_str_t *value; + ngx_uint_t i; + + uscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_upstream_module); + + kcf = ngx_http_conf_upstream_srv_conf(uscf, + ngx_http_upstream_keepalive_module); + + kcf->original_init_upstream = uscf->peer.init_upstream + ? uscf->peer.init_upstream + : ngx_http_upstream_init_round_robin; + + uscf->peer.init_upstream = ngx_http_upstream_init_keepalive; + + /* read options */ + + value = cf->args->elts; + + n = ngx_atoi(value[1].data, value[1].len); + + if (n == NGX_ERROR || n == 0) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid value \"%V\" in \"%V\" directive", + &value[1], &cmd->name); + return NGX_CONF_ERROR; + } + + kcf->max_cached = n; + + for (i = 2; i < cf->args->nelts; i++) { + + if (ngx_strcmp(value[i].data, "single") == 0) { + kcf->single = 1; + continue; + } + + goto invalid; + } + + return NGX_CONF_OK; + +invalid: + + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid parameter \"%V\"", &value[i]); + + return NGX_CONF_ERROR; +} From 20062972ding at 163.com Mon Sep 5 02:53:23 2011 From: 20062972ding at 163.com (=?GBK?B?tqHT8b3c?=) Date: Mon, 5 Sep 2011 10:53:23 +0800 (CST) Subject: bug reports ! Message-ID: <56865e2e.4478.13237812677.Coremail.20062972ding@163.com> It's cause nginx hang when only config "backup" server in upstream block! such as: upstream test { server xxx.xxx.xxx.xxx:80 backup; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 5 07:09:23 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 05 Sep 2011 11:09:23 +0400 Subject: [PATCH] Unbreak proxy_ignore_client_abort Message-ID: <98f7c018dede66890d85.1315206563@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315206147 -14400 # Node ID 98f7c018dede66890d85883650367471a3c87d37 # Parent be879690193107c294b7ecdca9e0c996ea96a763 Unbreak proxy_ignore_client_abort. Not blocking read event after request body has been read is incorrect and causes segmentation fault if proxy_ignore_client_abort used. diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -378,6 +378,8 @@ ngx_http_do_read_client_request_body(ngx rb->bufs = rb->bufs->next; } + r->read_event_handler = ngx_http_block_reading; + rb->post_handler(r); return NGX_OK; From mdounin at mdounin.ru Mon Sep 5 17:54:23 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Sep 2011 21:54:23 +0400 Subject: [PATCH 00 of 15] upstream keepalive patch queue In-Reply-To: References: Message-ID: <20110905175423.GK1137@mdounin.ru> Hello! On Sun, Sep 04, 2011 at 03:33:47PM +0400, Maxim Dounin wrote: > Hello! > > Here is the keepalive patch queue, posting it here for further > review and testing. Note this series is for nginx 1.1.1, first 2 > patches were already committed into trunk (just skip them if you are > working with svn trunk). 
> > This series includes multiple fixes for problems found during > testing since last post: > > - https connection caching support; > - better detection of connections which can't be cached; > - cpu hog in round-robin balancer; > - segmentation fault when using with proxy_cache/fastcgi_cache; > > FastCGI keepalive support now requires "fastcgi_keep_conn on;" in config. > Without the directive previous behaviour is preserved to make patches less > intrusive. > > Upstream keepalive module is included as a last patch, it's compiled in by > default. > > All patches may be found here: > http://nginx.org/patches/patch-nginx-keepalive-full-4.txt Just for convenience, cumulative patch for 1.1.2 is available here: http://nginx.org/patches/patch-nginx-keepalive-full-5.txt It is the same as previous one, but doesn't include first 2 patches already present in 1.1.2. Maxim Dounin From mat999 at gmail.com Tue Sep 6 05:45:53 2011 From: mat999 at gmail.com (SplitIce) Date: Tue, 6 Sep 2011 15:45:53 +1000 Subject: [PATCH 00 of 15] upstream keepalive patch queue In-Reply-To: <20110905175423.GK1137@mdounin.ru> References: <20110905175423.GK1137@mdounin.ru> Message-ID: :) Hoping this gets included in 1.1.3 On Tue, Sep 6, 2011 at 3:54 AM, Maxim Dounin wrote: > Hello! > > On Sun, Sep 04, 2011 at 03:33:47PM +0400, Maxim Dounin wrote: > > > Hello! > > > > Here is the keepalive patch queue, posting it here for further > > review and testing. Note this series is for nginx 1.1.1, first 2 > > patches were already committed into trunk (just skip them if you are > > working with svn trunk). > > > > This series includes multiple fixes for problems found during > > testing since last post: > > > > - https connection caching support; > > - better detection of connections which can't be cached; > > - cpu hog in round-robin balancer; > > - segmentation fault when using with proxy_cache/fastcgi_cache; > > > > FastCGI keepalive support now requires "fastcgi_keep_conn on;" in config. > > Without the directive previous behaviour is preserved to make patches > less > > intrusive. > > > > Upstream keepalive module is included as a last patch, it's compiled in > by > > default. > > > > All patches may be found here: > > http://nginx.org/patches/patch-nginx-keepalive-full-4.txt > > Just for convenience, cumulative patch for 1.1.2 is available > here: > > http://nginx.org/patches/patch-nginx-keepalive-full-5.txt > > It is the same as previous one, but doesn't include first 2 > patches already present in 1.1.2. > > Maxim Dounin > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Sep 6 11:56:32 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 15:56:32 +0400 Subject: [PATCH] Handling of If-Range with add_header Last-Modified Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315291748 -14400 # Node ID e51619385db9694030b9614c833a2b2504b377c9 # Parent 014764a85840606c90317e9f44f2b9fa139cbc8b Handling of If-Range with add_header Last-Modified. 
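For illustration only, a configuration of roughly this shape (hypothetical location and date, not taken from the patch itself) is the case being addressed: Last-Modified set via add_header, against which range requests carrying If-Range should be validated:

    # hypothetical example, not part of the patch:
    # Last-Modified set by add_header, later matched against If-Range
    location /reports/ {
        add_header  Last-Modified  "Tue, 06 Sep 2011 12:00:00 GMT";
    }
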
diff --git a/src/http/modules/ngx_http_headers_filter_module.c b/src/http/modules/ngx_http_headers_filter_module.c --- a/src/http/modules/ngx_http_headers_filter_module.c +++ b/src/http/modules/ngx_http_headers_filter_module.c @@ -369,7 +369,8 @@ ngx_http_set_last_modified(ngx_http_requ old = NULL; } - r->headers_out.last_modified_time = -1; + r->headers_out.last_modified_time = ngx_http_parse_time(value->data, + value->len); if (old == NULL || *old == NULL) { @@ -382,6 +383,8 @@ ngx_http_set_last_modified(ngx_http_requ return NGX_ERROR; } + *old = h; + } else { h = *old; diff --git a/src/http/modules/ngx_http_range_filter_module.c b/src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c +++ b/src/http/modules/ngx_http_range_filter_module.c @@ -173,7 +173,11 @@ ngx_http_range_header_filter(ngx_http_re goto next_filter; } - if (r->headers_in.if_range && r->headers_out.last_modified_time != -1) { + if (r->headers_in.if_range) { + + if (r->headers_out.last_modified_time == (time_t) -1) { + goto next_filter; + } if_range = ngx_http_parse_time(r->headers_in.if_range->value.data, r->headers_in.if_range->value.len); From mdounin at mdounin.ru Tue Sep 6 15:57:57 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:57:57 +0400 Subject: [PATCH 00 of 25] generic patch queue Message-ID: Hello! Here is my generic patch queue. Posting it here for furher review (mostly by Igor before he approves commits, but if you have something to say - please do so). Most of the patches were already posted previously, though there are some new ones. Maxim Dounin From mdounin at mdounin.ru Tue Sep 6 15:57:58 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:57:58 +0400 Subject: [PATCH 01 of 25] Handling of If-Range with add_header Last-Modified In-Reply-To: References: Message-ID: <08a723d1784ec0734114.1315324678@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID 08a723d1784ec0734114f669626189f3de69ae2d # Parent 014764a85840606c90317e9f44f2b9fa139cbc8b Handling of If-Range with add_header Last-Modified. 
diff --git a/src/http/modules/ngx_http_headers_filter_module.c b/src/http/modules/ngx_http_headers_filter_module.c --- a/src/http/modules/ngx_http_headers_filter_module.c +++ b/src/http/modules/ngx_http_headers_filter_module.c @@ -369,7 +369,8 @@ ngx_http_set_last_modified(ngx_http_requ old = NULL; } - r->headers_out.last_modified_time = -1; + r->headers_out.last_modified_time = ngx_http_parse_time(value->data, + value->len); if (old == NULL || *old == NULL) { @@ -382,6 +383,8 @@ ngx_http_set_last_modified(ngx_http_requ return NGX_ERROR; } + *old = h; + } else { h = *old; diff --git a/src/http/modules/ngx_http_range_filter_module.c b/src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c +++ b/src/http/modules/ngx_http_range_filter_module.c @@ -173,7 +173,11 @@ ngx_http_range_header_filter(ngx_http_re goto next_filter; } - if (r->headers_in.if_range && r->headers_out.last_modified_time != -1) { + if (r->headers_in.if_range) { + + if (r->headers_out.last_modified_time == (time_t) -1) { + goto next_filter; + } if_range = ngx_http_parse_time(r->headers_in.if_range->value.data, r->headers_in.if_range->value.len); From mdounin at mdounin.ru Tue Sep 6 15:57:59 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:57:59 +0400 Subject: [PATCH 02 of 25] Fix for incorrect 201 replies from dav module In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID d90e06868b1d53b1da9280c81c113255a3982c92 # Parent 08a723d1784ec0734114f669626189f3de69ae2d Fix for incorrect 201 replies from dav module. Replies with 201 code contain body, and we should clearly indicate it's empty if it's empty. Before 0.8.32 chunked was explicitly disabled for 201 replies and as a result empty body was indicated by connection close (not perfect, but worked). Since 0.8.32 chunked is enabled, and this causes incorrect responses from dav module when HTTP/1.1 is used: with "Transfer-Encoding: chunked" but no chunks at all. Fix is to actually return empty body in special response handler instead of abusing r->header_only flag. See here for initial report: http://nginx.org/pipermail/nginx-ru/2010-October/037535.html diff --git a/src/http/ngx_http_special_response.c b/src/http/ngx_http_special_response.c --- a/src/http/ngx_http_special_response.c +++ b/src/http/ngx_http_special_response.c @@ -421,7 +421,6 @@ ngx_http_special_response_handler(ngx_ht if (error == NGX_HTTP_CREATED) { /* 201 */ err = 0; - r->header_only = 1; } else if (error == NGX_HTTP_NO_CONTENT) { /* 204 */ @@ -636,7 +635,7 @@ ngx_http_send_special_response(ngx_http_ r->headers_out.content_type_lowcase = NULL; } else { - r->headers_out.content_length_n = -1; + r->headers_out.content_length_n = 0; } if (r->headers_out.content_length) { @@ -654,7 +653,7 @@ ngx_http_send_special_response(ngx_http_ } if (ngx_http_error_pages[err].len == 0) { - return NGX_OK; + return ngx_http_send_special(r, NGX_HTTP_LAST); } b = ngx_calloc_buf(r->pool); From mdounin at mdounin.ru Tue Sep 6 15:58:00 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:00 +0400 Subject: [PATCH 03 of 25] Fix for double content when return is used in error_page handler In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID f0829a9c0e8d49e74b43900aff0b6c3846b12b30 # Parent d90e06868b1d53b1da9280c81c113255a3982c92 Fix for double content when return is used in error_page handler. 
Test case: location / { error_page 405 /nope; return 405; } location /nope { return 200; } This is expected to return 405 with empty body, but in 0.8.42+ will return builtin 405 error page as well (though not counted in Content-Length, thus breaking protocol). Fix is to use status provided by rewrite script execution in case it's less than NGX_HTTP_BAD_REQUEST even if r->error_status set. This check is in line with one in ngx_http_script_return_code(). Note that this patch also changes behaviour for "return 302 ..." and "rewrite ... redirect" used as error handler. E.g. location / { error_page 405 /redirect; return 405; } location /redirect { rewrite ^ http://example.com/; } will actually return redirect to "http://example.com/" instead of builtin 405 error page with meaningless Location header. This looks like correct change and it's in line with what happens on e.g. directory redirects in error handlers. diff --git a/src/http/modules/ngx_http_rewrite_module.c b/src/http/modules/ngx_http_rewrite_module.c --- a/src/http/modules/ngx_http_rewrite_module.c +++ b/src/http/modules/ngx_http_rewrite_module.c @@ -167,8 +167,8 @@ ngx_http_rewrite_handler(ngx_http_reques code(e); } - if (e->status == NGX_DECLINED) { - return NGX_DECLINED; + if (e->status < NGX_HTTP_BAD_REQUEST) { + return e->status; } if (r->err_status == 0) { From mdounin at mdounin.ru Tue Sep 6 15:58:01 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:01 +0400 Subject: [PATCH 04 of 25] Fix for "return 202" not discarding body In-Reply-To: References: Message-ID: <72e3bfd09f58be13e6d6.1315324681@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID 72e3bfd09f58be13e6d6ca736730eaf4ab994ca9 # Parent f0829a9c0e8d49e74b43900aff0b6c3846b12b30 Fix for "return 202" not discarding body. Big POST (not fully preread) to a location / { return 202; } resulted in incorrect behaviour due to "return" code path not calling ngx_http_discard_request_body(). The same applies to all "return" used with 2xx/3xx codes except 201 and 204, and to all "return ... text" uses. Fix is to add ngx_http_discard_request_body() call to ngx_http_send_response() function where it looks appropriate. Discard body call from emtpy gif module removed as it's now redundant. 
Reported by Pyry Hakulinen, see http://mailman.nginx.org/pipermail/nginx/2011-August/028503.html diff --git a/src/http/modules/ngx_http_empty_gif_module.c b/src/http/modules/ngx_http_empty_gif_module.c --- a/src/http/modules/ngx_http_empty_gif_module.c +++ b/src/http/modules/ngx_http_empty_gif_module.c @@ -111,19 +111,12 @@ static ngx_str_t ngx_http_gif_type = ng static ngx_int_t ngx_http_empty_gif_handler(ngx_http_request_t *r) { - ngx_int_t rc; ngx_http_complex_value_t cv; if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { return NGX_HTTP_NOT_ALLOWED; } - rc = ngx_http_discard_request_body(r); - - if (rc != NGX_OK) { - return rc; - } - ngx_memzero(&cv, sizeof(ngx_http_complex_value_t)); cv.value.len = sizeof(ngx_empty_gif); diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -1784,6 +1784,10 @@ ngx_http_send_response(ngx_http_request_ ngx_buf_t *b; ngx_chain_t out; + if (ngx_http_discard_request_body(r) != NGX_OK) { + return NGX_HTTP_INTERNAL_SERVER_ERROR; + } + r->headers_out.status = status; if (status == NGX_HTTP_NO_CONTENT) { From mdounin at mdounin.ru Tue Sep 6 15:58:02 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:02 +0400 Subject: [PATCH 05 of 25] Incorrect special case for "return 204" removed In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID e165452c1be8865046f3b498c49285e459b8a0c3 # Parent 72e3bfd09f58be13e6d6ca736730eaf4ab994ca9 Incorrect special case for "return 204" removed. The special case in question leads to replies without body in configuration like location / { error_page 404 /zero; return 404; } location /zero { return 204; } while replies with empty body are expected per protocol specs. Correct one will look like if (status == NGX_HTTP_NO_CONTENT) { rc = ngx_http_send_header(r); if (rc == NGX_ERROR || r->header_only) { return rc; } return ngx_http_send_special(r, NGX_HTTP_LAST); } though it looks like it's better to drop this special case at all. diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -1790,11 +1790,6 @@ ngx_http_send_response(ngx_http_request_ r->headers_out.status = status; - if (status == NGX_HTTP_NO_CONTENT) { - r->header_only = 1; - return ngx_http_send_header(r); - } - if (ngx_http_complex_value(r, cv, &val) != NGX_OK) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } From mdounin at mdounin.ru Tue Sep 6 15:58:03 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:03 +0400 Subject: [PATCH 06 of 25] Clear old Location header (if any) while adding new one In-Reply-To: References: Message-ID: <69e2e11aa42566306380.1315324683@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID 69e2e11aa425663063801a333a29b305593a6925 # Parent e165452c1be8865046f3b498c49285e459b8a0c3 Clear old Location header (if any) while adding new one. This prevents incorrect behaviour when another redirect is issues within error_page 302 handler. 
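As a sketch only (hypothetical configuration, not from the patch description), the pattern in question is a redirect intercepted by error_page whose handler issues a redirect of its own; without clearing the old header, the Location from the first redirect could survive into the final response:

    # hypothetical sketch: second redirect issued from an error_page 302 handler
    location / {
        error_page  302  = @other;
        return      302  http://example.com/first;
    }

    location @other {
        return      302  http://example.com/second;
    }
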
diff --git a/src/http/modules/ngx_http_static_module.c b/src/http/modules/ngx_http_static_module.c --- a/src/http/modules/ngx_http_static_module.c +++ b/src/http/modules/ngx_http_static_module.c @@ -139,6 +139,10 @@ ngx_http_static_handler(ngx_http_request ngx_log_debug0(NGX_LOG_DEBUG_HTTP, log, 0, "http dir"); + if (r->headers_out.location) { + r->headers_out.location->hash = 0; + } + r->headers_out.location = ngx_palloc(r->pool, sizeof(ngx_table_elt_t)); if (r->headers_out.location == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -983,6 +983,10 @@ ngx_http_core_find_config_phase(ngx_http } if (rc == NGX_DONE) { + if (r->headers_out.location) { + r->headers_out.location->hash = 0; + } + r->headers_out.location = ngx_list_push(&r->headers_out.headers); if (r->headers_out.location == NULL) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); @@ -1796,6 +1800,10 @@ ngx_http_send_response(ngx_http_request_ if (status >= NGX_HTTP_MOVED_PERMANENTLY && status <= NGX_HTTP_SEE_OTHER) { + if (r->headers_out.location) { + r->headers_out.location->hash = 0; + } + r->headers_out.location = ngx_list_push(&r->headers_out.headers); if (r->headers_out.location == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; diff --git a/src/http/ngx_http_script.c b/src/http/ngx_http_script.c --- a/src/http/ngx_http_script.c +++ b/src/http/ngx_http_script.c @@ -1106,6 +1106,10 @@ ngx_http_script_regex_end_code(ngx_http_ "rewritten redirect: \"%V\"", &e->buf); } + if (r->headers_out.location) { + r->headers_out.location->hash = 0; + } + r->headers_out.location = ngx_list_push(&r->headers_out.headers); if (r->headers_out.location == NULL) { e->ip = ngx_http_script_exit; diff --git a/src/http/ngx_http_special_response.c b/src/http/ngx_http_special_response.c --- a/src/http/ngx_http_special_response.c +++ b/src/http/ngx_http_special_response.c @@ -582,6 +582,10 @@ ngx_http_send_error_page(ngx_http_reques ngx_str_set(&location->key, "Location"); location->value = uri; + if (r->headers_out.location) { + r->headers_out.location->hash = 0; + } + r->headers_out.location = location; clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); From mdounin at mdounin.ru Tue Sep 6 15:58:04 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:04 +0400 Subject: [PATCH 07 of 25] Better handling of late upstream creation In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID d603ce98fada855f0100b422b7b5672fd22fabea # Parent 69e2e11aa425663063801a333a29b305593a6925 Better handling of late upstream creation. Configuration with duplicate upstream blocks defined after first use, i.e. like server { ... location / { proxy_pass http://backend; } } upstream backend { ... } upstream backend { ... } now correctly results in "duplicate upstream" error. Additionally, upstream blocks defined after first use now handle various server directive parameters ("weight", "max_fails", etc.). Previously configuration like server { ... location / { proxy_pass http://backend; } } upstream backend { server 127.0.0.1 max_fails=5; } incorrectly resulted in "invalid parameter "max_fails=5"" error. 
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -4256,6 +4256,10 @@ ngx_http_upstream_add(ngx_conf_t *cf, ng continue; } + if (flags & NGX_HTTP_UPSTREAM_CREATE) { + uscfp[i]->flags = flags; + } + return uscfp[i]; } From mdounin at mdounin.ru Tue Sep 6 15:58:05 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:05 +0400 Subject: [PATCH 08 of 25] Gzip filter: handle empty flush buffers In-Reply-To: References: Message-ID: <4cf0af103bc382a78f89.1315324685@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID 4cf0af103bc382a78f894302d1706929a79df4bb # Parent d603ce98fada855f0100b422b7b5672fd22fabea Gzip filter: handle empty flush buffers. Empty flush buffers are legitimate and may happen e.g. due to $r->flush() calls in embedded perl. If there are no data buffered in zlib deflate() will return Z_BUF_ERROR (i.e. no progress possible) without adding anything to output. Don't treat Z_BUF_ERROR as fatal and correctly send empty flush buffer if we have no data in output at all. See this thread for details: http://mailman.nginx.org/pipermail/nginx/2010-November/023693.html diff --git a/src/http/modules/ngx_http_gzip_filter_module.c b/src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c +++ b/src/http/modules/ngx_http_gzip_filter_module.c @@ -758,6 +758,7 @@ static ngx_int_t ngx_http_gzip_filter_deflate(ngx_http_request_t *r, ngx_http_gzip_ctx_t *ctx) { int rc; + ngx_buf_t *b; ngx_chain_t *cl; ngx_http_gzip_conf_t *conf; @@ -769,7 +770,7 @@ ngx_http_gzip_filter_deflate(ngx_http_re rc = deflate(&ctx->zstream, ctx->flush); - if (rc != Z_OK && rc != Z_STREAM_END) { + if (rc != Z_OK && rc != Z_STREAM_END && rc != Z_BUF_ERROR) { ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, "deflate() failed: %d, %d", ctx->flush, rc); return NGX_ERROR; @@ -818,8 +819,6 @@ ngx_http_gzip_filter_deflate(ngx_http_re if (ctx->flush == Z_SYNC_FLUSH) { - ctx->zstream.avail_out = 0; - ctx->out_buf->flush = 1; ctx->flush = Z_NO_FLUSH; cl = ngx_alloc_chain_link(r->pool); @@ -827,7 +826,22 @@ ngx_http_gzip_filter_deflate(ngx_http_re return NGX_ERROR; } - cl->buf = ctx->out_buf; + b = ctx->out_buf; + + if (ngx_buf_size(b) == 0) { + + b = ngx_calloc_buf(ctx->request->pool); + if (b == NULL) { + return NGX_ERROR; + } + + } else { + ctx->zstream.avail_out = 0; + } + + b->flush = 1; + + cl->buf = b; cl->next = NULL; *ctx->last_out = cl; ctx->last_out = &cl->next; From mdounin at mdounin.ru Tue Sep 6 15:58:06 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:06 +0400 Subject: [PATCH 09 of 25] Fix for connection drops with AIO In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315324342 -14400 # Node ID e8c97615dd5c845ba1dc76681c687be3fe71130d # Parent 4cf0af103bc382a78f894302d1706929a79df4bb Fix for connection drops with AIO. Connections serving content with AIO to fast clients were dropped with "client timed out" messages after send_timeout from response start. 
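A setup along these lines (hypothetical paths and values) illustrates the affected case: responses served with AIO to clients fast enough never to block the connection, together with a short send_timeout:

    # hypothetical example: AIO-served downloads with a short send_timeout
    location /downloads/ {
        aio             on;
        directio        512k;
        output_buffers  1 128k;
        send_timeout    10s;
    }
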
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2274,7 +2274,7 @@ ngx_http_writer(ngx_http_request_t *r) if (r->buffered || r->postponed || (r == r->main && c->buffered)) { - if (!wev->ready && !wev->delayed) { + if (!wev->delayed) { ngx_add_timer(wev, clcf->send_timeout); } From mdounin at mdounin.ru Tue Sep 6 15:58:07 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:07 +0400 Subject: [PATCH 10 of 25] Fix for socket leak with "aio sendfile" and "limit_rate" In-Reply-To: References: Message-ID: <8e75ca21ad556974aeee.1315324687@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324497 -14400 # Node ID 8e75ca21ad556974aeee9156a6534ebe4d12146b # Parent e8c97615dd5c845ba1dc76681c687be3fe71130d Fix for socket leak with "aio sendfile" and "limit_rate". Second aio post happened when timer set by limit_rate expired while we have aio request in flight, resulting in "second aio post" alert and socket leak. The patch adds actual protection from aio calls with r->aio already set to aio sendfile code in ngx_http_copy_filter(). This should fix other cases as well, e.g. when sending buffered to disk upstream replies while still talking to upstream. The ngx_http_writer() is also fixed to handle the above case (though it's mostly optimization now). Reported by Oleksandr V. Typlyns'kyi. diff --git a/src/http/ngx_http_copy_filter_module.c b/src/http/ngx_http_copy_filter_module.c --- a/src/http/ngx_http_copy_filter_module.c +++ b/src/http/ngx_http_copy_filter_module.c @@ -158,6 +158,11 @@ ngx_http_copy_filter(ngx_http_request_t ngx_file_t *file; ngx_http_ephemeral_t *e; + if (r->aio) { + c->busy_sendfile = NULL; + return rc; + } + file = c->busy_sendfile->file; offset = c->busy_sendfile->file_pos; diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2248,17 +2248,17 @@ ngx_http_writer(ngx_http_request_t *r) return; } - } else { - if (wev->delayed || r->aio) { - ngx_log_debug0(NGX_LOG_DEBUG_HTTP, wev->log, 0, - "http writer delayed"); - - if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { - ngx_http_close_request(r, 0); - } - - return; + } + + if (wev->delayed || r->aio) { + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, wev->log, 0, + "http writer delayed"); + + if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { + ngx_http_close_request(r, 0); } + + return; } rc = ngx_http_output_filter(r, NULL); From mdounin at mdounin.ru Tue Sep 6 15:58:08 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:08 +0400 Subject: [PATCH 11 of 25] Handling of Content-Encoding set from perl In-Reply-To: References: Message-ID: <4a6e9b96868cb07b2bf5.1315324688@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324515 -14400 # Node ID 4a6e9b96868cb07b2bf5f10d3244994fd6019952 # Parent 8e75ca21ad556974aeee9156a6534ebe4d12146b Handling of Content-Encoding set from perl. This fixes double gzipping in case of gzip filter being enabled while perl returns already gzipped response. 
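The affected case, sketched as a hypothetical configuration (the handler name is made up): an embedded perl handler that returns an already gzipped body and sets Content-Encoding itself, while the gzip filter is enabled:

    # hypothetical sketch: perl handler emits pre-gzipped content;
    # the gzip filter must not compress it again
    gzip  on;

    location /pregzipped/ {
        perl  My::Handler::send_gzipped;
    }
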
diff --git a/src/http/modules/perl/nginx.xs b/src/http/modules/perl/nginx.xs --- a/src/http/modules/perl/nginx.xs +++ b/src/http/modules/perl/nginx.xs @@ -474,6 +474,13 @@ header_out(r, key, value) r->headers_out.content_length = header; } + if (header->key.len == sizeof("Content-Encoding") - 1 + && ngx_strncasecmp(header->key.data, "Content-Encoding", + sizeof("Content-Encoding") - 1) == 0) + { + r->headers_out.content_encoding = header; + } + void filename(r) From mdounin at mdounin.ru Tue Sep 6 15:58:09 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:09 +0400 Subject: [PATCH 12 of 25] Gzip static: "always" parameter in "gzip_static" directive In-Reply-To: References: Message-ID: <5b48df4b396bf8428c01.1315324689@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324515 -14400 # Node ID 5b48df4b396bf8428c01a0b4de8f2bd14419452c # Parent 4a6e9b96868cb07b2bf5f10d3244994fd6019952 Gzip static: "always" parameter in "gzip_static" directive. With "always" gzip static returns gzipped content in all cases, without checking if client supports it. It is usefull if you has no gunzipped files on disk anyway. diff --git a/src/http/modules/ngx_http_gzip_static_module.c b/src/http/modules/ngx_http_gzip_static_module.c --- a/src/http/modules/ngx_http_gzip_static_module.c +++ b/src/http/modules/ngx_http_gzip_static_module.c @@ -9,8 +9,13 @@ #include +#define NGX_HTTP_GZIP_STATIC_OFF 0 +#define NGX_HTTP_GZIP_STATIC_ON 1 +#define NGX_HTTP_GZIP_STATIC_ALWAYS 2 + + typedef struct { - ngx_flag_t enable; + ngx_uint_t enable; } ngx_http_gzip_static_conf_t; @@ -21,14 +26,22 @@ static char *ngx_http_gzip_static_merge_ static ngx_int_t ngx_http_gzip_static_init(ngx_conf_t *cf); +static ngx_conf_enum_t ngx_http_gzip_static[] = { + { ngx_string("off"), NGX_HTTP_GZIP_STATIC_OFF }, + { ngx_string("on"), NGX_HTTP_GZIP_STATIC_ON }, + { ngx_string("always"), NGX_HTTP_GZIP_STATIC_ALWAYS }, + { ngx_null_string, 0 } +}; + + static ngx_command_t ngx_http_gzip_static_commands[] = { { ngx_string("gzip_static"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, - ngx_conf_set_flag_slot, + ngx_conf_set_enum_slot, NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_gzip_static_conf_t, enable), - NULL }, + &ngx_http_gzip_static }, ngx_null_command }; @@ -91,11 +104,17 @@ ngx_http_gzip_static_handler(ngx_http_re gzcf = ngx_http_get_module_loc_conf(r, ngx_http_gzip_static_module); - if (!gzcf->enable) { + if (gzcf->enable == NGX_HTTP_GZIP_STATIC_OFF) { return NGX_DECLINED; } - rc = ngx_http_gzip_ok(r); + if (gzcf->enable == NGX_HTTP_GZIP_STATIC_ON) { + rc = ngx_http_gzip_ok(r); + + } else { + /* always */ + rc = NGX_OK; + } clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); @@ -160,10 +179,12 @@ ngx_http_gzip_static_handler(ngx_http_re return NGX_DECLINED; } - r->gzip_vary = 1; + if (gzcf->enable == NGX_HTTP_GZIP_STATIC_ON) { + r->gzip_vary = 1; - if (rc != NGX_OK) { - return NGX_DECLINED; + if (rc != NGX_OK) { + return NGX_DECLINED; + } } ngx_log_debug1(NGX_LOG_DEBUG_HTTP, log, 0, "http static fd: %d", of.fd); @@ -261,7 +282,7 @@ ngx_http_gzip_static_create_conf(ngx_con return NULL; } - conf->enable = NGX_CONF_UNSET; + conf->enable = NGX_CONF_UNSET_UINT; return conf; } @@ -273,7 +294,8 @@ ngx_http_gzip_static_merge_conf(ngx_conf ngx_http_gzip_static_conf_t *prev = parent; ngx_http_gzip_static_conf_t *conf = child; - ngx_conf_merge_value(conf->enable, prev->enable, 0); + ngx_conf_merge_uint_value(conf->enable, prev->enable, + NGX_HTTP_GZIP_STATIC_OFF); return 
NGX_CONF_OK; } From mdounin at mdounin.ru Tue Sep 6 15:58:10 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:10 +0400 Subject: [PATCH 13 of 25] Memcached: memcached_gzip_flag directive In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315324516 -14400 # Node ID e4da5e68a62f423479785add85c341f832811222 # Parent 5b48df4b396bf8428c01a0b4de8f2bd14419452c Memcached: memcached_gzip_flag directive. This directive allows to test desired flag as returned by memcached and sets Content-Encoding to gzip if one found. This is reimplementation of patch by Tomash Brechko as available on http://openhack.ru/. It should be a bit more correct though (at least I think so). In particular, it doesn't try to detect if we are able to gunzip data, but instead just sets correct Content-Encoding. diff --git a/src/http/modules/ngx_http_memcached_module.c b/src/http/modules/ngx_http_memcached_module.c --- a/src/http/modules/ngx_http_memcached_module.c +++ b/src/http/modules/ngx_http_memcached_module.c @@ -12,6 +12,7 @@ typedef struct { ngx_http_upstream_conf_t upstream; ngx_int_t index; + ngx_uint_t gzip_flag; } ngx_http_memcached_loc_conf_t; @@ -100,6 +101,13 @@ static ngx_command_t ngx_http_memcached offsetof(ngx_http_memcached_loc_conf_t, upstream.next_upstream), &ngx_http_memcached_next_upstream_masks }, + { ngx_string("memcached_gzip_flag"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_memcached_loc_conf_t, gzip_flag), + NULL }, + ngx_null_command }; @@ -280,10 +288,13 @@ ngx_http_memcached_reinit_request(ngx_ht static ngx_int_t ngx_http_memcached_process_header(ngx_http_request_t *r) { - u_char *p, *len; - ngx_str_t line; - ngx_http_upstream_t *u; - ngx_http_memcached_ctx_t *ctx; + u_char *p, *start; + ngx_str_t line; + ngx_uint_t flags; + ngx_table_elt_t *h; + ngx_http_upstream_t *u; + ngx_http_memcached_ctx_t *ctx; + ngx_http_memcached_loc_conf_t *mlcf; u = r->upstream; @@ -308,6 +319,7 @@ found: p = u->buffer.pos; ctx = ngx_http_get_module_ctx(r, ngx_http_memcached_module); + mlcf = ngx_http_get_module_loc_conf(r, ngx_http_memcached_module); if (ngx_strncmp(p, "VALUE ", sizeof("VALUE ") - 1) == 0) { @@ -328,23 +340,56 @@ found: goto no_valid; } - /* skip flags */ + /* flags */ + + start = p; while (*p) { if (*p++ == ' ') { - goto length; + if (mlcf->gzip_flag) { + goto flags; + } else { + goto length; + } } } goto no_valid; + flags: + + flags = ngx_atoi(start, p - start - 1); + + if (flags == (ngx_uint_t) NGX_ERROR) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "memcached sent invalid flags in response \"%V\" " + "for key \"%V\"", + &line, &ctx->key); + return NGX_HTTP_UPSTREAM_INVALID_HEADER; + } + + if (flags & mlcf->gzip_flag) { + h = ngx_list_push(&r->headers_out.headers); + if (h == NULL) { + return NGX_ERROR; + } + + h->hash = 1; + h->key.len = sizeof("Content-Encoding") - 1; + h->key.data = (u_char *) "Content-Encoding"; + h->value.len = sizeof("gzip") - 1; + h->value.data = (u_char *) "gzip"; + + r->headers_out.content_encoding = h; + } + length: - len = p; + start = p; while (*p && *p++ != CR) { /* void */ } - r->headers_out.content_length_n = ngx_atoof(len, p - len - 1); + r->headers_out.content_length_n = ngx_atoof(start, p - start - 1); if (r->headers_out.content_length_n == -1) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "memcached sent invalid length in response \"%V\" " @@ -533,6 +578,7 @@ 
ngx_http_memcached_create_loc_conf(ngx_c conf->upstream.pass_request_body = 0; conf->index = NGX_CONF_UNSET; + conf->gzip_flag = NGX_CONF_UNSET_UINT; return conf; } @@ -576,6 +622,8 @@ ngx_http_memcached_merge_loc_conf(ngx_co conf->index = prev->index; } + ngx_conf_merge_uint_value(conf->gzip_flag, prev->gzip_flag, 0); + return NGX_CONF_OK; } From mdounin at mdounin.ru Tue Sep 6 15:58:11 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:11 +0400 Subject: [PATCH 14 of 25] Mail: handle smtp multiline replies In-Reply-To: References: Message-ID: <28b35237ca588d407cbd.1315324691@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324516 -14400 # Node ID 28b35237ca588d407cbdd17d9535aa72ba589cff # Parent e4da5e68a62f423479785add85c341f832811222 Mail: handle smtp multiline replies. See here for details: http://nginx.org/pipermail/nginx/2010-August/021713.html http://nginx.org/pipermail/nginx/2010-August/021784.html http://nginx.org/pipermail/nginx/2010-August/021785.html diff --git a/src/mail/ngx_mail_proxy_module.c b/src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c +++ b/src/mail/ngx_mail_proxy_module.c @@ -701,7 +701,7 @@ ngx_mail_proxy_dummy_handler(ngx_event_t static ngx_int_t ngx_mail_proxy_read_response(ngx_mail_session_t *s, ngx_uint_t state) { - u_char *p; + u_char *p, *m; ssize_t n; ngx_buf_t *b; ngx_mail_proxy_conf_t *pcf; @@ -778,6 +778,25 @@ ngx_mail_proxy_read_response(ngx_mail_se break; default: /* NGX_MAIL_SMTP_PROTOCOL */ + + if (p[3] == '-') { + /* multiline reply, check if we got last line */ + + m = b->last - (sizeof(CRLF "200" CRLF) - 1); + + while (m > p) { + if (m[0] == CR && m[1] == LF) { + break; + } + + m--; + } + + if (m <= p || m[5] == '-') { + return NGX_AGAIN; + } + } + switch (state) { case ngx_smtp_start: From mdounin at mdounin.ru Tue Sep 6 15:58:12 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:12 +0400 Subject: [PATCH 15 of 25] Additional headers for proxy_ignore_headers/fastcgi_ignore_headers In-Reply-To: References: Message-ID: <437a4ad9102ddd776bd8.1315324692@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324516 -14400 # Node ID 437a4ad9102ddd776bd866e3b2ff18553104814d # Parent 28b35237ca588d407cbdd17d9535aa72ba589cff Additional headers for proxy_ignore_headers/fastcgi_ignore_headers. Now the following headers may be ignored as well: X-Accel-Limit-Rate, X-Accel-Buffering, X-Accel-Charset. 
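With the patch applied they can be ignored in the usual way, e.g. (hypothetical upstream name):

    # hypothetical example using the newly supported header names
    location / {
        proxy_pass            http://backend;
        proxy_ignore_headers  X-Accel-Limit-Rate X-Accel-Buffering X-Accel-Charset;
    }
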
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -360,6 +360,9 @@ ngx_conf_bitmask_t ngx_http_upstream_ca ngx_conf_bitmask_t ngx_http_upstream_ignore_headers_masks[] = { { ngx_string("X-Accel-Redirect"), NGX_HTTP_UPSTREAM_IGN_XA_REDIRECT }, { ngx_string("X-Accel-Expires"), NGX_HTTP_UPSTREAM_IGN_XA_EXPIRES }, + { ngx_string("X-Accel-Limit-Rate"), NGX_HTTP_UPSTREAM_IGN_XA_LIMIT_RATE }, + { ngx_string("X-Accel-Buffering"), NGX_HTTP_UPSTREAM_IGN_XA_BUFFERING }, + { ngx_string("X-Accel-Charset"), NGX_HTTP_UPSTREAM_IGN_XA_CHARSET }, { ngx_string("Expires"), NGX_HTTP_UPSTREAM_IGN_EXPIRES }, { ngx_string("Cache-Control"), NGX_HTTP_UPSTREAM_IGN_CACHE_CONTROL }, { ngx_string("Set-Cookie"), NGX_HTTP_UPSTREAM_IGN_SET_COOKIE }, @@ -3265,9 +3268,15 @@ static ngx_int_t ngx_http_upstream_process_limit_rate(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { - ngx_int_t n; - - r->upstream->headers_in.x_accel_limit_rate = h; + ngx_int_t n; + ngx_http_upstream_t *u; + + u = r->upstream; + u->headers_in.x_accel_limit_rate = h; + + if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_LIMIT_RATE) { + return NGX_OK; + } n = ngx_atoi(h->value.data, h->value.len); @@ -3283,16 +3292,23 @@ static ngx_int_t ngx_http_upstream_process_buffering(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { - u_char c0, c1, c2; - - if (r->upstream->conf->change_buffering) { + u_char c0, c1, c2; + ngx_http_upstream_t *u; + + u = r->upstream; + + if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_BUFFERING) { + return NGX_OK; + } + + if (u->conf->change_buffering) { if (h->value.len == 2) { c0 = ngx_tolower(h->value.data[0]); c1 = ngx_tolower(h->value.data[1]); if (c0 == 'n' && c1 == 'o') { - r->upstream->buffering = 0; + u->buffering = 0; } } else if (h->value.len == 3) { @@ -3301,7 +3317,7 @@ ngx_http_upstream_process_buffering(ngx_ c2 = ngx_tolower(h->value.data[2]); if (c0 == 'y' && c1 == 'e' && c2 == 's') { - r->upstream->buffering = 1; + u->buffering = 1; } } } @@ -3314,6 +3330,10 @@ static ngx_int_t ngx_http_upstream_process_charset(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { + if (r->upstream->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_CHARSET) { + return NGX_OK; + } + r->headers_out.override_charset = &h->value; return NGX_OK; diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -44,6 +44,9 @@ #define NGX_HTTP_UPSTREAM_IGN_EXPIRES 0x00000008 #define NGX_HTTP_UPSTREAM_IGN_CACHE_CONTROL 0x00000010 #define NGX_HTTP_UPSTREAM_IGN_SET_COOKIE 0x00000020 +#define NGX_HTTP_UPSTREAM_IGN_XA_LIMIT_RATE 0x00000040 +#define NGX_HTTP_UPSTREAM_IGN_XA_BUFFERING 0x00000080 +#define NGX_HTTP_UPSTREAM_IGN_XA_CHARSET 0x00000100 typedef struct { From mdounin at mdounin.ru Tue Sep 6 15:58:13 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:13 +0400 Subject: [PATCH 16 of 25] Fix for proxy_store leaving temporary files for subrequests In-Reply-To: References: Message-ID: <844f42dbe78794bad203.1315324693@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324516 -14400 # Node ID 844f42dbe78794bad20366b51b19ee9b9ed32853 # Parent 437a4ad9102ddd776bd866e3b2ff18553104814d Fix for proxy_store leaving temporary files for subrequests. Temporary files might not be removed if the "proxy_store" or "fastcgi_store" directives were used for subrequests (e.g. 
ssi includes) and client closed prematurely connection. Non-active subrequests are finalized out of the control of the upstream module when client closes connection. As a result, code to remove unfinished temporary files in ngx_http_upstream_process_request() wasn't executed. Fix is to move relevant code into ngx_http_upstream_finalize_request() which is called in all cases, either directly or via cleanup handler. Problem was originally noted here: http://nginx.org/pipermail/nginx-ru/2009-April/024597.html Patch was originally posted here (no changes since then): http://nginx.org/pipermail/nginx-ru/2009-May/024766.html Test case is here: http://mdounin.ru/hg/nginx-tests/rev/1d3c82227a05 http://mdounin.ru/hg/nginx-tests/file/tip/proxy-store.t diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2617,7 +2617,6 @@ ngx_http_upstream_process_upstream(ngx_h static void ngx_http_upstream_process_request(ngx_http_request_t *r) { - ngx_uint_t del; ngx_temp_file_t *tf; ngx_event_pipe_t *p; ngx_http_upstream_t *u; @@ -2629,30 +2628,16 @@ ngx_http_upstream_process_request(ngx_ht if (u->store) { - del = p->upstream_error; - - tf = u->pipe->temp_file; - if (p->upstream_eof || p->upstream_done) { + tf = u->pipe->temp_file; + if (u->headers_in.status_n == NGX_HTTP_OK && (u->headers_in.content_length_n == -1 || (u->headers_in.content_length_n == tf->offset))) { ngx_http_upstream_store(r, u); - - } else { - del = 1; - } - } - - if (del && tf->file.fd != NGX_INVALID_FILE) { - - if (ngx_delete_file(tf->file.name.data) == NGX_FILE_ERROR) { - - ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno, - ngx_delete_file_n " \"%s\" failed", - u->pipe->temp_file->file.name.data); + u->store = 0; } } } @@ -2994,6 +2979,18 @@ ngx_http_upstream_finalize_request(ngx_h u->pipe->temp_file->file.fd); } + if (u->store && u->pipe && u->pipe->temp_file + && u->pipe->temp_file->file.fd != NGX_INVALID_FILE) + { + if (ngx_delete_file(u->pipe->temp_file->file.name.data) + == NGX_FILE_ERROR) + { + ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno, + ngx_delete_file_n " \"%s\" failed", + u->pipe->temp_file->file.name.data); + } + } + #if (NGX_HTTP_CACHE) if (r->cache) { From mdounin at mdounin.ru Tue Sep 6 15:58:14 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:14 +0400 Subject: [PATCH 17 of 25] Cache: fix for sending of empty responses In-Reply-To: References: Message-ID: <2a8a9625e90d91f3a35a.1315324694@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324516 -14400 # Node ID 2a8a9625e90d91f3a35a55472ad0770f8e02d96b # Parent 844f42dbe78794bad20366b51b19ee9b9ed32853 Cache: fix for sending of empty responses. Revert wrong fix for empty responses introduced in 0.8.31 and apply new one, rewritten to match things done by static module as close as possible. 
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -853,6 +853,10 @@ ngx_http_cache_send(ngx_http_request_t * ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http file cache send: %s", c->file.name.data); + if (r != r->main && c->length - c->body_start == 0) { + return ngx_http_send_header(r); + } + /* we need to allocate all before the header would be sent */ b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); @@ -865,8 +869,6 @@ ngx_http_cache_send(ngx_http_request_t * return NGX_HTTP_INTERNAL_SERVER_ERROR; } - r->header_only = (c->length - c->body_start) == 0; - rc = ngx_http_send_header(r); if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { @@ -876,7 +878,7 @@ ngx_http_cache_send(ngx_http_request_t * b->file_pos = c->body_start; b->file_last = c->length; - b->in_file = 1; + b->in_file = (c->length - c->body_start) ? 1: 0; b->last_buf = (r == r->main) ? 1: 0; b->last_in_chain = 1; From mdounin at mdounin.ru Tue Sep 6 15:58:15 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:15 +0400 Subject: [PATCH 18 of 25] Cache: fix for sending of stale responses In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1315324516 -14400 # Node ID a5f19d575163e79c38bdf7e67dc22a72ebb3a7dd # Parent 2a8a9625e90d91f3a35a55472ad0770f8e02d96b Cache: fix for sending of stale responses. For normal cached responses ngx_http_cache_send() sends last buffer and then request finalized via ngx_http_finalize_request() call, i.e. everything is ok. But for stale responses (i.e. when upstream died, but we have something in cache) the same ngx_http_cache_send() sends last buffer, but then in ngx_http_upstream_finalize_request() another last buffer is send. This causes duplicate final chunk to appear if chunked encoding is used (and resulting problems with keepalive connections and so on). Fix this by not sending in ngx_http_upstream_finalize_request() another last buffer if we know response was from cache. diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3027,7 +3027,12 @@ ngx_http_upstream_finalize_request(ngx_h r->connection->log->action = "sending to client"; - if (rc == 0) { + if (rc == 0 +#if (NGX_HTTP_CACHE) + && !r->cached +#endif + ) + { rc = ngx_http_send_special(r, NGX_HTTP_LAST); } From mdounin at mdounin.ru Tue Sep 6 15:58:16 2011 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 06 Sep 2011 19:58:16 +0400 Subject: [PATCH 19 of 25] Variables: honor no_cacheable for not_found variables In-Reply-To: References: Message-ID: <3b75127939ee38809e08.1315324696@vm-bsd.mdounin.ru> # HG changeset patch # User Maxim Dounin # Date 1315324516 -14400 # Node ID 3b75127939ee38809e0847f09de5c139f241053e # Parent a5f19d575163e79c38bdf7e67dc22a72ebb3a7dd Variables: honor no_cacheable for not_found variables. Variables with not_found flag set follow the same rules as ones with valid flag set. Make sure ngx_http_get_flushed_variable() will flush non-cacheable variables with not_found flag set. This fixes at least one known problem with $args not available in subrequest (with args) when there were no args in main request and $args variable was queried in main request (reported by Laurence Rowe aka elro on irc). Also this eliminates unneeded call to ngx_http_get_indexed_variable() in cacheable case (as it will return cached value anyway). 
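A sketch of the kind of configuration involved (hypothetical, not a verified reproduction): the main request comes without arguments but evaluates $args, and an SSI subrequest with its own arguments should still see them:

    # hypothetical sketch: main request queries $args, subrequest has args
    location / {
        ssi         on;
        add_header  X-Main-Args  "$args";
        # the page contains: <!--# include virtual="/sub?foo=bar" -->
    }

    location /sub {
        return 200 "sub args: $args";
    }
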
diff --git a/src/http/ngx_http_variables.c b/src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c
+++ b/src/http/ngx_http_variables.c
@@ -427,7 +427,7 @@ ngx_http_get_flushed_variable(ngx_http_r
     v = &r->variables[index];
 
-    if (v->valid) {
+    if (v->valid || v->not_found) {
         if (!v->no_cacheable) {
             return v;
         }

From mdounin at mdounin.ru  Tue Sep  6 15:58:17 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 06 Sep 2011 19:58:17 +0400
Subject: [PATCH 20 of 25] Core: protection from subrequest loops
In-Reply-To: 
References: 
Message-ID: 

# HG changeset patch
# User Maxim Dounin 
# Date 1315324516 -14400
# Node ID e854e5abda69f3bdee0f87425f9167dc8cd6adca
# Parent  3b75127939ee38809e0847f09de5c139f241053e
Core: protection from subrequest loops.

Without protection a subrequest loop results in r->count overflow and
SIGSEGV.  Protection was broken in 0.7.25.

Note that this also limits the number of parallel subrequests.  This wasn't
exactly the case before 0.7.25 as local subrequests were completed directly.

See here for details:
http://nginx.org/pipermail/nginx-ru/2010-February/032184.html

diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c
+++ b/src/http/ngx_http_core_module.c
@@ -2455,7 +2455,6 @@ ngx_http_subrequest(ngx_http_request_t *
     sr->start_sec = tp->sec;
     sr->start_msec = tp->msec;
 
-    r->main->subrequests++;
     r->main->count++;
 
     *psr = sr;
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1978,6 +1978,7 @@ ngx_http_finalize_request(ngx_http_reque
         if (r == c->data) {
 
             r->main->count--;
+            r->main->subrequests++;
 
             if (!r->logged) {

From mdounin at mdounin.ru  Tue Sep  6 15:58:18 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 06 Sep 2011 19:58:18 +0400
Subject: [PATCH 21 of 25] Core: protection from cycles with named locations and post_action
In-Reply-To: 
References: 
Message-ID: <1c8c48040004bee990fc.1315324698@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1315324516 -14400
# Node ID 1c8c48040004bee990fc2dd984d27e49ca80b017
# Parent  e854e5abda69f3bdee0f87425f9167dc8cd6adca
Core: protection from cycles with named locations and post_action.

Now redirects to named locations are counted against the normal uri changes
limit, and post_action respects this limit as well.  As a result at least
the following (bad) configurations no longer trigger infinite cycles:

1. Post action which recursively triggers post action:

    location / {
        post_action /index.html;
    }

2. Post action pointing to nonexistent named location:

    location / {
        post_action @nonexistent;
    }

3. Recursive error page for 500 (Internal Server Error) pointing to a
   nonexistent named location:

    location / {
        recursive_error_pages on;
        error_page 500 @nonexistent;
        return 500;
    }

diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c
+++ b/src/http/ngx_http_core_module.c
@@ -2525,6 +2525,16 @@ ngx_http_named_location(ngx_http_request
     ngx_http_core_main_conf_t  *cmcf;
 
     r->main->count++;
+    r->uri_changes--;
+
+    if (r->uri_changes == 0) {
+        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
+                      "rewrite or internal redirection cycle "
+                      "while redirect to named location \"%V\"", name);
+
+        ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
+        return NGX_DONE;
+    }
 
     cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module);
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -2896,6 +2896,10 @@ ngx_http_post_action(ngx_http_request_t
         return NGX_DECLINED;
     }
 
+    if (r->post_action && r->uri_changes == 0) {
+        return NGX_DECLINED;
+    }
+
     ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                    "post action: \"%V\"", &clcf->post_action);

From mdounin at mdounin.ru  Tue Sep  6 15:58:19 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 06 Sep 2011 19:58:19 +0400
Subject: [PATCH 22 of 25] Autoindex: escape '?' in file names
In-Reply-To: 
References: 
Message-ID: <1faaec031ff21b88f184.1315324699@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1315324516 -14400
# Node ID 1faaec031ff21b88f1845bd415dcb61251d7443b
# Parent  1c8c48040004bee990fc2dd984d27e49ca80b017
Autoindex: escape '?' in file names.

For files with '?' in their names autoindex generated links with '?' not
escaped.  This resulted in effectively truncated links as '?' indicates
query string start.

This is an updated version of the patch originally posted at [1].  It
introduces a generic NGX_ESCAPE_URI_COMPONENT which escapes everything but
unreserved characters as per RFC 3986.  This approach also renders the
special colon processing unneeded (as colon is percent-encoded now), so
it's dropped accordingly.

[1] http://nginx.org/pipermail/nginx-devel/2010-February/000112.html

Reported by Konstantin Leonov.

diff --git a/src/core/ngx_string.c b/src/core/ngx_string.c
--- a/src/core/ngx_string.c
+++ b/src/core/ngx_string.c
@@ -1380,6 +1380,26 @@ ngx_escape_uri(u_char *dst, u_char *src,
         0xffffffff  /* 1111 1111 1111 1111  1111 1111 1111 1111 */
     };
 
+    /* not ALPHA, DIGIT, "-", ".", "_", "~" */
+
+    static uint32_t   uri_component[] = {
+        0xffffffff, /* 1111 1111 1111 1111  1111 1111 1111 1111 */
+
+                    /* ?>=< ;:98 7654 3210  /.-, +*)( '&%$ #"!  */
+        0xfc009fff, /* 1111 1100 0000 0000  1001 1111 1111 1111 */
+
+                    /* _^]\ [ZYX WVUT SRQP  ONML KJIH GFED CBA@ */
+        0x78000001, /* 0111 1000 0000 0000  0000 0000 0000 0001 */
+
+                    /* ~}| {zyx wvut srqp  onml kjih gfed cba`  */
+        0xb8000001, /* 1011 1000 0000 0000  0000 0000 0000 0001 */
+
+        0xffffffff, /* 1111 1111 1111 1111  1111 1111 1111 1111 */
+        0xffffffff, /* 1111 1111 1111 1111  1111 1111 1111 1111 */
+        0xffffffff, /* 1111 1111 1111 1111  1111 1111 1111 1111 */
+        0xffffffff  /* 1111 1111 1111 1111  1111 1111 1111 1111 */
+    };
+
     /* " ", "#", """, "%", "'", %00-%1F, %7F-%FF */
 
     static uint32_t   html[] = {
@@ -1443,7 +1463,7 @@ ngx_escape_uri(u_char *dst, u_char *src,
                     /* mail_auth is the same as memcached */
 
     static uint32_t  *map[] =
-        { uri, args, html, refresh, memcached, memcached };
+        { uri, args, uri_component, html, refresh, memcached, memcached };
 
     escape = map[type];
 
diff --git a/src/core/ngx_string.h b/src/core/ngx_string.h
--- a/src/core/ngx_string.h
+++ b/src/core/ngx_string.h
@@ -189,12 +189,13 @@ size_t ngx_utf8_length(u_char *p, size_t
 u_char *ngx_utf8_cpystrn(u_char *dst, u_char *src, size_t n, size_t len);
 
-#define NGX_ESCAPE_URI            0
-#define NGX_ESCAPE_ARGS           1
-#define NGX_ESCAPE_HTML           2
-#define NGX_ESCAPE_REFRESH        3
-#define NGX_ESCAPE_MEMCACHED      4
-#define NGX_ESCAPE_MAIL_AUTH      5
+#define NGX_ESCAPE_URI            0
+#define NGX_ESCAPE_ARGS           1
+#define NGX_ESCAPE_URI_COMPONENT  2
+#define NGX_ESCAPE_HTML           3
+#define NGX_ESCAPE_REFRESH        4
+#define NGX_ESCAPE_MEMCACHED      5
+#define NGX_ESCAPE_MAIL_AUTH      6
 
 #define NGX_UNESCAPE_URI       1
 #define NGX_UNESCAPE_REDIRECT  2
 
diff --git a/src/http/modules/ngx_http_autoindex_module.c b/src/http/modules/ngx_http_autoindex_module.c
--- a/src/http/modules/ngx_http_autoindex_module.c
+++ b/src/http/modules/ngx_http_autoindex_module.c
@@ -28,7 +28,6 @@ typedef struct {
     size_t         escape;
 
     unsigned       dir:1;
-    unsigned       colon:1;
 
     time_t         mtime;
     off_t          size;
@@ -338,7 +337,7 @@ ngx_http_autoindex_handler(ngx_http_requ
         ngx_cpystrn(entry->name.data, ngx_de_name(&dir), len + 1);
 
         entry->escape = 2 * ngx_escape_uri(NULL, ngx_de_name(&dir), len,
-                                           NGX_ESCAPE_HTML);
+                                           NGX_ESCAPE_URI_COMPONENT);
 
         if (utf8) {
             entry->utf_len = ngx_utf8_length(entry->name.data, entry->name.len);
@@ -346,8 +345,6 @@ ngx_http_autoindex_handler(ngx_http_requ
             entry->utf_len = len;
         }
 
-        entry->colon = (ngx_strchr(entry->name.data, ':') != NULL);
-
         entry->dir = ngx_de_is_dir(&dir);
         entry->mtime = ngx_de_mtime(&dir);
         entry->size = ngx_de_size(&dir);
@@ -373,7 +370,7 @@ ngx_http_autoindex_handler(ngx_http_requ
               + entry[i].name.len + entry[i].escape
               + 1                                          /* 1 is for "/" */
               + sizeof("\">") - 1
-              + entry[i].name.len - entry[i].utf_len + entry[i].colon * 2
+              + entry[i].name.len - entry[i].utf_len
               + NGX_HTTP_AUTOINDEX_NAME_LEN + sizeof(">") - 2
               + sizeof("") - 1
               + sizeof(" 28-Sep-1970 12:00 ") - 1
@@ -406,14 +403,9 @@ ngx_http_autoindex_handler(ngx_http_requ
     for (i = 0; i < entries.nelts; i++) {
         b->last = ngx_cpymem(b->last, "last++ = '.';
-            *b->last++ = '/';
-        }
-
         if (entry[i].escape) {
             ngx_escape_uri(b->last, entry[i].name.data, entry[i].name.len,
-                           NGX_ESCAPE_HTML);
+                           NGX_ESCAPE_URI_COMPONENT);
 
             b->last += entry[i].name.len + entry[i].escape;

From mdounin at mdounin.ru  Tue Sep  6 15:58:20 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 06 Sep 2011 19:58:20 +0400
Subject: [PATCH 23 of 25] Autoindex: escape html in file names
In-Reply-To: 
References: 
Message-ID: <937024a7294be79e4d64.1315324700@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1315324516 -14400
# Node ID 937024a7294be79e4d64ecb0579884945cebcd0a
# Parent  1faaec031ff21b88f1845bd415dcb61251d7443b
Autoindex: escape html in file names.

diff --git a/src/http/modules/ngx_http_autoindex_module.c b/src/http/modules/ngx_http_autoindex_module.c
--- a/src/http/modules/ngx_http_autoindex_module.c
+++ b/src/http/modules/ngx_http_autoindex_module.c
@@ -26,6 +26,7 @@ typedef struct {
     ngx_str_t      name;
     size_t         utf_len;
     size_t         escape;
+    size_t         escape_html;
 
     unsigned       dir:1;
@@ -137,7 +138,7 @@ ngx_http_autoindex_handler(ngx_http_requ
 {
     u_char         *last, *filename, scale;
     off_t           length;
-    size_t          len, utf_len, allocated, root;
+    size_t          len, char_len, escape_html, allocated, root;
     ngx_tm_t        tm;
     ngx_err_t       err;
     ngx_buf_t      *b;
@@ -339,6 +340,9 @@ ngx_http_autoindex_handler(ngx_http_requ
         entry->escape = 2 * ngx_escape_uri(NULL, ngx_de_name(&dir), len,
                                            NGX_ESCAPE_URI_COMPONENT);
 
+        entry->escape_html = ngx_escape_html(NULL, entry->name.data,
+                                             entry->name.len);
+
         if (utf8) {
             entry->utf_len = ngx_utf8_length(entry->name.data, entry->name.len);
         } else {
@@ -355,10 +359,12 @@ ngx_http_autoindex_handler(ngx_http_requ
                           ngx_close_dir_n " \"%s\" failed", &path);
     }
 
+    escape_html = ngx_escape_html(NULL, r->uri.data, r->uri.len);
+
     len = sizeof(title) - 1
-          + r->uri.len
+          + r->uri.len + escape_html
           + sizeof(header) - 1
-          + r->uri.len
+          + r->uri.len + escape_html
           + sizeof("") - 1
           + sizeof("../" CRLF) - 1
           + sizeof("") - 1
@@ -371,6 +377,7 @@ ngx_http_autoindex_handler(ngx_http_requ
               + 1                                          /* 1 is for "/" */
               + sizeof("\">") - 1
               + entry[i].name.len - entry[i].utf_len
+              + entry[i].escape_html
               + NGX_HTTP_AUTOINDEX_NAME_LEN + sizeof(">") - 2
               + sizeof("") - 1
               + sizeof(" 28-Sep-1970 12:00 ") - 1
@@ -390,9 +397,18 @@ ngx_http_autoindex_handler(ngx_http_requ
     }
 
     b->last = ngx_cpymem(b->last, title, sizeof(title) - 1);
-    b->last = ngx_cpymem(b->last, r->uri.data, r->uri.len);
-    b->last = ngx_cpymem(b->last, header, sizeof(header) - 1);
-    b->last = ngx_cpymem(b->last, r->uri.data, r->uri.len);
+
+    if (escape_html) {
+        b->last = (u_char *) ngx_escape_html(b->last, r->uri.data, r->uri.len);
+        b->last = ngx_cpymem(b->last, header, sizeof(header) - 1);
+        b->last = (u_char *) ngx_escape_html(b->last, r->uri.data, r->uri.len);
+
+    } else {
+        b->last = ngx_cpymem(b->last, r->uri.data, r->uri.len);
+        b->last = ngx_cpymem(b->last, header, sizeof(header) - 1);
+        b->last = ngx_cpymem(b->last, r->uri.data, r->uri.len);
+    }
+
     b->last = ngx_cpymem(b->last, "", sizeof("") - 1);
 
     b->last = ngx_cpymem(b->last, "../" CRLF,
@@ -425,20 +441,41 @@ ngx_http_autoindex_handler(ngx_http_requ
 
         if (entry[i].name.len != len) {
             if (len > NGX_HTTP_AUTOINDEX_NAME_LEN) {
-                utf_len = NGX_HTTP_AUTOINDEX_NAME_LEN - 3 + 1;
+                char_len = NGX_HTTP_AUTOINDEX_NAME_LEN - 3 + 1;
 
             } else {
-                utf_len = NGX_HTTP_AUTOINDEX_NAME_LEN + 1;
+                char_len = NGX_HTTP_AUTOINDEX_NAME_LEN + 1;
             }
 
+            last = b->last;
             b->last = ngx_utf8_cpystrn(b->last, entry[i].name.data,
-                                       utf_len, entry[i].name.len + 1);
+                                       char_len, entry[i].name.len + 1);
+
+            if (entry[i].escape_html) {
+                b->last = (u_char *) ngx_escape_html(last, entry[i].name.data,
+                                                     b->last - last);
+            }
+
             last = b->last;
 
         } else {
-            b->last = ngx_cpystrn(b->last, entry[i].name.data,
-                                  NGX_HTTP_AUTOINDEX_NAME_LEN + 1);
-            last = b->last - 3;
+            if (entry[i].escape_html) {
+                if (len > NGX_HTTP_AUTOINDEX_NAME_LEN) {
+                    char_len = NGX_HTTP_AUTOINDEX_NAME_LEN - 3;
+
+                } else {
+                    char_len = len;
+                }
+
+                b->last = (u_char *) ngx_escape_html(b->last,
+                                                  entry[i].name.data, char_len);
+                last = b->last;
+
+            } else {
+                b->last = ngx_cpystrn(b->last, entry[i].name.data,
+                                      NGX_HTTP_AUTOINDEX_NAME_LEN + 1);
+                last = b->last - 3;
+            }
         }
 
         if (len > NGX_HTTP_AUTOINDEX_NAME_LEN) {


From mdounin at mdounin.ru  Tue Sep  6 15:58:21 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 06 Sep 2011 19:58:21 +0400
Subject: [PATCH 24 of 25] Unbreak build with embedded perl and --with-openssl
In-Reply-To: 
References: 
Message-ID: 

# HG changeset patch
# User Maxim Dounin 
# Date 1315324516 -14400
# Node ID bace4194977872078bfa7586303dc3565cda481a
# Parent  937024a7294be79e4d64ecb0579884945cebcd0a
Unbreak build with embedded perl and --with-openssl.

diff --git a/auto/lib/perl/make b/auto/lib/perl/make
--- a/auto/lib/perl/make
+++ b/auto/lib/perl/make
@@ -27,6 +27,7 @@ cat << END                              
 		&& NGX_PM_CFLAGS="\$(NGX_PM_CFLAGS) -g $NGX_CC_OPT"	\
 			NGX_PCRE=$PCRE					\
 			NGX_OBJS=$NGX_OBJS				\
+			NGX_OPENSSL=$OPENSSL				\
 		$NGX_PERL Makefile.PL					\
 			LIB=$NGX_PERL_MODULES				\
 			INSTALLSITEMAN3DIR=$NGX_PERL_MODULES_MAN
diff --git a/src/http/modules/perl/Makefile.PL b/src/http/modules/perl/Makefile.PL
--- a/src/http/modules/perl/Makefile.PL
+++ b/src/http/modules/perl/Makefile.PL
@@ -24,7 +24,11 @@ WriteMakefile(
                          "-I ../../../../../$ENV{NGX_OBJS} " .
                          ($ENV{NGX_PCRE} =~ /^(YES|NO)/ ? "" :
                              ($ENV{NGX_PCRE} =~ m#^/# ? "-I $ENV{NGX_PCRE} " :
-                                  "-I ../../../../../$ENV{NGX_PCRE} ")),
+                                  "-I ../../../../../$ENV{NGX_PCRE} ")) .
+                         ($ENV{NGX_OPENSSL} =~ /^(YES|NO)/ ? "" :
+                             ($ENV{NGX_OPENSSL} =~ m#^/# ?
+                                  "-I $ENV{NGX_OPENSSL}/.openssl/include " :
+                      "-I ../../../../../$ENV{NGX_OPENSSL}/.openssl/include ")),
 
     depend => {
         'nginx.c'     =>


From mdounin at mdounin.ru  Tue Sep  6 15:58:22 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 06 Sep 2011 19:58:22 +0400
Subject: [PATCH 25 of 25] Time parsing cleanup
In-Reply-To: 
References: 
Message-ID: <6d7dc429ad2bbb9ffe09.1315324702@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1315324516 -14400
# Node ID 6d7dc429ad2bbb9ffe095fe970f0c6c7b8242397
# Parent  bace4194977872078bfa7586303dc3565cda481a
Time parsing cleanup.

Nuke NGX_PARSE_LARGE_TIME, it hasn't been used since 0.6.30.  The only error
ngx_parse_time() can currently return is NGX_ERROR; check for it explicitly
and make sure to cast it to the appropriate type (either time_t or ngx_msec_t)
to avoid signedness warnings on platforms with unsigned time_t (notably QNX).
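
As a minimal sketch of the calling convention this settles on (a hypothetical
helper, not part of the patch; real handlers use the usual slot signature):

    static char *
    ngx_conf_set_example_time(ngx_conf_t *cf, ngx_str_t *value, time_t *tp)
    {
        time_t  t;

        /* ngx_parse_time() returns NGX_ERROR on bad input; cast the sentinel
         * so the comparison stays valid when time_t is unsigned (e.g. QNX) */
        t = ngx_parse_time(value, 1);

        if (t == (time_t) NGX_ERROR) {
            return "invalid value";
        }

        *tp = t;

        return NGX_CONF_OK;
    }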

diff --git a/src/core/ngx_conf_file.c b/src/core/ngx_conf_file.c
--- a/src/core/ngx_conf_file.c
+++ b/src/core/ngx_conf_file.c
@@ -1294,10 +1294,6 @@ ngx_conf_set_msec_slot(ngx_conf_t *cf, n
         return "invalid value";
     }
 
-    if (*msp == (ngx_msec_t) NGX_PARSE_LARGE_TIME) {
-        return "value must be less than 597 hours";
-    }
-
     if (cmd->post) {
         post = cmd->post;
         return post->post_handler(cf, post, msp);
@@ -1325,14 +1321,10 @@ ngx_conf_set_sec_slot(ngx_conf_t *cf, ng
     value = cf->args->elts;
 
     *sp = ngx_parse_time(&value[1], 1);
-    if (*sp == NGX_ERROR) {
+    if (*sp == (time_t) NGX_ERROR) {
         return "invalid value";
     }
 
-    if (*sp == NGX_PARSE_LARGE_TIME) {
-        return "value must be less than 68 years";
-    }
-
     if (cmd->post) {
         post = cmd->post;
         return post->post_handler(cf, post, sp);
diff --git a/src/core/ngx_parse.h b/src/core/ngx_parse.h
--- a/src/core/ngx_parse.h
+++ b/src/core/ngx_parse.h
@@ -12,9 +12,6 @@
 #include 
 
 
-#define NGX_PARSE_LARGE_TIME  -2
-
-
 ssize_t ngx_parse_size(ngx_str_t *line);
 off_t ngx_parse_offset(ngx_str_t *line);
 ngx_int_t ngx_parse_time(ngx_str_t *line, ngx_uint_t sec);
diff --git a/src/http/modules/ngx_http_headers_filter_module.c b/src/http/modules/ngx_http_headers_filter_module.c
--- a/src/http/modules/ngx_http_headers_filter_module.c
+++ b/src/http/modules/ngx_http_headers_filter_module.c
@@ -531,7 +531,7 @@ ngx_http_headers_expires(ngx_conf_t *cf,
 
     hcf->expires_time = ngx_parse_time(&value[n], 1);
 
-    if (hcf->expires_time == NGX_ERROR) {
+    if (hcf->expires_time == (time_t) NGX_ERROR) {
         return "invalid value";
     }
 
@@ -541,10 +541,6 @@ ngx_http_headers_expires(ngx_conf_t *cf,
         return "daily time value must be less than 24 hours";
     }
 
-    if (hcf->expires_time == NGX_PARSE_LARGE_TIME) {
-        return "value must be less than 68 years";
-    }
-
     if (minus) {
         hcf->expires_time = - hcf->expires_time;
     }
diff --git a/src/http/modules/ngx_http_log_module.c b/src/http/modules/ngx_http_log_module.c
--- a/src/http/modules/ngx_http_log_module.c
+++ b/src/http/modules/ngx_http_log_module.c
@@ -1242,7 +1242,7 @@ ngx_http_log_open_file_cache(ngx_conf_t 
             s.data = value[i].data + 9;
 
             inactive = ngx_parse_time(&s, 1);
-            if (inactive < 0) {
+            if (inactive == (time_t) NGX_ERROR) {
                 goto failed;
             }
 
@@ -1265,7 +1265,7 @@ ngx_http_log_open_file_cache(ngx_conf_t 
             s.data = value[i].data + 6;
 
             valid = ngx_parse_time(&s, 1);
-            if (valid < 0) {
+            if (valid == (time_t) NGX_ERROR) {
                 goto failed;
             }
 
diff --git a/src/http/modules/ngx_http_userid_filter_module.c b/src/http/modules/ngx_http_userid_filter_module.c
--- a/src/http/modules/ngx_http_userid_filter_module.c
+++ b/src/http/modules/ngx_http_userid_filter_module.c
@@ -773,14 +773,10 @@ ngx_http_userid_expires(ngx_conf_t *cf, 
     }
 
     ucf->expires = ngx_parse_time(&value[1], 1);
-    if (ucf->expires == NGX_ERROR) {
+    if (ucf->expires == (time_t) NGX_ERROR) {
         return "invalid value";
     }
 
-    if (ucf->expires == NGX_PARSE_LARGE_TIME) {
-        return "value must be less than 68 years";
-    }
-
     return NGX_CONF_OK;
 }
 
diff --git a/src/http/ngx_http_busy_lock.c b/src/http/ngx_http_busy_lock.c
--- a/src/http/ngx_http_busy_lock.c
+++ b/src/http/ngx_http_busy_lock.c
@@ -273,7 +273,7 @@ char *ngx_http_set_busy_lock_slot(ngx_co
             line.data = value[i].data + 2;
 
             bl->timeout = ngx_parse_time(&line, 1);
-            if (bl->timeout == NGX_ERROR) {
+            if (bl->timeout == (time_t) NGX_ERROR) {
                 invalid = 1;
                 break;
             }
diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c
+++ b/src/http/ngx_http_core_module.c
@@ -4418,7 +4418,7 @@ ngx_http_core_open_file_cache(ngx_conf_t
             s.data = value[i].data + 9;
 
             inactive = ngx_parse_time(&s, 1);
-            if (inactive < 0) {
+            if (inactive == (time_t) NGX_ERROR) {
                 goto failed;
             }
 
@@ -4505,24 +4505,16 @@ ngx_http_core_keepalive(ngx_conf_t *cf, 
         return "invalid value";
     }
 
-    if (clcf->keepalive_timeout == (ngx_msec_t) NGX_PARSE_LARGE_TIME) {
-        return "value must be less than 597 hours";
-    }
-
     if (cf->args->nelts == 2) {
         return NGX_CONF_OK;
     }
 
     clcf->keepalive_header = ngx_parse_time(&value[2], 1);
 
-    if (clcf->keepalive_header == NGX_ERROR) {
+    if (clcf->keepalive_header == (time_t) NGX_ERROR) {
         return "invalid value";
     }
 
-    if (clcf->keepalive_header == NGX_PARSE_LARGE_TIME) {
-        return "value must be less than 68 years";
-    }
-
     return NGX_CONF_OK;
 }
 
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c
+++ b/src/http/ngx_http_file_cache.c
@@ -1458,7 +1458,8 @@ ngx_http_file_cache_set_slot(ngx_conf_t 
     time_t                  inactive;
     ssize_t                 size;
     ngx_str_t               s, name, *value;
-    ngx_int_t               loader_files, loader_sleep, loader_threshold;
+    ngx_int_t               loader_files;
+    ngx_msec_t              loader_sleep, loader_threshold;
     ngx_uint_t              i, n;
     ngx_http_file_cache_t  *cache;
 
@@ -1565,7 +1566,7 @@ ngx_http_file_cache_set_slot(ngx_conf_t 
             s.data = value[i].data + 9;
 
             inactive = ngx_parse_time(&s, 1);
-            if (inactive < 0) {
+            if (inactive == (time_t) NGX_ERROR) {
                 ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                    "invalid inactive value \"%V\"", &value[i]);
                 return NGX_CONF_ERROR;
@@ -1607,7 +1608,7 @@ ngx_http_file_cache_set_slot(ngx_conf_t 
             s.data = value[i].data + 13;
 
             loader_sleep = ngx_parse_time(&s, 0);
-            if (loader_sleep < 0) {
+            if (loader_sleep == (ngx_msec_t) NGX_ERROR) {
                 ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                            "invalid loader_sleep value \"%V\"", &value[i]);
                 return NGX_CONF_ERROR;
@@ -1622,7 +1623,7 @@ ngx_http_file_cache_set_slot(ngx_conf_t 
             s.data = value[i].data + 17;
 
             loader_threshold = ngx_parse_time(&s, 0);
-            if (loader_threshold < 0) {
+            if (loader_threshold == (ngx_msec_t) NGX_ERROR) {
                 ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                            "invalid loader_threshold value \"%V\"", &value[i]);
                 return NGX_CONF_ERROR;
@@ -1649,8 +1650,8 @@ ngx_http_file_cache_set_slot(ngx_conf_t 
     cache->path->conf_file = cf->conf_file->file.name.data;
     cache->path->line = cf->conf_file->line;
     cache->loader_files = loader_files;
-    cache->loader_sleep = (ngx_msec_t) loader_sleep;
-    cache->loader_threshold = (ngx_msec_t) loader_threshold;
+    cache->loader_sleep = loader_sleep;
+    cache->loader_threshold = loader_threshold;
 
     if (ngx_add_path(cf, &cache->path) != NGX_OK) {
         return NGX_CONF_ERROR;
@@ -1704,7 +1705,7 @@ ngx_http_file_cache_valid_set_slot(ngx_c
     n = cf->args->nelts - 1;
 
     valid = ngx_parse_time(&value[n], 1);
-    if (valid < 0) {
+    if (valid == (time_t) NGX_ERROR) {
         ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                            "invalid time value \"%V\"", &value[n]);
         return NGX_CONF_ERROR;
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -4163,7 +4163,7 @@ ngx_http_upstream_server(ngx_conf_t *cf,
 
             fail_timeout = ngx_parse_time(&s, 1);
 
-            if (fail_timeout == NGX_ERROR) {
+            if (fail_timeout == (time_t) NGX_ERROR) {
                 goto invalid;
             }
 


From mdounin at mdounin.ru  Fri Sep  9 10:32:41 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 09 Sep 2011 14:32:41 +0400
Subject: [PATCH 0 of 2] reduce memory footprint for long-lived requests
Message-ID: 

Hello!

The following 2 patches address an issue seen with long-lived requests:
they tend to consume lots of memory over time if chunked encoding is used,
and the memory is only freed after request termination.

The first patch introduces buffer reuse in the chunked filter.  It mostly
resolves the problem, though there is still a small "leak" related to chain
links (ngx_chain_t structures, 2 pointers) not always being reused.

The second patch introduces proper reuse of chain links, though it requires
an API change in the ngx_chain_update_chains() function.  We may introduce
another function to preserve the API, but I'm not really sure we want to.
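
The reuse pattern both patches build on is the usual free/busy bookkeeping,
roughly like this (sketch; ctx stands for a per-request filter context as
introduced in the first patch):

    /* take a chain link and buf from ctx->free, or allocate from r->pool */
    tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
    if (tl == NULL) {
        return NGX_ERROR;
    }

    tl->buf->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
    /* ... fill tl->buf and link it into the outgoing chain ... */

    rc = ngx_http_next_body_filter(r, out);

    /* move fully sent buffers from busy back to free for the next call */
    ngx_chain_update_chains(&ctx->free, &ctx->busy, &out,
                            (ngx_buf_tag_t) &ngx_http_chunked_filter_module);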

Maxim Dounin


From mdounin at mdounin.ru  Fri Sep  9 10:32:42 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 09 Sep 2011 14:32:42 +0400
Subject: [PATCH 1 of 2] Buffers reuse in chunked filter
In-Reply-To: 
References: 
Message-ID: 

# HG changeset patch
# User Maxim Dounin 
# Date 1315564269 -14400
# Node ID b667ed67c0b9046b94291fa6f52e850006011718
# Parent  014764a85840606c90317e9f44f2b9fa139cbc8b
Buffers reuse in chunked filter.

There were 2 buffers allocated on each buffer chain sent through the chunked
filter (one buffer for the chunk size, another one for the trailing CRLF,
about 120 bytes in total on 32-bit platforms).  This resulted in large memory
consumption with long-lived requests sending many buffer chains.  A usual
example of a problematic scenario is streaming through a proxy with
proxy_buffering set to off.

The introduced buffer reuse reduces memory consumption in the above
problematic scenario.

See here for initial report:
http://mailman.nginx.org/pipermail/nginx/2010-April/019814.html
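
A minimal configuration that exercises this path might look like the
following (illustration only; the upstream name is a placeholder):

    location /stream/ {
        # a response of unknown length is relayed to the client as it
        # arrives, so every forwarded chain passes through the chunked filter
        proxy_pass http://backend;
        proxy_buffering off;
    }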

diff --git a/src/http/modules/ngx_http_chunked_filter_module.c b/src/http/modules/ngx_http_chunked_filter_module.c
--- a/src/http/modules/ngx_http_chunked_filter_module.c
+++ b/src/http/modules/ngx_http_chunked_filter_module.c
@@ -9,6 +9,12 @@
 #include 
 
 
+typedef struct {
+    ngx_chain_t         *free;
+    ngx_chain_t         *busy;
+} ngx_http_chunked_filter_ctx_t;
+
+
 static ngx_int_t ngx_http_chunked_filter_init(ngx_conf_t *cf);
 
 
@@ -50,7 +56,8 @@ static ngx_http_output_body_filter_pt   
 static ngx_int_t
 ngx_http_chunked_header_filter(ngx_http_request_t *r)
 {
-    ngx_http_core_loc_conf_t  *clcf;
+    ngx_http_core_loc_conf_t       *clcf;
+    ngx_http_chunked_filter_ctx_t  *ctx;
 
     if (r->headers_out.status == NGX_HTTP_NOT_MODIFIED
         || r->headers_out.status == NGX_HTTP_NO_CONTENT
@@ -70,6 +77,14 @@ ngx_http_chunked_header_filter(ngx_http_
             if (clcf->chunked_transfer_encoding) {
                 r->chunked = 1;
 
+                ctx = ngx_pcalloc(r->pool,
+                                  sizeof(ngx_http_chunked_filter_ctx_t));
+                if (ctx == NULL) {
+                    return NGX_ERROR;
+                }
+
+                ngx_http_set_ctx(r, ctx, ngx_http_chunked_filter_module);
+
             } else {
                 r->keepalive = 0;
             }
@@ -83,17 +98,21 @@ ngx_http_chunked_header_filter(ngx_http_
 static ngx_int_t
 ngx_http_chunked_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
 {
-    u_char       *chunk;
-    off_t         size;
-    ngx_buf_t    *b;
-    ngx_chain_t   out, tail, *cl, *tl, **ll;
+    u_char                         *chunk;
+    off_t                           size;
+    ngx_int_t                       rc;
+    ngx_buf_t                      *b;
+    ngx_chain_t                    *out, *cl, *tl, **ll;
+    ngx_http_chunked_filter_ctx_t  *ctx;
 
     if (in == NULL || !r->chunked || r->header_only) {
         return ngx_http_next_body_filter(r, in);
     }
 
-    out.buf = NULL;
-    ll = &out.next;
+    ctx = ngx_http_get_module_ctx(r, ngx_http_chunked_filter_module);
+
+    out = NULL;
+    ll = &out;
 
     size = 0;
     cl = in;
@@ -127,31 +146,46 @@ ngx_http_chunked_body_filter(ngx_http_re
     }
 
     if (size) {
-        b = ngx_calloc_buf(r->pool);
-        if (b == NULL) {
+        tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
+        if (tl == NULL) {
             return NGX_ERROR;
         }
 
-        /* the "0000000000000000" is 64-bit hexadimal string */
+        b = tl->buf;
+        chunk = b->start;
 
-        chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) - 1);
         if (chunk == NULL) {
-            return NGX_ERROR;
+            /* the "0000000000000000" is 64-bit hexadecimal string */
+
+            chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) - 1);
+            if (chunk == NULL) {
+                return NGX_ERROR;
+            }
+
+            b->start = chunk;
+            b->end = chunk + sizeof("0000000000000000" CRLF) - 1;
         }
 
+        b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
+        b->memory = 0;
         b->temporary = 1;
         b->pos = chunk;
         b->last = ngx_sprintf(chunk, "%xO" CRLF, size);
 
-        out.buf = b;
+        tl->next = out;
+        out = tl;
     }
 
     if (cl->buf->last_buf) {
-        b = ngx_calloc_buf(r->pool);
-        if (b == NULL) {
+        tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
+        if (tl == NULL) {
             return NGX_ERROR;
         }
+ 
+        b = tl->buf;
 
+        b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
+        b->temporary = 0;
         b->memory = 1;
         b->last_buf = 1;
         b->pos = (u_char *) CRLF "0" CRLF CRLF;
@@ -159,35 +193,38 @@ ngx_http_chunked_body_filter(ngx_http_re
 
         cl->buf->last_buf = 0;
 
+        *ll = tl;
+
         if (size == 0) {
             b->pos += 2;
-            out.buf = b;
-            out.next = NULL;
-
-            return ngx_http_next_body_filter(r, &out);
         }
 
-    } else {
-        if (size == 0) {
-            *ll = NULL;
-            return ngx_http_next_body_filter(r, out.next);
-        }
-
-        b = ngx_calloc_buf(r->pool);
-        if (b == NULL) {
+    } else if (size > 0) {
+        tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
+        if (tl == NULL) {
             return NGX_ERROR;
         }
 
+        b = tl->buf;
+
+        b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
+        b->temporary = 0;
         b->memory = 1;
         b->pos = (u_char *) CRLF;
         b->last = b->pos + 2;
+
+        *ll = tl;
+
+    } else {
+        *ll = NULL;
     }
 
-    tail.buf = b;
-    tail.next = NULL;
-    *ll = &tail;
+    rc = ngx_http_next_body_filter(r, out);
 
-    return ngx_http_next_body_filter(r, &out);
+    ngx_chain_update_chains(&ctx->free, &ctx->busy, &out,
+                            (ngx_buf_tag_t) &ngx_http_chunked_filter_module);
+
+    return rc;
 }
 
 


From mdounin at mdounin.ru  Fri Sep  9 10:32:43 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 09 Sep 2011 14:32:43 +0400
Subject: [PATCH 2 of 2] API change: ngx_chain_update_chains() now requires pool
In-Reply-To: 
References: 
Message-ID: <75a67f1c7e4d8affb16d.1315564363@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1315564280 -14400
# Node ID 75a67f1c7e4d8affb16d3f4f029757a8e0d3b455
# Parent  b667ed67c0b9046b94291fa6f52e850006011718
API change: ngx_chain_update_chains() now requires pool.

ngx_chain_update_chains() needs a pool to free chain links used for buffers
with non-matching tags.  Providing one helps to reduce memory consumption
for long-lived requests.
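
The prototype change boils down to the extra pool argument (as in the
ngx_buf.h hunk below):

    /* before */
    void ngx_chain_update_chains(ngx_chain_t **free, ngx_chain_t **busy,
        ngx_chain_t **out, ngx_buf_tag_t tag);

    /* after: the pool lets links carrying foreign tags be returned
       via ngx_free_chain() instead of being kept on the free list */
    void ngx_chain_update_chains(ngx_pool_t *p, ngx_chain_t **free,
        ngx_chain_t **busy, ngx_chain_t **out, ngx_buf_tag_t tag);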

diff --git a/src/core/ngx_buf.c b/src/core/ngx_buf.c
--- a/src/core/ngx_buf.c
+++ b/src/core/ngx_buf.c
@@ -180,7 +180,7 @@ ngx_chain_get_free_buf(ngx_pool_t *p, ng
 
 
 void
-ngx_chain_update_chains(ngx_chain_t **free, ngx_chain_t **busy,
+ngx_chain_update_chains(ngx_pool_t *p, ngx_chain_t **free, ngx_chain_t **busy,
     ngx_chain_t **out, ngx_buf_tag_t tag)
 {
     ngx_chain_t  *cl;
@@ -197,19 +197,21 @@ ngx_chain_update_chains(ngx_chain_t **fr
     *out = NULL;
 
     while (*busy) {
-        if (ngx_buf_size((*busy)->buf) != 0) {
+        cl = *busy;
+
+        if (ngx_buf_size(cl->buf) != 0) {
             break;
         }
 
-        if ((*busy)->buf->tag != tag) {
-            *busy = (*busy)->next;
+        if (cl->buf->tag != tag) {
+            *busy = cl->next;
+            ngx_free_chain(p, cl);
             continue;
         }
 
-        (*busy)->buf->pos = (*busy)->buf->start;
-        (*busy)->buf->last = (*busy)->buf->start;
+        cl->buf->pos = cl->buf->start;
+        cl->buf->last = cl->buf->start;
 
-        cl = *busy;
         *busy = cl->next;
         cl->next = *free;
         *free = cl;
diff --git a/src/core/ngx_buf.h b/src/core/ngx_buf.h
--- a/src/core/ngx_buf.h
+++ b/src/core/ngx_buf.h
@@ -154,8 +154,8 @@ ngx_int_t ngx_chain_writer(void *ctx, ng
 ngx_int_t ngx_chain_add_copy(ngx_pool_t *pool, ngx_chain_t **chain,
     ngx_chain_t *in);
 ngx_chain_t *ngx_chain_get_free_buf(ngx_pool_t *p, ngx_chain_t **free);
-void ngx_chain_update_chains(ngx_chain_t **free, ngx_chain_t **busy,
-    ngx_chain_t **out, ngx_buf_tag_t tag);
+void ngx_chain_update_chains(ngx_pool_t *p, ngx_chain_t **free,
+    ngx_chain_t **busy, ngx_chain_t **out, ngx_buf_tag_t tag);
 
 
 #endif /* _NGX_BUF_H_INCLUDED_ */
diff --git a/src/core/ngx_output_chain.c b/src/core/ngx_output_chain.c
--- a/src/core/ngx_output_chain.c
+++ b/src/core/ngx_output_chain.c
@@ -208,7 +208,8 @@ ngx_output_chain(ngx_output_chain_ctx_t 
             return last;
         }
 
-        ngx_chain_update_chains(&ctx->free, &ctx->busy, &out, ctx->tag);
+        ngx_chain_update_chains(ctx->pool, &ctx->free, &ctx->busy, &out,
+                                ctx->tag);
         last_out = &out;
     }
 }
diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c
--- a/src/event/ngx_event_pipe.c
+++ b/src/event/ngx_event_pipe.c
@@ -638,7 +638,7 @@ ngx_event_pipe_write_to_downstream(ngx_e
             return ngx_event_pipe_drain_chains(p);
         }
 
-        ngx_chain_update_chains(&p->free, &p->busy, &out, p->tag);
+        ngx_chain_update_chains(p->pool, &p->free, &p->busy, &out, p->tag);
 
         for (cl = p->free; cl; cl = cl->next) {
 
diff --git a/src/http/modules/ngx_http_chunked_filter_module.c b/src/http/modules/ngx_http_chunked_filter_module.c
--- a/src/http/modules/ngx_http_chunked_filter_module.c
+++ b/src/http/modules/ngx_http_chunked_filter_module.c
@@ -221,7 +221,7 @@ ngx_http_chunked_body_filter(ngx_http_re
 
     rc = ngx_http_next_body_filter(r, out);
 
-    ngx_chain_update_chains(&ctx->free, &ctx->busy, &out,
+    ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out,
                             (ngx_buf_tag_t) &ngx_http_chunked_filter_module);
 
     return rc;
diff --git a/src/http/modules/ngx_http_gzip_filter_module.c b/src/http/modules/ngx_http_gzip_filter_module.c
--- a/src/http/modules/ngx_http_gzip_filter_module.c
+++ b/src/http/modules/ngx_http_gzip_filter_module.c
@@ -378,7 +378,7 @@ ngx_http_gzip_body_filter(ngx_http_reque
 
         cl = NULL;
 
-        ngx_chain_update_chains(&ctx->free, &ctx->busy, &cl,
+        ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &cl,
                                 (ngx_buf_tag_t) &ngx_http_gzip_filter_module);
         ctx->nomem = 0;
     }
@@ -448,7 +448,7 @@ ngx_http_gzip_body_filter(ngx_http_reque
 
         ngx_http_gzip_filter_free_copy_buf(r, ctx);
 
-        ngx_chain_update_chains(&ctx->free, &ctx->busy, &ctx->out,
+        ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &ctx->out,
                                 (ngx_buf_tag_t) &ngx_http_gzip_filter_module);
         ctx->last_out = &ctx->out;
 
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2382,7 +2382,7 @@ ngx_http_upstream_process_non_buffered_r
                     return;
                 }
 
-                ngx_chain_update_chains(&u->free_bufs, &u->busy_bufs,
+                ngx_chain_update_chains(r->pool, &u->free_bufs, &u->busy_bufs,
                                         &u->out_bufs, u->output.tag);
             }
 


From mat999 at gmail.com  Fri Sep  9 12:01:48 2011
From: mat999 at gmail.com (SplitIce)
Date: Fri, 9 Sep 2011 22:01:48 +1000
Subject: [PATCH 1 of 2] Buffers reuse in chunked filter
In-Reply-To: 
References: 
	
Message-ID: 

Wow, this makes me want to rebuild all my server nodes to include this patch.

On Fri, Sep 9, 2011 at 8:32 PM, Maxim Dounin  wrote:

> # HG changeset patch
> # User Maxim Dounin 
> # Date 1315564269 -14400
> # Node ID b667ed67c0b9046b94291fa6f52e850006011718
> # Parent  014764a85840606c90317e9f44f2b9fa139cbc8b
> Buffers reuse in chunked filter.
>
> There were 2 buffers allocated on each buffer chain sent through chunked
> filter (one buffer for chunk size, another one for trailing CRLF, about
> 120 bytes in total on 32-bit platforms).  This resulted in large memory
> consumption with long-lived requests sending many buffer chains.  Usual
> example of problematic scenario is streaming though proxy with
> proxy_buffering set to off.
>
> Introduced buffers reuse reduces memory consumption in the above
> problematic
> scenario.
>
> See here for initial report:
> http://mailman.nginx.org/pipermail/nginx/2010-April/019814.html
>
> diff --git a/src/http/modules/ngx_http_chunked_filter_module.c
> b/src/http/modules/ngx_http_chunked_filter_module.c
> --- a/src/http/modules/ngx_http_chunked_filter_module.c
> +++ b/src/http/modules/ngx_http_chunked_filter_module.c
> @@ -9,6 +9,12 @@
>  #include 
>
>
> +typedef struct {
> +    ngx_chain_t         *free;
> +    ngx_chain_t         *busy;
> +} ngx_http_chunked_filter_ctx_t;
> +
> +
>  static ngx_int_t ngx_http_chunked_filter_init(ngx_conf_t *cf);
>
>
> @@ -50,7 +56,8 @@ static ngx_http_output_body_filter_pt
>  static ngx_int_t
>  ngx_http_chunked_header_filter(ngx_http_request_t *r)
>  {
> -    ngx_http_core_loc_conf_t  *clcf;
> +    ngx_http_core_loc_conf_t       *clcf;
> +    ngx_http_chunked_filter_ctx_t  *ctx;
>
>     if (r->headers_out.status == NGX_HTTP_NOT_MODIFIED
>         || r->headers_out.status == NGX_HTTP_NO_CONTENT
> @@ -70,6 +77,14 @@ ngx_http_chunked_header_filter(ngx_http_
>             if (clcf->chunked_transfer_encoding) {
>                 r->chunked = 1;
>
> +                ctx = ngx_pcalloc(r->pool,
> +                                  sizeof(ngx_http_chunked_filter_ctx_t));
> +                if (ctx == NULL) {
> +                    return NGX_ERROR;
> +                }
> +
> +                ngx_http_set_ctx(r, ctx, ngx_http_chunked_filter_module);
> +
>             } else {
>                 r->keepalive = 0;
>             }
> @@ -83,17 +98,21 @@ ngx_http_chunked_header_filter(ngx_http_
>  static ngx_int_t
>  ngx_http_chunked_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
>  {
> -    u_char       *chunk;
> -    off_t         size;
> -    ngx_buf_t    *b;
> -    ngx_chain_t   out, tail, *cl, *tl, **ll;
> +    u_char                         *chunk;
> +    off_t                           size;
> +    ngx_int_t                       rc;
> +    ngx_buf_t                      *b;
> +    ngx_chain_t                    *out, *cl, *tl, **ll;
> +    ngx_http_chunked_filter_ctx_t  *ctx;
>
>     if (in == NULL || !r->chunked || r->header_only) {
>         return ngx_http_next_body_filter(r, in);
>     }
>
> -    out.buf = NULL;
> -    ll = &out.next;
> +    ctx = ngx_http_get_module_ctx(r, ngx_http_chunked_filter_module);
> +
> +    out = NULL;
> +    ll = &out;
>
>     size = 0;
>     cl = in;
> @@ -127,31 +146,46 @@ ngx_http_chunked_body_filter(ngx_http_re
>     }
>
>     if (size) {
> -        b = ngx_calloc_buf(r->pool);
> -        if (b == NULL) {
> +        tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
> +        if (tl == NULL) {
>             return NGX_ERROR;
>         }
>
> -        /* the "0000000000000000" is 64-bit hexadimal string */
> +        b = tl->buf;
> +        chunk = b->start;
>
> -        chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) - 1);
>         if (chunk == NULL) {
> -            return NGX_ERROR;
> +            /* the "0000000000000000" is 64-bit hexadecimal string */
> +
> +            chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) -
> 1);
> +            if (chunk == NULL) {
> +                return NGX_ERROR;
> +            }
> +
> +            b->start = chunk;
> +            b->end = chunk + sizeof("0000000000000000" CRLF) - 1;
>         }
>
> +        b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
> +        b->memory = 0;
>         b->temporary = 1;
>         b->pos = chunk;
>         b->last = ngx_sprintf(chunk, "%xO" CRLF, size);
>
> -        out.buf = b;
> +        tl->next = out;
> +        out = tl;
>     }
>
>     if (cl->buf->last_buf) {
> -        b = ngx_calloc_buf(r->pool);
> -        if (b == NULL) {
> +        tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
> +        if (tl == NULL) {
>             return NGX_ERROR;
>         }
> +
> +        b = tl->buf;
>
> +        b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
> +        b->temporary = 0;
>         b->memory = 1;
>         b->last_buf = 1;
>         b->pos = (u_char *) CRLF "0" CRLF CRLF;
> @@ -159,35 +193,38 @@ ngx_http_chunked_body_filter(ngx_http_re
>
>         cl->buf->last_buf = 0;
>
> +        *ll = tl;
> +
>         if (size == 0) {
>             b->pos += 2;
> -            out.buf = b;
> -            out.next = NULL;
> -
> -            return ngx_http_next_body_filter(r, &out);
>         }
>
> -    } else {
> -        if (size == 0) {
> -            *ll = NULL;
> -            return ngx_http_next_body_filter(r, out.next);
> -        }
> -
> -        b = ngx_calloc_buf(r->pool);
> -        if (b == NULL) {
> +    } else if (size > 0) {
> +        tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
> +        if (tl == NULL) {
>             return NGX_ERROR;
>         }
>
> +        b = tl->buf;
> +
> +        b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
> +        b->temporary = 0;
>         b->memory = 1;
>         b->pos = (u_char *) CRLF;
>         b->last = b->pos + 2;
> +
> +        *ll = tl;
> +
> +    } else {
> +        *ll = NULL;
>     }
>
> -    tail.buf = b;
> -    tail.next = NULL;
> -    *ll = &tail;
> +    rc = ngx_http_next_body_filter(r, out);
>
> -    return ngx_http_next_body_filter(r, &out);
> +    ngx_chain_update_chains(&ctx->free, &ctx->busy, &out,
> +                            (ngx_buf_tag_t)
> &ngx_http_chunked_filter_module);
> +
> +    return rc;
>  }
>
>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

From neil at krnl.hu  Fri Sep  9 12:36:55 2011
From: neil at krnl.hu (Kornél Schadl)
Date: Fri, 9 Sep 2011 14:36:55 +0200
Subject: [PATCH 1 of 2] Buffers reuse in chunked filter
In-Reply-To: 
References: 
	
	
Message-ID: 

Are these patches included in svn?

On Fri, Sep 9, 2011 at 2:01 PM, SplitIce  wrote:
> wow, makes me want to rebuild all my server nodes to include this patch.
>
> On Fri, Sep 9, 2011 at 8:32 PM, Maxim Dounin  wrote:
>>
>> # HG changeset patch
>> # User Maxim Dounin 
>> # Date 1315564269 -14400
>> # Node ID b667ed67c0b9046b94291fa6f52e850006011718
>> # Parent ?014764a85840606c90317e9f44f2b9fa139cbc8b
>> Buffers reuse in chunked filter.
>>
>> There were 2 buffers allocated on each buffer chain sent through chunked
>> filter (one buffer for chunk size, another one for trailing CRLF, about
>> 120 bytes in total on 32-bit platforms). ?This resulted in large memory
>> consumption with long-lived requests sending many buffer chains. ?Usual
>> example of problematic scenario is streaming though proxy with
>> proxy_buffering set to off.
>>
>> Introduced buffers reuse reduces memory consumption in the above
>> problematic
>> scenario.
>>
>> See here for initial report:
>> http://mailman.nginx.org/pipermail/nginx/2010-April/019814.html
>>
>> diff --git a/src/http/modules/ngx_http_chunked_filter_module.c
>> b/src/http/modules/ngx_http_chunked_filter_module.c
>> --- a/src/http/modules/ngx_http_chunked_filter_module.c
>> +++ b/src/http/modules/ngx_http_chunked_filter_module.c
>> @@ -9,6 +9,12 @@
>> ?#include 
>>
>>
>> +typedef struct {
>> + ? ?ngx_chain_t ? ? ? ? *free;
>> + ? ?ngx_chain_t ? ? ? ? *busy;
>> +} ngx_http_chunked_filter_ctx_t;
>> +
>> +
>> ?static ngx_int_t ngx_http_chunked_filter_init(ngx_conf_t *cf);
>>
>>
>> @@ -50,7 +56,8 @@ static ngx_http_output_body_filter_pt
>> ?static ngx_int_t
>> ?ngx_http_chunked_header_filter(ngx_http_request_t *r)
>> ?{
>> - ? ?ngx_http_core_loc_conf_t ?*clcf;
>> + ? ?ngx_http_core_loc_conf_t ? ? ? *clcf;
>> + ? ?ngx_http_chunked_filter_ctx_t ?*ctx;
>>
>> ? ? if (r->headers_out.status == NGX_HTTP_NOT_MODIFIED
>> ? ? ? ? || r->headers_out.status == NGX_HTTP_NO_CONTENT
>> @@ -70,6 +77,14 @@ ngx_http_chunked_header_filter(ngx_http_
>> ? ? ? ? ? ? if (clcf->chunked_transfer_encoding) {
>> ? ? ? ? ? ? ? ? r->chunked = 1;
>>
>> + ? ? ? ? ? ? ? ?ctx = ngx_pcalloc(r->pool,
>> + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?sizeof(ngx_http_chunked_filter_ctx_t));
>> + ? ? ? ? ? ? ? ?if (ctx == NULL) {
>> + ? ? ? ? ? ? ? ? ? ?return NGX_ERROR;
>> + ? ? ? ? ? ? ? ?}
>> +
>> + ? ? ? ? ? ? ? ?ngx_http_set_ctx(r, ctx, ngx_http_chunked_filter_module);
>> +
>> ? ? ? ? ? ? } else {
>> ? ? ? ? ? ? ? ? r->keepalive = 0;
>> ? ? ? ? ? ? }
>> @@ -83,17 +98,21 @@ ngx_http_chunked_header_filter(ngx_http_
>> ?static ngx_int_t
>> ?ngx_http_chunked_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
>> ?{
>> - ? ?u_char ? ? ? *chunk;
>> - ? ?off_t ? ? ? ? size;
>> - ? ?ngx_buf_t ? ?*b;
>> - ? ?ngx_chain_t ? out, tail, *cl, *tl, **ll;
>> + ? ?u_char ? ? ? ? ? ? ? ? ? ? ? ? *chunk;
>> + ? ?off_t ? ? ? ? ? ? ? ? ? ? ? ? ? size;
>> + ? ?ngx_int_t ? ? ? ? ? ? ? ? ? ? ? rc;
>> + ? ?ngx_buf_t ? ? ? ? ? ? ? ? ? ? ?*b;
>> + ? ?ngx_chain_t ? ? ? ? ? ? ? ? ? ?*out, *cl, *tl, **ll;
>> + ? ?ngx_http_chunked_filter_ctx_t ?*ctx;
>>
>> ? ? if (in == NULL || !r->chunked || r->header_only) {
>> ? ? ? ? return ngx_http_next_body_filter(r, in);
>> ? ? }
>>
>> - ? ?out.buf = NULL;
>> - ? ?ll = &out.next;
>> + ? ?ctx = ngx_http_get_module_ctx(r, ngx_http_chunked_filter_module);
>> +
>> + ? ?out = NULL;
>> + ? ?ll = &out;
>>
>> ? ? size = 0;
>> ? ? cl = in;
>> @@ -127,31 +146,46 @@ ngx_http_chunked_body_filter(ngx_http_re
>> ? ? }
>>
>> ? ? if (size) {
>> - ? ? ? ?b = ngx_calloc_buf(r->pool);
>> - ? ? ? ?if (b == NULL) {
>> + ? ? ? ?tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
>> + ? ? ? ?if (tl == NULL) {
>> ? ? ? ? ? ? return NGX_ERROR;
>> ? ? ? ? }
>>
>> - ? ? ? ?/* the "0000000000000000" is 64-bit hexadimal string */
>> + ? ? ? ?b = tl->buf;
>> + ? ? ? ?chunk = b->start;
>>
>> - ? ? ? ?chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) - 1);
>> ? ? ? ? if (chunk == NULL) {
>> - ? ? ? ? ? ?return NGX_ERROR;
>> + ? ? ? ? ? ?/* the "0000000000000000" is 64-bit hexadecimal string */
>> +
>> + ? ? ? ? ? ?chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) -
>> 1);
>> + ? ? ? ? ? ?if (chunk == NULL) {
>> + ? ? ? ? ? ? ? ?return NGX_ERROR;
>> + ? ? ? ? ? ?}
>> +
>> + ? ? ? ? ? ?b->start = chunk;
>> + ? ? ? ? ? ?b->end = chunk + sizeof("0000000000000000" CRLF) - 1;
>> ? ? ? ? }
>>
>> + ? ? ? ?b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
>> + ? ? ? ?b->memory = 0;
>> ? ? ? ? b->temporary = 1;
>> ? ? ? ? b->pos = chunk;
>> ? ? ? ? b->last = ngx_sprintf(chunk, "%xO" CRLF, size);
>>
>> - ? ? ? ?out.buf = b;
>> + ? ? ? ?tl->next = out;
>> + ? ? ? ?out = tl;
>> ? ? }
>>
>> ? ? if (cl->buf->last_buf) {
>> - ? ? ? ?b = ngx_calloc_buf(r->pool);
>> - ? ? ? ?if (b == NULL) {
>> + ? ? ? ?tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
>> + ? ? ? ?if (tl == NULL) {
>> ? ? ? ? ? ? return NGX_ERROR;
>> ? ? ? ? }
>> +
>> + ? ? ? ?b = tl->buf;
>>
>> + ? ? ? ?b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
>> + ? ? ? ?b->temporary = 0;
>> ? ? ? ? b->memory = 1;
>> ? ? ? ? b->last_buf = 1;
>> ? ? ? ? b->pos = (u_char *) CRLF "0" CRLF CRLF;
>> @@ -159,35 +193,38 @@ ngx_http_chunked_body_filter(ngx_http_re
>>
>> ? ? ? ? cl->buf->last_buf = 0;
>>
>> + ? ? ? ?*ll = tl;
>> +
>> ? ? ? ? if (size == 0) {
>> ? ? ? ? ? ? b->pos += 2;
>> - ? ? ? ? ? ?out.buf = b;
>> - ? ? ? ? ? ?out.next = NULL;
>> -
>> - ? ? ? ? ? ?return ngx_http_next_body_filter(r, &out);
>> ? ? ? ? }
>>
>> - ? ?} else {
>> - ? ? ? ?if (size == 0) {
>> - ? ? ? ? ? ?*ll = NULL;
>> - ? ? ? ? ? ?return ngx_http_next_body_filter(r, out.next);
>> - ? ? ? ?}
>> -
>> - ? ? ? ?b = ngx_calloc_buf(r->pool);
>> - ? ? ? ?if (b == NULL) {
>> + ? ?} else if (size > 0) {
>> + ? ? ? ?tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
>> + ? ? ? ?if (tl == NULL) {
>> ? ? ? ? ? ? return NGX_ERROR;
>> ? ? ? ? }
>>
>> + ? ? ? ?b = tl->buf;
>> +
>> + ? ? ? ?b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
>> + ? ? ? ?b->temporary = 0;
>> ? ? ? ? b->memory = 1;
>> ? ? ? ? b->pos = (u_char *) CRLF;
>> ? ? ? ? b->last = b->pos + 2;
>> +
>> + ? ? ? ?*ll = tl;
>> +
>> + ? ?} else {
>> + ? ? ? ?*ll = NULL;
>> ? ? }
>>
>> - ? ?tail.buf = b;
>> - ? ?tail.next = NULL;
>> - ? ?*ll = &tail;
>> + ? ?rc = ngx_http_next_body_filter(r, out);
>>
>> - ? ?return ngx_http_next_body_filter(r, &out);
>> + ? ?ngx_chain_update_chains(&ctx->free, &ctx->busy, &out,
>> + ? ? ? ? ? ? ? ? ? ? ? ? ? ?(ngx_buf_tag_t)
>> &ngx_http_chunked_filter_module);
>> +
>> + ? ?return rc;
>> ?}
>>
>>
>>
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>


From mdounin at mdounin.ru  Fri Sep  9 12:42:40 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 9 Sep 2011 16:42:40 +0400
Subject: [PATCH 1 of 2] Buffers reuse in chunked filter
In-Reply-To: 
References: 
	
	
	
Message-ID: <20110909124240.GQ1137@mdounin.ru>

Hello!

On Fri, Sep 09, 2011 at 02:36:55PM +0200, Kornél Schadl wrote:

> are these patches included in svn?

Not yet.  I need Igor's approval to do so; the patches are posted
here as part of the review process.

Maxim Dounin

> 
> On Fri, Sep 9, 2011 at 2:01 PM, SplitIce  wrote:
> > wow, makes me want to rebuild all my server nodes to include this patch.
> >
> > On Fri, Sep 9, 2011 at 8:32 PM, Maxim Dounin  wrote:
> >>
> >> # HG changeset patch
> >> # User Maxim Dounin 
> >> # Date 1315564269 -14400
> >> # Node ID b667ed67c0b9046b94291fa6f52e850006011718
> >> # Parent ?014764a85840606c90317e9f44f2b9fa139cbc8b
> >> Buffers reuse in chunked filter.
> >>
> >> There were 2 buffers allocated on each buffer chain sent through chunked
> >> filter (one buffer for chunk size, another one for trailing CRLF, about
> >> 120 bytes in total on 32-bit platforms). ?This resulted in large memory
> >> consumption with long-lived requests sending many buffer chains. ?Usual
> >> example of problematic scenario is streaming though proxy with
> >> proxy_buffering set to off.
> >>
> >> Introduced buffers reuse reduces memory consumption in the above
> >> problematic
> >> scenario.
> >>
> >> See here for initial report:
> >> http://mailman.nginx.org/pipermail/nginx/2010-April/019814.html
> >>
> >> diff --git a/src/http/modules/ngx_http_chunked_filter_module.c
> >> b/src/http/modules/ngx_http_chunked_filter_module.c
> >> --- a/src/http/modules/ngx_http_chunked_filter_module.c
> >> +++ b/src/http/modules/ngx_http_chunked_filter_module.c
> >> @@ -9,6 +9,12 @@
> >> ?#include 
> >>
> >>
> >> +typedef struct {
> >> + ? ?ngx_chain_t ? ? ? ? *free;
> >> + ? ?ngx_chain_t ? ? ? ? *busy;
> >> +} ngx_http_chunked_filter_ctx_t;
> >> +
> >> +
> >> ?static ngx_int_t ngx_http_chunked_filter_init(ngx_conf_t *cf);
> >>
> >>
> >> @@ -50,7 +56,8 @@ static ngx_http_output_body_filter_pt
> >> ?static ngx_int_t
> >> ?ngx_http_chunked_header_filter(ngx_http_request_t *r)
> >> ?{
> >> - ? ?ngx_http_core_loc_conf_t ?*clcf;
> >> + ? ?ngx_http_core_loc_conf_t ? ? ? *clcf;
> >> + ? ?ngx_http_chunked_filter_ctx_t ?*ctx;
> >>
> >> ? ? if (r->headers_out.status == NGX_HTTP_NOT_MODIFIED
> >> ? ? ? ? || r->headers_out.status == NGX_HTTP_NO_CONTENT
> >> @@ -70,6 +77,14 @@ ngx_http_chunked_header_filter(ngx_http_
> >> ? ? ? ? ? ? if (clcf->chunked_transfer_encoding) {
> >> ? ? ? ? ? ? ? ? r->chunked = 1;
> >>
> >> + ? ? ? ? ? ? ? ?ctx = ngx_pcalloc(r->pool,
> >> + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?sizeof(ngx_http_chunked_filter_ctx_t));
> >> + ? ? ? ? ? ? ? ?if (ctx == NULL) {
> >> + ? ? ? ? ? ? ? ? ? ?return NGX_ERROR;
> >> + ? ? ? ? ? ? ? ?}
> >> +
> >> + ? ? ? ? ? ? ? ?ngx_http_set_ctx(r, ctx, ngx_http_chunked_filter_module);
> >> +
> >> ? ? ? ? ? ? } else {
> >> ? ? ? ? ? ? ? ? r->keepalive = 0;
> >> ? ? ? ? ? ? }
> >> @@ -83,17 +98,21 @@ ngx_http_chunked_header_filter(ngx_http_
> >> ?static ngx_int_t
> >> ?ngx_http_chunked_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
> >> ?{
> >> - ? ?u_char ? ? ? *chunk;
> >> - ? ?off_t ? ? ? ? size;
> >> - ? ?ngx_buf_t ? ?*b;
> >> - ? ?ngx_chain_t ? out, tail, *cl, *tl, **ll;
> >> + ? ?u_char ? ? ? ? ? ? ? ? ? ? ? ? *chunk;
> >> + ? ?off_t ? ? ? ? ? ? ? ? ? ? ? ? ? size;
> >> + ? ?ngx_int_t ? ? ? ? ? ? ? ? ? ? ? rc;
> >> + ? ?ngx_buf_t ? ? ? ? ? ? ? ? ? ? ?*b;
> >> + ? ?ngx_chain_t ? ? ? ? ? ? ? ? ? ?*out, *cl, *tl, **ll;
> >> + ? ?ngx_http_chunked_filter_ctx_t ?*ctx;
> >>
> >> ? ? if (in == NULL || !r->chunked || r->header_only) {
> >> ? ? ? ? return ngx_http_next_body_filter(r, in);
> >> ? ? }
> >>
> >> - ? ?out.buf = NULL;
> >> - ? ?ll = &out.next;
> >> + ? ?ctx = ngx_http_get_module_ctx(r, ngx_http_chunked_filter_module);
> >> +
> >> + ? ?out = NULL;
> >> + ? ?ll = &out;
> >>
> >> ? ? size = 0;
> >> ? ? cl = in;
> >> @@ -127,31 +146,46 @@ ngx_http_chunked_body_filter(ngx_http_re
> >> ? ? }
> >>
> >> ? ? if (size) {
> >> - ? ? ? ?b = ngx_calloc_buf(r->pool);
> >> - ? ? ? ?if (b == NULL) {
> >> + ? ? ? ?tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
> >> + ? ? ? ?if (tl == NULL) {
> >> ? ? ? ? ? ? return NGX_ERROR;
> >> ? ? ? ? }
> >>
> >> - ? ? ? ?/* the "0000000000000000" is 64-bit hexadimal string */
> >> + ? ? ? ?b = tl->buf;
> >> + ? ? ? ?chunk = b->start;
> >>
> >> - ? ? ? ?chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) - 1);
> >> ? ? ? ? if (chunk == NULL) {
> >> - ? ? ? ? ? ?return NGX_ERROR;
> >> + ? ? ? ? ? ?/* the "0000000000000000" is 64-bit hexadecimal string */
> >> +
> >> + ? ? ? ? ? ?chunk = ngx_palloc(r->pool, sizeof("0000000000000000" CRLF) -
> >> 1);
> >> + ? ? ? ? ? ?if (chunk == NULL) {
> >> + ? ? ? ? ? ? ? ?return NGX_ERROR;
> >> + ? ? ? ? ? ?}
> >> +
> >> + ? ? ? ? ? ?b->start = chunk;
> >> + ? ? ? ? ? ?b->end = chunk + sizeof("0000000000000000" CRLF) - 1;
> >> ? ? ? ? }
> >>
> >> + ? ? ? ?b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
> >> + ? ? ? ?b->memory = 0;
> >> ? ? ? ? b->temporary = 1;
> >> ? ? ? ? b->pos = chunk;
> >> ? ? ? ? b->last = ngx_sprintf(chunk, "%xO" CRLF, size);
> >>
> >> - ? ? ? ?out.buf = b;
> >> + ? ? ? ?tl->next = out;
> >> + ? ? ? ?out = tl;
> >> ? ? }
> >>
> >> ? ? if (cl->buf->last_buf) {
> >> - ? ? ? ?b = ngx_calloc_buf(r->pool);
> >> - ? ? ? ?if (b == NULL) {
> >> + ? ? ? ?tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
> >> + ? ? ? ?if (tl == NULL) {
> >> ? ? ? ? ? ? return NGX_ERROR;
> >> ? ? ? ? }
> >> +
> >> + ? ? ? ?b = tl->buf;
> >>
> >> + ? ? ? ?b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
> >> + ? ? ? ?b->temporary = 0;
> >> ? ? ? ? b->memory = 1;
> >> ? ? ? ? b->last_buf = 1;
> >> ? ? ? ? b->pos = (u_char *) CRLF "0" CRLF CRLF;
> >> @@ -159,35 +193,38 @@ ngx_http_chunked_body_filter(ngx_http_re
> >>
> >> ? ? ? ? cl->buf->last_buf = 0;
> >>
> >> + ? ? ? ?*ll = tl;
> >> +
> >> ? ? ? ? if (size == 0) {
> >> ? ? ? ? ? ? b->pos += 2;
> >> - ? ? ? ? ? ?out.buf = b;
> >> - ? ? ? ? ? ?out.next = NULL;
> >> -
> >> - ? ? ? ? ? ?return ngx_http_next_body_filter(r, &out);
> >> ? ? ? ? }
> >>
> >> - ? ?} else {
> >> - ? ? ? ?if (size == 0) {
> >> - ? ? ? ? ? ?*ll = NULL;
> >> - ? ? ? ? ? ?return ngx_http_next_body_filter(r, out.next);
> >> - ? ? ? ?}
> >> -
> >> - ? ? ? ?b = ngx_calloc_buf(r->pool);
> >> - ? ? ? ?if (b == NULL) {
> >> + ? ?} else if (size > 0) {
> >> + ? ? ? ?tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
> >> + ? ? ? ?if (tl == NULL) {
> >> ? ? ? ? ? ? return NGX_ERROR;
> >> ? ? ? ? }
> >>
> >> + ? ? ? ?b = tl->buf;
> >> +
> >> + ? ? ? ?b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
> >> + ? ? ? ?b->temporary = 0;
> >> ? ? ? ? b->memory = 1;
> >> ? ? ? ? b->pos = (u_char *) CRLF;
> >> ? ? ? ? b->last = b->pos + 2;
> >> +
> >> + ? ? ? ?*ll = tl;
> >> +
> >> + ? ?} else {
> >> + ? ? ? ?*ll = NULL;
> >> ? ? }
> >>
> >> - ? ?tail.buf = b;
> >> - ? ?tail.next = NULL;
> >> - ? ?*ll = &tail;
> >> + ? ?rc = ngx_http_next_body_filter(r, out);
> >>
> >> - ? ?return ngx_http_next_body_filter(r, &out);
> >> + ? ?ngx_chain_update_chains(&ctx->free, &ctx->busy, &out,
> >> + ? ? ? ? ? ? ? ? ? ? ? ? ? ?(ngx_buf_tag_t)
> >> &ngx_http_chunked_filter_module);
> >> +
> >> + ? ?return rc;
> >> ?}
> >>
> >>
> >>
> >> _______________________________________________
> >> nginx-devel mailing list
> >> nginx-devel at nginx.org
> >> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> >
> >
> > _______________________________________________
> > nginx-devel mailing list
> > nginx-devel at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx-devel
> >
> 
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel


From andrew at andrewloe.com  Tue Sep 13 21:44:48 2011
From: andrew at andrewloe.com (W. Andrew Loe III)
Date: Tue, 13 Sep 2011 14:44:48 -0700
Subject: [PATCH] Proxy SSL Verify
Message-ID: 

This patch allows you to force OpenSSL to validate the certificate of
the server the http_proxy module is communicating with. It was originally
built against the 0.7.x branch; I will forward-port it when I can. I would
appreciate input from anyone on how to do this more elegantly, as my
skills are rudimentary at best.
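
With the patch applied, configuration would look roughly like this (the
certificate path and upstream are placeholders; proxy_ssl_verify takes a
number because the directive uses ngx_conf_set_num_slot):

    location / {
        proxy_pass https://backend.example.com;

        # verify the upstream server certificate against this CA bundle
        proxy_ssl_ca_certificate  /etc/nginx/upstream-ca.pem;
        proxy_ssl_verify          1;
        proxy_ssl_verify_depth    2;
    }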


diff -uNr ../nginx-0.7.67/src/event/ngx_event_openssl.c
src/event/ngx_event_openssl.c
--- ../nginx-0.7.67/src/event/ngx_event_openssl.c	2010-06-07
04:55:20.000000000 -0700
+++ src/event/ngx_event_openssl.c	2011-09-13 14:17:05.000000000 -0700
@@ -157,6 +157,12 @@
     SSL_CTX_set_options(ssl->ctx, SSL_OP_NETSCAPE_CHALLENGE_BUG);
     SSL_CTX_set_options(ssl->ctx, SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG);

+    /* verification options */
+
+    SSL_CTX_load_verify_locations(ssl->ctx, (const char
*)ssl->ca_certificate.data, NULL);
+    SSL_CTX_set_verify(ssl->ctx, ssl->verify, NULL);
+    SSL_CTX_set_verify_depth(ssl->ctx, ssl->verify_depth);
+
     /* server side options */

     SSL_CTX_set_options(ssl->ctx, SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG);
diff -uNr ../nginx-0.7.67/src/event/ngx_event_openssl.h
src/event/ngx_event_openssl.h
--- ../nginx-0.7.67/src/event/ngx_event_openssl.h	2010-06-07
03:09:14.000000000 -0700
+++ src/event/ngx_event_openssl.h	2011-09-13 14:17:05.000000000 -0700
@@ -27,6 +27,9 @@
 typedef struct {
     SSL_CTX                    *ctx;
     ngx_log_t                  *log;
+    ngx_uint_t                  verify;
+    ngx_uint_t                  verify_depth;
+    ngx_str_t                   ca_certificate;
 } ngx_ssl_t;


diff -uNr ../nginx-0.7.67/src/http/modules/ngx_http_proxy_module.c
src/http/modules/ngx_http_proxy_module.c
--- ../nginx-0.7.67/src/http/modules/ngx_http_proxy_module.c	2010-06-07
05:23:23.000000000 -0700
+++ src/http/modules/ngx_http_proxy_module.c	2011-09-13 14:17:05.000000000 -0700
@@ -466,6 +466,27 @@
       offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
       NULL },

+      { ngx_string("proxy_ssl_verify"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_num_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify),
+      NULL },
+
+      { ngx_string("proxy_ssl_verify_depth"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_num_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth),
+      NULL },
+
+      { ngx_string("proxy_ssl_ca_certificate"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_str_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate),
+      NULL },
+
 #endif

       ngx_null_command
@@ -1950,6 +1971,8 @@
     conf->upstream.intercept_errors = NGX_CONF_UNSET;
 #if (NGX_HTTP_SSL)
     conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
+    conf->upstream.ssl_verify = NGX_CONF_UNSET_UINT;
+    conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT;
 #endif

     /* "proxy_cyclic_temp_file" is disabled */
@@ -2196,6 +2219,22 @@
 #if (NGX_HTTP_SSL)
     ngx_conf_merge_value(conf->upstream.ssl_session_reuse,
                               prev->upstream.ssl_session_reuse, 1);
+    ngx_conf_merge_uint_value(conf->upstream.ssl_verify,
+                              prev->upstream.ssl_verify, 0);
+    ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth,
+                              prev->upstream.ssl_verify_depth, 1);
+    ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate,
+                              prev->upstream.ssl_ca_certificate, "");
+
+    if (conf->upstream.ssl_verify) {
+      if (conf->upstream.ssl_ca_certificate.len == 0) {
+        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+            "no \"proxy_ssl_ca_certificate\" is defined for "
+            "the \"proxy_ssl_verify\" directive");
+
+        return NGX_CONF_ERROR;
+      }
+    }
 #endif

     ngx_conf_merge_value(conf->redirect, prev->redirect, 1);
@@ -3011,6 +3050,12 @@

     plcf->upstream.ssl->log = cf->log;

+    plcf->upstream.ssl->ca_certificate.len = plcf->upstream.ssl_ca_certificate.len;
+    plcf->upstream.ssl->ca_certificate.data = plcf->upstream.ssl_ca_certificate.data;
+
+    plcf->upstream.ssl->verify = plcf->upstream.ssl_verify;
+    plcf->upstream.ssl->verify_depth = plcf->upstream.ssl_verify_depth;
+
     if (ngx_ssl_create(plcf->upstream.ssl,
                        NGX_SSL_SSLv2|NGX_SSL_SSLv3|NGX_SSL_TLSv1, NULL)
         != NGX_OK)
diff -uNr ../nginx-0.7.67/src/http/ngx_http_upstream.h src/http/ngx_http_upstream.h
--- ../nginx-0.7.67/src/http/ngx_http_upstream.h	2010-06-07 05:23:23.000000000 -0700
+++ src/http/ngx_http_upstream.h	2011-09-13 14:17:05.000000000 -0700
@@ -173,6 +173,9 @@
 #if (NGX_HTTP_SSL)
     ngx_ssl_t                       *ssl;
     ngx_flag_t                       ssl_session_reuse;
+    ngx_uint_t                       ssl_verify;
+    ngx_uint_t                       ssl_verify_depth;
+    ngx_str_t                        ssl_ca_certificate;
 #endif

 } ngx_http_upstream_conf_t;
-------------- next part --------------
A non-text attachment was scrubbed...
Name: proxy_ssl_verify.patch
Type: application/octet-stream
Size: 4878 bytes
Desc: not available
URL: 

From mdounin at mdounin.ru  Wed Sep 14 00:17:49 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 14 Sep 2011 04:17:49 +0400
Subject: [PATCH] Proxy SSL Verify
In-Reply-To: 
References: 
Message-ID: <20110914001749.GN1137@mdounin.ru>

Hello!

On Tue, Sep 13, 2011 at 02:44:48PM -0700, W. Andrew Loe III wrote:

> This patch allows you to force OpenSSL to validate the certificate of
> the server the http_proxy module is communicating with. Originally
> built against 0.7.x branch, I will forward port when I can. I would
> appreciate if anyone else has input on how to do this more elegantly,
> my skills are rudimentary at best.
> 
> 
> diff -uNr ../nginx-0.7.67/src/event/ngx_event_openssl.c
> src/event/ngx_event_openssl.c
> --- ../nginx-0.7.67/src/event/ngx_event_openssl.c	2010-06-07
> 04:55:20.000000000 -0700
> +++ src/event/ngx_event_openssl.c	2011-09-13 14:17:05.000000000 -0700
> @@ -157,6 +157,12 @@
>      SSL_CTX_set_options(ssl->ctx, SSL_OP_NETSCAPE_CHALLENGE_BUG);
>      SSL_CTX_set_options(ssl->ctx, SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG);
> 
> +    /* verification options */
> +
> +    SSL_CTX_load_verify_locations(ssl->ctx, (const char
> *)ssl->ca_certificate.data, NULL);
> +    SSL_CTX_set_verify(ssl->ctx, ssl->verify, NULL);
> +    SSL_CTX_set_verify_depth(ssl->ctx, ssl->verify_depth);
> +

This should be done in a separate function, similar to 
ngx_ssl_client_certificate() (actually, a subset of it), and with 
appropriate error checking.
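
For illustration only, a minimal sketch of such a helper (the function
name here is hypothetical, not something already in the tree):

    static ngx_int_t
    ngx_ssl_verify_peer_options(ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth)
    {
        /* require a valid chain from the peer and cap its depth */
        SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_http_ssl_verify_callback);
        SSL_CTX_set_verify_depth(ssl->ctx, depth);

        /* load the trusted CA file and report failures instead of ignoring them */
        if (SSL_CTX_load_verify_locations(ssl->ctx, (char *) cert->data, NULL) == 0) {
            ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0,
                          "SSL_CTX_load_verify_locations(\"%s\") failed",
                          cert->data);
            return NGX_ERROR;
        }

        return NGX_OK;
    }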

>      /* server side options */
> 
>      SSL_CTX_set_options(ssl->ctx, SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG);
> diff -uNr ../nginx-0.7.67/src/event/ngx_event_openssl.h
> src/event/ngx_event_openssl.h
> --- ../nginx-0.7.67/src/event/ngx_event_openssl.h	2010-06-07
> 03:09:14.000000000 -0700
> +++ src/event/ngx_event_openssl.h	2011-09-13 14:17:05.000000000 -0700
> @@ -27,6 +27,9 @@
>  typedef struct {
>      SSL_CTX                    *ctx;
>      ngx_log_t                  *log;
> +    ngx_uint_t                  verify;
> +    ngx_uint_t                  verify_depth;
> +    ngx_str_t                   ca_certificate;
>  } ngx_ssl_t;

This shouldn't be here at all.

> 
> 
> diff -uNr ../nginx-0.7.67/src/http/modules/ngx_http_proxy_module.c
> src/http/modules/ngx_http_proxy_module.c
> --- ../nginx-0.7.67/src/http/modules/ngx_http_proxy_module.c	2010-06-07
> 05:23:23.000000000 -0700
> +++ src/http/modules/ngx_http_proxy_module.c	2011-09-13 14:17:05.000000000 -0700
> @@ -466,6 +466,27 @@
>        offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
>        NULL },
> 
> +      { ngx_string("proxy_ssl_verify"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_num_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify),
> +      NULL },

You don't want to let users control binary arguments passed to 
openssl.

It should be either an on/off switch (flag slot), or should go away 
completely, switched on by certificate file presence.

If it stays, it probably should be named 
"proxy_ssl_verify_peer" to be in line with the "ssl_verify_client" 
directive (and see below).
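
For illustration, an on/off switch would look roughly like this in the
command table (a sketch only; the conf field would then be an ngx_flag_t):

    { ngx_string("proxy_ssl_verify_peer"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
      ngx_conf_set_flag_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer),
      NULL },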

> +
> +      { ngx_string("proxy_ssl_verify_depth"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_num_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth),
> +      NULL },
> +
> +      { ngx_string("proxy_ssl_ca_certificate"),

Probably "proxy_ssl_peer_certificate" would be a better directive 
name.  Not sure.

> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_str_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate),
> +      NULL },
> +
>  #endif
> 
>        ngx_null_command
> @@ -1950,6 +1971,8 @@
>      conf->upstream.intercept_errors = NGX_CONF_UNSET;
>  #if (NGX_HTTP_SSL)
>      conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
> +    conf->upstream.ssl_verify = NGX_CONF_UNSET_UINT;
> +    conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT;
>  #endif
> 
>      /* "proxy_cyclic_temp_file" is disabled */
> @@ -2196,6 +2219,22 @@
>  #if (NGX_HTTP_SSL)
>      ngx_conf_merge_value(conf->upstream.ssl_session_reuse,
>                                prev->upstream.ssl_session_reuse, 1);
> +    ngx_conf_merge_uint_value(conf->upstream.ssl_verify,
> +                              prev->upstream.ssl_verify, 0);
> +    ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth,
> +                              prev->upstream.ssl_verify_depth, 1);
> +    ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate,
> +                              prev->upstream.ssl_ca_certificate, "");
> +
> +    if (conf->upstream.ssl_verify) {
> +      if (conf->upstream.ssl_ca_certificate.len == 0) {
> +        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
> +            "no \"proxy_ssl_ca_certificate\" is defined for "
> +            "the \"proxy_ssl_verify\" directive");
> +
> +        return NGX_CONF_ERROR;
> +      }

No 2-space indentation, please.

> +    }
>  #endif
> 
>      ngx_conf_merge_value(conf->redirect, prev->redirect, 1);
> @@ -3011,6 +3050,12 @@
> 
>      plcf->upstream.ssl->log = cf->log;
> 
> +    plcf->upstream.ssl->ca_certificate.len =
> plcf->upstream.ssl_ca_certificate.len;
> +    plcf->upstream.ssl->ca_certificate.data =
> plcf->upstream.ssl_ca_certificate.data;
> +
> +    plcf->upstream.ssl->verify = plcf->upstream.ssl_verify;
> +    plcf->upstream.ssl->verify_depth = plcf->upstream.ssl_verify_depth;
> +
>      if (ngx_ssl_create(plcf->upstream.ssl,
>                         NGX_SSL_SSLv2|NGX_SSL_SSLv3|NGX_SSL_TLSv1, NULL)
>          != NGX_OK)
> diff -uNr ../nginx-0.7.67/src/http/ngx_http_upstream.h
> src/http/ngx_http_upstream.h
> --- ../nginx-0.7.67/src/http/ngx_http_upstream.h	2010-06-07
> 05:23:23.000000000 -0700
> +++ src/http/ngx_http_upstream.h	2011-09-13 14:17:05.000000000 -0700
> @@ -173,6 +173,9 @@
>  #if (NGX_HTTP_SSL)
>      ngx_ssl_t                       *ssl;
>      ngx_flag_t                       ssl_session_reuse;
> +    ngx_uint_t                       ssl_verify;
> +    ngx_uint_t                       ssl_verify_depth;
> +    ngx_str_t                        ssl_ca_certificate;
>  #endif
> 
>  } ngx_http_upstream_conf_t;

You may also want to add a "proxy_ssl_crl" directive (trivial), as 
well as some form of remote CN checking.
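
For illustration, such a directive could reuse the existing ngx_ssl_crl()
helper; a rough sketch (the ssl_crl conf field is hypothetical here):

    { ngx_string("proxy_ssl_crl"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
      ngx_conf_set_str_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_crl),
      NULL },

    /* and, where the upstream SSL context is created: */
    if (plcf->upstream.ssl_crl.len
        && ngx_ssl_crl(cf, plcf->upstream.ssl, &plcf->upstream.ssl_crl) != NGX_OK)
    {
        return NGX_CONF_ERROR;
    }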

Please also note that posting patches against 0.7.* (as well as 
0.8.*) isn't meaningful.  The development branch is 1.1.*.

Maxim Dounin


From savages at mozapps.com  Wed Sep 14 01:11:41 2011
From: savages at mozapps.com (Shaun savage)
Date: Wed, 14 Sep 2011 09:11:41 +0800
Subject: search headers_in?
Message-ID: <4E6FFF4D.8040106@mozapps.com>

I am starting to write a new module that uses a header token.  I have
searched Google for an answer about how to search the headers for a
given name.  I want to search for "X-session-token".  I am surprised that
there is not a library function that just does the search.

As I understand it now, every module that wants to search for a header
writes its own search function?
Is there a "library search function"?

Once I have the session token, I use memcached in the PREACCESS phase to
load the session info, then in the ACCESS phase I check the access permission.


From agentzh at gmail.com  Wed Sep 14 03:07:24 2011
From: agentzh at gmail.com (agentzh)
Date: Wed, 14 Sep 2011 11:07:24 +0800
Subject: search headers_in?
In-Reply-To: <4E6FFF4D.8040106@mozapps.com>
References: <4E6FFF4D.8040106@mozapps.com>
Message-ID: 

On Wed, Sep 14, 2011 at 9:11 AM, Shaun savage  wrote:
> I am starting to write a new module that uses a header token.  I have
> searched Google for an answer about how to search the headers for a
> given name.  I want to search for "X-session-token".  I am surprised that
> there is not a library function that just does the search.
>

There's a common code snippet that does the search :) See
nginx-1.0.6/src/http/modules/ngx_http_proxy_module.c, line 937, for
example.
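
The pattern is a linear scan over r->headers_in.headers; a rough,
untested sketch of it:

    ngx_list_part_t  *part;
    ngx_table_elt_t  *h;
    ngx_uint_t        i;

    part = &r->headers_in.headers.part;
    h = part->elts;

    for (i = 0; /* void */ ; i++) {

        if (i >= part->nelts) {
            if (part->next == NULL) {
                break;
            }

            part = part->next;
            h = part->elts;
            i = 0;
        }

        /* case-insensitive match on the header name */
        if (h[i].key.len == sizeof("X-Session-Token") - 1
            && ngx_strncasecmp(h[i].key.data, (u_char *) "X-Session-Token",
                               sizeof("X-Session-Token") - 1) == 0)
        {
            /* found: h[i].value holds the token */
            break;
        }
    }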

The ngx_headers_more module also has this:

https://github.com/agentzh/headers-more-nginx-module/blob/master/src/ngx_http_headers_more_headers_in.c#L184

Another way is to read the nginx variable $http_HEADER from C land.
See http://wiki.nginx.org/HttpCoreModule#.24http_HEADER
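
In C that usually means resolving the variable index once at
configuration time and fetching the value per request; a rough sketch
(module boilerplate omitted):

    /* at configuration time */
    ngx_str_t  name = ngx_string("http_x_session_token");
    ngx_int_t  index;

    index = ngx_http_get_variable_index(cf, &name);
    if (index == NGX_ERROR) {
        return NGX_CONF_ERROR;
    }

    /* at request time */
    ngx_http_variable_value_t  *vv;

    vv = ngx_http_get_indexed_variable(r, index);
    if (vv == NULL || vv->not_found) {
        /* no X-Session-Token header in this request */
    }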

> As I understand it now, every module that wants to search for a header
> writes its own search function?

Sort of. It's simple enough.

> Is there a "library search function"?
>

Not any that I'm aware of :)

> once I have the session token I use memcached in the PREACCESS phase to
> load the session info, then in the ACCESS phase check the access permission.
>

Just be sure you use a non-blocking approach to access memcached. You
can take the ngx_srcache module as an example:

    http://wiki.nginx.org/HttpSRCacheModule

even though it can also work with other backends like ngx_redis2 and
ngx_redis :)

Regards,
-agentzh


From andrew at andrewloe.com  Thu Sep 15 00:47:11 2011
From: andrew at andrewloe.com (W. Andrew Loe III)
Date: Wed, 14 Sep 2011 17:47:11 -0700
Subject: [PATCH] Proxy SSL Verify
In-Reply-To: <20110914001749.GN1137@mdounin.ru>
References: 
	<20110914001749.GN1137@mdounin.ru>
Message-ID: 

Thank you for the help! I am now working against 1.1.2, which was the
latest release this morning. My code is available on GitHub
(https://github.com/loe/nginx/tree/proxy_ssl_verify) but I will also
include it here.

I am having an issue with SSL_get_verify_result() never failing: it
always returns 0 even though ngx_http_ssl_verify_callback() logs that
the certificate has not been verified. I have inserted a few debugging
statements to illustrate the behavior. I have also included a location
you can put in the default nginx.conf that will proxy to an SSL
server.

diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c
index 259b1d8..078978b 100644
--- a/src/event/ngx_event_openssl.c
+++ b/src/event/ngx_event_openssl.c
@@ -216,13 +216,10 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
     return NGX_OK;
 }

-
 ngx_int_t
-ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
+ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
     ngx_int_t depth)
 {
-    STACK_OF(X509_NAME)  *list;
-
     SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_http_ssl_verify_callback);

     SSL_CTX_set_verify_depth(ssl->ctx, depth);
@@ -231,10 +228,6 @@ ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
         return NGX_OK;
     }

-    if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) {
-        return NGX_ERROR;
-    }
-
     if (SSL_CTX_load_verify_locations(ssl->ctx, (char *) cert->data, NULL)
         == 0)
     {
@@ -244,6 +237,23 @@ ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
         return NGX_ERROR;
     }

+    return NGX_OK;
+}
+
+ngx_int_t
+ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
+    ngx_int_t depth)
+{
+    STACK_OF(X509_NAME)  *list;
+
+    if (ngx_ssl_set_verify_options(ssl, cert, depth) != NGX_OK) {
+        return NGX_ERROR;
+    }
+
+    if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) {
+        return NGX_ERROR;
+    }
+
     list = SSL_load_client_CA_file((char *) cert->data);

     if (list == NULL) {
@@ -350,7 +360,7 @@ ngx_http_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)
     }
 #endif

-    return 1;
+    return ok;
 }


@@ -566,7 +576,7 @@ ngx_ssl_set_session(ngx_connection_t *c, ngx_ssl_session_t *session)
 ngx_int_t
 ngx_ssl_handshake(ngx_connection_t *c)
 {
-    int        n, sslerr;
+    int        n, sslerr, verify_err, verify_mode;
     ngx_err_t  err;

     ngx_ssl_clear_error(c->log);
@@ -577,6 +587,22 @@ ngx_ssl_handshake(ngx_connection_t *c)

     if (n == 1) {

+        if (SSL_get_peer_certificate(c->ssl->connection) != NULL)
+        {
+            ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL_get_peer_certificate is present");
+        }
+
+        verify_mode = SSL_get_verify_mode(c->ssl->connection);
+        ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL_get_verify_mode: %d", verify_mode);
+
+        verify_err = SSL_get_verify_result(c->ssl->connection);
+        ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL_get_verify_result: %d", verify_err);
+        if (verify_err != X509_V_OK)
+        {
+            ngx_ssl_error(NGX_LOG_ALERT, c->log, 0, "SSL_get_verify_result() failed");
+            return NGX_ERROR;
+        }
+
         if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
             return NGX_ERROR;
         }
@@ -2354,3 +2380,4 @@ ngx_openssl_exit(ngx_cycle_t *cycle)
     EVP_cleanup();
     ENGINE_cleanup();
 }
+
diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h
index 33cab7b..0aac3e8 100644
--- a/src/event/ngx_event_openssl.h
+++ b/src/event/ngx_event_openssl.h
@@ -96,6 +96,8 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log);
 ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data);
 ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
     ngx_str_t *cert, ngx_str_t *key);
+ngx_int_t ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
+    ngx_int_t depth);
 ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
     ngx_str_t *cert, ngx_int_t depth);
 ngx_int_t ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *crl);
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
index 902cfb8..834301e 100644
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -440,7 +440,27 @@ static ngx_command_t  ngx_http_proxy_commands[] = {
       NGX_HTTP_LOC_CONF_OFFSET,
       offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
       NULL },
+
+    { ngx_string("proxy_ssl_verify_peer"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_flag_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer),
+      NULL },
+
+    { ngx_string("proxy_ssl_verify_depth"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_num_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth),
+      NULL },

+    { ngx_string("proxy_ssl_ca_certificate"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_str_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate),
+      NULL },
 #endif

       ngx_null_command
@@ -1697,6 +1717,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_t *cf)
     conf->upstream.intercept_errors = NGX_CONF_UNSET;
 #if (NGX_HTTP_SSL)
     conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
+    conf->upstream.ssl_verify_peer = NGX_CONF_UNSET;
+    conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT;
 #endif

     /* "proxy_cyclic_temp_file" is disabled */
@@ -1955,6 +1977,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
 #if (NGX_HTTP_SSL)
     ngx_conf_merge_value(conf->upstream.ssl_session_reuse,
                               prev->upstream.ssl_session_reuse, 1);
+    ngx_conf_merge_value(conf->upstream.ssl_verify_peer,
+                              prev->upstream.ssl_verify_peer, 0);
+    ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth,
+                              prev->upstream.ssl_verify_depth, 1);
+    ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate,
+                              prev->upstream.ssl_ca_certificate, "");
+
+    if (conf->upstream.ssl_verify_peer) {
+      if (conf->upstream.ssl_ca_certificate.len == 0) {
+        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+            "no \"proxy_ssl_ca_certificate\" is defined for "
+            "the \"proxy_ssl_verify_peer\" directive");
+
+        return NGX_CONF_ERROR;
+      }
+    }
 #endif

     ngx_conf_merge_value(conf->redirect, prev->redirect, 1);
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
index 29432dc..474cf0d 100644
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -1210,6 +1210,15 @@ ngx_http_upstream_ssl_init_connection(ngx_http_request_t *r,
 {
     ngx_int_t   rc;

+    if (ngx_ssl_set_verify_options(u->conf->ssl,
+          &u->conf->ssl_ca_certificate, u->conf->ssl_verify_depth)
+        != NGX_OK)
+    {
+      ngx_http_upstream_finalize_request(r, u,
+          NGX_HTTP_INTERNAL_SERVER_ERROR);
+      return;
+    }
+
     if (ngx_ssl_create_connection(u->conf->ssl, c,
                                   NGX_SSL_BUFFER|NGX_SSL_CLIENT)
         != NGX_OK)
@@ -4527,3 +4536,4 @@ ngx_http_upstream_init_main_conf(ngx_conf_t *cf, void *conf)

     return NGX_CONF_OK;
 }
+
diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h
index fa848c0..cc71ba9 100644
--- a/src/http/ngx_http_upstream.h
+++ b/src/http/ngx_http_upstream.h
@@ -177,6 +177,9 @@ typedef struct {
 #if (NGX_HTTP_SSL)
     ngx_ssl_t                       *ssl;
     ngx_flag_t                       ssl_session_reuse;
+    ngx_flag_t                       ssl_verify_peer;
+    ngx_uint_t                       ssl_verify_depth;
+    ngx_str_t                        ssl_ca_certificate;
 #endif

     ngx_str_t                        module;

On Tue, Sep 13, 2011 at 5:17 PM, Maxim Dounin  wrote:
> [...]
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx.conf
Type: application/octet-stream
Size: 4275 bytes
Desc: not available
URL: 

From orz at loli.my  Thu Sep 15 17:54:04 2011
From: orz at loli.my (ビリビリⅤ)
Date: Fri, 16 Sep 2011 01:54:04 +0800
Subject: [Patch] proxy cache for 304 Not Modified
Message-ID: 

Hello guys,
I have written a module to make nginx support 304 Not Modified responses
from upstream, to decrease bandwidth usage.
Note: I am a newbie at nginx module development, so the module may have
some problems. You are welcome to test it and report any problems to me.
Note:
I could not find a way to update an existing entry in the nginx cache
without deleting the old cache file, so I reopen the cache file and write
a new expiry header. Can anyone help me fix this?

You can download the full patch file from here:
http://m-b.cc/share/proxy_304.txt

# User MagicBear 
Upstream:
add $upstream_last_modified variable.
add handler for 304 Not Modified.
Proxy:
change to send the If-Modified-Since header.

diff -ruN a/http/modules/ngx_http_proxy_module.c b/http/modules/ngx_http_proxy_module.c
--- a/http/modules/ngx_http_proxy_module.c      2011-09-15 22:23:03.284431407 +0800
+++ b/http/modules/ngx_http_proxy_module.c      2011-09-16 01:41:44.654428632 +0800
@@ -543,7 +543,7 @@
     { ngx_string("Connection"), ngx_string("close") },
     { ngx_string("Keep-Alive"), ngx_string("") },
     { ngx_string("Expect"), ngx_string("") },
-    { ngx_string("If-Modified-Since"), ngx_string("") },
+    { ngx_string("If-Modified-Since"),
ngx_string("$upstream_last_modified") },
     { ngx_string("If-Unmodified-Since"), ngx_string("") },
     { ngx_string("If-None-Match"), ngx_string("") },
     { ngx_string("If-Match"), ngx_string("") },
diff -ruN a/http/ngx_http_upstream.c b/http/ngx_http_upstream.c
--- a/http/ngx_http_upstream.c  2011-09-15 22:23:03.284431407 +0800
+++ b/http/ngx_http_upstream.c  2011-09-16 01:41:44.654428632 +0800
@@ -16,6 +16,8 @@
     ngx_http_upstream_t *u);
 static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r,
     ngx_http_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_http_upstream_last_modified(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data);
 #endif

 static void ngx_http_upstream_init_request(ngx_http_request_t *r);
@@ -342,6 +344,10 @@
       ngx_http_upstream_cache_status, 0,
       NGX_HTTP_VAR_NOCACHEABLE, 0 },

+    { ngx_string("upstream_last_modified"), NULL,
+      ngx_http_upstream_last_modified, 0,
+      NGX_HTTP_VAR_NOCACHEABLE, 0 },
+
 #endif

     { ngx_null_string, NULL, NULL, 0, 0, 0 }
@@ -1618,6 +1624,80 @@
             u->buffer.last = u->buffer.pos;
         }
 
+#if (NGX_HTTP_CACHE)
+
+        if (u->cache_status == NGX_HTTP_CACHE_EXPIRED &&
+            u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED &&
+            ngx_http_file_cache_valid(u->conf->cache_valid, u->headers_in.status_n))
+        {
+            ngx_int_t  rc;
+
+            rc = u->reinit_request(r);
+
+            if (rc == NGX_OK) {
+                u->cache_status = NGX_HTTP_CACHE_BYPASS;
+                rc = ngx_http_upstream_cache_send(r, u);
+
+                time_t  now, valid;
+
+                now = ngx_time();
+
+                valid = r->cache->valid_sec;
+
+                if (valid == 0) {
+                    valid = ngx_http_file_cache_valid(u->conf->cache_valid,
+                                                      u->headers_in.status_n);
+                    if (valid) {
+                        r->cache->valid_sec = now + valid;
+                    }
+                }
+
+                if (valid) {
+                    r->cache->last_modified = r->headers_out.last_modified_time;
+                    r->cache->date = now;
+                    r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start);
+
+                    // update header
+                    ngx_http_file_cache_set_header(r, u->buffer.start);
+
+                    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+                                   "update cache \"%s\" header to new expired.",
+                                   r->cache->file.name.data);
+
+                    // reopen file read-write
+                    ngx_fd_t fd = ngx_open_file(r->cache->file.name.data,
+                                                NGX_FILE_RDWR, NGX_FILE_OPEN, 0);
+
+                    if (fd == NGX_INVALID_FILE) {
+                        ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno,
+                                      ngx_open_file_n " \"%s\" failed",
+                                      r->cache->file.name.data);
+                        return;
+                    }
+
+                    // write cache header
+                    if (write(fd, u->buffer.start,
+                              sizeof(ngx_http_file_cache_header_t)) < 0)
+                    {
+                        ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno,
+                                      "write proxy_cache \"%s\" failed",
+                                      r->cache->file.name.data);
+                        return;
+                    }
+
+                    if (ngx_close_file(fd) == NGX_FILE_ERROR) {
+                        ngx_log_error(NGX_LOG_ALERT, r->connection->log, ngx_errno,
+                                      ngx_close_file_n " \"%s\" failed",
+                                      r->cache->file.name.data);
+                    }
+                    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+                                   "update cache \"%s\" header to new expired done.",
+                                   r->cache->file.name.data);
+                } else {
+                    u->cacheable = 0;
+                    r->headers_out.last_modified_time = -1;
+                }
+            }
+
+            ngx_http_upstream_finalize_request(r, u, rc);
+            return;
+        }
+
+#endif
+
         if (ngx_http_upstream_test_next(r, u) == NGX_OK) {
             return;
         }
@@ -4006,6 +4086,32 @@
 
     return NGX_OK;
 }
+
+ngx_int_t
+ngx_http_upstream_last_modified(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data)
+{
+    u_char *u;
+
+    if (r->upstream == NULL || r->upstream->cache_status == 0 ||
+        r->cache == NULL || r->cache->last_modified <= 0) {
+        v->not_found = 1;
+        return NGX_OK;
+    }
+
+    v->valid = 1;
+    v->no_cacheable = 0;
+    v->not_found = 0;
+    u = ngx_pcalloc(r->pool, 30);
+    if (u == NULL) {
+        return NGX_ERROR;
+    }
+
+    v->len = 29;
+    ngx_http_time(u, r->cache->last_modified);
+    v->data = u;
+
+    return NGX_OK;
+}

 #endif


MagicBear
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrew at andrewloe.com  Thu Sep 15 19:23:28 2011
From: andrew at andrewloe.com (W. Andrew Loe III)
Date: Thu, 15 Sep 2011 12:23:28 -0700
Subject: [PATCH] Proxy SSL Verify
In-Reply-To: 
References: 
	<20110914001749.GN1137@mdounin.ru>
	
Message-ID: 

I have a patch working against nginx 1.1.3. I'm not entirely happy
with having to set verification_failed on ngx_ssl_connection_t;
however, returning 0 from the verify callback did not stop processing
as documented (http://www.openssl.org/docs/ssl/SSL_CTX_set_verify.html),
so I capture the failure there and check for it before the handshake
is treated as complete. If anyone has an alternative solution that is
more elegant I will gladly refactor.

-- Andrew

http://trac.nginx.org/nginx/ticket/13

diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c
index 259b1d8..05b49dd 100644
--- a/src/event/ngx_event_openssl.c
+++ b/src/event/ngx_event_openssl.c
@@ -216,13 +216,10 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
     return NGX_OK;
 }
 
-
 ngx_int_t
-ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
+ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
     ngx_int_t depth)
 {
-    STACK_OF(X509_NAME)  *list;
-
     SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_http_ssl_verify_callback);
 
     SSL_CTX_set_verify_depth(ssl->ctx, depth);
@@ -231,10 +228,6 @@ ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
         return NGX_OK;
     }
 
-    if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) {
-        return NGX_ERROR;
-    }
-
     if (SSL_CTX_load_verify_locations(ssl->ctx, (char *) cert->data, NULL)
         == 0)
     {
@@ -244,6 +237,23 @@ ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
         return NGX_ERROR;
     }
 
+    return NGX_OK;
+}
+
+ngx_int_t
+ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
+    ngx_int_t depth)
+{
+    STACK_OF(X509_NAME)  *list;
+
+    if (ngx_ssl_set_verify_options(ssl, cert, depth) != NGX_OK) {
+        return NGX_ERROR;
+    }
+
+    if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) {
+        return NGX_ERROR;
+    }
+
     list = SSL_load_client_CA_file((char *) cert->data);
 
     if (list == NULL) {
@@ -313,11 +323,6 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *crl)
 static int
 ngx_http_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)
 {
-#if (NGX_DEBUG)
-    char              *subject, *issuer;
-    int                err, depth;
-    X509              *cert;
-    X509_NAME         *sname, *iname;
     ngx_connection_t  *c;
     ngx_ssl_conn_t    *ssl_conn;
 
@@ -326,6 +331,12 @@ ngx_http_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)
 
     c = ngx_ssl_get_connection(ssl_conn);
 
+#if (NGX_DEBUG)
+    char              *subject, *issuer;
+    int                err, depth;
+    X509              *cert;
+    X509_NAME         *sname, *iname;
+
     cert = X509_STORE_CTX_get_current_cert(x509_store);
     err = X509_STORE_CTX_get_error(x509_store);
     depth = X509_STORE_CTX_get_error_depth(x509_store);
@@ -350,6 +361,13 @@ ngx_http_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)
     }
 #endif
 
+    if (ok != 1)
+    {
+        ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, "ngx_http_ssl_verify_callback failed");
+        c->ssl->verification_failed = 1;
+        return 0;
+    }
+
     return 1;
 }
 
@@ -575,6 +593,11 @@ ngx_ssl_handshake(ngx_connection_t *c)
 
     ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL_do_handshake: %d", n);
 
+    if (c->ssl->verification_failed != NGX_OK)
+    {
+      return NGX_ERROR;
+    }
+
     if (n == 1) {
 
         if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h
index 33cab7b..b59baf9 100644
--- a/src/event/ngx_event_openssl.h
+++ b/src/event/ngx_event_openssl.h
@@ -46,6 +46,8 @@ typedef struct {
     unsigned                    buffer:1;
     unsigned                    no_wait_shutdown:1;
     unsigned                    no_send_shutdown:1;
+
+    ngx_int_t                   verification_failed;
 } ngx_ssl_connection_t;
 
 
@@ -96,6 +98,8 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log);
 ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data);
 ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
     ngx_str_t *cert, ngx_str_t *key);
+ngx_int_t ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
+    ngx_int_t depth);
 ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
     ngx_str_t *cert, ngx_int_t depth);
 ngx_int_t ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *crl);
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
index 902cfb8..834301e 100644
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -440,7 +440,27 @@ static ngx_command_t  ngx_http_proxy_commands[] = {
       NGX_HTTP_LOC_CONF_OFFSET,
       offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
       NULL },
+
+    { ngx_string("proxy_ssl_verify_peer"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_flag_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer),
+      NULL },
+
+    { ngx_string("proxy_ssl_verify_depth"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_num_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth),
+      NULL },
 
+    { ngx_string("proxy_ssl_ca_certificate"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+      ngx_conf_set_str_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate),
+      NULL },
 #endif
 
       ngx_null_command
@@ -1697,6 +1717,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_t *cf)
     conf->upstream.intercept_errors = NGX_CONF_UNSET;
 #if (NGX_HTTP_SSL)
     conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
+    conf->upstream.ssl_verify_peer = NGX_CONF_UNSET;
+    conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT;
 #endif
 
     /* "proxy_cyclic_temp_file" is disabled */
@@ -1955,6 +1977,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
 #if (NGX_HTTP_SSL)
     ngx_conf_merge_value(conf->upstream.ssl_session_reuse,
                               prev->upstream.ssl_session_reuse, 1);
+    ngx_conf_merge_value(conf->upstream.ssl_verify_peer,
+                              prev->upstream.ssl_verify_peer, 0);
+    ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth,
+                              prev->upstream.ssl_verify_depth, 1);
+    ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate,
+                              prev->upstream.ssl_ca_certificate, "");
+
+    if (conf->upstream.ssl_verify_peer) {
+      if (conf->upstream.ssl_ca_certificate.len == 0) {
+        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+            "no \"proxy_ssl_ca_certificate\" is defined for "
+            "the \"proxy_ssl_verify_peer\" directive");
+
+        return NGX_CONF_ERROR;
+      }
+    }
 #endif
 
     ngx_conf_merge_value(conf->redirect, prev->redirect, 1);
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
index 29432dc..474cf0d 100644
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -1210,6 +1210,15 @@ ngx_http_upstream_ssl_init_connection(ngx_http_request_t *r,
 {
     ngx_int_t   rc;
 
+    if (ngx_ssl_set_verify_options(u->conf->ssl,
+          &u->conf->ssl_ca_certificate, u->conf->ssl_verify_depth)
+        != NGX_OK)
+    {
+      ngx_http_upstream_finalize_request(r, u,
+          NGX_HTTP_INTERNAL_SERVER_ERROR);
+      return;
+    }
+
     if (ngx_ssl_create_connection(u->conf->ssl, c,
                                   NGX_SSL_BUFFER|NGX_SSL_CLIENT)
         != NGX_OK)
@@ -4527,3 +4536,4 @@ ngx_http_upstream_init_main_conf(ngx_conf_t *cf, void *conf)
 
     return NGX_CONF_OK;
 }
+
diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h
index fa848c0..cc71ba9 100644
--- a/src/http/ngx_http_upstream.h
+++ b/src/http/ngx_http_upstream.h
@@ -177,6 +177,9 @@ typedef struct {
 #if (NGX_HTTP_SSL)
     ngx_ssl_t                       *ssl;
     ngx_flag_t                       ssl_session_reuse;
+    ngx_flag_t                       ssl_verify_peer;
+    ngx_uint_t                       ssl_verify_depth;
+    ngx_str_t                        ssl_ca_certificate;
 #endif
 
     ngx_str_t                        module;

On Wed, Sep 14, 2011 at 5:47 PM, W. Andrew Loe III  wrote:
> [...]
-------------- next part --------------
A non-text attachment was scrubbed...
Name: proxy_ssl_verify-1.1.3.patch
Type: application/octet-stream
Size: 8015 bytes
Desc: not available
URL: 

From appa at perusio.net  Fri Sep 16 12:25:47 2011
From: appa at perusio.net (António P. P. Almeida)
Date: Fri, 16 Sep 2011 13:25:47 +0100
Subject: Problems building 1.1.3 with keepalive.
Message-ID: <87litoy8qc.wl%appa@perusio.net>

I've tried building 1.1.3 using a bunch of patches provided by Maxim. 

They're here:  https://github.com/perusio/nginx-mdounin-patches

With these I was able to get a clean build of 1.1.2.

Applying the same set of patches to 1.1.3 doesn't raise any issues
when patching. What happens is that the build process fails when
compiling the ngx_http_fastcgi module.

This is the compiler error message:

     /ngx_http_fastcgi_module.c:1664:32: error: 'ngx_http_upstream_t' has no member named 'keepalive'

I'm using the separate module for ngx_http_upstream keepalive. It's
not in core.

There's a struct that is lacking a member related to the upstream
keepalive functionality.

Any idea what's going on? I know that "my" set of patches is
"uncommon". Nevertheless, I find it a bit strange that the build fails
with 1.1.3 while it succeeds with 1.1.2.

The upstream keepalive patches are taken from Maxim's patch queue
posted earlier this month on this list.

Thanks,
--- appa


From mdounin at mdounin.ru  Fri Sep 16 13:00:00 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 16 Sep 2011 17:00:00 +0400
Subject: [PATCH] Proxy SSL Verify
In-Reply-To: 
References: 
	<20110914001749.GN1137@mdounin.ru>
	
	
Message-ID: <20110916125959.GP1137@mdounin.ru>

Hello!

On Thu, Sep 15, 2011 at 12:23:28PM -0700, W. Andrew Loe III wrote:

> I have a patch working against nginx 1.1.3. I'm not entirely happy
> with having to set verification_failed on ngx_ssl_connection_t,
> however returning 0 from the verify callback did not stop processing
> as documented (http://www.openssl.org/docs/ssl/SSL_CTX_set_verify.html),
> so I capture the result and check it before the handshake is complete. If
> anyone has an alternative solution that is more elegant, I will gladly
> refactor.

[...]

> http://trac.nginx.org/nginx/ticket/13

Just a side note: flooding trac with patches doesn't really help.
It's not really possible to review patches there.  Just posting
them here is a much better idea, possibly linking the ticket to
mailing list discussions as appropriate.

[...awfully corrupted inline patch skipped, below is one from 
attachment; hope they match...]

> diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c
> index 259b1d8..05b49dd 100644
> --- a/src/event/ngx_event_openssl.c
> +++ b/src/event/ngx_event_openssl.c
> @@ -216,13 +216,10 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
>      return NGX_OK;
>  }
>  
> -
>  ngx_int_t

Please keep 2 blank lines between functions.  (I know this file
has recent style corruption after the ecdh patch; it will be fixed.)

> -ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
> +ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
>      ngx_int_t depth)
>  {
> -    STACK_OF(X509_NAME)  *list;
> -
>      SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_http_ssl_verify_callback);
>  
>      SSL_CTX_set_verify_depth(ssl->ctx, depth);
> @@ -231,10 +228,6 @@ ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
>          return NGX_OK;
>      }
>  
> -    if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) {
> -        return NGX_ERROR;
> -    }
> -

You actually need this for the peer certificate as well.
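For instance (just a sketch, and note it assumes the helper is given a
cf pointer, which the patch as posted doesn't do), the prefix resolution
could stay in the shared function so both the client certificate and the
upstream peer certificate paths get it:

    ngx_int_t
    ngx_ssl_set_verify_options(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
        ngx_int_t depth)
    {
        SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER,
                           ngx_http_ssl_verify_callback);

        SSL_CTX_set_verify_depth(ssl->ctx, depth);

        if (cert->len == 0) {
            return NGX_OK;
        }

        /* resolve the configured path against the prefix before
         * handing it to OpenSSL
         */
        if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) {
            return NGX_ERROR;
        }

        if (SSL_CTX_load_verify_locations(ssl->ctx, (char *) cert->data, NULL)
            == 0)
        {
            ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0,
                          "SSL_CTX_load_verify_locations(\"%s\") failed",
                          cert->data);
            return NGX_ERROR;
        }

        return NGX_OK;
    }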

>      if (SSL_CTX_load_verify_locations(ssl->ctx, (char *) cert->data, NULL)
>          == 0)
>      {
> @@ -244,6 +237,23 @@ ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
>          return NGX_ERROR;
>      }
>  
> +    return NGX_OK;
> +}
> +
> +ngx_int_t
> +ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
> +    ngx_int_t depth)
> +{
> +    STACK_OF(X509_NAME)  *list;
> +
> +    if (ngx_ssl_set_verify_options(ssl, cert, depth) != NGX_OK) {
> +        return NGX_ERROR;
> +    }
> +
> +    if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) {
> +        return NGX_ERROR;
> +    }
> +
>      list = SSL_load_client_CA_file((char *) cert->data);
>  
>      if (list == NULL) {
> @@ -313,11 +323,6 @@ ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *crl)
>  static int
>  ngx_http_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)
>  {
> -#if (NGX_DEBUG)
> -    char              *subject, *issuer;
> -    int                err, depth;
> -    X509              *cert;
> -    X509_NAME         *sname, *iname;
>      ngx_connection_t  *c;
>      ngx_ssl_conn_t    *ssl_conn;
>  
> @@ -326,6 +331,12 @@ ngx_http_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)
>  
>      c = ngx_ssl_get_connection(ssl_conn);
>  
> +#if (NGX_DEBUG)
> +    char              *subject, *issuer;
> +    int                err, depth;
> +    X509              *cert;
> +    X509_NAME         *sname, *iname;
> +
>      cert = X509_STORE_CTX_get_current_cert(x509_store);
>      err = X509_STORE_CTX_get_error(x509_store);
>      depth = X509_STORE_CTX_get_error_depth(x509_store);
> @@ -350,6 +361,13 @@ ngx_http_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store)
>      }
>  #endif
>  
> +    if (ok != 1)
> +    {
> +        ngx_ssl_error(NGX_LOG_EMERG, c->log, 0, "ngx_http_ssl_verify_callback failed");
> +        c->ssl->verification_failed = 1;
> +        return 0;
> +    }
> +
>      return 1;
>  }
>  
> @@ -575,6 +593,11 @@ ngx_ssl_handshake(ngx_connection_t *c)
>  
>      ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL_do_handshake: %d", n);
>  
> +    if (c->ssl->verification_failed != NGX_OK)
> +    {
> +      return NGX_ERROR;
> +    }
> +
>      if (n == 1) {
>  
>          if (ngx_handle_read_event(c->read, 0) != NGX_OK) {

You may want to avoid touching ngx_http_ssl_verify_callback() and 
ngx_ssl_handshake().  This will break client cert processing.

In normal https this is checked in ngx_http_process_request().
Appropriate checks should be done in the upstream case as well.

And you need verify checks in the upstream code anyway, as checking
the CN will require them.
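Roughly (a sketch only; ssl_verify is the flag proposed in this patch,
and the rest mirrors the client certificate check done in
ngx_http_process_request()), the upstream side would do something like
this in the SSL handshake callback, once c->ssl->handshaked is set and
before the request is sent:

    if (u->conf->ssl_verify) {
        long  rc;

        rc = SSL_get_verify_result(c->ssl->connection);

        if (rc != X509_V_OK) {
            ngx_log_error(NGX_LOG_ERR, c->log, 0,
                          "upstream SSL certificate verify error: (%l:%s)",
                          rc, X509_verify_cert_error_string(rc));

            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
            return;
        }

        /* a commonName check against the configured upstream name would
         * go here as well, e.g. via SSL_get_peer_certificate() and
         * X509_NAME_get_text_by_NID(..., NID_commonName, ...)
         */
    }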

Please also note that the code here has multiple style issues
("{" should be on the same line as "if" unless the condition is
multiline, wrong indentation).  This is irrelevant as the code is
wrong anyway; noted just to make sure the coding style in the next
iteration will be better.

> diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h
> index 33cab7b..b59baf9 100644
> --- a/src/event/ngx_event_openssl.h
> +++ b/src/event/ngx_event_openssl.h
> @@ -46,6 +46,8 @@ typedef struct {
>      unsigned                    buffer:1;
>      unsigned                    no_wait_shutdown:1;
>      unsigned                    no_send_shutdown:1;
> +
> +    ngx_int_t                   verification_failed;
>  } ngx_ssl_connection_t;

No, please.  See above.

>  
>  
> @@ -96,6 +98,8 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log);
>  ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data);
>  ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
>      ngx_str_t *cert, ngx_str_t *key);
> +ngx_int_t ngx_ssl_set_verify_options(ngx_ssl_t *ssl, ngx_str_t *cert,
> +    ngx_int_t depth);
>  ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
>      ngx_str_t *cert, ngx_int_t depth);
>  ngx_int_t ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *crl);
> diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
> index 902cfb8..834301e 100644
> --- a/src/http/modules/ngx_http_proxy_module.c
> +++ b/src/http/modules/ngx_http_proxy_module.c
> @@ -440,7 +440,27 @@ static ngx_command_t  ngx_http_proxy_commands[] = {
>        NGX_HTTP_LOC_CONF_OFFSET,
>        offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse),
>        NULL },
> +    
> +    { ngx_string("proxy_ssl_verify_peer"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_flag_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer),
> +      NULL },
> +
> +    { ngx_string("proxy_ssl_verify_depth"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_num_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_depth),
> +      NULL },
>  
> +    { ngx_string("proxy_ssl_ca_certificate"),
> +      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
> +      ngx_conf_set_str_slot,
> +      NGX_HTTP_LOC_CONF_OFFSET,
> +      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate),
> +      NULL },

As I already said, you may want to call this 
proxy_ssl_peer_certificate to make

    proxy_ssl_verify_peer
    proxy_ssl_peer_certificate

in line with

    ssl_verify_client
    ssl_client_certificate
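
That is, keeping everything else from the patch as posted and only
changing the directive names (a sketch; the conf struct members are
left as they are in the patch), the command table entries would read:

    { ngx_string("proxy_ssl_verify_peer"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
      ngx_conf_set_flag_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify_peer),
      NULL },

    { ngx_string("proxy_ssl_peer_certificate"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
      ngx_conf_set_str_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_ca_certificate),
      NULL },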

>  #endif
>  
>        ngx_null_command
> @@ -1697,6 +1717,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_t *cf)
>      conf->upstream.intercept_errors = NGX_CONF_UNSET;
>  #if (NGX_HTTP_SSL)
>      conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
> +    conf->upstream.ssl_verify_peer = NGX_CONF_UNSET;
> +    conf->upstream.ssl_verify_depth = NGX_CONF_UNSET_UINT;
>  #endif
>  
>      /* "proxy_cyclic_temp_file" is disabled */
> @@ -1955,6 +1977,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
>  #if (NGX_HTTP_SSL)
>      ngx_conf_merge_value(conf->upstream.ssl_session_reuse,
>                                prev->upstream.ssl_session_reuse, 1);
> +    ngx_conf_merge_value(conf->upstream.ssl_verify_peer,
> +                              prev->upstream.ssl_verify_peer, 0);
> +    ngx_conf_merge_uint_value(conf->upstream.ssl_verify_depth,
> +                              prev->upstream.ssl_verify_depth, 1);
> +    ngx_conf_merge_str_value(conf->upstream.ssl_ca_certificate,
> +                              prev->upstream.ssl_ca_certificate, "");
> +
> +    if (conf->upstream.ssl_verify_peer) {
> +      if (conf->upstream.ssl_ca_certificate.len == 0) {
> +        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
> +            "no \"proxy_ssl_ca_certificate\" is defined for "
> +            "the \"proxy_ssl_verify_peer\" directive");
> +
> +        return NGX_CONF_ERROR;
> +      }
> +    }

As I already said, no 2-space indentation, please.

>  #endif
>  
>      ngx_conf_merge_value(conf->redirect, prev->redirect, 1);
> diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
> index 29432dc..474cf0d 100644
> --- a/src/http/ngx_http_upstream.c
> +++ b/src/http/ngx_http_upstream.c
> @@ -1210,6 +1210,15 @@ ngx_http_upstream_ssl_init_connection(ngx_http_request_t *r,
>  {
>      ngx_int_t   rc;
>  
> +    if (ngx_ssl_set_verify_options(u->conf->ssl,
> +          &u->conf->ssl_ca_certificate, u->conf->ssl_verify_depth)
> +        != NGX_OK)
> +    {
> +      ngx_http_upstream_finalize_request(r, u,
> +          NGX_HTTP_INTERNAL_SERVER_ERROR);
> +      return;
> +    }
> +

You want this to be set during config parsing, not at runtime.
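E.g. (a sketch; this assumes the cf-taking variant of
ngx_ssl_set_verify_options() sketched above and the option names from
this patch), the natural spot is next to the existing ngx_ssl_create()
call for the proxy SSL context, so it runs once while the configuration
is parsed:

    /* in the proxy module's SSL setup, after ngx_ssl_create() succeeded */

    if (plcf->upstream.ssl_verify_peer) {

        if (ngx_ssl_set_verify_options(cf, plcf->upstream.ssl,
                                       &plcf->upstream.ssl_ca_certificate,
                                       plcf->upstream.ssl_verify_depth)
            != NGX_OK)
        {
            return NGX_ERROR;
        }
    }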

>      if (ngx_ssl_create_connection(u->conf->ssl, c,
>                                    NGX_SSL_BUFFER|NGX_SSL_CLIENT)
>          != NGX_OK)
> @@ -4527,3 +4536,4 @@ ngx_http_upstream_init_main_conf(ngx_conf_t *cf, void *conf)
>  
>      return NGX_CONF_OK;
>  }
> +

Unrelated whitespace change.

> diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h
> index fa848c0..cc71ba9 100644
> --- a/src/http/ngx_http_upstream.h
> +++ b/src/http/ngx_http_upstream.h
> @@ -177,6 +177,9 @@ typedef struct {
>  #if (NGX_HTTP_SSL)
>      ngx_ssl_t                       *ssl;
>      ngx_flag_t                       ssl_session_reuse;
> +    ngx_flag_t                       ssl_verify_peer;
> +    ngx_uint_t                       ssl_verify_depth;
> +    ngx_str_t                        ssl_ca_certificate;
>  #endif
>  
>      ngx_str_t                        module;

Maxim Dounin


From mdounin at mdounin.ru  Fri Sep 16 13:14:13 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 16 Sep 2011 17:14:13 +0400
Subject: Problems building 1.1.3 with keepalive.
In-Reply-To: <87litoy8qc.wl%appa@perusio.net>
References: <87litoy8qc.wl%appa@perusio.net>
Message-ID: <20110916131413.GQ1137@mdounin.ru>

Hello!

On Fri, Sep 16, 2011 at 01:25:47PM +0100, António P. P. Almeida wrote:

> I've tried building 1.1.3 using a bunch of patches provided by Maxim. 
> 
> They're here:  https://github.com/perusio/nginx-mdounin-patches
> 
> With these I was able to get a clean build of 1.1.2.
> 
> Applying the same set of patches to 1.1.3 doesn't raise any issues
> when patching. What happens is that the build process fails when
> compiling the ngx_http_fastcgi module.
> 
> This is the compiler error message:
> 
>      /ngx_http_fastcgi_module.c:1664:32: error: 'ngx_http_upstream_t' has no member named 'keepalive'

Looks like this patch was lost in transit:

http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001137.html

> I'm using the separate module for ngx_http_upstream keepalive. It's
> not in core.
> 
> There's a struct that is lacking a member related to the upstream
> keepalive functionality.
> 
> Any idea of what's going on? I know that "my" set of patches is
> "uncommon". Nevertheless I find it a bit strange that the build fails
> with 1.1.3 while is successful with 1.1.2.
> 
> The upstream keepalive patches are taken from Maxim's patch queue
> posted earlier this month on this list.

You may have better luck using cumulative patch from here:
http://nginx.org/patches/patch-nginx-keepalive-full-6.txt

Alternatively, just use svn trunk or wait several days for 1.1.4.  
Upstream keepalive patches were committed yesterday and will be 
available in next devel release.

Maxim Dounin


From mat999 at gmail.com  Fri Sep 16 13:23:09 2011
From: mat999 at gmail.com (SplitIce)
Date: Fri, 16 Sep 2011 23:23:09 +1000
Subject: Problems building 1.1.3 with keepalive.
In-Reply-To: <20110916131413.GQ1137@mdounin.ru>
References: <87litoy8qc.wl%appa@perusio.net> <20110916131413.GQ1137@mdounin.ru>
Message-ID: 

Yay, 1.1.4 will be an awesome release :)

On Fri, Sep 16, 2011 at 11:14 PM, Maxim Dounin  wrote:

> Hello!
>
> On Fri, Sep 16, 2011 at 01:25:47PM +0100, António P. P. Almeida wrote:
>
> > I've tried building 1.1.3 using a bunch of patches provided by Maxim.
> >
> > They're here:  https://github.com/perusio/nginx-mdounin-patches
> >
> > With these I was able to get a clean build of 1.1.2.
> >
> > Applying the same set of patches to 1.1.3 doesn't raise any issues
> > when patching. What happens is that the build process fails when
> > compiling the ngx_http_fastcgi module.
> >
> > This is the compiler error message:
> >
> >      /ngx_http_fastcgi_module.c:1664:32: error: 'ngx_http_upstream_t' has
> no member named 'keepalive'
>
> Looks like this patch was lost in transit:
>
> http://mailman.nginx.org/pipermail/nginx-devel/2011-September/001137.html
>
> > I'm using the separate module for ngx_http_upstream keepalive. It's
> > not in core.
> >
> > There's a struct that is lacking a member related to the upstream
> > keepalive functionality.
> >
> > Any idea of what's going on? I know that "my" set of patches is
> > "uncommon". Nevertheless I find it a bit strange that the build fails
> > with 1.1.3 while is successful with 1.1.2.
> >
> > The upstream keepalive patches are taken from Maxim's patch queue
> > posted earlier this month on this list.
>
> You may have better luck using cumulative patch from here:
> http://nginx.org/patches/patch-nginx-keepalive-full-6.txt
>
> Alternatively, just use svn trunk or wait several days for 1.1.4.
> Upstream keepalive patches were committed yesterday and will be
> available in next devel release.
>
> Maxim Dounin
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Fri Sep 16 15:52:47 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 16 Sep 2011 19:52:47 +0400
Subject: [Patch] proxy cache for 304 Not Modified
In-Reply-To: 
References: 
Message-ID: <20110916155247.GS1137@mdounin.ru>

Hello!

On Fri, Sep 16, 2011 at 01:54:04AM +0800, MagicBear wrote:

> Hello guys,
> I have written a module to make nginx support 304 responses to decrease bandwidth usage.
> Note: I am a newbie at nginx module development, so the module may
> have some problems. You are welcome to test it and report any problems to me.
> Note:
> I could not find a way to update an entry in nginx's internal cache without deleting
> the old one, so I reopen the cache file and write a new expiry
> header. Can anyone help me fix this problem?
> 
> You can download full patch file from here:
> http://m-b.cc/share/proxy_304.txt

See review below.

This is definitely a step in the right direction; re-checking cache
items should be supported.  Though I can't say I'm happy with the
patch.  Hope it will be improved. :)

> 
> # User MagicBear 
> Upstream:
> add $upstream_last_modified variant.
> add handler for 304 Unmodified.
> Proxy:
> change to send If-Modified-Since header.
> 
> diff -ruN a/http/modules/ngx_http_proxy_module.c
> b/http/modules/ngx_http_proxy_module.c
> --- a/http/modules/ngx_http_proxy_module.c      2011-09-15
> 22:23:03.284431407 +0800
> +++ b/http/modules/ngx_http_proxy_module.c      2011-09-16
> 01:41:44.654428632 +0800
> @@ -543,7 +543,7 @@
>      { ngx_string("Connection"), ngx_string("close") },
>      { ngx_string("Keep-Alive"), ngx_string("") },
>      { ngx_string("Expect"), ngx_string("") },
> -    { ngx_string("If-Modified-Since"), ngx_string("") },
> +    { ngx_string("If-Modified-Since"),
> ngx_string("$upstream_last_modified") },
>      { ngx_string("If-Unmodified-Since"), ngx_string("") },
>      { ngx_string("If-None-Match"), ngx_string("") },
>      { ngx_string("If-Match"), ngx_string("") },
> diff -ruN a/http/ngx_http_upstream.c b/http/ngx_http_upstream.c
> --- a/http/ngx_http_upstream.c  2011-09-15 22:23:03.284431407 +0800
> +++ b/http/ngx_http_upstream.c  2011-09-16 01:41:44.654428632 +0800
> @@ -16,6 +16,8 @@
>      ngx_http_upstream_t *u);
>  static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r,
>      ngx_http_variable_value_t *v, uintptr_t data);
> +static ngx_int_t ngx_http_upstream_last_modified(ngx_http_request_t *r,
> +    ngx_http_variable_value_t *v, uintptr_t data);
>  #endif
> 
>  static void ngx_http_upstream_init_request(ngx_http_request_t *r);
> @@ -342,6 +344,10 @@
>        ngx_http_upstream_cache_status, 0,
>        NGX_HTTP_VAR_NOCACHEABLE, 0 },
> 
> +    { ngx_string("upstream_last_modified"), NULL,
> +      ngx_http_upstream_last_modified, 0,
> +      NGX_HTTP_VAR_NOCACHEABLE, 0 },
> +
>  #endif
> 
>      { ngx_null_string, NULL, NULL, 0, 0, 0 }
> @@ -1618,6 +1624,80 @@
>              u->buffer.last = u->buffer.pos;
>          }
> 
> +#if (NGX_HTTP_CACHE)
> +

Not sure if it's appropriate place.  Probably 
ngx_http_upstream_test_next() would be better, near 
cache_use_stale processing.

BTW, please use "-p" switch in diff, it emits function names and 
thus makes review easier.

> +        if (u->cache_status == NGX_HTTP_CACHE_EXPIRED &&
> +                       u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED &&
> +                       ngx_http_file_cache_valid(u->conf->cache_valid,
> u->headers_in.status_n))
> +        {

The ngx_http_file_cache_valid() test seems to be incorrect (and 
completely unneeded), as you are going to preserve reply which is 
already in cache, not 304 reply.

(There are also multiple style issues in the patch.  I have to 
manually reformat it to make code readable for review.  Please 
make sure to follow nginx coding style with further postings.)

> +            ngx_int_t  rc;
> +
> +            rc = u->reinit_request(r);
> +
> +            if (rc == NGX_OK) {

You may want to invert this check to make code more readable.
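That is, something along these lines (sketch only), so the common path
stays at one indentation level:

    rc = u->reinit_request(r);

    if (rc != NGX_OK) {
        ngx_http_upstream_finalize_request(r, u, rc);
        return;
    }

    rc = ngx_http_upstream_cache_send(r, u);

    /* ... update r->cache->valid_sec and the cache header here ... */

    ngx_http_upstream_finalize_request(r, u, rc);
    return;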

> +                u->cache_status = NGX_HTTP_CACHE_BYPASS;

Setting cache_status to NGX_HTTP_CACHE_BYPASS looks misleading.  
Probably some other status should be introduced for this.

> +                rc = ngx_http_upstream_cache_send(r, u);
> +
> +                               time_t  now, valid;
> +
> +                               now = ngx_time();
> +
> +                               valid = r->cache->valid_sec;
> +
> +                               if (valid == 0) {

How can r->cache->valid_sec be non-zero here?  As far as I
understand, this should never happen.  Even if it does happen, it is
unwise to trust it.

> +                                       valid =
> ngx_http_file_cache_valid(u->conf->cache_valid,
> +
>                               u->headers_in.status_n);
> +                                       if (valid) {
> +                                               r->cache->valid_sec = now +
> valid;
> +                                       }
> +                               }
> +
> +                               if (valid) {
> +                                       r->cache->last_modified =
> r->headers_out.last_modified_time;
> +                                       r->cache->date = now;
> +                                       r->cache->body_start = (u_short)
> (u->buffer.pos - u->buffer.start);
> +
> +                                       // update Header
> +                                       ngx_http_file_cache_set_header(r,
> u->buffer.start);
> +
> +                                       ngx_log_debug1(NGX_LOG_DEBUG_HTTP,
> r->connection->log, 0,
> +
>  "update cache \"%s\" header to new expired." , r->cache->file.name.data);
> +
> +                                       // Reopen file via RW
> +                                       ngx_fd_t fd =
> ngx_open_file(r->cache->file.name.data, NGX_FILE_RDWR, NGX_FILE_OPEN, 0);
> +
> +                                       if (fd == NGX_INVALID_FILE) {
> +                                               ngx_log_error(NGX_LOG_CRIT,
> r->connection->log, ngx_errno,
> +
> ngx_open_file_n " \"%s\" failed", r->cache->file.name.data);
> +                                               return;
> +                                       }
> +
> +                                       // Write cache
> +                                       if (write(fd, u->buffer.start,
> sizeof(ngx_http_file_cache_header_t)) < 0)
> +                                       {
> +                                               ngx_log_error(NGX_LOG_CRIT,
> r->connection->log, ngx_errno,
> +
> "write proxy_cache \"%s\" failed", r->cache->file.name.data);
> +                                               return;
> +                                       }
> +
> +                                       if (ngx_close_file(fd) ==
> NGX_FILE_ERROR) {
> +                                               ngx_log_error(NGX_LOG_ALERT,
> r->connection->log, ngx_errno,
> +
> ngx_close_file_n " \"%s\" failed", r->cache->file.name.data);
> +                                       }
> +                                       ngx_log_debug1(NGX_LOG_DEBUG_HTTP,
> r->connection->log, 0,
> +
>  "update cache \"%s\" header to new expired done." ,
> r->cache->file.name.data);
> +                               } else {
> +                                       u->cacheable = 0;
> +                                       r->headers_out.last_modified_time =
> -1;
> +                               }

All this logic to update valid_sec in cache file should be 
abstracted into a function in ngx_http_file_cache.c.

It probably should just write valid_sec and nothing more.

> +            }
> +
> +            ngx_http_upstream_finalize_request(r, u, rc);
> +            return;
> +        }
> +
> +#endif
> +
>          if (ngx_http_upstream_test_next(r, u) == NGX_OK) {
>              return;
>          }
> @@ -4006,6 +4086,32 @@
> 
>      return NGX_OK;
>  }
> +
> +ngx_int_t
> +ngx_http_upstream_last_modified(ngx_http_request_t *r,
> +    ngx_http_variable_value_t *v, uintptr_t data)
> +{
> +    u_char *u;

Please don't call character pointers as "u".  This letter is 
generally used for r->upstream pointer.  Better to name it "p".

> +
> +    if (r->upstream == NULL || r->upstream->cache_status == 0 ||
> r->cache==NULL || r->cache->last_modified <= 0) {

There is no need to check r->cache if cache_status != 0.

I also believe that r->cache->last_modified == 0 is valid, though 
not sure if it may appear to be 0 unintentionally.

> +        v->not_found = 1;
> +        return NGX_OK;
> +    }
> +
> +    v->valid = 1;
> +    v->no_cacheable = 0;
> +    v->not_found = 0;
> +       u = ngx_pcalloc(r->pool, 30);

There is no need to use 30 here, 29 is enough...

> +    if (u == NULL) {
> +        return NGX_ERROR;
> +    }
> +
> +    v->len = 29;
> +       ngx_http_time(u, r->cache->last_modified);
> +    v->data = u;

...and please use sizeof("Mon, 28 Sep 1970 06:00:00 GMT") - 1 instead.  

Generic pattern may be found in ngx_http_variable_sent_last_modified():

    p = ngx_pnalloc(r->pool,
                    sizeof("Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT") - 1);
    if (p == NULL) {
        return NGX_ERROR;
    }

    v->len = ngx_http_time(p, r->headers_out.last_modified_time) - p;
    v->valid = 1;
    v->no_cacheable = 0;
    v->not_found = 0;
    v->data = p;

> +
> +    return NGX_OK;
> +}
> 
>  #endif
> 
> 
> MagicBear


Maxim Dounin


From jeremie.legrand at atos.net  Fri Sep 16 15:17:26 2011
From: jeremie.legrand at atos.net (Legrand Jérémie)
Date: Fri, 16 Sep 2011 17:17:26 +0200
Subject: Advise to develop a rights-control module
Message-ID: <6B62F3E8C02D3E40AB7DBD3D5F2647D929E654ED96@FRVDX100.fr01.awl.atosorigin.net>

Hi,

I need to develop a module that checks the rights of a user based on URI parameters.

If the user is allowed: I send the request to the backend server with proxy_pass.
If the user is not allowed: I need to generate a response body displaying information about the error.

This module should be like the access and auth_basic modules, run during NGX_HTTP_ACCESS_PHASE, but in this phase I can't send a response body.
I tried to declare it as a content handler but, as a consequence, it does not work with the proxy module, which is also a content handler.

What can I do to develop this kind of module?

Thanks in advance for any answer!

J.Legrand

________________________________


This e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Atos liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From appa at perusio.net  Fri Sep 16 17:26:48 2011
From: appa at perusio.net (António P. P. Almeida)
Date: Fri, 16 Sep 2011 18:26:48 +0100
Subject: Problems building 1.1.3 with keepalive.
In-Reply-To: <20110916131413.GQ1137@mdounin.ru>
References: <87litoy8qc.wl%appa@perusio.net> <20110916131413.GQ1137@mdounin.ru>
Message-ID: <87k498xusn.wl%appa@perusio.net>

On 16 Set 2011 14h14 WEST, mdounin at mdounin.ru wrote:

Hello Maxim,

> Hello!
>

> Looks like this patch was lost in transit:

It wasn't lost. I was able to build 1.1.2 without it. It gave a
conflict on the first hunk. I ended up editing the patch and removing
that first hunk. Now it works.

> Alternatively, just use svn trunk or wait several days for 1.1.4.  
> Upstream keepalive patches were committed yesterday and will be 
> available in next devel release.

I'll do that. In the meantime...
It's good news that the patch is now included in the official repo.

Thank you for your work on Nginx.

--- appa


From magicbearmo at gmail.com  Fri Sep 16 20:34:39 2011
From: magicbearmo at gmail.com (MagicBear)
Date: Sat, 17 Sep 2011 04:34:39 +0800
Subject: [Patch] proxy cache for 304 Not Modified
In-Reply-To: <20110916155247.GS1137@mdounin.ru>
References: 
	<20110916155247.GS1137@mdounin.ru>
Message-ID: 

Hello Maxim!

2011/9/16 Maxim Dounin 
>
> Hello!
>
> On Fri, Sep 16, 2011 at 01:54:04AM +0800, MagicBear wrote:
>
> > Hello guys,
> > I have written a module to make nginx support 304 responses to decrease bandwidth usage.
> > Note: I am a newbie at nginx module development, so the module may
> > have some problems. You are welcome to test it and report any problems to me.
> > Note:
> > I could not find a way to update an entry in nginx's internal cache without deleting
> > the old one, so I reopen the cache file and write a new expiry
> > header. Can anyone help me fix this problem?
> >
> > You can download full patch file from here:
> > http://m-b.cc/share/proxy_304.txt
>
> See review below.
>
> This is definitely a step in the right direction; re-checking cache
> items should be supported.  Though I can't say I'm happy with the
> patch.  Hope it will be improved. :)

Thanks for your advice.

>
> >
> > # User MagicBear 
> > Upstream:
> > add $upstream_last_modified variant.
> > add handler for 304 Unmodified.
> > Proxy:
> > change to send If-Modified-Since header.
> >
> > diff -ruN a/http/modules/ngx_http_proxy_module.c
> > b/http/modules/ngx_http_proxy_module.c
> > --- a/http/modules/ngx_http_proxy_module.c ? ? ?2011-09-15
> > 22:23:03.284431407 +0800
> > +++ b/http/modules/ngx_http_proxy_module.c ? ? ?2011-09-16
> > 01:41:44.654428632 +0800
> > @@ -543,7 +543,7 @@
> > ? ? ?{ ngx_string("Connection"), ngx_string("close") },
> > ? ? ?{ ngx_string("Keep-Alive"), ngx_string("") },
> > ? ? ?{ ngx_string("Expect"), ngx_string("") },
> > - ? ?{ ngx_string("If-Modified-Since"), ngx_string("") },
> > + ? ?{ ngx_string("If-Modified-Since"),
> > ngx_string("$upstream_last_modified") },
> > ? ? ?{ ngx_string("If-Unmodified-Since"), ngx_string("") },
> > ? ? ?{ ngx_string("If-None-Match"), ngx_string("") },
> > ? ? ?{ ngx_string("If-Match"), ngx_string("") },
> > diff -ruN a/http/ngx_http_upstream.c b/http/ngx_http_upstream.c
> > --- a/http/ngx_http_upstream.c ?2011-09-15 22:23:03.284431407 +0800
> > +++ b/http/ngx_http_upstream.c ?2011-09-16 01:41:44.654428632 +0800
> > @@ -16,6 +16,8 @@
> > ? ? ?ngx_http_upstream_t *u);
> > ?static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r,
> > ? ? ?ngx_http_variable_value_t *v, uintptr_t data);
> > +static ngx_int_t ngx_http_upstream_last_modified(ngx_http_request_t *r,
> > + ? ?ngx_http_variable_value_t *v, uintptr_t data);
> > ?#endif
> >
> > ?static void ngx_http_upstream_init_request(ngx_http_request_t *r);
> > @@ -342,6 +344,10 @@
> > ? ? ? ?ngx_http_upstream_cache_status, 0,
> > ? ? ? ?NGX_HTTP_VAR_NOCACHEABLE, 0 },
> >
> > + ? ?{ ngx_string("upstream_last_modified"), NULL,
> > + ? ? ?ngx_http_upstream_last_modified, 0,
> > + ? ? ?NGX_HTTP_VAR_NOCACHEABLE, 0 },
> > +
> > ?#endif
> >
> > ? ? ?{ ngx_null_string, NULL, NULL, 0, 0, 0 }
> > @@ -1618,6 +1624,80 @@
> > ? ? ? ? ? ? ?u->buffer.last = u->buffer.pos;
> > ? ? ? ? ?}
> >
> > +#if (NGX_HTTP_CACHE)
> > +
>
> Not sure if it's appropriate place.  Probably
> ngx_http_upstream_test_next() would be better, near
> cache_use_stale processing.

I have moved it inside that function.

>
> BTW, please use "-p" switch in diff, it emits function names and
> thus makes review easier.

For this patch I have enabled this switch.

>
> > + ? ? ? ?if (u->cache_status == NGX_HTTP_CACHE_EXPIRED &&
> > + ? ? ? ? ? ? ? ? ? ? ? u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED &&
> > + ? ? ? ? ? ? ? ? ? ? ? ngx_http_file_cache_valid(u->conf->cache_valid,
> > u->headers_in.status_n))
> > + ? ? ? ?{
>
> The ngx_http_file_cache_valid() test seems to be incorrect (and
> completely unneeded), as you are going to preserve reply which is
> already in cache, not 304 reply.
>

I reviewed the code; this check is completely unneeded, you are right. I
have now moved getting the valid time to here instead; I think that may be better?

> (There are also multiple style issues in the patch.  I have to
> manually reformat it to make code readable for review.  Please
> make sure to follow nginx coding style with further postings.)
>

I am sorry for that. I use notepad++ to edit the code, but I didn't
notice that notepad++ indents with \t by default while nginx uses 4
spaces, because they look the same in notepad++ and in the console.

> > + ? ? ? ? ? ?ngx_int_t ?rc;
> > +
> > + ? ? ? ? ? ?rc = u->reinit_request(r);
> > +
> > + ? ? ? ? ? ?if (rc == NGX_OK) {
>
> You may want to invert this check to make code more readable.
>
> > + ? ? ? ? ? ? ? ?u->cache_status = NGX_HTTP_CACHE_BYPASS;
>
> Setting cache_status to NGX_HTTP_CACHE_BYPASS looks misleading.
> Probably some other status should be introduced for this.
>
> > + ? ? ? ? ? ? ? ?rc = ngx_http_upstream_cache_send(r, u);
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? time_t ?now, valid;
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? now = ngx_time();
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? valid = r->cache->valid_sec;
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? if (valid == 0) {
>
> How can r->cache->valid_sec be non-zero here?  As far as I
> understand, this should never happen.  Even if it does happen, it is
> unwise to trust it.
>
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? valid =
> > ngx_http_file_cache_valid(u->conf->cache_valid,
> > +
> > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? u->headers_in.status_n);
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? if (valid) {
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? r->cache->valid_sec = now +
> > valid;
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? }
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? }
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? if (valid) {
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? r->cache->last_modified =
> > r->headers_out.last_modified_time;
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? r->cache->date = now;
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? r->cache->body_start = (u_short)
> > (u->buffer.pos - u->buffer.start);
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? // update Header
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ngx_http_file_cache_set_header(r,
> > u->buffer.start);
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ngx_log_debug1(NGX_LOG_DEBUG_HTTP,
> > r->connection->log, 0,
> > +
> > ?"update cache \"%s\" header to new expired." , r->cache->file.name.data);
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? // Reopen file via RW
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ngx_fd_t fd =
> > ngx_open_file(r->cache->file.name.data, NGX_FILE_RDWR, NGX_FILE_OPEN, 0);
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? if (fd == NGX_INVALID_FILE) {
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ngx_log_error(NGX_LOG_CRIT,
> > r->connection->log, ngx_errno,
> > +
> > ngx_open_file_n " \"%s\" failed", r->cache->file.name.data);
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? return;
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? }
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? // Write cache
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? if (write(fd, u->buffer.start,
> > sizeof(ngx_http_file_cache_header_t)) < 0)
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? {
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ngx_log_error(NGX_LOG_CRIT,
> > r->connection->log, ngx_errno,
> > +
> > "write proxy_cache \"%s\" failed", r->cache->file.name.data);
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? return;
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? }
> > +
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? if (ngx_close_file(fd) ==
> > NGX_FILE_ERROR) {
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ngx_log_error(NGX_LOG_ALERT,
> > r->connection->log, ngx_errno,
> > +
> > ngx_close_file_n " \"%s\" failed", r->cache->file.name.data);
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? }
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ngx_log_debug1(NGX_LOG_DEBUG_HTTP,
> > r->connection->log, 0,
> > +
> > ?"update cache \"%s\" header to new expired done." ,
> > r->cache->file.name.data);
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? } else {
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? u->cacheable = 0;
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? r->headers_out.last_modified_time =
> > -1;
> > + ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? }
>
> All this logic to update valid_sec in cache file should be
> abstracted into a function in ngx_http_file_cache.c.
>
> It probably should just write valid_sec and nothing more.
>

I have added a function ngx_http_file_cache_set_valid() in
ngx_http_file_cache.c now; it only updates the valid_sec field.

> > + ? ? ? ? ? ?}
> > +
> > + ? ? ? ? ? ?ngx_http_upstream_finalize_request(r, u, rc);
> > + ? ? ? ? ? ?return;
> > + ? ? ? ?}
> > +
> > +#endif
> > +
> > ? ? ? ? ?if (ngx_http_upstream_test_next(r, u) == NGX_OK) {
> > ? ? ? ? ? ? ?return;
> > ? ? ? ? ?}
> > @@ -4006,6 +4086,32 @@
> >
> > ? ? ?return NGX_OK;
> > ?}
> > +
> > +ngx_int_t
> > +ngx_http_upstream_last_modified(ngx_http_request_t *r,
> > + ? ?ngx_http_variable_value_t *v, uintptr_t data)
> > +{
> > + ? ?u_char *u;
>
> Please don't call character pointers as "u".  This letter is
> generally used for r->upstream pointer.  Better to name it "p".
>

I have changed this to p, thanks for your advice.

> > +
> > + ? ?if (r->upstream == NULL || r->upstream->cache_status == 0 ||
> > r->cache==NULL || r->cache->last_modified <= 0) {
>
> There is no need to check r->cache if cache_status != 0.
>
> I also believe that r->cache->last_modified == 0 is valid, though
> not sure if it may appear to be 0 unintentionally.
>
> > + ? ? ? ?v->not_found = 1;
> > + ? ? ? ?return NGX_OK;
> > + ? ?}
> > +
> > + ? ?v->valid = 1;
> > + ? ?v->no_cacheable = 0;
> > + ? ?v->not_found = 0;
> > + ? ? ? u = ngx_pcalloc(r->pool, 30);
>
> There is no need to use 30 here, 29 is enough...
>
> > + ? ?if (u == NULL) {
> > + ? ? ? ?return NGX_ERROR;
> > + ? ?}
> > +
> > + ? ?v->len = 29;
> > + ? ? ? ngx_http_time(u, r->cache->last_modified);
> > + ? ?v->data = u;
>
> ...and please use sizeof("Mon, 28 Sep 1970 06:00:00 GMT") - 1 instead.
>
> Generic pattern may be found in ngx_http_variable_sent_last_modified():
>
>     p = ngx_pnalloc(r->pool,
>                     sizeof("Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT") - 1);
>     if (p == NULL) {
>         return NGX_ERROR;
>     }
>
>     v->len = ngx_http_time(p, r->headers_out.last_modified_time) - p;
>     v->valid = 1;
>     v->no_cacheable = 0;
>     v->not_found = 0;
>     v->data = p;
>

And I have changed the code to this.

> > +
> > + ? ?return NGX_OK;
> > +}
> >
> > ?#endif
> >
> >
> > MagicBear
>
>
> Maxim Dounin
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel


Here is the new patch
full file at: http://m-b.cc/share/patch-nginx-proxy-304.txt
_______________________________________________


# User MagicBear 
Upstream:
add $upstream_last_modified variant.
add handler for 304 Unmodified.
Proxy:
change to send If-Modified-Since header.

TODO:
change write TO not block IO.

diff -ruNp a/src/http/modules/ngx_http_proxy_module.c nginx-1.1.3/src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c  2011-09-16 02:13:16.274428192 +0800
+++ nginx-1.1.3/src/http/modules/ngx_http_proxy_module.c  2011-09-16 02:13:57.544428180 +0800
@@ -543,7 +543,7 @@ static ngx_keyval_t  ngx_http_proxy_cach
     { ngx_string("Connection"), ngx_string("close") },
     { ngx_string("Keep-Alive"), ngx_string("") },
     { ngx_string("Expect"), ngx_string("") },
-    { ngx_string("If-Modified-Since"), ngx_string("") },
+    { ngx_string("If-Modified-Since"), ngx_string("$upstream_last_modified") },
     { ngx_string("If-Unmodified-Since"), ngx_string("") },
     { ngx_string("If-None-Match"), ngx_string("") },
     { ngx_string("If-Match"), ngx_string("") },
diff -ruNp a/src/http/ngx_http_cache.h nginx-1.1.3/src/http/ngx_http_cache.h
--- a/src/http/ngx_http_cache.h 2011-07-29 23:09:02.000000000 +0800
+++ nginx-1.1.3/src/http/ngx_http_cache.h       2011-09-17 04:08:27.000000000 +0800
@@ -133,6 +133,7 @@ ngx_int_t ngx_http_file_cache_create(ngx
 void ngx_http_file_cache_create_key(ngx_http_request_t *r);
 ngx_int_t ngx_http_file_cache_open(ngx_http_request_t *r);
 void ngx_http_file_cache_set_header(ngx_http_request_t *r, u_char *buf);
+void ngx_http_file_cache_set_valid(ngx_http_request_t *r);
 void ngx_http_file_cache_update(ngx_http_request_t *r, ngx_temp_file_t *tf);
 ngx_int_t ngx_http_cache_send(ngx_http_request_t *);
 void ngx_http_file_cache_free(ngx_http_cache_t *c, ngx_temp_file_t *tf);
diff -ruNp a/src/http/ngx_http_file_cache.c nginx-1.1.3/src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c    2011-08-26 01:29:34.000000000 +0800
+++ nginx-1.1.3/src/http/ngx_http_file_cache.c  2011-09-17 04:25:36.000000000 +0800
@@ -765,6 +765,38 @@ ngx_http_file_cache_set_header(ngx_http_
     *p = LF;
 }

+void
+ngx_http_file_cache_set_valid(ngx_http_request_t *r)
+{
+    ngx_file_t  file;
+
+    ngx_memzero(&file, sizeof(ngx_file_t));
+
+    file.name = r->cache->file.name;
+    file.log = r->connection->log;
+
+    file.fd = ngx_open_file(file.name.data, NGX_FILE_RDWR,
+                            NGX_FILE_OPEN, NGX_FILE_DEFAULT_ACCESS);
+
+    if (file.fd == NGX_INVALID_FILE) {
+        ngx_log_error(NGX_LOG_EMERG, r->connection->log, ngx_errno,
+                      ngx_open_file_n " \"%s\" failed", r->cache->file.name.data);
+        return;
+    }
+
+    if (ngx_write_file(&file, (u_char *) &r->cache->valid_sec, sizeof(r->cache->valid_sec),
+                       offsetof(ngx_http_file_cache_header_t, valid_sec)) == NGX_ERROR)
+    {
+        ngx_log_error(NGX_LOG_EMERG, r->connection->log, ngx_errno,
+                      "write proxy_cache \"%s\" failed", r->cache->file.name.data);
+        return;
+    }
+
+    if (ngx_close_file(file.fd) == NGX_FILE_ERROR) {
+        ngx_log_error(NGX_LOG_ALERT, r->connection->log, ngx_errno,
+                      ngx_close_file_n " \"%s\" failed", r->cache->file.name.data);
+    }
+}
+

 void
 ngx_http_file_cache_update(ngx_http_request_t *r, ngx_temp_file_t *tf)
diff -ruNp a/src/http/ngx_http_upstream.c nginx-1.1.3/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c      2011-09-16 02:13:16.274428192 +0800
+++ nginx-1.1.3/src/http/ngx_http_upstream.c    2011-09-17 04:23:02.000000000 +0800
@@ -16,6 +16,8 @@ static ngx_int_t ngx_http_upstream_cache
     ngx_http_upstream_t *u);
 static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r,
     ngx_http_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_http_upstream_last_modified(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data);
 #endif

 static void ngx_http_upstream_init_request(ngx_http_request_t *r);
@@ -342,6 +344,10 @@ static ngx_http_variable_t  ngx_http_ups
       ngx_http_upstream_cache_status, 0,
       NGX_HTTP_VAR_NOCACHEABLE, 0 },

+    { ngx_string("upstream_last_modified"), NULL,
+      ngx_http_upstream_last_modified, 0,
+      NGX_HTTP_VAR_NOCACHEABLE, 0 },
+
 #endif

     { ngx_null_string, NULL, NULL, 0, 0, 0 }
@@ -1680,6 +1686,31 @@ ngx_http_upstream_test_next(ngx_http_req
     ngx_uint_t                 status;
     ngx_http_upstream_next_t  *un;

+#if (NGX_HTTP_CACHE)
+    time_t  valid;
+
+    if (u->cache_status == NGX_HTTP_CACHE_EXPIRED &&
+        u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED &&
+        0 != (valid = ngx_http_file_cache_valid(u->conf->cache_valid,
+                                                u->headers_in.status_n)))
+    {
+        ngx_int_t  rc;
+
+        rc = u->reinit_request(r);
+
+        if (rc == NGX_OK) {
+            u->cache_status = NGX_HTTP_CACHE_UPDATING;
+            rc = ngx_http_upstream_cache_send(r, u);
+
+            r->cache->valid_sec = ngx_time() + valid;
+            ngx_http_file_cache_set_valid(r);
+        }
+
+        ngx_http_upstream_finalize_request(r, u, rc);
+        return NGX_OK;
+    }
+
+#endif
+
     status = u->headers_in.status_n;

     for (un = ngx_http_upstream_next_errors; un->status; un++) {
@@ -4006,6 +4037,32 @@ ngx_http_upstream_cache_status(ngx_http_

     return NGX_OK;
 }
+
+ngx_int_t
+ngx_http_upstream_last_modified(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data)
+{
+    u_char *p;
+
+    if (r->upstream == NULL || r->upstream->cache_status == 0) {
+        v->not_found = 1;
+        return NGX_OK;
+    }
+
+    p = ngx_pcalloc(r->pool,
+                sizeof("Mon, 28 Sep 1970 06:00:00 GMT") - 1);
+    if (p == NULL) {
+        return NGX_ERROR;
+    }
+
+    v->len = ngx_http_time(p, r->cache->last_modified) - p;
+    v->valid = 1;
+    v->no_cacheable = 0;
+    v->not_found = 0;
+    v->data = p;
+
+    return NGX_OK;
+}

 #endif



MagicBear


From orz at loli.my  Fri Sep 16 20:40:37 2011
From: orz at loli.my (=?UTF-8?B?44OT44Oq44OT44Oq4oWk?=)
Date: Sat, 17 Sep 2011 04:40:37 +0800
Subject: Advise to develop a rights-control module
In-Reply-To: <6B62F3E8C02D3E40AB7DBD3D5F2647D929E654ED96@FRVDX100.fr01.awl.atosorigin.net>
References: <6B62F3E8C02D3E40AB7DBD3D5F2647D929E654ED96@FRVDX100.fr01.awl.atosorigin.net>
Message-ID: 

I think using
proxy_intercept_errors on;
and
error_page
may work.

2011/9/16 Legrand Jérémie :
> Hi,
>
>
>
> I need to develop a module that ?check the rights of an user regarding to
> URI parameters.
>
>
>
> User is allowed : I send the request to backend server with proxy_pass
>
> User is not allowed : I need to generate a body response displaying
> information about the error.
>
>
>
> This module should be like access and auth_basic module launched during the
> NGX_HTTP_ACCESS_PHASE but in this phase I can't send a body response.
>
> I tried to declare it has a content handler but, in consequence, it does not
> work with the proxy module which is also a content handler.
>
>
>
> What can I do to develop this kind of module ?
>
>
>
> Thanks by advance for any answer !
>
>
>
> J.Legrand
>
> ________________________________
>
> This e-mail and the documents attached are confidential and intended solely
> for the addressee; it may also be privileged. If you receive this e-mail in
> error, please notify the sender immediately and destroy it. As its integrity
> cannot be secured on the Internet, the Atos liability cannot be triggered
> for the message content. Although the sender endeavours to maintain a
> computer virus-free network, the sender does not warrant that this
> transmission is virus-free and will not be liable for any damages resulting
> from any virus transmitted.
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>


From agentzh at gmail.com  Sat Sep 17 05:48:50 2011
From: agentzh at gmail.com (agentzh)
Date: Sat, 17 Sep 2011 13:48:50 +0800
Subject: Advise to develop a rights-control module
In-Reply-To: <6B62F3E8C02D3E40AB7DBD3D5F2647D929E654ED96@FRVDX100.fr01.awl.atosorigin.net>
References: <6B62F3E8C02D3E40AB7DBD3D5F2647D929E654ED96@FRVDX100.fr01.awl.atosorigin.net>
Message-ID: 

On Fri, Sep 16, 2011 at 11:17 PM, Legrand Jérémie
 wrote:
> I need to develop a module that ?check the rights of an user regarding to
> URI parameters.
> User is allowed : I send the request to backend server with proxy_pass
>
> User is not allowed : I need to generate a body response displaying
> information about the error.
>

I think this is a perfect use case for the ngx_lua module
(http://wiki.nginx.org/HttpLuaModule ). See the following example:

    location / {
        access_by_lua '
            if ngx.var.arg_foo == "BAD" then
                ngx.status = 403
                ngx.print("you are not allowed due to bad foo param:
", ngx.var.arg_foo)
                ngx.exit(ngx.HTTP_OK)
            end
        ';

        proxy_pass http://...;
    }

We first check whether the URI parameter "foo" equals "BAD"; if so, we
just emit a 403 error page with a custom response body and exit request
processing altogether. Otherwise, we just quit the access phase and
continue to proxy_pass as usual. If your validation logic is so
complicated that it must be done in C, then you can write a simple Lua
C module (or just use LuaJIT's excellent FFI feature).

If you insist on rolling out your own Nginx C module, you can just
take a look at how ngx_lua handles this behind the scenes.
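For reference, a bare-bones access phase handler that does the same
thing in C looks roughly like this (just a sketch:
ngx_http_myacl_allowed() stands in for your actual rights check, and
the code that registers the handler at NGX_HTTP_ACCESS_PHASE is
omitted):

    static u_char  ngx_http_myacl_denied[] = "you are not allowed\n";

    static ngx_int_t
    ngx_http_myacl_handler(ngx_http_request_t *r)
    {
        ngx_int_t     rc;
        ngx_buf_t    *b;
        ngx_chain_t   out;

        if (ngx_http_myacl_allowed(r)) {    /* hypothetical rights check */
            return NGX_DECLINED;            /* fall through to proxy_pass */
        }

        r->headers_out.status = NGX_HTTP_FORBIDDEN;
        ngx_str_set(&r->headers_out.content_type, "text/plain");
        r->headers_out.content_type_len = r->headers_out.content_type.len;
        r->headers_out.content_length_n = sizeof(ngx_http_myacl_denied) - 1;

        rc = ngx_http_send_header(r);

        if (rc == NGX_ERROR || rc > NGX_OK) {
            return rc;
        }

        b = ngx_calloc_buf(r->pool);
        if (b == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        b->pos = ngx_http_myacl_denied;
        b->last = b->pos + sizeof(ngx_http_myacl_denied) - 1;
        b->memory = 1;
        b->last_buf = 1;

        out.buf = b;
        out.next = NULL;

        rc = ngx_http_output_filter(r, &out);

        if (rc == NGX_ERROR || rc > NGX_OK) {
            return rc;
        }

        /* the header is already out, so don't let later phases generate
         * another response; the phase checker will finalize the request
         */
        return NGX_HTTP_OK;
    }

This is essentially what the access_by_lua snippet above ends up doing
for you.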

>
> This module should be like access and auth_basic module launched during the
> NGX_HTTP_ACCESS_PHASE but in this phase I can?t send a body response.
>

No, you can certainly send a response and short-circuit request
processing in the access phase, as demonstrated above :)

Regards,
-agentzh


From valery+nginxen at grid.net.ru  Sun Sep 18 10:06:05 2011
From: valery+nginxen at grid.net.ru (Valery Kholodkov)
Date: Sun, 18 Sep 2011 12:06:05 +0200
Subject: Advise to develop a rights-control module
In-Reply-To: <6B62F3E8C02D3E40AB7DBD3D5F2647D929E654ED96@FRVDX100.fr01.awl.atosorigin.net>
References: <6B62F3E8C02D3E40AB7DBD3D5F2647D929E654ED96@FRVDX100.fr01.awl.atosorigin.net>
Message-ID: <4E75C28D.8050503@grid.net.ru>

It should be possible. Look at how static module does it.

Legrand Jérémie wrote:
> Hi,
> 
>  
> 
> I need to develop a module that  check the rights of an user regarding 
> to URI parameters.
> 
>  
> 
> User is allowed : I send the request to backend server with proxy_pass
> 
> User is not allowed : I need to generate a body response displaying 
> information about the error.
> 
>  
> 
> This module should be like access and auth_basic module launched during 
> the NGX_HTTP_ACCESS_PHASE but in this phase I can't send a body response.
> 
> I tried to declare it has a content handler but, in consequence, it does 
> not work with the proxy module which is also a content handler.
> 
>  
> 
> What can I do to develop this kind of module ?
> 
>  
> 
> Thanks by advance for any answer !
> 
>  
> 
> J.Legrand
> 
> 
> ------------------------------------------------------------------------
> 
> 
> This e-mail and the documents attached are confidential and intended 
> solely for the addressee; it may also be privileged. If you receive this 
> e-mail in error, please notify the sender immediately and destroy it. As 
> its integrity cannot be secured on the Internet, the Atos liability 
> cannot be triggered for the message content. Although the sender 
> endeavours to maintain a computer virus-free network, the sender does 
> not warrant that this transmission is virus-free and will not be liable 
> for any damages resulting from any virus transmitted.
> 
> 
> ------------------------------------------------------------------------
> 
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel


-- 
Best regards,
Valery Kholodkov


From mdounin at mdounin.ru  Mon Sep 19 06:47:16 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 19 Sep 2011 10:47:16 +0400
Subject: [PATCH 0 of 4] fixup patches for trunk
Message-ID: 

Hello!

Igor, please review, I would like to commit these patches 
before 1.1.4.

Maxim Dounin


From mdounin at mdounin.ru  Mon Sep 19 06:47:17 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 19 Sep 2011 10:47:17 +0400
Subject: [PATCH 1 of 4] Fix of separate pool for upstream connections
In-Reply-To: 
References: 
Message-ID: <187de3097048531a3bd3.1316414837@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1316368092 -14400
# Node ID 187de3097048531a3bd3d7a84a40657d1f0df216
# Parent  61039cdc036dce3956ef8fe91852bab795492222
Fix of separate pool for upstream connections.

The pool may not be created if the connection was created but rejected in the
connect() call.  Make sure to check that it is there before trying to destroy it.

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2922,7 +2922,10 @@ ngx_http_upstream_next(ngx_http_request_
         }
 #endif
 
-        ngx_destroy_pool(u->peer.connection->pool);
+        if (u->peer.connection->pool) {
+            ngx_destroy_pool(u->peer.connection->pool);
+        }
+
         ngx_close_connection(u->peer.connection);
     }
 
@@ -3017,7 +3020,10 @@ ngx_http_upstream_finalize_request(ngx_h
                        "close http upstream connection: %d",
                        u->peer.connection->fd);
 
-        ngx_destroy_pool(u->peer.connection->pool);
+        if (u->peer.connection->pool) {
+            ngx_destroy_pool(u->peer.connection->pool);
+        }
+
         ngx_close_connection(u->peer.connection);
     }
 


From mdounin at mdounin.ru  Mon Sep 19 06:47:18 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 19 Sep 2011 10:47:18 +0400
Subject: [PATCH 2 of 4] Fix of cpu hog in event pipe
In-Reply-To: 
References: 
Message-ID: 

# HG changeset patch
# User Maxim Dounin 
# Date 1316368093 -14400
# Node ID e71db1db1a002ef34d6f6b24cd1c2059f8df58f5
# Parent  187de3097048531a3bd3d7a84a40657d1f0df216
Fix of cpu hog in event pipe.

If the client closed the connection in ngx_event_pipe_write_to_downstream(),
buffers in the "out" chain were lost.  This caused a cpu hog if all available
buffers were in the "out" chain.  The fix is to call ngx_chain_update_chains()
before checking the return code of the output filter, to avoid losing buffers
in the "out" chain.

Note that this situation (all available buffers in the "out" chain) isn't
normal; it should be prevented by the busy buffers limit.  Though right now it
may happen with complex protocols like fastcgi.  This should be addressed
separately.

diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c
--- a/src/event/ngx_event_pipe.c
+++ b/src/event/ngx_event_pipe.c
@@ -656,13 +656,13 @@ ngx_event_pipe_write_to_downstream(ngx_e
 
         rc = p->output_filter(p->output_ctx, out);
 
+        ngx_chain_update_chains(p->pool, &p->free, &p->busy, &out, p->tag);
+
         if (rc == NGX_ERROR) {
             p->downstream_error = 1;
             return ngx_event_pipe_drain_chains(p);
         }
 
-        ngx_chain_update_chains(p->pool, &p->free, &p->busy, &out, p->tag);
-
         for (cl = p->free; cl; cl = cl->next) {
 
             if (cl->buf->temp_file) {


From mdounin at mdounin.ru  Mon Sep 19 06:47:19 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 19 Sep 2011 10:47:19 +0400
Subject: [PATCH 3 of 4] Fixed loss of chain links in fastcgi module
In-Reply-To: 
References: 
Message-ID: <2bd42223394dff4bddc1.1316414839@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1316387068 -14400
# Node ID 2bd42223394dff4bddc10c46e331705539749d5d
# Parent  e71db1db1a002ef34d6f6b24cd1c2059f8df58f5
Fixed loss of chain links in fastcgi module.

diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c
--- a/src/http/modules/ngx_http_fastcgi_module.c
+++ b/src/http/modules/ngx_http_fastcgi_module.c
@@ -1744,8 +1744,10 @@ ngx_http_fastcgi_input_filter(ngx_event_
         }
 
         if (p->free) {
-            b = p->free->buf;
-            p->free = p->free->next;
+            cl = p->free;
+            b = cl->buf;
+            p->free = cl->next;
+            ngx_free_chain(p->pool, cl);
 
         } else {
             b = ngx_alloc_buf(p->pool);


From mdounin at mdounin.ru  Mon Sep 19 06:47:20 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 19 Sep 2011 10:47:20 +0400
Subject: [PATCH 4 of 4] Fixed loss of chain links in
	ngx_event_pipe_read_upstream()
In-Reply-To: 
References: 
Message-ID: <2856ee8bc4a4183355ec.1316414840@vm-bsd.mdounin.ru>

# HG changeset patch
# User Maxim Dounin 
# Date 1316414176 -14400
# Node ID 2856ee8bc4a4183355ec13055a1a9bd54927b59c
# Parent  2bd42223394dff4bddc10c46e331705539749d5d
Fixed loss of chain links in ngx_event_pipe_read_upstream().

diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c
--- a/src/event/ngx_event_pipe.c
+++ b/src/event/ngx_event_pipe.c
@@ -409,6 +409,7 @@ ngx_event_pipe_read_upstream(ngx_event_p
             }
 
             p->free_raw_bufs = cl->next;
+            ngx_free_chain(p->pool, cl);
         }
     }
 


From mdounin at mdounin.ru  Mon Sep 19 10:47:36 2011
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 19 Sep 2011 14:47:36 +0400
Subject: [Patch] proxy cache for 304 Not Modified
In-Reply-To: 
References: 
	<20110916155247.GS1137@mdounin.ru>
	
Message-ID: <20110919104736.GX1137@mdounin.ru>

Hello!

On Sat, Sep 17, 2011 at 04:34:39AM +0800, MagicBear wrote:

> Hello Maxim!
> 
> 2011/9/16 Maxim Dounin 
> >
> > Hello!
> >
> > On Fri, Sep 16, 2011 at 01:54:04AM +0800, ????? wrote:
> >
> > > Hello guys,
> > > I have written a module to make nginx support 304 to decrease bandwidth usage.
> > > Note: I am a newbie at nginx module development, so the above module may
> > > have some problems. You are welcome to test it and report any problems to me.
> > > Note:
> > > I could not find a way to update the nginx internal cache without deleting
> > > the old cache entry, so I reopen the cache file to write a new expire
> > > header. Can anyone help me to fix this problem?
> > >
> > > You can download full patch file from here:
> > > http://m-b.cc/share/proxy_304.txt
> >
> > See review below.
> >
> > This is definitely a step in the right direction; re-checking cache
> > items should be supported.  Though I can't say I'm happy with the
> > patch.  Hope it will be improved. :)
> 
> Thanks for your advice.

[...]

> > > +        if (u->cache_status == NGX_HTTP_CACHE_EXPIRED &&
> > > +                       u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED &&
> > > +                       ngx_http_file_cache_valid(u->conf->cache_valid,
> > > u->headers_in.status_n))
> > > +        {
> >
> > The ngx_http_file_cache_valid() test seems to be incorrect (and
> > completely unneeded), as you are going to preserve the reply which is
> > already in cache, not the 304 reply.
> >
> 
> I reviewed the code; this check is completely unneeded, you are right.
> Now I have moved getting the valid time here; I think that may be better?

No, the new code is still incorrect.  It still uses the validity 
time for the 304 reply, not for the original reply as stored in cache.

Consider the following config:

    proxy_cache ...
    proxy_cache_valid 200 1h;
    proxy_cache_ignore_headers Expires Cache-Control;

We have a 200 reply cached, tried to validate it with an 
If-Modified-Since request, and got a 304.

The ngx_http_file_cache_valid() call here will be made for status_n == 304 
and will return 0, as there is no validity time configured for 304 
replies.  But we are actually caching the 200 reply, the one we 
originally had in cache, so we should use 1h here.
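
To illustrate, a rough sketch of the intended lookup (not a drop-in 
fix; "cached_status" is hypothetical and would have to come from the 
entry already stored in cache, and I'm leaving aside the 
r->cache->valid_sec check discussed below):

    /* use the status of the reply that stays in the cache (200 in the
     * example above), not the 304 we just got from upstream */
    valid = ngx_http_file_cache_valid(u->conf->cache_valid, cached_status);

    if (valid) {
        r->cache->valid_sec = ngx_time() + valid;
    }

With the config above this yields ngx_time() + 1h, which is what we want.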

[...]

> > > +                rc = ngx_http_upstream_cache_send(r, u);
> > > +
> > > +                               time_t  now, valid;
> > > +
> > > +                               now = ngx_time();
> > > +
> > > +                               valid = r->cache->valid_sec;
> > > +
> > > +                               if (valid == 0) {
> >
> > How can r->cache->valid_sec be non-zero here?  As far as I
> > understand, this should never happen.  Even if it does happen, it is
> > unwise to trust it.

It looks like I was completely wrong here: r->cache->valid_sec is 
set by ngx_http_upstream_process_cache_control() and
ngx_http_upstream_process_expires().  This check should be 
preserved.

[...]

> Here is the new patch
> full file at: http://m-b.cc/share/patch-nginx-proxy-304.txt

Thank you, this one looks much better.  See below for more 
comments.

> _______________________________________________
> 
> 
> # User MagicBear 
> Upstream:
> add $upstream_last_modified variable.
> add handler for 304 Not Modified.
> Proxy:
> change to send If-Modified-Since header.
> 
> TODO:
> change write to not block IO.
> 
> diff -ruNp a/src/http/modules/ngx_http_proxy_module.c nginx-1.1.3/src/http/modules/ngx_http_proxy_module.c
> --- a/src/http/modules/ngx_http_proxy_module.c  2011-09-16 02:13:16.274428192 +0800
> +++ nginx-1.1.3/src/http/modules/ngx_http_proxy_module.c  2011-09-16 02:13:57.544428180 +0800
> @@ -543,7 +543,7 @@ static ngx_keyval_t  ngx_http_proxy_cach
>      { ngx_string("Connection"), ngx_string("close") },
>      { ngx_string("Keep-Alive"), ngx_string("") },
>      { ngx_string("Expect"), ngx_string("") },
> -    { ngx_string("If-Modified-Since"), ngx_string("") },
> +    { ngx_string("If-Modified-Since"), ngx_string("$upstream_last_modified") },
>      { ngx_string("If-Unmodified-Since"), ngx_string("") },
>      { ngx_string("If-None-Match"), ngx_string("") },
>      { ngx_string("If-Match"), ngx_string("") },
> diff -ruNp a/src/http/ngx_http_cache.h nginx-1.1.3/src/http/ngx_http_cache.h
> --- a/src/http/ngx_http_cache.h 2011-07-29 23:09:02.000000000 +0800
> +++ nginx-1.1.3/src/http/ngx_http_cache.h       2011-09-17 04:08:27.000000000 +0800
> @@ -133,6 +133,7 @@ ngx_int_t ngx_http_file_cache_create(ngx
>  void ngx_http_file_cache_create_key(ngx_http_request_t *r);
>  ngx_int_t ngx_http_file_cache_open(ngx_http_request_t *r);
>  void ngx_http_file_cache_set_header(ngx_http_request_t *r, u_char *buf);
> +void ngx_http_file_cache_set_valid(ngx_http_request_t *r);
>  void ngx_http_file_cache_update(ngx_http_request_t *r, ngx_temp_file_t *tf);
>  ngx_int_t ngx_http_cache_send(ngx_http_request_t *);
>  void ngx_http_file_cache_free(ngx_http_cache_t *c, ngx_temp_file_t *tf);
> diff -ruNp a/src/http/ngx_http_file_cache.c nginx-1.1.3/src/http/ngx_http_file_cache.c
> --- a/src/http/ngx_http_file_cache.c    2011-08-26 01:29:34.000000000 +0800
> +++ nginx-1.1.3/src/http/ngx_http_file_cache.c  2011-09-17 04:25:36.000000000 +0800
> @@ -765,6 +765,38 @@ ngx_http_file_cache_set_header(ngx_http_
>      *p = LF;
>  }
> 
> +void
> +ngx_http_file_cache_set_valid(ngx_http_request_t *r)

Two blank lines between functions, please.

> +{
> +    ngx_file_t  file;
> +
> +    ngx_memzero(&file, sizeof(ngx_file_t));
> +
> +    file.name = r->cache->file.name;
> +    file.log = r->connection->log;
> +
> +    file.fd = ngx_open_file(file.name.data, NGX_FILE_RDWR,
> +                            NGX_FILE_OPEN, NGX_FILE_DEFAULT_ACCESS);
> +
> +    if (file.fd == NGX_INVALID_FILE) {
> +        ngx_log_error(NGX_LOG_EMERG, r->connection->log, ngx_errno,
> +                      ngx_open_file_n " \"%s\" failed", r->cache->file.name.data);

Please wrap lines longer than 80 chars.

> +        return;
> +    }
> +
> +    if (ngx_write_file(&file, (u_char *)&r->cache->valid_sec,
> sizeof(r->cache->valid_sec),
> offsetof(ngx_http_file_cache_header_t,valid_sec)) == NGX_ERROR)

Same as above.  Please use spaces after "(u_char *)" cast and 
after ",".
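
I.e., purely as a formatting illustration of the same call, something 
like:

    if (ngx_write_file(&file, (u_char *) &r->cache->valid_sec,
                       sizeof(r->cache->valid_sec),
                       offsetof(ngx_http_file_cache_header_t, valid_sec))
        == NGX_ERROR)
    {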

> +    {
> +        ngx_log_error(NGX_LOG_EMERG, r->connection->log, ngx_errno,
> +                      "write proxy_cache \"%s\" failed",
> r->cache->file.name.data);
> +        return;
> +    }
> +
> +    if (ngx_close_file(file.fd) == NGX_FILE_ERROR) {
> +        ngx_log_error(NGX_LOG_ALERT, r->connection->log, ngx_errno,
> +                      ngx_close_file_n " \"%s\" failed",
> r->cache->file.name.data);
> +    }
> +}
> +

One more question to consider: what happens if the cache file was
updated/removed/whatever while we were talking to the backend?  
Reopening the file may not be safe in this case.

A probable solution would be to request write permissions on the cache 
file on initial open, though a) this may cause problems e.g. on 
Windows (not sure, needs investigation) and b) it requires open file cache 
changes, as it isn't currently able to open files for writing.

On the other hand, just updating valid_sec may be safe in any case 
(not sure, needs investigation).  

> 
>  void
>  ngx_http_file_cache_update(ngx_http_request_t *r, ngx_temp_file_t *tf)
> diff -ruNp a/src/http/ngx_http_upstream.c nginx-1.1.3/src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c      2011-09-16 02:13:16.274428192 +0800
> +++ nginx-1.1.3/src/http/ngx_http_upstream.c    2011-09-17 04:23:02.000000000 +0800
> @@ -16,6 +16,8 @@ static ngx_int_t ngx_http_upstream_cache
>      ngx_http_upstream_t *u);
>  static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r,
>      ngx_http_variable_value_t *v, uintptr_t data);
> +static ngx_int_t ngx_http_upstream_last_modified(ngx_http_request_t *r,
> +    ngx_http_variable_value_t *v, uintptr_t data);
>  #endif
> 
>  static void ngx_http_upstream_init_request(ngx_http_request_t *r);
> @@ -342,6 +344,10 @@ static ngx_http_variable_t  ngx_http_ups
>        ngx_http_upstream_cache_status, 0,
>        NGX_HTTP_VAR_NOCACHEABLE, 0 },
> 
> +    { ngx_string("upstream_last_modified"), NULL,
> +      ngx_http_upstream_last_modified, 0,
> +      NGX_HTTP_VAR_NOCACHEABLE, 0 },
> +
>  #endif
> 
>      { ngx_null_string, NULL, NULL, 0, 0, 0 }
> @@ -1680,6 +1686,31 @@ ngx_http_upstream_test_next(ngx_http_req
>      ngx_uint_t                 status;
>      ngx_http_upstream_next_t  *un;
> 
> +#if (NGX_HTTP_CACHE)
> +    time_t  valid;
> +
> +    if (u->cache_status == NGX_HTTP_CACHE_EXPIRED &&
> +        u->headers_in.status_n == NGX_HTTP_NOT_MODIFIED &&
> +        0!=(valid=ngx_http_file_cache_valid(u->conf->cache_valid,
> u->headers_in.status_n)))

See above, this is incorrect.

> +    {
> +        ngx_int_t  rc;
> +
> +        rc = u->reinit_request(r);
> +
> +        if (rc == NGX_OK) {
> +            u->cache_status = NGX_HTTP_CACHE_UPDATING;

Why NGX_HTTP_CACHE_UPDATING?

> +            rc = ngx_http_upstream_cache_send(r, u);
> +
> +            r->cache->valid_sec = ngx_time() + valid;

And here there should be an r->cache->valid_sec test (see above, I was 
wrong in my previous comment).  You shouldn't blindly trust me, 
especially when I'm asking questions.  :)
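
Something like this (a sketch only, on top of the quoted lines above):

    rc = ngx_http_upstream_cache_send(r, u);

    if (r->cache->valid_sec == 0) {
        /* nothing was set from Cache-Control/Expires of the 304 reply,
         * fall back to the configured validity time */
        r->cache->valid_sec = ngx_time() + valid;
    }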

Additional question to consider: what should happen if original 
200 reply comes with "Cache-Control: max-age=" (or 
"Expires: