From mdounin at mdounin.ru Fri Jan 3 04:09:40 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Jan 2014 08:09:40 +0400 Subject: Syntax highlighting for nano In-Reply-To: <52C2B723.3050408@fussenegger.info> References: <52C2B723.3050408@fussenegger.info> Message-ID: <20140103040940.GF95113@mdounin.ru> Hello! On Tue, Dec 31, 2013 at 01:22:59PM +0100, Richard Fussenegger, BSc wrote: > I've seen that the latest nginx release contains syntax highlighting for > vim. I created a simple syntax highlighting scheme for nano some time ago. > Maybe you'd like to include it as well. You can find it via the following > link: > > https://github.com/Fleshgrinder/nano-editor-conf-syntax-highlighting > > This may be a catch all for conf files, but I initially created it for my > nginx files and it works great. I think better place would be on wiki, something like this should be a good place to add a link: http://wiki.nginx.org/Configuration#Tools -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 3 04:18:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Jan 2014 08:18:17 +0400 Subject: LSB compliant init script (e.g. Debian 7) In-Reply-To: <52C2BA28.7030502@fussenegger.info> References: <52C2BA28.7030502@fussenegger.info> Message-ID: <20140103041816.GG95113@mdounin.ru> Hello! On Tue, Dec 31, 2013 at 01:35:52PM +0100, Richard Fussenegger, BSc wrote: > I also happen to have a LSB compliant init script for nginx. I think some > lines should be removed for inclusion in the nginx source (everything that > has to do with the temporary paths), but it's a rock solid starting point. > > https://github.com/MovLib/www/blob/master/bin/init-nginx.sh There are collection of various init scripts here on wiki: http://wiki.nginx.org/InitScripts > The script allows you to use "service nginx > {force-reload|reload|restart|start|status|stop}" and "nginx -t" is always > executed before attempting to do anything. This ensures that the service > isn't interrupted because of some misconfiguration. It's also using the > various LSB functions for some eye candy. Just a side note: nginx reload is a correct way to avoid service interruptions, as it doesn't apply a configuration if there are problems. On the other hand, "nginx -t" may produce incorrect results if nginx binary in memory doesn't match one on disk (e.g., in the process of upgrade), and forcing "nginx -t" would be a wrong thing to do. Note well that in some cases (e.g., before start or stop) calling "nginx -t" is useless would be a plain waste of resources. -- Maxim Dounin http://nginx.org/ From richard at fussenegger.info Fri Jan 3 10:54:25 2014 From: richard at fussenegger.info (Richard Fussenegger, BSc) Date: Fri, 03 Jan 2014 11:54:25 +0100 Subject: LSB compliant init script (e.g. Debian 7) In-Reply-To: <20140103041816.GG95113@mdounin.ru> References: <52C2BA28.7030502@fussenegger.info> <20140103041816.GG95113@mdounin.ru> Message-ID: <52C696E1.1010804@fussenegger.info> Hi Maxim, thanks for your answer. On 1/3/2014 5:18 AM, Maxim Dounin wrote: > Hello! > > On Tue, Dec 31, 2013 at 01:35:52PM +0100, Richard Fussenegger, BSc wrote: > >> I also happen to have a LSB compliant init script for nginx. I think some >> lines should be removed for inclusion in the nginx source (everything that >> has to do with the temporary paths), but it's a rock solid starting point. 
>> >> https://github.com/MovLib/www/blob/master/bin/init-nginx.sh > There are collection of various init scripts here on wiki: > > http://wiki.nginx.org/InitScripts I'll add a link there in that case. >> The script allows you to use "service nginx >> {force-reload|reload|restart|start|status|stop}" and "nginx -t" is always >> executed before attempting to do anything. This ensures that the service >> isn't interrupted because of some misconfiguration. It's also using the >> various LSB functions for some eye candy. > Just a side note: nginx reload is a correct way to avoid service > interruptions, as it doesn't apply a configuration if there are > problems. On the other hand, "nginx -t" may produce incorrect results if > nginx binary in memory doesn't match one on disk (e.g., in the > process of upgrade), and forcing "nginx -t" would be a wrong thing > to do. Note well that in some cases (e.g., before start or stop) > calling "nginx -t" is useless would be a plain waste of resources. I always thought nginx -t is mainly useful to check the configuration files and calling it would be useful so one doesn't try to start / reload if the configuration is wrong. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4264 bytes Desc: S/MIME Cryptographic Signature URL: From oscaretu at gmail.com Fri Jan 3 20:45:38 2014 From: oscaretu at gmail.com (oscaretu .) Date: Fri, 3 Jan 2014 21:45:38 +0100 Subject: Question about restarting nginx Message-ID: Hello. One question comes to my mind: Why doesn't nginx write something to the error log file when it restarts? (for example, with kill -HUP), just as Apache does. For example, in Apache, after kill -HUP $apache_pid, I can see in the error log file: [Tue Jan 22 11:42:50 2013] [notice] SIGHUP received. Attempting to restart [Tue Jan 22 11:42:51 2013] [notice] Digest: generating secret for digest authentication ... [Tue Jan 22 11:42:51 2013] [notice] Digest: done [Tue Jan 22 11:42:52 2013] [notice] Apache/2.2.14 (Unix) PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operation This way I can easily check if everything has gone OK in the restart, just using kill -1 $(cat httpd.pid); tail -f error_log Can this be added by default to nginx? Or there is any other simple form to check that a restart has gone well? Greetings, Oscar -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jan 3 20:50:10 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 4 Jan 2014 00:50:10 +0400 Subject: Question about restarting nginx In-Reply-To: References: Message-ID: <20140103205009.GL95113@mdounin.ru> Hello! On Fri, Jan 03, 2014 at 09:45:38PM +0100, oscaretu . wrote: > One question comes to my mind: Why doesn't nginx write something to the > error log file when it restarts? (for example, with kill -HUP), just as > Apache does. 
It does: 2014/01/04 00:47:33 [notice] 4529#0: signal 1 (SIGHUP) received, reconfiguring 2014/01/04 00:47:33 [notice] 4529#0: reconfiguring 2014/01/04 00:47:33 [notice] 4529#0: using the "kqueue" event method 2014/01/04 00:47:33 [notice] 4529#0: start worker processes 2014/01/04 00:47:33 [notice] 4529#0: start worker process 57418 2014/01/04 00:47:34 [notice] 4529#0: signal 23 (SIGIO) received 2014/01/04 00:47:34 [notice] 4530#0: gracefully shutting down 2014/01/04 00:47:34 [notice] 4529#0: signal 23 (SIGIO) received 2014/01/04 00:47:34 [notice] 4530#0: exiting 2014/01/04 00:47:34 [notice] 4530#0: exit 2014/01/04 00:47:34 [notice] 4529#0: signal 20 (SIGCHLD) received 2014/01/04 00:47:34 [notice] 4529#0: worker process 4530 exited with code 0 2014/01/04 00:47:34 [notice] 4529#0: signal 23 (SIGIO) received 2014/01/04 00:47:34 [notice] 4529#0: signal 23 (SIGIO) received 2014/01/04 00:47:34 [notice] 4529#0: signal 23 (SIGIO) received But your error_log have to be configured to log messages at notice level. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 3 23:49:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 03 Jan 2014 23:49:06 +0000 Subject: [nginx] Allowed up to two EBUSY errors from sendfile(). Message-ID: details: http://hg.nginx.org/nginx/rev/d39a69427056 branches: changeset: 5498:d39a69427056 user: Maxim Dounin date: Sat Jan 04 03:31:58 2014 +0400 description: Allowed up to two EBUSY errors from sendfile(). Fallback to synchronous sendfile() now only done on 3rd EBUSY without any progress in a row. Not falling back is believed to be better in case of occasional EBUSY, though protection is still needed to make sure there will be no infinite loop. diffstat: src/core/ngx_connection.h | 1 + src/http/ngx_http_copy_filter_module.c | 6 ++++-- 2 files changed, 5 insertions(+), 2 deletions(-) diffs (32 lines): diff --git a/src/core/ngx_connection.h b/src/core/ngx_connection.h --- a/src/core/ngx_connection.h +++ b/src/core/ngx_connection.h @@ -177,6 +177,7 @@ struct ngx_connection_s { #if (NGX_HAVE_AIO_SENDFILE) unsigned aio_sendfile:1; + unsigned busy_count:2; ngx_buf_t *busy_sendfile; #endif diff --git a/src/http/ngx_http_copy_filter_module.c b/src/http/ngx_http_copy_filter_module.c --- a/src/http/ngx_http_copy_filter_module.c +++ b/src/http/ngx_http_copy_filter_module.c @@ -169,13 +169,15 @@ ngx_http_copy_filter(ngx_http_request_t offset = c->busy_sendfile->file_pos; if (file->aio) { - c->aio_sendfile = (offset != file->aio->last_offset); + c->busy_count = (offset == file->aio->last_offset) ? + c->busy_count + 1 : 0; file->aio->last_offset = offset; - if (c->aio_sendfile == 0) { + if (c->busy_count > 2) { ngx_log_error(NGX_LOG_ALERT, c->log, 0, "sendfile(%V) returned busy again", &file->name); + c->aio_sendfile = 0; } } From mdounin at mdounin.ru Fri Jan 3 23:49:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 03 Jan 2014 23:49:07 +0000 Subject: [nginx] Added per-process random seeding (ticket #456). Message-ID: details: http://hg.nginx.org/nginx/rev/b91bcba29351 branches: changeset: 5499:b91bcba29351 user: Maxim Dounin date: Sat Jan 04 03:32:06 2014 +0400 description: Added per-process random seeding (ticket #456). 
diffstat: src/os/unix/ngx_process_cycle.c | 2 ++ src/os/win32/ngx_win32_init.c | 2 +- 2 files changed, 3 insertions(+), 1 deletions(-) diffs (24 lines): diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c +++ b/src/os/unix/ngx_process_cycle.c @@ -959,6 +959,8 @@ ngx_worker_process_init(ngx_cycle_t *cyc "sigprocmask() failed"); } + srandom((ngx_pid << 16) ^ ngx_time()); + /* * disable deleting previous events for the listening sockets because * in the worker processes there are no events at all at this point diff --git a/src/os/win32/ngx_win32_init.c b/src/os/win32/ngx_win32_init.c --- a/src/os/win32/ngx_win32_init.c +++ b/src/os/win32/ngx_win32_init.c @@ -228,7 +228,7 @@ ngx_os_init(ngx_log_t *log) ngx_sprintf((u_char *) ngx_unique, "%P%Z", ngx_pid); } - srand((unsigned) ngx_time()); + srand((ngx_pid << 16) ^ (unsigned) ngx_time()); return NGX_OK; } From mdounin at mdounin.ru Fri Jan 3 23:49:09 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 03 Jan 2014 23:49:09 +0000 Subject: [nginx] Upstream: Cache-Control preferred over Expires. Message-ID: details: http://hg.nginx.org/nginx/rev/6a3ab6fdd70f branches: changeset: 5500:6a3ab6fdd70f user: Maxim Dounin date: Sat Jan 04 03:32:10 2014 +0400 description: Upstream: Cache-Control preferred over Expires. Not really a strict check (as X-Accel-Expires might be ignored or contain invalid value), but quite simple to implement and better than what we have now. diffstat: src/http/ngx_http_upstream.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3656,7 +3656,7 @@ ngx_http_upstream_process_cache_control( return NGX_OK; } - if (r->cache->valid_sec != 0) { + if (r->cache->valid_sec != 0 && u->headers_in.x_accel_expires != NULL) { return NGX_OK; } From mdounin at mdounin.ru Fri Jan 3 23:49:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 03 Jan 2014 23:49:11 +0000 Subject: [nginx] Win32: support for UTF-16 surrogate pairs (ticket #457). Message-ID: details: http://hg.nginx.org/nginx/rev/1cd23ca84a9b branches: changeset: 5501:1cd23ca84a9b user: Maxim Dounin date: Sat Jan 04 03:32:15 2014 +0400 description: Win32: support for UTF-16 surrogate pairs (ticket #457). 
diffstat: src/os/win32/ngx_files.c | 23 +++++++++++++++++++++-- 1 files changed, 21 insertions(+), 2 deletions(-) diffs (51 lines): diff --git a/src/os/win32/ngx_files.c b/src/os/win32/ngx_files.c --- a/src/os/win32/ngx_files.c +++ b/src/os/win32/ngx_files.c @@ -799,13 +799,25 @@ ngx_utf8_to_utf16(u_short *utf16, u_char continue; } + if (u + 1 == last) { + *len = u - utf16; + break; + } + n = ngx_utf8_decode(&p, 4); - if (n > 0xffff) { + if (n > 0x10ffff) { ngx_set_errno(NGX_EILSEQ); return NULL; } + if (n > 0xffff) { + n -= 0x10000; + *u++ = (u_short) (0xd800 + (n >> 10)); + *u++ = (u_short) (0xdc00 + (n & 0x03ff)); + continue; + } + *u++ = (u_short) n; } @@ -838,12 +850,19 @@ ngx_utf8_to_utf16(u_short *utf16, u_char n = ngx_utf8_decode(&p, 4); - if (n > 0xffff) { + if (n > 0x10ffff) { free(utf16); ngx_set_errno(NGX_EILSEQ); return NULL; } + if (n > 0xffff) { + n -= 0x10000; + *u++ = (u_short) (0xd800 + (n >> 10)); + *u++ = (u_short) (0xdc00 + (n & 0x03ff)); + continue; + } + *u++ = (u_short) n; } From mdounin at mdounin.ru Fri Jan 3 23:49:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 03 Jan 2014 23:49:12 +0000 Subject: [nginx] Fixed "zero size buf in output" alerts. Message-ID: details: http://hg.nginx.org/nginx/rev/4aa64f695031 branches: changeset: 5502:4aa64f695031 user: Maxim Dounin date: Sat Jan 04 03:32:22 2014 +0400 description: Fixed "zero size buf in output" alerts. If a request had an empty request body (with Content-Length: 0), and there were preread data available (e.g., due to a pipelined request in the buffer), the "zero size buf in output" alert might be logged while proxying the request to an upstream. Similar alerts appeared with client_body_in_file_only if a request had an empty request body. diffstat: src/http/ngx_http_request_body.c | 70 ++++++++++++++++++++++++--------------- 1 files changed, 43 insertions(+), 27 deletions(-) diffs (96 lines): diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -150,21 +150,27 @@ ngx_http_read_client_request_body(ngx_ht goto done; } - cl = ngx_chain_get_free_buf(r->pool, &rb->free); - if (cl == NULL) { - rc = NGX_HTTP_INTERNAL_SERVER_ERROR; - goto done; + if (rb->temp_file->file.offset != 0) { + + cl = ngx_chain_get_free_buf(r->pool, &rb->free); + if (cl == NULL) { + rc = NGX_HTTP_INTERNAL_SERVER_ERROR; + goto done; + } + + b = cl->buf; + + ngx_memzero(b, sizeof(ngx_buf_t)); + + b->in_file = 1; + b->file_last = rb->temp_file->file.offset; + b->file = &rb->temp_file->file; + + rb->bufs = cl; + + } else { + rb->bufs = NULL; } - - b = cl->buf; - - ngx_memzero(b, sizeof(ngx_buf_t)); - - b->in_file = 1; - b->file_last = rb->temp_file->file.offset; - b->file = &rb->temp_file->file; - - rb->bufs = cl; } post_handler(r); @@ -375,20 +381,26 @@ ngx_http_do_read_client_request_body(ngx return NGX_HTTP_INTERNAL_SERVER_ERROR; } - cl = ngx_chain_get_free_buf(r->pool, &rb->free); - if (cl == NULL) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; + if (rb->temp_file->file.offset != 0) { + + cl = ngx_chain_get_free_buf(r->pool, &rb->free); + if (cl == NULL) { + return NGX_HTTP_INTERNAL_SERVER_ERROR; + } + + b = cl->buf; + + ngx_memzero(b, sizeof(ngx_buf_t)); + + b->in_file = 1; + b->file_last = rb->temp_file->file.offset; + b->file = &rb->temp_file->file; + + rb->bufs = cl; + + } else { + rb->bufs = NULL; } - - b = cl->buf; - - ngx_memzero(b, sizeof(ngx_buf_t)); - - b->in_file = 1; - b->file_last = rb->temp_file->file.offset; - 
b->file = &rb->temp_file->file; - - rb->bufs = cl; } r->read_event_handler = ngx_http_block_reading; @@ -843,6 +855,10 @@ ngx_http_request_body_length_filter(ngx_ for (cl = in; cl; cl = cl->next) { + if (rb->rest == 0) { + break; + } + tl = ngx_chain_get_free_buf(r->pool, &rb->free); if (tl == NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; From d.bussink at gmail.com Sat Jan 4 11:30:53 2014 From: d.bussink at gmail.com (Dirkjan Bussink) Date: Sat, 04 Jan 2014 11:30:53 +0000 Subject: [PATCH] Add ssl_session_ticket option to enable / disable session tickets Message-ID: # HG changeset patch # User Dirkjan Bussink # Date 1388832057 0 # Node ID b236387415f02c6b5874aca5aadd216028edbe00 # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c Add ssl_session_ticket option to enable / disable session tickets This adds support so it's possible to explicitly disable SSL Session Tickets. In order to have good Forward Secrecy support either session tickets have to be reloaded by restarting nginx regularly, or by disabling session tickets. If session tickets are enabled and the process lives for a long a time, an attacker can grab the session ticket from the process and use that to decrypt any traffic that occured during the entire lifetime of the process. diff -r 4aa64f695031 -r b236387415f0 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Sat Jan 04 10:40:57 2014 +0000 @@ -160,6 +160,13 @@ 0, NULL }, + { ngx_string("ssl_session_ticket"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_ssl_srv_conf_t, session_ticket), + NULL }, + { ngx_string("ssl_session_ticket_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, @@ -436,6 +443,7 @@ sscf->verify_depth = NGX_CONF_UNSET_UINT; sscf->builtin_session_cache = NGX_CONF_UNSET; sscf->session_timeout = NGX_CONF_UNSET; + sscf->session_ticket = NGX_CONF_UNSET; sscf->session_ticket_keys = NGX_CONF_UNSET_PTR; sscf->stapling = NGX_CONF_UNSET; sscf->stapling_verify = NGX_CONF_UNSET; @@ -644,6 +652,14 @@ return NGX_CONF_ERROR; } + ngx_conf_merge_value(conf->session_ticket, prev->session_ticket, 1); + +#ifdef SSL_OP_NO_TICKET + if (!conf->session_ticket) { + SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_NO_TICKET); + } +#endif + ngx_conf_merge_ptr_value(conf->session_ticket_keys, prev->session_ticket_keys, NULL); diff -r 4aa64f695031 -r b236387415f0 src/http/modules/ngx_http_ssl_module.h --- a/src/http/modules/ngx_http_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/modules/ngx_http_ssl_module.h Sat Jan 04 10:40:57 2014 +0000 @@ -44,6 +44,7 @@ ngx_shm_zone_t *shm_zone; + ngx_flag_t session_ticket; ngx_array_t *session_ticket_keys; ngx_flag_t stapling; diff -r 4aa64f695031 -r b236387415f0 src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.c Sat Jan 04 10:40:57 2014 +0000 @@ -116,6 +116,13 @@ 0, NULL }, + { ngx_string("ssl_session_ticket"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, session_ticket), + NULL }, + { ngx_string("ssl_session_ticket_key"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, @@ -191,6 +198,7 @@ scf->prefer_server_ciphers = NGX_CONF_UNSET; scf->builtin_session_cache = NGX_CONF_UNSET; scf->session_timeout = 
NGX_CONF_UNSET; + scf->session_ticket = NGX_CONF_UNSET; scf->session_ticket_keys = NGX_CONF_UNSET_PTR; return scf; @@ -339,6 +347,15 @@ return NGX_CONF_ERROR; } + ngx_conf_merge_value(conf->session_ticket, + prev->session_ticket, 1); + +#ifdef SSL_OP_NO_TICKET + if (!conf->session_ticket) { + SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_NO_TICKET); + } +#endif + ngx_conf_merge_ptr_value(conf->session_ticket_keys, prev->session_ticket_keys, NULL); diff -r 4aa64f695031 -r b236387415f0 src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.h Sat Jan 04 10:40:57 2014 +0000 @@ -41,6 +41,7 @@ ngx_shm_zone_t *shm_zone; + ngx_flag_t session_ticket; ngx_array_t *session_ticket_keys; u_char *file; From faskiri.devel at gmail.com Sat Jan 4 11:44:51 2014 From: faskiri.devel at gmail.com (Fasih) Date: Sat, 4 Jan 2014 17:14:51 +0530 Subject: Regarding keepalive and idempotency Message-ID: Hi guys Hello guys Nginx keepalive seems to retry automatically on failure. However for non-idempotent requests, it is incorrect by RFC to retry automatically because the server could have changed its state before nginx detected the error. Is this a bug that would be fixed or did I not get it right? Relevant RFC section A client, server, or proxy MAY close the transport connection at any time. For example, a client might have started to send a new request at the same time that the server has decided to close the "idle" connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress. This means that clients, servers, and proxies MUST be able to recover from asynchronous close events. Client software SHOULD reopen the transport connection and retransmit the aborted sequence of requests without user interaction so long as the request sequence is idempotent (see section 9.1.2). Non-idempotent methods or sequences MUST NOT be automatically retried, although user agents MAY offer a human operator the choice of retrying the request(s). Confirmation by user-agent software with semantic understanding of the application MAY substitute for user confirmation. The automatic retry SHOULD NOT be repeated if the second sequence of requests fails. Regards Fasih -------------- next part -------------- An HTML attachment was scrubbed... URL: From kailuo.wang at huffingtonpost.com Sat Jan 4 18:48:39 2014 From: kailuo.wang at huffingtonpost.com (Kailuo Wang) Date: Sat, 4 Jan 2014 13:48:39 -0500 Subject: Couldn't compile nginx with ngx_http_upstream_hash_module on Ubuntu Message-ID: > > We are trying to make a custom build of nginx with this upstream_hash module, > we had success on MacOSX but couldn't repeat it on Ubuntu 13.04: > When I run ./configure --add-module=../nginx_upstream_hash, I got the > following message before it's terminated: > > adding module in ../nginx_upstream_hash >> ./configure: 4: .: Can't open auto/have > > > It seems to be related to the last line of the config file in the > upstream_hash module. Here is the full content of the config file: > > ngx_addon_name=ngx_http_upstream_hash_module > HTTP_MODULES="$HTTP_MODULES ngx_http_upstream_hash_module" > NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_upstream_hash_module.c" > > have=NGX_HTTP_UPSTREAM_HASH . auto/have > > > I've been stuck here for hours and will really appreciate if anyone can > point me in direction of possible solutions. > > Thanks very much in advance! 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdasilvayy at gmail.com Mon Jan 6 06:49:20 2014 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Mon, 6 Jan 2014 07:49:20 +0100 Subject: Support of IMAP ID command in mail proxy module? Message-ID: Hi, It seems that some people already made requests about this: - imap proxy and untagged commands: http://mailman.nginx.org/pipermail/nginx-devel/2013-March/003490.html - How do I send an ID command (IMAP) in the mail module? : http://mailman.nginx.org/pipermail/nginx-devel/2010-December/000580.html It's not a large feature to implement, IMHO, depending on how it will be configurable. Regards, Filipe 2014/1/1 : > 3. Support of IMAP ID command in mail proxy module? (Michael Kliewe) > > > ------------------------------ > > Message: 3 > Date: Tue, 31 Dec 2013 17:38:09 +0100 > From: Michael Kliewe > To: nginx-devel at nginx.org > Subject: Support of IMAP ID command in mail proxy module? > Message-ID: <52C2F2F1.40308 at phpgangsta.de> > Content-Type: text/plain; charset=ISO-8859-15; format=flowed > > Hello, > > I'm using the mail module of nginx to proxy and loadbalance IMAP+POP3 > connections to backend servers. I do authentication via http authentication. > > Some Clients are sending IMAP ID commands to the server with information > about their software and version. I would like to log that and maybe use > that during authentication. It would be great if nginx would support > that command, and if it has been sent before LOGIN, also provide the > information to the http authentication script. > IMAP ID can be found in RFC 2971: http://www.faqs.org/rfcs/rfc2971.html > > The information about the client could be used to route the user to a > specific backend server that has some client-specific IMAP bug fixes in > place, or could be used to restrict logins of a specific user to only > one specific client if the user wants that for slightly higher security, > or for a list of "last activity of the user" like GMail does, or just > for client statistics. > > I'm not a C programmer, so sadly I cannot write a patch myself for this, > but maybe someone is able to add this small feature? > > Thanks > Michael > From mdounin at mdounin.ru Tue Jan 7 02:22:44 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Jan 2014 06:22:44 +0400 Subject: Regarding keepalive and idempotency In-Reply-To: References: Message-ID: <20140107022244.GV95113@mdounin.ru> Hello! On Sat, Jan 04, 2014 at 05:14:51PM +0530, Fasih wrote: > Hi guys > > Hello guys > > Nginx keepalive seems to retry automatically on failure. However for > non-idempotent requests, it is incorrect by RFC to retry automatically > because the server could have changed its state before nginx detected the > error. > > Is this a bug that would be fixed or did I not get it right? As of now, keepalive connection retries aren't aware of idempotence, much like proxy_next_upstream. Retries are only done in case of early errors though, and this is expected to be good enough in most cases. The future plan is to teach proxy_next_upstream and friends about idempotent and non-idempotent methods, and probably also splitting the "error" state into errors before we were theoretically able to send at least some bytes of the request (that is, retries are for sure safe even in case of non-idempotent methods), and errors after that point.
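For illustration only, a minimal configuration sketch of the directive discussed above (the upstream name, addresses, and location are hypothetical; the values shown are simply the current defaults). Today the same retry policy is applied to GET and POST alike, which is what the planned idempotency awareness would change:

    upstream backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        location / {
            proxy_pass http://backend;

            # pass the request to the next server only on error or timeout;
            # "off" disables these retries altogether
            proxy_next_upstream error timeout;
        }
    }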
-- Maxim Dounin http://nginx.org/ From fasihullah.askiri at gmail.com Tue Jan 7 17:14:26 2014 From: fasihullah.askiri at gmail.com (Fasihullah Askiri) Date: Tue, 7 Jan 2014 22:44:26 +0530 Subject: Regarding keepalive and idempotency In-Reply-To: <20140107022244.GV95113@mdounin.ru> References: <20140107022244.GV95113@mdounin.ru> Message-ID: Hi Maxim Thanks a lot for the clarification. Is there a timeline on the future plan? Is it a few releases away, or is it more of a long-term plan? On 1/7/14, Maxim Dounin wrote: > Hello! > > On Sat, Jan 04, 2014 at 05:14:51PM +0530, Fasih wrote: > >> Hi guys >> >> Hello guys >> >> Nginx keepalive seems to retry automatically on failure. However for >> non-idempotent requests, it is incorrect by RFC to retry automatically >> because the server could have changed its state before nginx detected the >> error. >> >> Is this a bug that would be fixed or did I not get it right? > > As of now, keepalive connection retries aren't aware of > idempotence, much like proxy_next_upstream. Retries are only done > in case of early errors though, and this is expected to be good > enough in most cases. > > The future plan is to teach proxy_next_upstream and friends about > idempotent and non-idempotent methods, and probably also splitting > the "error" state into errors before we were theoretically able to > send at least some bytes of the request (that is, retries are for > sure safe even in case of non-idempotent methods), and errors > after that point. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- +Fasih Life is 10% what happens to you and 90% how you react to it From guido.accardo at usmediaconsulting.com Tue Jan 7 17:20:41 2014 From: guido.accardo at usmediaconsulting.com (Guido Accardo) Date: Tue, 7 Jan 2014 15:20:41 -0200 Subject: Using body content to generate a correct response Message-ID: Hi everyone, I'm using Nginx to serve Real Time Bidding applications listening on different machines inside my network. Each one of these applications handles requests from only one exchange, and I'm successfully proxying the content to the correct applications by using upstreams. One thing that I do when I need to make some modifications in one or more of these applications is use "return 204" (for RTB systems using the OpenRTB protocol, HTTP 204 means NO BID) so that content is not forwarded. This was working great until now, when my employer decided to include another exchange and, in consequence, another application. The new exchange is not OpenRTB based; it uses another protocol to answer, and of course its no-bid response is different too. In order to correctly answer with no bid I have to parse XML content which is inside the request sent by the exchange. This is the form of the response I have to send: da21129c-aca1-11e2-8c8c-2be3e1d9a996 502863 4372011 OppId, advId, buyerLine, and the bidResEnv signature are items that come with the request. As you can see I have 2 challenges here: * Parse XML body content with Nginx * Manipulate it to generate the correct answer Do I need to develop an Nginx plugin, or is there a simpler way to parse the body content and use it to answer? I know that Nginx can use headers from the request in the response, so I presume that something similar with the body could also be done. Thank you, cheers, Guido.- -------------- next part -------------- An HTML attachment was scrubbed...
URL: From igrigorik at gmail.com Tue Jan 7 23:41:48 2014 From: igrigorik at gmail.com (Ilya Grigorik) Date: Tue, 7 Jan 2014 15:41:48 -0800 Subject: [nginx] SSL: ssl_buffer_size directive. In-Reply-To: References: <20131222222735.GT95113@mdounin.ru> Message-ID: (waking up from post-holiday coma :-)) ... Happy 2014! On Fri, Dec 20, 2013 at 12:49 PM, Alex wrote: > > This would require a bit more work than the current patch, but I'd love > to see a similar strategy in nginx. Hardcoding a fixed record size will > inevitably lead to suboptimal delivery of either interactive or bulk > traffic. Thoughts? > > It'd be interesting to know how difficult it'd be to implement such a > dynamic behavior of the SSL buffer size. An easier, albeit less optimal > solution would be to adjust the ssl_buffer_size directive depending on > the request URI (via location blocks). Not sure if Maxim's patch would > allow for that already? If large files are served from a known request > URI pattern, you could then increase the SSL buffer size accordingly for > that location. > No, ssl_buffer_size is a server-wide directive [1]. Further, I don't think you want to go down this path: just because you're serving a large stream does not mean you don't want a fast TTFB at the beginning of the stream. For example, for video streaming you still want to optimize for the "time to first frame" such that you can decode the stream headers and get the video preview / first few frames on the screen as soon as possible. That said, once you've got the first few frames on screen, then by all means, max out the record size to decrease framing overhead. In short, for best performance, you want dynamic behavior. [1] http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size On Sun, Dec 22, 2013 at 2:27 PM, Maxim Dounin wrote: > > Awesome, really glad to see this! A couple of followup questions... > > > > (a) Is there any way to force a packet flush on record end? At the moment > > nginx will fragment multiple records across packet boundaries, which is > > suboptimal as it means that I need a minimum of two packets to decode any > > record - e.g. if I set my record size to 1370 bytes, the first packet > will > > contain the first full record plus another 20-50 bytes of next record. > > There is OpenSSL socket layer on the way down. It may be possible > > to achieve something with SSL_[CTX_]set_max_send_fragment() in > > OpenSSL 1.0.0+, but I haven't looked into details. (As I already > > said, I don't think that using packet-sized records is a good > > idea, it looks like an overkill and waste of resources, both > network and CPU.)> (b) Current NGX_SSL_BUFSIZE is set to 16KB which is > effectively guaranteed > > to overflow the CWND of a new connection and introduce another RTT for > > interactive traffic - e.g. HTTP pages. I would love to see a lower > starting > > record size to mitigate this problem -- defaults matter! > > We are considering using 4k or 8k as the default in the > > future. For now, the directive is mostly to simplify > > experimenting with various buffer sizes. 
> > > On the subject of optimizing record size, the GFE team at Google recently > > landed ~following logic: > > > > - new connections default to small record size > > -- each record fits into a TCP packet > > -- packets are flushed at record boundaries > > - server tracks number of bytes written since reset and timestamp of last > > write > > -- if bytes written > {configurable byte threshold) then boost record > size > > to 16KB > > -- if last write timestamp > now - {configurable time threshold} then > reset > > sent byte count > > > > In other words, start with small record size to optimize for delivery of > > small/interactive objects (bulk of HTTP traffic). Then, if large file is > > being transferred bump record size to 16KB and continue using that until > > the connection goes idle.. when communication resumes, start with small > > record size and repeat. Overall, this is aimed to optimize delivery of > > small files where incremental delivery is a priority, and also for large > > downloads where overall throughput is a priority. > > > > Both byte and time thresholds are exposed as configurable flags, and > > current defaults in GFE are 1MB and 1s. > > > > This would require a bit more work than the current patch, but I'd love > to > > see a similar strategy in nginx. Hardcoding a fixed record size will > > inevitably lead to suboptimal delivery of either interactive or bulk > > traffic. Thoughts? > > While some logic like this is certainly needed to use packet-sized > > records, it looks overcomplicated and probably not at all needed > with 4k/8k buffers. > This logic is not at all specific to packet-sized records -- that said, yes, it delivers most benefit when you are starting the session with a packet-sized record. For sake of an example, let's say we set the new default to 4k: + all records will fit into a minimum CWND (IW4 and IW10) - packet loss is still a factor and may affect TTFB, but impact is much less than current 16KB record. - we incur fixed 4x overhead (bytes and CPU cycles) on large streams The "dynamic" implementation simply addresses the last two shortcomings: (a) using a packet-size record guarantees that we deliver the best TTFB, and (b) we minimize the CPU/byte overhead costs of smaller records by raising record size once connection is "warmed up". Further, I think it's misleading to say that "for large streams, just use a larger default record"... as I noted above, even large streams (e.g. video) need a fast time to first byte/frame. I think its important that we optimize for the out-of-the box performance/experience: your average web developer / system admin won't know what record size to set for their use case, and they'll have a mix of payloads which don't lend themselves to any one record size. Besides, any static value will inevitably lead to a tradeoff in TTFB or throughput, which is an unnecessary trade-off to begin with. If we want configuration knobs, then as advanced options, offer ability to customize the "boost threshold" (in KB), and inactivity timeout (to revert back to smaller size). Finally, if you want, the definition of "small record" could be a flag as well - setting it to 16KB would effectively disable the logic and give you current behavior... Yes, this is more complex than just setting a static record size, but the performance gains are significant both in throughput and latency -- and after all, isn't that what nginx is all about? :) ig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From agentzh at gmail.com Wed Jan 8 19:59:42 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 8 Jan 2014 11:59:42 -0800 Subject: [PATCH] Resolver: added support for domain names with a trailing dot Message-ID: Hello! We've noticed that the builtin resolver in Nginx cannot handle domain names with a trailing dot (like "agentzh.org.") and just return the error "Host not found". The following patch attempts a fix. Thanks! -agentzh # HG changeset patch # User Yichun Zhang # Date 1389209699 28800 # Node ID 3d10680c0399cb8d2e3b601412df0495ffaab4a5 # Parent c0d6eae5a1c5d16cf6a9d6a3a73656972f838eab Resolver: added support for domain names with a trailing dot. diff -r c0d6eae5a1c5 -r 3d10680c0399 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Fri Dec 13 20:49:52 2013 +0400 +++ b/src/core/ngx_resolver.c Wed Jan 08 11:34:59 2014 -0800 @@ -467,6 +467,10 @@ ngx_resolver_ctx_t *next; ngx_resolver_node_t *rn; + if (ctx->name.len > 0 && ctx->name.data[ctx->name.len - 1] == '.') { + ctx->name.len--; + } + ngx_strlow(ctx->name.data, ctx->name.data, ctx->name.len); hash = ngx_crc32_short(ctx->name.data, ctx->name.len); -------------- next part -------------- A non-text attachment was scrubbed... Name: resolve-names-with-a-trailing-dot.patch Type: text/x-patch Size: 755 bytes Desc: not available URL: From mdounin at mdounin.ru Thu Jan 9 13:21:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 17:21:07 +0400 Subject: Regarding keepalive and idempotency In-Reply-To: References: <20140107022244.GV95113@mdounin.ru> Message-ID: <20140109132107.GI1835@mdounin.ru> Hello! On Tue, Jan 07, 2014 at 10:44:26PM +0530, Fasihullah Askiri wrote: > Thanks a lot for the clarification. Is there a timeline on the future > plan? Is it like a few releases or is it more like a long term plan? No ETA, it's a long term plan. > > On 1/7/14, Maxim Dounin wrote: > > Hello! > > > > On Sat, Jan 04, 2014 at 05:14:51PM +0530, Fasih wrote: > > > >> Hi guys > >> > >> Hello guys > >> > >> Nginx keepalive seems to retry automatically on failure. However for > >> non-idempotent requests, it is incorrect by RFC to retry automatically > >> because the server could have changed its state before nginx detected the > >> error. > >> > >> Is this a bug that would be fixed or did I not get it right? > > > > As of now, keepalive connection retries aren't aware of > > idempotence, much like proxy_next_upstream. Retries are only done > > in case of early errors though, and this is expected to be good > > enought in most cases. > > > > The future plan is to teach proxy_next_upstream and friends about > > idempotent or not idempotent methods, and probably also splitting > > "error" state into errors before we were theoretically able to > > send at least some bytes of the request (that is, retries are for > > sure safe even in case of non-idempotent methods), and errors > > after that point. 
> > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > -- > +Fasih > > Life is 10% what happens to you and 90% how you react to it > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 9 15:55:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 19:55:31 +0400 Subject: Using body content to generate a correct response In-Reply-To: References: Message-ID: <20140109155531.GM1835@mdounin.ru> Hello! On Tue, Jan 07, 2014 at 03:20:41PM -0200, Guido Accardo wrote: > Hi everyone, > > I'm using Nginx to serve applications of Real Time Bidding listening in > different machines inside of my network. Each one of these applications is > handling request from only one exchange and I'm successfully proxying the > content to the correct applications by using upstreams. > > One thing that I'm doing when I need to do some modifications in one o more > of these applications is use "return 204" (for RTB system using OpenRTB > protocl, HTTP 204 means NO BID) to not forward content. This was working > great until now that my employer has decided to include another exchange, > in consequence, another application. The new exchange is not OpenRTB based, > it uses another protocol to answer, and of course no bid response is > different too. > > In order to correctly answer with no bid I have to parse XML content which > is inside of the request sent by the exchange. This is the form of the > response I have to send: > > > xmlns="urn:yahoo:amp:3pi:bidResp" > xsi:schemaLocation="urn:yahoo:amp:3pi:bidResp BidResponse.xsd" > version="6.0"> > signType="SHA-1" token="1218078703"/> > > > > da21129c-aca1-11e2-8c8c-2be3e1d9a996 > 502863 > 4372011 > > > > > > OppId, advId, buyerLine, bidResEnv signature are items that comes with the > request. > > As you can see I have 2 challenges here: > > * Parse XML body content with Nginx > * Manipulate It to generate the correct answer > > Do I need to develop a Nginx plugin or there is a simpler way to parse the > body content and use it to answer? > > I know that Nginx can use headers from the request in the response, so I > presume that something with the body could also done. Trivial solution would be to use embedded perl to read a request body and return appropriate answer, see here: http://nginx.org/en/docs/http/ngx_http_perl_module.html It should be also possible to do this using 3rd party Lua module. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 9 16:47:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 20:47:57 +0400 Subject: [PATCH] Add ssl_session_ticket option to enable / disable session tickets In-Reply-To: References: Message-ID: <20140109164757.GN1835@mdounin.ru> Hello! On Sat, Jan 04, 2014 at 11:30:53AM +0000, Dirkjan Bussink wrote: > # HG changeset patch > # User Dirkjan Bussink > # Date 1388832057 0 > # Node ID b236387415f02c6b5874aca5aadd216028edbe00 > # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c > Add ssl_session_ticket option to enable / disable session tickets I tend to think "ssl_session_tickets" (note trailing "s") would be a better name for the directive (and various names in the code should be changed accordingly). 
Additionally, something like "SSL: ssl_session_tickets directive." should be a better summary line. > This adds support so it's possible to explicitly disable SSL Session > Tickets. In order to have good Forward Secrecy support either session > tickets have to be reloaded by restarting nginx regularly, or by > disabling session tickets. > > If session tickets are enabled and the process lives for a long a time, > an attacker can grab the session ticket from the process and use that to > decrypt any traffic that occured during the entire lifetime of the > process. This description probably could be improved a bit, at least from terminology point of view. Session tickets are not something to be reloaded, it's session ticket keys which should be replaced regularly for better forward secrecy. And there are at least two ways to do so without restarting nginx - via binary upgrade procedure, or by providing a ticket key file and doing a configuration reload. Otherwise looks good. [...] -- Maxim Dounin http://nginx.org/ From keith at hulu.com Thu Jan 9 23:33:31 2014 From: keith at hulu.com (Keith Ainsworth) Date: Thu, 9 Jan 2014 17:33:31 -0600 Subject: Contributing a new module Message-ID: I've developed (still adding features too) a filter module that returns MD5 hashes the content, returning an extremely small digest message. This I find very useful for checking the integrity of large files on media servers (as serving the whole content to a centralized checker is extremely expensive on network). I realize, you could spin up your own file integrity checker, but on services with several servers handling the load of serving files, it's a lot easier to simply have a node or two query a bunch of hash locations and check against expected or historic values. The source is at: http://github.com/kainswor/nginx_md5_filter So my question is, how can I get this out there? I'd love to see it on the list of 3rd party modules ;) From piotr at cloudflare.com Fri Jan 10 01:09:32 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 10 Jan 2014 02:09:32 +0100 Subject: [PATCH] SPDY: send PING reply frame right away. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1389316088 -3600 # Fri Jan 10 02:08:08 2014 +0100 # Node ID c26d5f5e8d74dc9ab71476688074717857df5216 # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c SPDY: send PING reply frame right away. Previously, PING reply frame was queued right away, but it was send along subsequent response, which means that in case of long running request PING reply could have been delayed by more than 10 seconds, which is the time some browsers are waiting for a PING reply. Those browsers would then correctly consider such connection broken and would resend exactly the same request over a new connection, which isn't safe in case of non-idempotent HTTP methods. Signed-off-by: Piotr Sikora diff -r 4aa64f695031 -r c26d5f5e8d74 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/ngx_http_spdy.c Fri Jan 10 02:08:08 2014 +0100 @@ -1367,6 +1367,10 @@ ngx_http_spdy_state_ping(ngx_http_spdy_c pos += NGX_SPDY_PING_SIZE; + if (ngx_http_spdy_send_output_queue(sc) == NGX_ERROR) { + return ngx_http_spdy_state_protocol_error(sc); + } + return ngx_http_spdy_state_complete(sc, pos, end); } From piotr at cloudflare.com Fri Jan 10 01:09:57 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 10 Jan 2014 02:09:57 +0100 Subject: [PATCH] SPDY: send SETTINGS frame right away. 
Message-ID: # HG changeset patch # User Piotr Sikora # Date 1389316093 -3600 # Fri Jan 10 02:08:13 2014 +0100 # Node ID 270023a6c218007687f75baa52c5b16cced5a638 # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c SPDY: send SETTINGS frame right away. Signed-off-by: Piotr Sikora diff -r 4aa64f695031 -r 270023a6c218 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/ngx_http_spdy.c Fri Jan 10 02:08:13 2014 +0100 @@ -1663,7 +1663,7 @@ ngx_http_spdy_send_settings(ngx_http_spd ngx_http_spdy_queue_frame(sc, frame); - return NGX_OK; + return ngx_http_spdy_send_output_queue(sc); } From faskiri.devel at gmail.com Fri Jan 10 12:12:23 2014 From: faskiri.devel at gmail.com (Fasih) Date: Fri, 10 Jan 2014 17:42:23 +0530 Subject: WWW-Authenticate header Message-ID: Hi RFC allows a server to respond with multiple WWW-Authenticate header ( http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.47). "User agents are advised to take special care in parsing the WWW- Authenticate field value as it might contain more than one challenge, or if more than one WWW-Authenticate header field is provided, the contents of a challenge itself can contain a comma-separated list of authentication parameters." However nginx defines WWW-Authenticate header as an ngx_table_elt_t in the ngx_http_headers_out_t struct as opposed to an ngx_array_t like other allowed repeated value headers. Is this a bug that I should file? Regards +Fasih -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jan 10 13:49:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Jan 2014 17:49:46 +0400 Subject: WWW-Authenticate header In-Reply-To: References: Message-ID: <20140110134946.GO1835@mdounin.ru> Hello! On Fri, Jan 10, 2014 at 05:42:23PM +0530, Fasih wrote: > Hi > > RFC allows a server to respond with multiple WWW-Authenticate header ( > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.47). > > "User agents are advised to take special care in parsing the WWW- > Authenticate field value as it might contain more than one challenge, or if > more than one WWW-Authenticate header field is provided, the contents of a > challenge itself can contain a comma-separated list of authentication > parameters." > > However nginx defines WWW-Authenticate header as an ngx_table_elt_t in > the ngx_http_headers_out_t struct as opposed to an ngx_array_t like other > allowed repeated value headers. > > Is this a bug that I should file? Have you seen this to be a problem in real life? -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 10 13:58:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Jan 2014 17:58:58 +0400 Subject: Contributing a new module In-Reply-To: References: Message-ID: <20140110135858.GP1835@mdounin.ru> Hello! On Thu, Jan 09, 2014 at 05:33:31PM -0600, Keith Ainsworth wrote: > I've developed (still adding features too) a filter module that returns MD5 hashes the content, returning an extremely small digest message. > This I find very useful for checking the integrity of large files on media servers (as serving the whole content to a centralized checker is extremely expensive on network). > I realize, you could spin up your own file integrity checker, but on services with several servers handling the load of serving files, it's a lot easier to simply have a node or two query a bunch of hash locations and check against expected or historic values. 
> > The source is at: http://github.com/kainswor/nginx_md5_filter > > So my question is, how can I get this out there? I'd love to see it on the list of 3rd party modules ;) Just add one to http://wiki.nginx.org/3rdPartyModules? ;) -- Maxim Dounin http://nginx.org/ From d.bussink at gmail.com Fri Jan 10 14:49:20 2014 From: d.bussink at gmail.com (Dirkjan Bussink) Date: Fri, 10 Jan 2014 15:49:20 +0100 Subject: [PATCH] Add ssl_session_ticket option to enable / disable session tickets In-Reply-To: <20140109164757.GN1835@mdounin.ru> References: <20140109164757.GN1835@mdounin.ru> Message-ID: <20EEBF0B-4998-454B-BDA9-124157FF518A@gmail.com> On 09 Jan 2014, at 17:47, Maxim Dounin wrote: > I tend to think "ssl_session_tickets" (note trailing "s") would be > a better name for the directive (and various names in the code > should be changed accordingly). > > Additionally, something like "SSL: ssl_session_tickets directive." > should be a better summary line. Alright, I can resubmit the patch with those changes. > This description probably could be improved a bit, at least from > terminology point of view. Session tickets are not something to > be reloaded, it's session ticket keys which should be replaced > regularly for better forward secrecy. And there are at least two > ways to do so without restarting nginx - via binary upgrade > procedure, or by providing a ticket key file and doing a > configuration reload. > > Otherwise looks good. Yeah, mostly the issue is that with the default settings at the moment people often end up inadvertently with a setup that isn't as good as they think it is. I'll review the wording here and improve it by properly mentioning the ticket key. I'll also make sure to refer to the other techniques correctly then. -- Dirkjan From d.bussink at gmail.com Fri Jan 10 15:21:33 2014 From: d.bussink at gmail.com (Dirkjan Bussink) Date: Fri, 10 Jan 2014 15:21:33 +0000 Subject: [PATCH] SSL: ssl_session_tickets directive Message-ID: # HG changeset patch # User Dirkjan Bussink # Date 1389366760 -3600 # Node ID d049b0ea00a388c142627f10a0ee01c5b1bedc43 # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c SSL: ssl_session_tickets directive. This adds support so it's possible to explicitly disable SSL session tickets. In order to have good forward secrecy support, either the session ticket key has to be replaced regularly by using nginx's binary upgrade process, or an external key file has to be used and the configuration reloaded. This directive adds another possibility to have good support by disabling session tickets altogether. If session tickets are enabled and the process lives for a long time, an attacker can grab the session ticket key from the process and use it to decrypt any traffic that occurred during the entire lifetime of the process.
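For illustration only, a minimal configuration sketch of how the new directive would be used once the patch below is applied (the server name and certificate paths are hypothetical; the session cache lines simply show what resumption would then rely on):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/example.com.crt;
        ssl_certificate_key /etc/nginx/example.com.key;

        # disable TLS session tickets; session resumption then relies on
        # the server-side session cache below
        ssl_session_tickets off;
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;
    }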
diff -r 4aa64f695031 -r d049b0ea00a3 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Fri Jan 10 16:12:40 2014 +0100 @@ -160,6 +160,13 @@ 0, NULL }, + { ngx_string("ssl_session_tickets"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_ssl_srv_conf_t, session_tickets), + NULL }, + { ngx_string("ssl_session_ticket_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, @@ -436,6 +443,7 @@ sscf->verify_depth = NGX_CONF_UNSET_UINT; sscf->builtin_session_cache = NGX_CONF_UNSET; sscf->session_timeout = NGX_CONF_UNSET; + sscf->session_tickets = NGX_CONF_UNSET; sscf->session_ticket_keys = NGX_CONF_UNSET_PTR; sscf->stapling = NGX_CONF_UNSET; sscf->stapling_verify = NGX_CONF_UNSET; @@ -644,6 +652,14 @@ return NGX_CONF_ERROR; } + ngx_conf_merge_value(conf->session_tickets, prev->session_tickets, 1); + +#ifdef SSL_OP_NO_TICKET + if (!conf->session_tickets) { + SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_NO_TICKET); + } +#endif + ngx_conf_merge_ptr_value(conf->session_ticket_keys, prev->session_ticket_keys, NULL); diff -r 4aa64f695031 -r d049b0ea00a3 src/http/modules/ngx_http_ssl_module.h --- a/src/http/modules/ngx_http_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/modules/ngx_http_ssl_module.h Fri Jan 10 16:12:40 2014 +0100 @@ -44,6 +44,7 @@ ngx_shm_zone_t *shm_zone; + ngx_flag_t session_tickets; ngx_array_t *session_ticket_keys; ngx_flag_t stapling; diff -r 4aa64f695031 -r d049b0ea00a3 src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.c Fri Jan 10 16:12:40 2014 +0100 @@ -116,6 +116,13 @@ 0, NULL }, + { ngx_string("ssl_session_tickets"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, session_tickets), + NULL }, + { ngx_string("ssl_session_ticket_key"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, @@ -191,6 +198,7 @@ scf->prefer_server_ciphers = NGX_CONF_UNSET; scf->builtin_session_cache = NGX_CONF_UNSET; scf->session_timeout = NGX_CONF_UNSET; + scf->session_tickets = NGX_CONF_UNSET; scf->session_ticket_keys = NGX_CONF_UNSET_PTR; return scf; @@ -339,6 +347,15 @@ return NGX_CONF_ERROR; } + ngx_conf_merge_value(conf->session_tickets, + prev->session_tickets, 1); + +#ifdef SSL_OP_NO_TICKET + if (!conf->session_tickets) { + SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_NO_TICKET); + } +#endif + ngx_conf_merge_ptr_value(conf->session_ticket_keys, prev->session_ticket_keys, NULL); diff -r 4aa64f695031 -r d049b0ea00a3 src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.h Fri Jan 10 16:12:40 2014 +0100 @@ -41,6 +41,7 @@ ngx_shm_zone_t *shm_zone; + ngx_flag_t session_tickets; ngx_array_t *session_ticket_keys; u_char *file; From d.bussink at gmail.com Fri Jan 10 15:22:56 2014 From: d.bussink at gmail.com (Dirkjan Bussink) Date: Fri, 10 Jan 2014 16:22:56 +0100 Subject: [PATCH] Add ssl_session_ticket option to enable / disable session tickets In-Reply-To: <20EEBF0B-4998-454B-BDA9-124157FF518A@gmail.com> References: <20140109164757.GN1835@mdounin.ru> <20EEBF0B-4998-454B-BDA9-124157FF518A@gmail.com> Message-ID: <21D74BFC-AB50-4F2F-8053-60E6B8EA3D20@gmail.com> On 10 Jan 2014, at 15:49, Dirkjan 
Bussink wrote: > Alright, I can resubmit the patch with those changes. Ok, I've resent an updated version of the patch. -- Dirkjan From ru at nginx.com Fri Jan 10 19:10:06 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 10 Jan 2014 23:10:06 +0400 Subject: [PATCH] Resolver: added support for domain names with a trailing dot In-Reply-To: References: Message-ID: <20140110191006.GA40401@lo0.su> On Wed, Jan 08, 2014 at 11:59:42AM -0800, Yichun Zhang (agentzh) wrote: > We've noticed that the builtin resolver in Nginx cannot handle domain > names with a trailing dot (like "agentzh.org.") and just return the > error "Host not found". The following patch attempts a fix. There's no such thing as domain names with a trailing dot, with the one exception of the root domain name. Specifying the dot at the end of a domain name is a feature of the system resolver(3); see the hostname(7) manpage on Linux, BSD, or Mac for details. Since the resolver in nginx doesn't support $HOSTALIASES, nor does it support searching through the list of domain names, there's not much point in specifying domain names with a trailing dot. It similarly doesn't support "hostnames" like "127.1" or "0x7f000001" which the system resolver does. So I must ask. Why do you think that resolver in nginx should ever support names with a trailing dot? > # HG changeset patch > # User Yichun Zhang > # Date 1389209699 28800 > # Node ID 3d10680c0399cb8d2e3b601412df0495ffaab4a5 > # Parent c0d6eae5a1c5d16cf6a9d6a3a73656972f838eab > Resolver: added support for domain names with a trailing dot. > > diff -r c0d6eae5a1c5 -r 3d10680c0399 src/core/ngx_resolver.c > --- a/src/core/ngx_resolver.c Fri Dec 13 20:49:52 2013 +0400 > +++ b/src/core/ngx_resolver.c Wed Jan 08 11:34:59 2014 -0800 > @@ -467,6 +467,10 @@ > ngx_resolver_ctx_t *next; > ngx_resolver_node_t *rn; > > + if (ctx->name.len > 0 && ctx->name.data[ctx->name.len - 1] == '.') { > + ctx->name.len--; > + } > + > ngx_strlow(ctx->name.data, ctx->name.data, ctx->name.len); > > hash = ngx_crc32_short(ctx->name.data, ctx->name.len); Regarding the patch, it would make more sense to strip the trailing dot once on entry, in ngx_resolve_name(), not in ngx_resolve_name_locked(), which is also called internally. From keith at hulu.com Fri Jan 10 19:21:53 2014 From: keith at hulu.com (Keith Ainsworth) Date: Fri, 10 Jan 2014 13:21:53 -0600 Subject: Contributing a new module In-Reply-To: <20140110135858.GP1835@mdounin.ru> References: <20140110135858.GP1835@mdounin.ru> Message-ID: <4B4C00C3-2F4B-4457-8913-493D5127CBF1@hulu.com> I knew there must've been something obvious I was overlooking. Thanks! Sent from my iPhone > On Jan 10, 2014, at 5:59 AM, "Maxim Dounin" wrote: > > Hello! > >> On Thu, Jan 09, 2014 at 05:33:31PM -0600, Keith Ainsworth wrote: >> >> I've developed (still adding features too) a filter module that returns MD5 hashes the content, returning an extremely small digest message. >> This I find very useful for checking the integrity of large files on media servers (as serving the whole content to a centralized checker is extremely expensive on network). >> I realize, you could spin up your own file integrity checker, but on services with several servers handling the load of serving files, it's a lot easier to simply have a node or two query a bunch of hash locations and check against expected or historic values. >> >> The source is at: http://github.com/kainswor/nginx_md5_filter >> >> So my question is, how can I get this out there?
I'd love to see it on the list of 3rd party modules ;) > > Just add one to http://wiki.nginx.org/3rdPartyModules? > ;) > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From agentzh at gmail.com Fri Jan 10 20:13:26 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 10 Jan 2014 12:13:26 -0800 Subject: [PATCH] Resolver: added support for domain names with a trailing dot In-Reply-To: <20140110191006.GA40401@lo0.su> References: <20140110191006.GA40401@lo0.su> Message-ID: Hello! On Fri, Jan 10, 2014 at 11:10 AM, Ruslan Ermilov wrote: > > There's no such thing as domain names with a trailing dot, > with one exception of the root domain name. > Well, they are just a fully qualified domain names. > > So I must ask. Why do you think that resolver in nginx > should ever support names with a trailing dot? > Because our customers use things like "www.google.com." and expect it to work like in their web browsers. And Nginx's resolver just returns "Host not found" immediately. > > Regarding the patch, it would make more sense to strip the > trailing dot once on entry, in ngx_resolve_name(), not in > ngx_resolve_name_locked() which is also called internally. > Thank you for the suggestion! Attached the revised patch. Thanks! -agentzh # HG changeset patch # User Yichun Zhang # Date 1389381734 28800 # Node ID 4b50d1f299d8a69f3e3f7975132e1490352642fe # Parent 3d10680c0399cb8d2e3b601412df0495ffaab4a5 Resolver: added support for domain names with a trailing dot. diff -r 3d10680c0399 -r 4b50d1f299d8 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Wed Jan 08 11:34:59 2014 -0800 +++ b/src/core/ngx_resolver.c Fri Jan 10 11:22:14 2014 -0800 @@ -356,6 +356,10 @@ r = ctx->resolver; + if (ctx->name.len > 0 && ctx->name.data[ctx->name.len - 1] == '.') { + ctx->name.len--; + } + ngx_log_debug1(NGX_LOG_DEBUG_CORE, r->log, 0, "resolve: \"%V\"", &ctx->name); -------------- next part -------------- A non-text attachment was scrubbed... Name: resolve-names-with-a-trailing-dot-V2.patch Type: text/x-patch Size: 696 bytes Desc: not available URL: From vbart at nginx.com Sat Jan 11 01:26:41 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 11 Jan 2014 05:26:41 +0400 Subject: [PATCH] SPDY: send PING reply frame right away. In-Reply-To: References: Message-ID: <6848541.irVadqEZEM@vbart-laptop> On Friday 10 January 2014 02:09:32 Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1389316088 -3600 > # Fri Jan 10 02:08:08 2014 +0100 > # Node ID c26d5f5e8d74dc9ab71476688074717857df5216 > # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c > SPDY: send PING reply frame right away. > > Previously, PING reply frame was queued right away, but it was send > along subsequent response, which means that in case of long running > request PING reply could have been delayed by more than 10 seconds, > which is the time some browsers are waiting for a PING reply. > > Those browsers would then correctly consider such connection broken > and would resend exactly the same request over a new connection, > which isn't safe in case of non-idempotent HTTP methods. [..] Thank you for the patch. But, there is also no much sense in trying to send queue as soon as the PING frame was added (i.e. parsed from input buffer). The same is true as well for your next patch for the SETTINGS frame. 
I am going to fix the problem by this change: diff -r bbf87b408b92 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Fri Jan 10 02:08:12 2014 +0400 +++ b/src/http/ngx_http_spdy.c Sat Jan 11 05:20:50 2014 +0400 @@ -378,6 +378,15 @@ ngx_http_spdy_read_handler(ngx_event_t * return; } + if (sc->last_out) { + if (ngx_http_spdy_send_output_queue(sc) == NGX_ERROR) { + ngx_http_spdy_finalize_connection(sc, + c->error ? NGX_HTTP_CLIENT_CLOSED_REQUEST + : NGX_HTTP_INTERNAL_SERVER_ERROR); + return; + } + } + sc->blocked = 0; if (sc->processing) { Any objections? wbr, Valentin V. Bartenev From kyprizel at gmail.com Sat Jan 11 15:52:12 2014 From: kyprizel at gmail.com (kyprizel) Date: Sat, 11 Jan 2014 19:52:12 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive Message-ID: In some cases we need to vary period after OCSP response will be refreshed. By default it was hardcoded to 3600 sec. This directive allows to change it via config. Also, there were some kind of bursts when all the cluster nodes and nginx workers go to update their OCSP staples - random delay within 180 sec was added to fix it. # HG changeset patch # User Eldar Zaitov # Date 1389455065 -14400 # Node ID c883560fbb43a249cc19bb9eaea7c30ad486f84c # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c SSL: ssl_stapling_valid directive. Sets caching time for stapled OCSP response. Example: ssl_stapling_valid 1h; Default: 1 hour. diff -r 4aa64f695031 -r c883560fbb43 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/event/ngx_event_openssl.h Sat Jan 11 19:44:25 2014 +0400 @@ -119,7 +119,8 @@ ngx_str_t *cert, ngx_int_t depth); ngx_int_t ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *crl); ngx_int_t ngx_ssl_stapling(ngx_conf_t *cf, ngx_ssl_t *ssl, - ngx_str_t *file, ngx_str_t *responder, ngx_uint_t verify); + ngx_str_t *file, ngx_str_t *responder, ngx_uint_t verify, + time_t cache_time); ngx_int_t ngx_ssl_stapling_resolver(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_resolver_t *resolver, ngx_msec_t resolver_timeout); RSA *ngx_ssl_rsa512_key_callback(ngx_ssl_conn_t *ssl_conn, int is_export, diff -r 4aa64f695031 -r c883560fbb43 src/event/ngx_event_openssl_stapling.c --- a/src/event/ngx_event_openssl_stapling.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/event/ngx_event_openssl_stapling.c Sat Jan 11 19:44:25 2014 +0400 @@ -32,6 +32,7 @@ X509 *issuer; time_t valid; + time_t cache_time; unsigned verify:1; unsigned loading:1; @@ -116,7 +117,7 @@ ngx_int_t ngx_ssl_stapling(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file, - ngx_str_t *responder, ngx_uint_t verify) + ngx_str_t *responder, ngx_uint_t verify, time_t cache_time) { ngx_int_t rc; ngx_pool_cleanup_t *cln; @@ -146,6 +147,7 @@ staple->ssl_ctx = ssl->ctx; staple->timeout = 60000; staple->verify = verify; + staple->cache_time = cache_time; if (file->len) { /* use OCSP response from the file */ @@ -656,7 +658,11 @@ done: staple->loading = 0; - staple->valid = ngx_time() + 3600; /* ssl_stapling_valid */ + + /* ssl_stapling_valid */ + + staple->valid = ngx_time() + staple->cache_time + + (ngx_random() % 180); ngx_ssl_ocsp_done(ctx); return; diff -r 4aa64f695031 -r c883560fbb43 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Sat Jan 11 19:44:25 2014 +0400 @@ -209,6 +209,13 @@ offsetof(ngx_http_ssl_srv_conf_t, stapling_verify), NULL }, + { ngx_string("ssl_stapling_valid"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, + 
ngx_conf_set_sec_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_ssl_srv_conf_t, stapling_valid), + NULL }, + ngx_null_command }; @@ -439,6 +446,7 @@ sscf->session_ticket_keys = NGX_CONF_UNSET_PTR; sscf->stapling = NGX_CONF_UNSET; sscf->stapling_verify = NGX_CONF_UNSET; + sscf->stapling_valid = NGX_CONF_UNSET; return sscf; } @@ -500,6 +508,8 @@ ngx_conf_merge_str_value(conf->stapling_file, prev->stapling_file, ""); ngx_conf_merge_str_value(conf->stapling_responder, prev->stapling_responder, ""); + ngx_conf_merge_value(conf->stapling_valid, + prev->stapling_valid, 3600); conf->ssl.log = cf->log; @@ -656,7 +666,8 @@ if (conf->stapling) { if (ngx_ssl_stapling(cf, &conf->ssl, &conf->stapling_file, - &conf->stapling_responder, conf->stapling_verify) + &conf->stapling_responder, conf->stapling_verify, + conf->stapling_valid) != NGX_OK) { return NGX_CONF_ERROR; diff -r 4aa64f695031 -r c883560fbb43 src/http/modules/ngx_http_ssl_module.h --- a/src/http/modules/ngx_http_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/http/modules/ngx_http_ssl_module.h Sat Jan 11 19:44:25 2014 +0400 @@ -50,6 +50,7 @@ ngx_flag_t stapling_verify; ngx_str_t stapling_file; ngx_str_t stapling_responder; + time_t stapling_valid; u_char *file; ngx_uint_t line; -------------- next part -------------- An HTML attachment was scrubbed... URL: From faskiri.devel at gmail.com Sat Jan 11 16:58:52 2014 From: faskiri.devel at gmail.com (Fasih) Date: Sat, 11 Jan 2014 22:28:52 +0530 Subject: WWW-Authenticate header In-Reply-To: <20140110134946.GO1835@mdounin.ru> References: <20140110134946.GO1835@mdounin.ru> Message-ID: Yes, that's how I noticed it. I am using nginx as a reverse proxy. The upstream sends two WWW-Authenticate headers with different realms. I was processing www_authenticate header and hadnt realized that it was legal to send multiple WWW-Authenticate headers. On Fri, Jan 10, 2014 at 7:19 PM, Maxim Dounin wrote: > Hello! > > On Fri, Jan 10, 2014 at 05:42:23PM +0530, Fasih wrote: > > > Hi > > > > RFC allows a server to respond with multiple WWW-Authenticate header ( > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.47). > > > > "User agents are advised to take special care in parsing the WWW- > > Authenticate field value as it might contain more than one challenge, or > if > > more than one WWW-Authenticate header field is provided, the contents of > a > > challenge itself can contain a comma-separated list of authentication > > parameters." > > > > However nginx defines WWW-Authenticate header as an ngx_table_elt_t in > > the ngx_http_headers_out_t struct as opposed to an ngx_array_t like other > > allowed repeated value headers. > > > > Is this a bug that I should file? > > Have you seen this to be a problem in real life? > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sven at ha.cki.ng Mon Jan 13 10:29:26 2014 From: sven at ha.cki.ng (Sven Peter) Date: Mon, 13 Jan 2014 11:29:26 +0100 Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL client certificates Message-ID: <8744640301ae0f7d4c16.1389608966@123.fritz.box> # HG changeset patch # User Sven Peter # Date 1389607375 -3600 # Mon Jan 13 11:02:55 2014 +0100 # Node ID 8744640301ae0f7d4c16108e68c9ae6eb60f2213 # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c mail_{ssl,auth_http}_module: add support for SSL client certificates This patch adds support for SSL client certificates to the mail proxy capabilities of nginx both for STARTTLS and SSL mode. Just like the HTTP SSL module a root CA is defined in the mail section of the configuration file. Verification can be optional or mandatory. Additionally, the result of the verification is exposed to the auth http backend via the SSL-Verify, SSL-Subject-DN and SSL-Issuer-DN HTTP headers. diff -r 4aa64f695031 -r 8744640301ae src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_auth_http_module.c Mon Jan 13 11:02:55 2014 +0100 @@ -1144,6 +1144,11 @@ ngx_buf_t *b; ngx_str_t login, passwd; ngx_mail_core_srv_conf_t *cscf; + ngx_str_t ssl_client_verify = {0, NULL}; + ngx_str_t ssl_client_raw_s_dn = {0, NULL}; + ngx_str_t ssl_client_raw_i_dn = {0, NULL}; + ngx_str_t ssl_client_s_dn = {0, NULL}; + ngx_str_t ssl_client_i_dn = {0, NULL}; if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { return NULL; @@ -1153,6 +1158,29 @@ return NULL; } + // ssl_client_verify doesn't need to be escaped since it comes from nginx itself +#if (NGX_MAIL_SSL) + ngx_ssl_get_client_verify(s->connection, pool, &ssl_client_verify); + ngx_ssl_get_subject_dn(s->connection, pool, &ssl_client_s_dn); + ngx_ssl_get_subject_dn(s->connection, pool, &ssl_client_i_dn); + + if (ssl_client_raw_s_dn.len != 0) { + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_s_dn, &ssl_client_s_dn) != NGX_OK) { + return NULL; + } + } + + if (ssl_client_raw_i_dn.len != 0) { + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_i_dn, &ssl_client_i_dn) != NGX_OK) { + return NULL; + } + } +#else + // avoid -Wunused-variable + (void)ssl_client_raw_i_dn; + (void)ssl_client_raw_s_dn; +#endif + cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - 1 @@ -1173,6 +1201,9 @@ + sizeof("Auth-SMTP-Helo: ") - 1 + s->smtp_helo.len + sizeof("Auth-SMTP-From: ") - 1 + s->smtp_from.len + sizeof("Auth-SMTP-To: ") - 1 + s->smtp_to.len + + sizeof("SSL-Verify: ") - 1 + ssl_client_verify.len + sizeof(CRLF) - 1 + + sizeof("SSL-Subject-DN: ") - 1 + ssl_client_s_dn.len + sizeof(CRLF) - 1 + + sizeof("SSL-Issuer-DN: ") - 1 + ssl_client_i_dn.len + sizeof(CRLF) - 1 + ahcf->header.len + sizeof(CRLF) - 1; @@ -1255,6 +1286,20 @@ } + if (ssl_client_verify.len && ssl_client_s_dn.len && ssl_client_i_dn.len) { + b->last = ngx_cpymem(b->last, "SSL-Verify: ", sizeof("SSL-Verify: ") - 1); + b->last = ngx_copy(b->last, ssl_client_verify.data, ssl_client_verify.len); + *b->last++ = CR; *b->last++ = LF; + + b->last = ngx_cpymem(b->last, "SSL-Subject-DN: ", sizeof("SSL-Subject-DN: ") - 1); + b->last = ngx_copy(b->last, ssl_client_s_dn.data, ssl_client_s_dn.len); + *b->last++ = CR; *b->last++ = LF; + + b->last = ngx_cpymem(b->last, "SSL-Issuer-DN: ", sizeof("SSL-Issuer-DN: ") - 1); + b->last = ngx_copy(b->last, ssl_client_i_dn.data, ssl_client_i_dn.len); + *b->last++ = 
CR; *b->last++ = LF; + } + if (ahcf->header.len) { b->last = ngx_copy(b->last, ahcf->header.data, ahcf->header.len); } diff -r 4aa64f695031 -r 8744640301ae src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_handler.c Mon Jan 13 11:02:55 2014 +0100 @@ -236,11 +236,40 @@ { ngx_mail_session_t *s; ngx_mail_core_srv_conf_t *cscf; + ngx_mail_ssl_conf_t *sslcf; if (c->ssl->handshaked) { s = c->data; + sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); + if (sslcf->verify != NGX_MAIL_SSL_VERIFY_OFF) { + long rc; + rc = SSL_get_verify_result(c->ssl->connection); + + if (rc != X509_V_OK && + (sslcf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA && ngx_ssl_verify_error_optional(rc))) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client SSL certificate verify error: (%l:%s)", + rc, X509_verify_cert_error_string(rc)); + ngx_mail_close_connection(c); + return; + } + + if (sslcf->verify == NGX_MAIL_SSL_VERIFY_ON) { + X509 *cert; + cert = SSL_get_peer_certificate(c->ssl->connection); + + if (cert == NULL) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client sent no required SSL certificate"); + ngx_mail_close_connection(c); + return; + } + X509_free(cert); + } + } + if (s->starttls) { cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); diff -r 4aa64f695031 -r 8744640301ae src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.c Mon Jan 13 11:02:55 2014 +0100 @@ -43,6 +43,13 @@ { ngx_null_string, 0 } }; +static ngx_conf_enum_t ngx_mail_ssl_verify[] = { + { ngx_string("off"), NGX_MAIL_SSL_VERIFY_OFF }, + { ngx_string("on"), NGX_MAIL_SSL_VERIFY_ON }, + { ngx_string("optional"), NGX_MAIL_SSL_VERIFY_OPTIONAL }, + { ngx_string("optional_no_ca"), NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA }, + { ngx_null_string, 0 } +}; static ngx_command_t ngx_mail_ssl_commands[] = { @@ -130,7 +137,40 @@ offsetof(ngx_mail_ssl_conf_t, session_timeout), NULL }, - ngx_null_command + { + ngx_string("ssl_verify_client"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_enum_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify), + &ngx_mail_ssl_verify + }, + { + ngx_string("ssl_verify_depth"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_1MORE, + ngx_conf_set_num_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify_depth), + NULL + }, + { + ngx_string("ssl_client_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, client_certificate), + NULL + }, + { + ngx_string("ssl_trusted_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, trusted_certificate), + NULL + }, + + ngx_null_command }; @@ -184,6 +224,8 @@ * scf->ecdh_curve = { 0, NULL }; * scf->ciphers = { 0, NULL }; * scf->shm_zone = NULL; + * scf->client_certificate = { 0, NULL }; + * scf->trusted_certificate = { 0, NULL }; */ scf->enable = NGX_CONF_UNSET; @@ -192,6 +234,8 @@ scf->builtin_session_cache = NGX_CONF_UNSET; scf->session_timeout = NGX_CONF_UNSET; scf->session_ticket_keys = NGX_CONF_UNSET_PTR; + scf->verify = NGX_CONF_UNSET_UINT; + scf->verify_depth = NGX_CONF_UNSET_UINT; return scf; } @@ -230,6 +274,11 @@ ngx_conf_merge_str_value(conf->ciphers, prev->ciphers, NGX_DEFAULT_CIPHERS); + ngx_conf_merge_uint_value(conf->verify, prev->verify, 
NGX_MAIL_SSL_VERIFY_OFF); + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); + + ngx_conf_merge_str_value(conf->client_certificate, prev->client_certificate, ""); + ngx_conf_merge_str_value(conf->trusted_certificate, prev->trusted_certificate, ""); conf->ssl.log = cf->log; @@ -310,6 +359,21 @@ return NGX_CONF_ERROR; } + if (conf->verify) { + if (conf->client_certificate.len == 0 && conf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no ssl_client_certificate for ssl_client_verify"); + return NGX_CONF_ERROR; + } + + if (ngx_ssl_client_certificate(cf, &conf->ssl, + &conf->client_certificate, + conf->verify_depth) + != NGX_OK) { + return NGX_CONF_ERROR; + } + } + if (conf->prefer_server_ciphers) { SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); } diff -r 4aa64f695031 -r 8744640301ae src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.h Mon Jan 13 11:02:55 2014 +0100 @@ -37,8 +37,14 @@ ngx_str_t dhparam; ngx_str_t ecdh_curve; + ngx_str_t client_certificate; + ngx_str_t trusted_certificate; + ngx_str_t ciphers; + ngx_uint_t verify; + ngx_uint_t verify_depth; + ngx_shm_zone_t *shm_zone; ngx_array_t *session_ticket_keys; @@ -47,6 +53,13 @@ ngx_uint_t line; } ngx_mail_ssl_conf_t; +enum ngx_mail_ssl_verify_enum { + NGX_MAIL_SSL_VERIFY_OFF = 0, + NGX_MAIL_SSL_VERIFY_ON, + NGX_MAIL_SSL_VERIFY_OPTIONAL, + NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA, +}; + extern ngx_module_t ngx_mail_ssl_module; From fdasilvayy at gmail.com Mon Jan 13 12:09:02 2014 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Mon, 13 Jan 2014 13:09:02 +0100 Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL client certificates Message-ID: Hi. Some remarks about your patch . 2014/1/13 : > From: Sven Peter > To: nginx-devel at nginx.org > Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL > client certificates > Message-ID: <8744640301ae0f7d4c16.1389608966 at 123.fritz.box> > Content-Type: text/plain; charset="us-ascii" > > # HG changeset patch > # User Sven Peter > # Date 1389607375 -3600 > # Mon Jan 13 11:02:55 2014 +0100 > # Node ID 8744640301ae0f7d4c16108e68c9ae6eb60f2213 > # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c > mail_{ssl,auth_http}_module: add support for SSL client certificates > > This patch adds support for SSL client certificates to the mail proxy > capabilities of nginx both for STARTTLS and SSL mode. > Just like the HTTP SSL module a root CA is defined in the mail section > of the configuration file. Verification can be optional or mandatory. > Additionally, the result of the verification is exposed to the > auth http backend via the SSL-Verify, SSL-Subject-DN and SSL-Issuer-DN > HTTP headers. > > diff -r 4aa64f695031 -r 8744640301ae src/mail/ngx_mail_auth_http_module.c > --- a/src/mail/ngx_mail_auth_http_module.c Sat Jan 04 03:32:22 2014 +0400 > +++ b/src/mail/ngx_mail_auth_http_module.c Mon Jan 13 11:02:55 2014 +0100 > @@ -1144,6 +1144,11 @@ > ngx_buf_t *b; > ngx_str_t login, passwd; > ngx_mail_core_srv_conf_t *cscf; > + ngx_str_t ssl_client_verify = {0, NULL}; > + ngx_str_t ssl_client_raw_s_dn = {0, NULL}; > + ngx_str_t ssl_client_raw_i_dn = {0, NULL}; > + ngx_str_t ssl_client_s_dn = {0, NULL}; > + ngx_str_t ssl_client_i_dn = {0, NULL}; This kind of initialization is not part in the nginx coding style. 
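For comparison, the usual nginx style keeps the declarations bare and assigns the values in the function body; a minimal sketch (the names are simply reused from the patch above, purely as an illustration):

    ngx_str_t   ssl_client_verify;
    ngx_str_t   ssl_client_s_dn;

    /* ... */

    /* set explicitly before first use instead of using initializers */
    ngx_str_null(&ssl_client_verify);
    ngx_str_null(&ssl_client_s_dn);
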
> > if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { > return NULL; > @@ -1153,6 +1158,29 @@ > return NULL; > } > > + // ssl_client_verify doesn't need to be escaped since it comes from nginx itself > +#if (NGX_MAIL_SSL) > + ngx_ssl_get_client_verify(s->connection, pool, &ssl_client_verify); > + ngx_ssl_get_subject_dn(s->connection, pool, &ssl_client_s_dn); > + ngx_ssl_get_subject_dn(s->connection, pool, &ssl_client_i_dn); Twice call to ngx_ssl_get_subject_dn : Copy-paste issue ? ... Regards, FDS From sven at ha.cki.ng Mon Jan 13 13:10:52 2014 From: sven at ha.cki.ng (Sven Peter) Date: Mon, 13 Jan 2014 14:10:52 +0100 Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL client certificates In-Reply-To: References: Message-ID: <87104D36-7406-4AA9-B701-9676738969D2@ha.cki.ng> Hi, On Jan 13, 2014, at 1:09 PM, Filipe Da Silva wrote: > Hi. > > Some remarks about your patch . > > 2014/1/13 : >> >> >> diff -r 4aa64f695031 -r 8744640301ae src/mail/ngx_mail_auth_http_module.c >> --- a/src/mail/ngx_mail_auth_http_module.c Sat Jan 04 03:32:22 2014 +0400 >> +++ b/src/mail/ngx_mail_auth_http_module.c Mon Jan 13 11:02:55 2014 +0100 >> @@ -1144,6 +1144,11 @@ >> ngx_buf_t *b; >> ngx_str_t login, passwd; >> ngx_mail_core_srv_conf_t *cscf; >> + ngx_str_t ssl_client_verify = {0, NULL}; >> + ngx_str_t ssl_client_raw_s_dn = {0, NULL}; >> + ngx_str_t ssl_client_raw_i_dn = {0, NULL}; >> + ngx_str_t ssl_client_s_dn = {0, NULL}; >> + ngx_str_t ssl_client_i_dn = {0, NULL}; > > This kind of initialization is not part in the nginx coding style. Ah, sorry. I'll fix that! How do I handle the case when nginx is configured without ssl support (i.e. NGX_MAIL_SSL is not defined).? Just place a #ifdef around the declarations and the other new code below? > >> >> if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { >> return NULL; >> @@ -1153,6 +1158,29 @@ >> return NULL; >> } >> >> + // ssl_client_verify doesn't need to be escaped since it comes from nginx itself >> +#if (NGX_MAIL_SSL) >> + ngx_ssl_get_client_verify(s->connection, pool, &ssl_client_verify); >> + ngx_ssl_get_subject_dn(s->connection, pool, &ssl_client_s_dn); >> + ngx_ssl_get_subject_dn(s->connection, pool, &ssl_client_i_dn); > > Twice call to ngx_ssl_get_subject_dn : Copy-paste issue ? > Yes, it's a copy-paste issue. I didn't notice because I only verify the subject in my setup. The second call should be ngx_ssl_get_issuer_dn of course. > ... > > Regards, > FDS > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel How do I proceed from here? Re-submit the fixed patch as a reply to this thread? Thanks, Sven From mdounin at mdounin.ru Mon Jan 13 13:57:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jan 2014 17:57:36 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: References: Message-ID: <20140113135736.GT1835@mdounin.ru> Hello! On Sat, Jan 11, 2014 at 07:52:12PM +0400, kyprizel wrote: > In some cases we need to vary period after OCSP response will be refreshed. > By default it was hardcoded to 3600 sec. This directive allows to change it > via config. In which "some cases"? The directive was ommitted intentionally to simplify things as it seems to be good enough to have hardcoded 1h value. Note well that OCSP responses have their validity times available, and it may be a good idea to derive needed times from there instead of making things user-configurable. 
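For illustration only, a rough sketch of how such a derivation could look with the OpenSSL API; the helper name is made up, "basic" and "id" are assumed to be the OCSP_BASICRESP and OCSP_CERTID the stapling code already has, and ASN1_TIME_diff() needs OpenSSL 1.0.2 or newer:

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <openssl/ocsp.h>

    /* sketch: pick a refresh time from the response itself,
     * falling back to the currently hardcoded 3600 seconds */
    static time_t
    staple_refresh_time(OCSP_BASICRESP *basic, OCSP_CERTID *id)
    {
        int                    status, reason, days, secs;
        time_t                 remaining;
        ASN1_GENERALIZEDTIME  *thisupd, *nextupd;

        if (OCSP_resp_find_status(basic, id, &status, &reason, NULL,
                                  &thisupd, &nextupd)
            != 1)
        {
            return ngx_time() + 3600;
        }

        if (nextupd == NULL) {
            /* no nextUpdate: responder may generate responses on the fly */
            return ngx_time() + 3600;
        }

        if (ASN1_TIME_diff(&days, &secs, NULL, nextupd) != 1
            || (days <= 0 && secs <= 0))
        {
            /* nextUpdate already in the past, retry with the default */
            return ngx_time() + 3600;
        }

        remaining = (time_t) days * 86400 + secs;

        /* refresh halfway through the remaining validity period */
        return ngx_time() + remaining / 2;
    }

Whether refreshing halfway through the remaining validity period is the right policy is of course debatable; the point is only that the response itself already carries the needed information.
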
> Also, there were some kind of bursts when all the cluster nodes and nginx > workers go to update their OCSP staples - random delay within 180 sec was > added to fix it. This may make sense, but certainly should be a separate patch. [...] > @@ -32,6 +32,7 @@ > X509 *issuer; > > time_t valid; > + time_t cache_time; I don't really like the name used. [...] > @@ -656,7 +658,11 @@ > done: > > staple->loading = 0; > - staple->valid = ngx_time() + 3600; /* ssl_stapling_valid */ > + > + /* ssl_stapling_valid */ > + > + staple->valid = ngx_time() + staple->cache_time > + + (ngx_random() % 180); The comment is here to indicate what the "3600" magic number means. Preserving it shouldn't be needed. [...] -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 13 13:58:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jan 2014 17:58:39 +0400 Subject: [PATCH] Resolver: added support for domain names with a trailing dot In-Reply-To: References: <20140110191006.GA40401@lo0.su> Message-ID: <20140113135839.GU1835@mdounin.ru> Hello! On Fri, Jan 10, 2014 at 12:13:26PM -0800, Yichun Zhang (agentzh) wrote: > Hello! > > On Fri, Jan 10, 2014 at 11:10 AM, Ruslan Ermilov wrote: > > > > There's no such thing as domain names with a trailing dot, > > with one exception of the root domain name. > > > > Well, they are just a fully qualified domain names. Well, not really. There is no need for a trailing dot for a domain name to be fully qualified. The "example.com" domain _is_ fully qualified. The trailing dot is just used by some software to indicate fully qualified names. It looks like it's something specifically mentioned by RFC 3986 though, http://tools.ietf.org/html/rfc3986#section-3.2.2: The rightmost domain label of a fully qualified domain name in DNS may be followed by a single "." and should be if it is necessary to distinguish between the complete domain name and some local domain. So we probably should support it. [...] -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Mon Jan 13 14:08:53 2014 From: kyprizel at gmail.com (kyprizel) Date: Mon, 13 Jan 2014 18:08:53 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: <20140113135736.GT1835@mdounin.ru> References: <20140113135736.GT1835@mdounin.ru> Message-ID: "some cases", for example = you have a lot of users with wrong system time, so they can't access the server if OCSP responses updated too frequently. According to validity time - most responders issue OCSP response valid for 7 days, but they also can issue responses without nextUpdate option. I think user-configurable thing is much better here b/c in most cases you can't manipulate CA's OCSP responders options and fix them. validity_period vs cache_valid will be better? On Mon, Jan 13, 2014 at 5:57 PM, Maxim Dounin wrote: > Hello! > > On Sat, Jan 11, 2014 at 07:52:12PM +0400, kyprizel wrote: > > > In some cases we need to vary period after OCSP response will be > refreshed. > > By default it was hardcoded to 3600 sec. This directive allows to change > it > > via config. > > In which "some cases"? The directive was ommitted intentionally > to simplify things as it seems to be good enough to have hardcoded > 1h value. > > Note well that OCSP responses have their validity times available, > and it may be a good idea to derive needed times from there > instead of making things user-configurable. 
> > > Also, there were some kind of bursts when all the cluster nodes and nginx > > workers go to update their OCSP staples - random delay within 180 sec was > > added to fix it. > > This may make sense, but certainly should be a separate patch. > > [...] > > > @@ -32,6 +32,7 @@ > > X509 *issuer; > > > > time_t valid; > > + time_t cache_time; > > I don't really like the name used. > > [...] > > > @@ -656,7 +658,11 @@ > > done: > > > > staple->loading = 0; > > - staple->valid = ngx_time() + 3600; /* ssl_stapling_valid */ > > + > > + /* ssl_stapling_valid */ > > + > > + staple->valid = ngx_time() + staple->cache_time > > + + (ngx_random() % 180); > > The comment is here to indicate what the "3600" magic number > means. Preserving it shouldn't be needed. > > [...] > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 13 14:51:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jan 2014 18:51:23 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: References: <20140113135736.GT1835@mdounin.ru> Message-ID: <20140113145123.GW1835@mdounin.ru> Hello! On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > "some cases", for example = you have a lot of users with wrong system time, > so they can't access the server if OCSP responses updated too frequently. This looks like a very-very wrong way to address the problem. Instead of resolving the problem it will hide it on some requests (but not on others), making the problem harder to detect and debug. -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Mon Jan 13 15:04:11 2014 From: kyprizel at gmail.com (kyprizel) Date: Mon, 13 Jan 2014 19:04:11 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: <20140113145123.GW1835@mdounin.ru> References: <20140113135736.GT1835@mdounin.ru> <20140113145123.GW1835@mdounin.ru> Message-ID: So, you going to leave 3600 hardcoded there? On Mon, Jan 13, 2014 at 6:51 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > > > "some cases", for example = you have a lot of users with wrong system > time, > > so they can't access the server if OCSP responses updated too frequently. > > This looks like a very-very wrong way to address the problem. > Instead of resolving the problem it will hide it on some requests > (but not on others), making the problem harder to detect and debug. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sven at ha.cki.ng Mon Jan 13 15:28:25 2014 From: sven at ha.cki.ng (Sven Peter) Date: Mon, 13 Jan 2014 16:28:25 +0100 Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL client certificates In-Reply-To: <87104D36-7406-4AA9-B701-9676738969D2@ha.cki.ng> References: <87104D36-7406-4AA9-B701-9676738969D2@ha.cki.ng> Message-ID: <869DE836-5635-40C8-A86C-53AFD4B1FD30@ha.cki.ng> Hi again, Here's an updated version of the patch. Sorry for not using hg email, but I couldn't figure out how to convince it to reply to an existing thread. 
This will probably screw something up, so I attached the mbox file to be sure. -------------- next part -------------- A non-text attachment was scrubbed... Name: mail-ssl-patch.mbox Type: application/octet-stream Size: 11830 bytes Desc: not available URL: -------------- next part -------------- Sven # HG changeset patch # User Sven Peter # Date 1389626052 -3600 # Mon Jan 13 16:14:12 2014 +0100 # Node ID a444733105e8eb96212f142533e714532a23cddf # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c mail_{ssl,auth_http}_module: add support for SSL client certificates This patch adds support for SSL client certificates to the mail proxy capabilities of nginx both for STARTTLS and SSL mode. Just like the HTTP SSL module a root CA is defined in the mail section of the configuration file. Verification can be optional or mandatory. Additionally, the result of the verification is exposed to the auth http backend via the SSL-Verify, SSL-Subject-DN, SSL-Issuer-DN and SSL-Serial HTTP headers. diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_auth_http_module.c Mon Jan 13 16:14:12 2014 +0100 @@ -1145,6 +1145,16 @@ ngx_str_t login, passwd; ngx_mail_core_srv_conf_t *cscf; +#if (NGX_MAIL_SSL) + ngx_str_t ssl_client_verify; + ngx_str_t ssl_client_raw_s_dn; + ngx_str_t ssl_client_raw_i_dn; + ngx_str_t ssl_client_raw_serial; + ngx_str_t ssl_client_s_dn; + ngx_str_t ssl_client_i_dn; + ngx_str_t ssl_client_serial; +#endif + if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { return NULL; } @@ -1153,6 +1163,51 @@ return NULL; } +#if (NGX_MAIL_SSL) + if (s->connection->ssl) { + /* ssl_client_verify comes from nginx itself - no need to escape */ + if (ngx_ssl_get_client_verify(s->connection, pool, + &ssl_client_verify) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_subject_dn(s->connection, pool, + &ssl_client_raw_s_dn) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_issuer_dn(s->connection, pool, + &ssl_client_raw_i_dn) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_serial_number(s->connection, pool, + &ssl_client_raw_serial) != NGX_OK) { + return NULL; + } + + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_s_dn, + &ssl_client_s_dn) != NGX_OK) { + return NULL; + } + + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_i_dn, + &ssl_client_i_dn) != NGX_OK) { + return NULL; + } + + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_serial, + &ssl_client_serial) != NGX_OK) { + return NULL; + } + } else { + ngx_str_set(&ssl_client_verify, "NONE"); + ssl_client_i_dn.len = 0; + ssl_client_s_dn.len = 0; + ssl_client_serial.len = 0; + } +#endif + cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - 1 @@ -1173,6 +1228,16 @@ + sizeof("Auth-SMTP-Helo: ") - 1 + s->smtp_helo.len + sizeof("Auth-SMTP-From: ") - 1 + s->smtp_from.len + sizeof("Auth-SMTP-To: ") - 1 + s->smtp_to.len +#if (NGX_MAIL_SSL) + + sizeof("SSL-Verify: ") - 1 + ssl_client_verify.len + + sizeof(CRLF) - 1 + + sizeof("SSL-Subject-DN: ") - 1 + ssl_client_s_dn.len + + sizeof(CRLF) - 1 + + sizeof("SSL-Issuer-DN: ") - 1 + ssl_client_i_dn.len + + sizeof(CRLF) - 1 + + sizeof("SSL-Serial: ") - 1 + ssl_client_serial.len + + sizeof(CRLF) - 1 +#endif + ahcf->header.len + sizeof(CRLF) - 1; @@ -1255,6 +1320,34 @@ } +#if (NGX_MAIL_SSL) + if (ssl_client_verify.len) { + b->last = ngx_cpymem(b->last, "SSL-Verify: ", + sizeof("SSL-Verify: ") - 1); 
+ b->last = ngx_copy(b->last, ssl_client_verify.data, + ssl_client_verify.len); + *b->last++ = CR; *b->last++ = LF; + + b->last = ngx_cpymem(b->last, "SSL-Subject-DN: ", + sizeof("SSL-Subject-DN: ") - 1); + b->last = ngx_copy(b->last, ssl_client_s_dn.data, + ssl_client_s_dn.len); + *b->last++ = CR; *b->last++ = LF; + + b->last = ngx_cpymem(b->last, "SSL-Issuer-DN: ", + sizeof("SSL-Issuer-DN: ") - 1); + b->last = ngx_copy(b->last, ssl_client_i_dn.data, + ssl_client_i_dn.len); + *b->last++ = CR; *b->last++ = LF; + + b->last = ngx_cpymem(b->last, "SSL-Serial: ", + sizeof("SSL-Serial: ") - 1); + b->last = ngx_copy(b->last, ssl_client_serial.data, + ssl_client_serial.len); + *b->last++ = CR; *b->last++ = LF; + } +#endif + if (ahcf->header.len) { b->last = ngx_copy(b->last, ahcf->header.data, ahcf->header.len); } diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_handler.c Mon Jan 13 16:14:12 2014 +0100 @@ -236,11 +236,40 @@ { ngx_mail_session_t *s; ngx_mail_core_srv_conf_t *cscf; + ngx_mail_ssl_conf_t *sslcf; if (c->ssl->handshaked) { s = c->data; + sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); + if (sslcf->verify != NGX_MAIL_SSL_VERIFY_OFF) { + long rc; + rc = SSL_get_verify_result(c->ssl->connection); + + if (rc != X509_V_OK && + (sslcf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA && ngx_ssl_verify_error_optional(rc))) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client SSL certificate verify error: (%l:%s)", + rc, X509_verify_cert_error_string(rc)); + ngx_mail_close_connection(c); + return; + } + + if (sslcf->verify == NGX_MAIL_SSL_VERIFY_ON) { + X509 *cert; + cert = SSL_get_peer_certificate(c->ssl->connection); + + if (cert == NULL) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client sent no required SSL certificate"); + ngx_mail_close_connection(c); + return; + } + X509_free(cert); + } + } + if (s->starttls) { cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.c Mon Jan 13 16:14:12 2014 +0100 @@ -43,6 +43,13 @@ { ngx_null_string, 0 } }; +static ngx_conf_enum_t ngx_mail_ssl_verify[] = { + { ngx_string("off"), NGX_MAIL_SSL_VERIFY_OFF }, + { ngx_string("on"), NGX_MAIL_SSL_VERIFY_ON }, + { ngx_string("optional"), NGX_MAIL_SSL_VERIFY_OPTIONAL }, + { ngx_string("optional_no_ca"), NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA }, + { ngx_null_string, 0 } +}; static ngx_command_t ngx_mail_ssl_commands[] = { @@ -130,7 +137,40 @@ offsetof(ngx_mail_ssl_conf_t, session_timeout), NULL }, - ngx_null_command + { + ngx_string("ssl_verify_client"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_enum_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify), + &ngx_mail_ssl_verify + }, + { + ngx_string("ssl_verify_depth"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_1MORE, + ngx_conf_set_num_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify_depth), + NULL + }, + { + ngx_string("ssl_client_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, client_certificate), + NULL + }, + { + ngx_string("ssl_trusted_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, 
trusted_certificate), + NULL + }, + + ngx_null_command }; @@ -184,6 +224,8 @@ * scf->ecdh_curve = { 0, NULL }; * scf->ciphers = { 0, NULL }; * scf->shm_zone = NULL; + * scf->client_certificate = { 0, NULL }; + * scf->trusted_certificate = { 0, NULL }; */ scf->enable = NGX_CONF_UNSET; @@ -192,6 +234,8 @@ scf->builtin_session_cache = NGX_CONF_UNSET; scf->session_timeout = NGX_CONF_UNSET; scf->session_ticket_keys = NGX_CONF_UNSET_PTR; + scf->verify = NGX_CONF_UNSET_UINT; + scf->verify_depth = NGX_CONF_UNSET_UINT; return scf; } @@ -230,6 +274,11 @@ ngx_conf_merge_str_value(conf->ciphers, prev->ciphers, NGX_DEFAULT_CIPHERS); + ngx_conf_merge_uint_value(conf->verify, prev->verify, NGX_MAIL_SSL_VERIFY_OFF); + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); + + ngx_conf_merge_str_value(conf->client_certificate, prev->client_certificate, ""); + ngx_conf_merge_str_value(conf->trusted_certificate, prev->trusted_certificate, ""); conf->ssl.log = cf->log; @@ -310,6 +359,21 @@ return NGX_CONF_ERROR; } + if (conf->verify) { + if (conf->client_certificate.len == 0 && conf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no ssl_client_certificate for ssl_client_verify"); + return NGX_CONF_ERROR; + } + + if (ngx_ssl_client_certificate(cf, &conf->ssl, + &conf->client_certificate, + conf->verify_depth) + != NGX_OK) { + return NGX_CONF_ERROR; + } + } + if (conf->prefer_server_ciphers) { SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); } diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 +++ b/src/mail/ngx_mail_ssl_module.h Mon Jan 13 16:14:12 2014 +0100 @@ -37,8 +37,14 @@ ngx_str_t dhparam; ngx_str_t ecdh_curve; + ngx_str_t client_certificate; + ngx_str_t trusted_certificate; + ngx_str_t ciphers; + ngx_uint_t verify; + ngx_uint_t verify_depth; + ngx_shm_zone_t *shm_zone; ngx_array_t *session_ticket_keys; @@ -47,6 +53,13 @@ ngx_uint_t line; } ngx_mail_ssl_conf_t; +enum ngx_mail_ssl_verify_enum { + NGX_MAIL_SSL_VERIFY_OFF = 0, + NGX_MAIL_SSL_VERIFY_ON, + NGX_MAIL_SSL_VERIFY_OPTIONAL, + NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA, +}; + extern ngx_module_t ngx_mail_ssl_module; From mdounin at mdounin.ru Mon Jan 13 15:38:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jan 2014 19:38:27 +0400 Subject: WWW-Authenticate header In-Reply-To: References: <20140110134946.GO1835@mdounin.ru> Message-ID: <20140113153827.GX1835@mdounin.ru> Hello! On Sat, Jan 11, 2014 at 10:28:52PM +0530, Fasih wrote: > Yes, that's how I noticed it. I am using nginx as a reverse proxy. The > upstream sends two WWW-Authenticate headers with different realms. I was > processing www_authenticate header and hadnt realized that it was legal to > send multiple WWW-Authenticate headers. Looks like there are indeed valid real-world uses, see e.g. here: http://stackoverflow.com/a/15894841/1597813 I don't think we want to change www_authenticate to ngx_array_t, but it certainly counts as another case requiring better support for multiple headers, much like with $upstream_http_set_cookie and multiple Set-Cookie headers, and so on. > > On Fri, Jan 10, 2014 at 7:19 PM, Maxim Dounin wrote: > > > Hello! > > > > On Fri, Jan 10, 2014 at 05:42:23PM +0530, Fasih wrote: > > > > > Hi > > > > > > RFC allows a server to respond with multiple WWW-Authenticate header ( > > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.47). 
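For illustration, such a response simply carries one header field per challenge (a made-up example, not taken from the thread):

    HTTP/1.1 401 Unauthorized
    WWW-Authenticate: Negotiate
    WWW-Authenticate: Basic realm="restricted"

With www_authenticate being a single ngx_table_elt_t, only one of these challenges is reachable through that pointer, which is roughly the limitation being discussed.
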
> > > > > > "User agents are advised to take special care in parsing the WWW- > > > Authenticate field value as it might contain more than one challenge, or > > if > > > more than one WWW-Authenticate header field is provided, the contents of > > a > > > challenge itself can contain a comma-separated list of authentication > > > parameters." > > > > > > However nginx defines WWW-Authenticate header as an ngx_table_elt_t in > > > the ngx_http_headers_out_t struct as opposed to an ngx_array_t like other > > > allowed repeated value headers. > > > > > > Is this a bug that I should file? > > > > Have you seen this to be a problem in real life? > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 13 15:42:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jan 2014 19:42:36 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: References: <20140113135736.GT1835@mdounin.ru> <20140113145123.GW1835@mdounin.ru> Message-ID: <20140113154236.GY1835@mdounin.ru> Hello! On Mon, Jan 13, 2014 at 07:04:11PM +0400, kyprizel wrote: > So, you going to leave 3600 hardcoded there? Yes, unless you have some better reasons to make it configurable. > > > On Mon, Jan 13, 2014 at 6:51 PM, Maxim Dounin wrote: > > > Hello! > > > > On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > > > > > "some cases", for example = you have a lot of users with wrong system > > time, > > > so they can't access the server if OCSP responses updated too frequently. > > > > This looks like a very-very wrong way to address the problem. > > Instead of resolving the problem it will hide it on some requests > > (but not on others), making the problem harder to detect and debug. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Mon Jan 13 15:45:29 2014 From: kyprizel at gmail.com (kyprizel) Date: Mon, 13 Jan 2014 19:45:29 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: <20140113154236.GY1835@mdounin.ru> References: <20140113135736.GT1835@mdounin.ru> <20140113145123.GW1835@mdounin.ru> <20140113154236.GY1835@mdounin.ru> Message-ID: The reason is quite easy - most responders _do_ set validity time equal to 7 days and there is no reason to update the response every hour and I want to update it more rarely. Some do not set nextUpdate at all and 3600 can be too rarely for them. On Mon, Jan 13, 2014 at 7:42 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jan 13, 2014 at 07:04:11PM +0400, kyprizel wrote: > > > So, you going to leave 3600 hardcoded there? > > Yes, unless you have some better reasons to make it > configurable. > > > > > > > On Mon, Jan 13, 2014 at 6:51 PM, Maxim Dounin > wrote: > > > > > Hello! 
> > > > > > On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > > > > > > > "some cases", for example = you have a lot of users with wrong system > > > time, > > > > so they can't access the server if OCSP responses updated too > frequently. > > > > > > This looks like a very-very wrong way to address the problem. > > > Instead of resolving the problem it will hide it on some requests > > > (but not on others), making the problem harder to detect and debug. > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 13 16:12:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jan 2014 20:12:56 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: References: <20140113135736.GT1835@mdounin.ru> <20140113145123.GW1835@mdounin.ru> <20140113154236.GY1835@mdounin.ru> Message-ID: <20140113161256.GZ1835@mdounin.ru> Hello! On Mon, Jan 13, 2014 at 07:45:29PM +0400, kyprizel wrote: > The reason is quite easy - most responders _do_ set validity time equal to > 7 days and there is no reason to update the response every hour and I want > to update it more rarely. > Some do not set nextUpdate at all and 3600 can be too rarely for them. These reasons suggest that deriving validity times from response validity times, as suggested earlier, would be a better way to go. > > > > On Mon, Jan 13, 2014 at 7:42 PM, Maxim Dounin wrote: > > > Hello! > > > > On Mon, Jan 13, 2014 at 07:04:11PM +0400, kyprizel wrote: > > > > > So, you going to leave 3600 hardcoded there? > > > > Yes, unless you have some better reasons to make it > > configurable. > > > > > > > > > > > On Mon, Jan 13, 2014 at 6:51 PM, Maxim Dounin > > wrote: > > > > > > > Hello! > > > > > > > > On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > > > > > > > > > "some cases", for example = you have a lot of users with wrong system > > > > time, > > > > > so they can't access the server if OCSP responses updated too > > frequently. > > > > > > > > This looks like a very-very wrong way to address the problem. > > > > Instead of resolving the problem it will hide it on some requests > > > > (but not on others), making the problem harder to detect and debug. 
> > > > > > > > -- > > > > Maxim Dounin > > > > http://nginx.org/ > > > > > > > > _______________________________________________ > > > > nginx-devel mailing list > > > > nginx-devel at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Mon Jan 13 16:23:46 2014 From: kyprizel at gmail.com (kyprizel) Date: Mon, 13 Jan 2014 20:23:46 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: <20140113161256.GZ1835@mdounin.ru> References: <20140113135736.GT1835@mdounin.ru> <20140113145123.GW1835@mdounin.ru> <20140113154236.GY1835@mdounin.ru> <20140113161256.GZ1835@mdounin.ru> Message-ID: > This looks like a very-very wrong way to address the problem. > Instead of resolving the problem it will hide it on some requests > (but not on others), making the problem harder to detect and debug. Once user can access the resource - he can see the warning about system time problem (and other warning). If he can't access it at all seeing something like "OCSP response invalid" - he doesn't know what to do. On Mon, Jan 13, 2014 at 8:12 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jan 13, 2014 at 07:45:29PM +0400, kyprizel wrote: > > > The reason is quite easy - most responders _do_ set validity time equal > to > > 7 days and there is no reason to update the response every hour and I > want > > to update it more rarely. > > Some do not set nextUpdate at all and 3600 can be too rarely for them. > > These reasons suggest that deriving validity times from response > validity times, as suggested earlier, would be a better way to go. > > > > > > > > > On Mon, Jan 13, 2014 at 7:42 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Mon, Jan 13, 2014 at 07:04:11PM +0400, kyprizel wrote: > > > > > > > So, you going to leave 3600 hardcoded there? > > > > > > Yes, unless you have some better reasons to make it > > > configurable. > > > > > > > > > > > > > > > On Mon, Jan 13, 2014 at 6:51 PM, Maxim Dounin > > > wrote: > > > > > > > > > Hello! > > > > > > > > > > On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > > > > > > > > > > > "some cases", for example = you have a lot of users with wrong > system > > > > > time, > > > > > > so they can't access the server if OCSP responses updated too > > > frequently. > > > > > > > > > > This looks like a very-very wrong way to address the problem. > > > > > Instead of resolving the problem it will hide it on some requests > > > > > (but not on others), making the problem harder to detect and debug. 
> > > > > > > > > > -- > > > > > Maxim Dounin > > > > > http://nginx.org/ > > > > > > > > > > _______________________________________________ > > > > > nginx-devel mailing list > > > > > nginx-devel at nginx.org > > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > > > > _______________________________________________ > > > > nginx-devel mailing list > > > > nginx-devel at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 13 18:25:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jan 2014 22:25:26 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: References: <20140113135736.GT1835@mdounin.ru> <20140113145123.GW1835@mdounin.ru> <20140113154236.GY1835@mdounin.ru> <20140113161256.GZ1835@mdounin.ru> Message-ID: <20140113182526.GD1835@mdounin.ru> Hello! On Mon, Jan 13, 2014 at 08:23:46PM +0400, kyprizel wrote: > > This looks like a very-very wrong way to address the problem. > > Instead of resolving the problem it will hide it on some requests > > (but not on others), making the problem harder to detect and debug. > > Once user can access the resource - he can see the warning about system > time problem (and other warning). > If he can't access it at all seeing something like "OCSP response invalid" > - he doesn't know what to do. So the correct solution will probably be to ask browser vendors don't follow "abort the handshake" requirement (see http://trac.nginx.org/nginx/ticket/425 for other reasons why it's a bad idea anyway) and/or inform users about possible reasons of the problem. And/or to relax thisUpdate check. And/or to ask CAs to provide responses with thisUpdate set somewhere in the past. Trying to update OCSP responses less frequently doesn't looks like a solution. There will be periods when a response is fresh anyway. > > > > On Mon, Jan 13, 2014 at 8:12 PM, Maxim Dounin wrote: > > > Hello! > > > > On Mon, Jan 13, 2014 at 07:45:29PM +0400, kyprizel wrote: > > > > > The reason is quite easy - most responders _do_ set validity time equal > > to > > > 7 days and there is no reason to update the response every hour and I > > want > > > to update it more rarely. > > > Some do not set nextUpdate at all and 3600 can be too rarely for them. > > > > These reasons suggest that deriving validity times from response > > validity times, as suggested earlier, would be a better way to go. > > > > > > > > > > > > > > On Mon, Jan 13, 2014 at 7:42 PM, Maxim Dounin > > wrote: > > > > > > > Hello! > > > > > > > > On Mon, Jan 13, 2014 at 07:04:11PM +0400, kyprizel wrote: > > > > > > > > > So, you going to leave 3600 hardcoded there? > > > > > > > > Yes, unless you have some better reasons to make it > > > > configurable. 
> > > > > > > > > > > > > > > > > > > On Mon, Jan 13, 2014 at 6:51 PM, Maxim Dounin > > > > wrote: > > > > > > > > > > > Hello! > > > > > > > > > > > > On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > > > > > > > > > > > > > "some cases", for example = you have a lot of users with wrong > > system > > > > > > time, > > > > > > > so they can't access the server if OCSP responses updated too > > > > frequently. > > > > > > > > > > > > This looks like a very-very wrong way to address the problem. > > > > > > Instead of resolving the problem it will hide it on some requests > > > > > > (but not on others), making the problem harder to detect and debug. > > > > > > > > > > > > -- > > > > > > Maxim Dounin > > > > > > http://nginx.org/ > > > > > > > > > > > > _______________________________________________ > > > > > > nginx-devel mailing list > > > > > > nginx-devel at nginx.org > > > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > > > > > > > _______________________________________________ > > > > > nginx-devel mailing list > > > > > nginx-devel at nginx.org > > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > > > > -- > > > > Maxim Dounin > > > > http://nginx.org/ > > > > > > > > _______________________________________________ > > > > nginx-devel mailing list > > > > nginx-devel at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From piotr at cloudflare.com Mon Jan 13 23:45:31 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 14 Jan 2014 00:45:31 +0100 Subject: [PATCH] SPDY: send PING reply frame right away. In-Reply-To: <6848541.irVadqEZEM@vbart-laptop> References: <6848541.irVadqEZEM@vbart-laptop> Message-ID: Hello Valentin, > Thank you for the patch. But, there is also no much sense in trying > to send queue as soon as the PING frame was added (i.e. parsed from > input buffer). > > The same is true as well for your next patch for the SETTINGS frame. I wouldn't go as far as "no much sense" but your patch looks indeed better. > I am going to fix the problem by this change: > > diff -r bbf87b408b92 src/http/ngx_http_spdy.c > --- a/src/http/ngx_http_spdy.c Fri Jan 10 02:08:12 2014 +0400 > +++ b/src/http/ngx_http_spdy.c Sat Jan 11 05:20:50 2014 +0400 > @@ -378,6 +378,15 @@ ngx_http_spdy_read_handler(ngx_event_t * > return; > } > > + if (sc->last_out) { > + if (ngx_http_spdy_send_output_queue(sc) == NGX_ERROR) { > + ngx_http_spdy_finalize_connection(sc, > + c->error ? NGX_HTTP_CLIENT_CLOSED_REQUEST > + : NGX_HTTP_INTERNAL_SERVER_ERROR); > + return; > + } > + } > + > sc->blocked = 0; > > if (sc->processing) { > > Any objections? Nope, it fixes the original problem. Thanks! 
Best regards, Piotr Sikora From kyprizel at gmail.com Tue Jan 14 07:27:25 2014 From: kyprizel at gmail.com (kyprizel) Date: Tue, 14 Jan 2014 11:27:25 +0400 Subject: [PATCH] SSL: ssl_stapling_valid directive In-Reply-To: <20140113182526.GD1835@mdounin.ru> References: <20140113135736.GT1835@mdounin.ru> <20140113145123.GW1835@mdounin.ru> <20140113154236.GY1835@mdounin.ru> <20140113161256.GZ1835@mdounin.ru> <20140113182526.GD1835@mdounin.ru> Message-ID: Configuration directive allow to update it less or _more_ frequently if required. At the moment nobody knows how often are OCSP responses updated until check the source code b/c there is no word in documentation about it. On Mon, Jan 13, 2014 at 10:25 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jan 13, 2014 at 08:23:46PM +0400, kyprizel wrote: > > > > This looks like a very-very wrong way to address the problem. > > > Instead of resolving the problem it will hide it on some requests > > > (but not on others), making the problem harder to detect and debug. > > > > Once user can access the resource - he can see the warning about system > > time problem (and other warning). > > If he can't access it at all seeing something like "OCSP response > invalid" > > - he doesn't know what to do. > > So the correct solution will probably be to ask browser vendors > don't follow "abort the handshake" requirement (see > http://trac.nginx.org/nginx/ticket/425 for other reasons why it's > a bad idea anyway) and/or inform users about possible reasons of > the problem. And/or to relax thisUpdate check. And/or to ask CAs > to provide responses with thisUpdate set somewhere in the past. > > Trying to update OCSP responses less frequently doesn't > looks like a solution. There will be periods when a response is > fresh anyway. > > > > > > > > > On Mon, Jan 13, 2014 at 8:12 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Mon, Jan 13, 2014 at 07:45:29PM +0400, kyprizel wrote: > > > > > > > The reason is quite easy - most responders _do_ set validity time > equal > > > to > > > > 7 days and there is no reason to update the response every hour and I > > > want > > > > to update it more rarely. > > > > Some do not set nextUpdate at all and 3600 can be too rarely for > them. > > > > > > These reasons suggest that deriving validity times from response > > > validity times, as suggested earlier, would be a better way to go. > > > > > > > > > > > > > > > > > > > On Mon, Jan 13, 2014 at 7:42 PM, Maxim Dounin > > > wrote: > > > > > > > > > Hello! > > > > > > > > > > On Mon, Jan 13, 2014 at 07:04:11PM +0400, kyprizel wrote: > > > > > > > > > > > So, you going to leave 3600 hardcoded there? > > > > > > > > > > Yes, unless you have some better reasons to make it > > > > > configurable. > > > > > > > > > > > > > > > > > > > > > > > On Mon, Jan 13, 2014 at 6:51 PM, Maxim Dounin < > mdounin at mdounin.ru> > > > > > wrote: > > > > > > > > > > > > > Hello! > > > > > > > > > > > > > > On Mon, Jan 13, 2014 at 06:08:53PM +0400, kyprizel wrote: > > > > > > > > > > > > > > > "some cases", for example = you have a lot of users with > wrong > > > system > > > > > > > time, > > > > > > > > so they can't access the server if OCSP responses updated too > > > > > frequently. > > > > > > > > > > > > > > This looks like a very-very wrong way to address the problem. > > > > > > > Instead of resolving the problem it will hide it on some > requests > > > > > > > (but not on others), making the problem harder to detect and > debug. 
> > > > > > > > > > > > > > -- > > > > > > > Maxim Dounin > > > > > > > http://nginx.org/ > > > > > > > > > > > > > > _______________________________________________ > > > > > > > nginx-devel mailing list > > > > > > > nginx-devel at nginx.org > > > > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > nginx-devel mailing list > > > > > > nginx-devel at nginx.org > > > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > > > > > > > -- > > > > > Maxim Dounin > > > > > http://nginx.org/ > > > > > > > > > > _______________________________________________ > > > > > nginx-devel mailing list > > > > > nginx-devel at nginx.org > > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > > > > _______________________________________________ > > > > nginx-devel mailing list > > > > nginx-devel at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From faskiri.devel at gmail.com Tue Jan 14 08:47:34 2014 From: faskiri.devel at gmail.com (Fasih) Date: Tue, 14 Jan 2014 14:17:34 +0530 Subject: WWW-Authenticate header In-Reply-To: <20140113153827.GX1835@mdounin.ru> References: <20140110134946.GO1835@mdounin.ru> <20140113153827.GX1835@mdounin.ru> Message-ID: Created http://trac.nginx.org/nginx/ticket/485#ticket to track this. Thanks! On Mon, Jan 13, 2014 at 9:08 PM, Maxim Dounin wrote: > Hello! > > On Sat, Jan 11, 2014 at 10:28:52PM +0530, Fasih wrote: > > > Yes, that's how I noticed it. I am using nginx as a reverse proxy. The > > upstream sends two WWW-Authenticate headers with different realms. I was > > processing www_authenticate header and hadnt realized that it was legal > to > > send multiple WWW-Authenticate headers. > > Looks like there are indeed valid real-world uses, see e.g. here: > > http://stackoverflow.com/a/15894841/1597813 > > I don't think we want to change www_authenticate to ngx_array_t, > but it certainly counts as another case requiring better support > for multiple headers, much like with $upstream_http_set_cookie and > multiple Set-Cookie headers, and so on. > > > > > On Fri, Jan 10, 2014 at 7:19 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Fri, Jan 10, 2014 at 05:42:23PM +0530, Fasih wrote: > > > > > > > Hi > > > > > > > > RFC allows a server to respond with multiple WWW-Authenticate header > ( > > > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.47). 
> > > > > > > > "User agents are advised to take special care in parsing the WWW- > > > > Authenticate field value as it might contain more than one > challenge, or > > > if > > > > more than one WWW-Authenticate header field is provided, the > contents of > > > a > > > > challenge itself can contain a comma-separated list of authentication > > > > parameters." > > > > > > > > However nginx defines WWW-Authenticate header as an ngx_table_elt_t > in > > > > the ngx_http_headers_out_t struct as opposed to an ngx_array_t like > other > > > > allowed repeated value headers. > > > > > > > > Is this a bug that I should file? > > > > > > Have you seen this to be a problem in real life? > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From faskiri.devel at gmail.com Tue Jan 14 10:45:32 2014 From: faskiri.devel at gmail.com (Fasih) Date: Tue, 14 Jan 2014 16:15:32 +0530 Subject: Rewrite handling order Message-ID: Hi I have a custom plugin that handles rewrite (NGX_HTTP_REWRITE_PHASE). There is another plugin compiled before my plugin that also handles rewrite (HttpLuaModule). I was expecting to see that my module would rewrite after lua is done, however that is not the case. Some debugging showed that whereas my module pushed into the cmcf->phases[NGX_HTTP_REWRITE_PHASE].handlers after lua, the cmcf.phase_engine.handlers had lua *after* my module. The culprit seems to be the following: static ngx_int_t ngx_http_init_phase_handlers(ngx_conf_t *cf, ngx_http_core_main_conf_t *cmcf) { .. ph = cmcf->phase_engine.handlers; .. n += cmcf->phases[i].handlers.nelts; for (j = cmcf->phases[i].handlers.nelts - 1; j >=0; j--) { ph->checker = checker; ph->handler = h[j]; ph->next = n; ph++; } } The order is inverted here (h[j] before h[j-1]). Is this intentional or a bug? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jan 14 11:11:21 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Jan 2014 15:11:21 +0400 Subject: Rewrite handling order In-Reply-To: References: Message-ID: <20140114111120.GH1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 04:15:32PM +0530, Fasih wrote: > Hi > > I have a custom plugin that handles rewrite (NGX_HTTP_REWRITE_PHASE). There > is another plugin compiled before my plugin that also handles rewrite > (HttpLuaModule). I was expecting to see that my module would rewrite after > lua is done, however that is not the case. Some debugging showed that > whereas my module pushed into the > cmcf->phases[NGX_HTTP_REWRITE_PHASE].handlers after lua, the > cmcf.phase_engine.handlers had lua *after* my module. The culprit seems to > be the following: > > static ngx_int_t > ngx_http_init_phase_handlers(ngx_conf_t *cf, ngx_http_core_main_conf_t > *cmcf) > { > .. > ph = cmcf->phase_engine.handlers; > .. 
> n += cmcf->phases[i].handlers.nelts; > > for (j = cmcf->phases[i].handlers.nelts - 1; j >=0; j--) { > ph->checker = checker; > ph->handler = h[j]; > ph->next = n; > ph++; > } > } > > The order is inverted here (h[j] before h[j-1]). Is this intentional or a > bug? It's intentional. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jan 14 11:12:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Jan 2014 11:12:37 +0000 Subject: [nginx] SSL: ssl_session_tickets directive. Message-ID: details: http://hg.nginx.org/nginx/rev/d049b0ea00a3 branches: changeset: 5503:d049b0ea00a3 user: Dirkjan Bussink date: Fri Jan 10 16:12:40 2014 +0100 description: SSL: ssl_session_tickets directive. This adds support so it's possible to explicitly disable SSL Session Tickets. In order to have good Forward Secrecy support either the session ticket key has to be reloaded by using nginx' binary upgrade process or using an external key file and reloading the configuration. This directive adds another possibility to have good support by disabling session tickets altogether. If session tickets are enabled and the process lives for a long a time, an attacker can grab the session ticket from the process and use that to decrypt any traffic that occured during the entire lifetime of the process. diffstat: src/http/modules/ngx_http_ssl_module.c | 16 ++++++++++++++++ src/http/modules/ngx_http_ssl_module.h | 1 + src/mail/ngx_mail_ssl_module.c | 17 +++++++++++++++++ src/mail/ngx_mail_ssl_module.h | 1 + 4 files changed, 35 insertions(+), 0 deletions(-) diffs (103 lines): diff --git a/src/http/modules/ngx_http_ssl_module.c b/src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -160,6 +160,13 @@ static ngx_command_t ngx_http_ssl_comma 0, NULL }, + { ngx_string("ssl_session_tickets"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_ssl_srv_conf_t, session_tickets), + NULL }, + { ngx_string("ssl_session_ticket_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, @@ -436,6 +443,7 @@ ngx_http_ssl_create_srv_conf(ngx_conf_t sscf->verify_depth = NGX_CONF_UNSET_UINT; sscf->builtin_session_cache = NGX_CONF_UNSET; sscf->session_timeout = NGX_CONF_UNSET; + sscf->session_tickets = NGX_CONF_UNSET; sscf->session_ticket_keys = NGX_CONF_UNSET_PTR; sscf->stapling = NGX_CONF_UNSET; sscf->stapling_verify = NGX_CONF_UNSET; @@ -644,6 +652,14 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * return NGX_CONF_ERROR; } + ngx_conf_merge_value(conf->session_tickets, prev->session_tickets, 1); + +#ifdef SSL_OP_NO_TICKET + if (!conf->session_tickets) { + SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_NO_TICKET); + } +#endif + ngx_conf_merge_ptr_value(conf->session_ticket_keys, prev->session_ticket_keys, NULL); diff --git a/src/http/modules/ngx_http_ssl_module.h b/src/http/modules/ngx_http_ssl_module.h --- a/src/http/modules/ngx_http_ssl_module.h +++ b/src/http/modules/ngx_http_ssl_module.h @@ -44,6 +44,7 @@ typedef struct { ngx_shm_zone_t *shm_zone; + ngx_flag_t session_tickets; ngx_array_t *session_ticket_keys; ngx_flag_t stapling; diff --git a/src/mail/ngx_mail_ssl_module.c b/src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c +++ b/src/mail/ngx_mail_ssl_module.c @@ -116,6 +116,13 @@ static ngx_command_t ngx_mail_ssl_comma 0, NULL }, + { ngx_string("ssl_session_tickets"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + 
ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, session_tickets), + NULL }, + { ngx_string("ssl_session_ticket_key"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_array_slot, @@ -191,6 +198,7 @@ ngx_mail_ssl_create_conf(ngx_conf_t *cf) scf->prefer_server_ciphers = NGX_CONF_UNSET; scf->builtin_session_cache = NGX_CONF_UNSET; scf->session_timeout = NGX_CONF_UNSET; + scf->session_tickets = NGX_CONF_UNSET; scf->session_ticket_keys = NGX_CONF_UNSET_PTR; return scf; @@ -339,6 +347,15 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, return NGX_CONF_ERROR; } + ngx_conf_merge_value(conf->session_tickets, + prev->session_tickets, 1); + +#ifdef SSL_OP_NO_TICKET + if (!conf->session_tickets) { + SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_NO_TICKET); + } +#endif + ngx_conf_merge_ptr_value(conf->session_ticket_keys, prev->session_ticket_keys, NULL); diff --git a/src/mail/ngx_mail_ssl_module.h b/src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h +++ b/src/mail/ngx_mail_ssl_module.h @@ -41,6 +41,7 @@ typedef struct { ngx_shm_zone_t *shm_zone; + ngx_flag_t session_tickets; ngx_array_t *session_ticket_keys; u_char *file; From mdounin at mdounin.ru Tue Jan 14 11:13:15 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Jan 2014 15:13:15 +0400 Subject: [PATCH] SSL: ssl_session_tickets directive In-Reply-To: References: Message-ID: <20140114111315.GI1835@mdounin.ru> Hello! On Fri, Jan 10, 2014 at 03:21:33PM +0000, Dirkjan Bussink wrote: > # HG changeset patch > # User Dirkjan Bussink > # Date 1389366760 -3600 > # Node ID d049b0ea00a388c142627f10a0ee01c5b1bedc43 > # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c > SSL: ssl_session_tickets directive. > > This adds support so it's possible to explicitly disable SSL Session > Tickets. In order to have good Forward Secrecy support either the > session ticket key has to be reloaded by using nginx' binary upgrade > process or using an external key file and reloading the configuration. > This directive adds another possibility to have good support by > disabling session tickets altogether. > > If session tickets are enabled and the process lives for a long a time, > an attacker can grab the session ticket from the process and use that to > decrypt any traffic that occured during the entire lifetime of the > process. Committed, thanks. -- Maxim Dounin http://nginx.org/ From fdasilvayy at gmail.com Tue Jan 14 11:54:20 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:20 +0100 Subject: [PATCH 3 of 7] Mail: add ID to the 'ngx_mail_imap_default_capabilities' list In-Reply-To: References: Message-ID: <56df02d0dad9e7746fed.1389700460@HPC> # HG changeset patch # User Filipe da Silva # Date 1389700241 -3600 # Tue Jan 14 12:50:41 2014 +0100 # Node ID 56df02d0dad9e7746fed311c88787fcb3ea902d7 # Parent ece46b257e8d31a1a7a81bf5fcdd0271c1dc2318 Mail: add ID to the 'ngx_mail_imap_default_capabilities' list. 
diff -r ece46b257e8d -r 56df02d0dad9 src/mail/ngx_mail_imap_module.c --- a/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:30 2014 +0100 +++ b/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:41 2014 +0100 @@ -18,6 +18,7 @@ static ngx_str_t ngx_mail_imap_default_capabilities[] = { + ngx_string("ID"), ngx_string("IMAP4"), ngx_string("IMAP4rev1"), ngx_string("UIDPLUS"), @@ -122,7 +123,7 @@ iscf->client_buffer_size = NGX_CONF_UNSET_SIZE; - if (ngx_array_init(&iscf->capabilities, cf->pool, 4, sizeof(ngx_str_t)) + if (ngx_array_init(&iscf->capabilities, cf->pool, 5, sizeof(ngx_str_t)) != NGX_OK) { return NULL; -------------- next part -------------- A non-text attachment was scrubbed... Name: 002-ImapID_AsDefaultCapability.diff Type: text/x-patch Size: 971 bytes Desc: not available URL: From fdasilvayy at gmail.com Tue Jan 14 11:54:17 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:17 +0100 Subject: [PATCH 0 of 7 ] Support of IMAP ID command in mail proxy module In-Reply-To: References: Message-ID: Hello, I've been working with the help of Michael on implementing the RFC 2971 Please find attached the result. --- Filipe DA SILVA From fdasilvayy at gmail.com Tue Jan 14 11:54:23 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:23 +0100 Subject: [PATCH 6 of 7] Mail: add 'Not Enough Arguments' imap error message In-Reply-To: References: Message-ID: <4c742929908a54e06516.1389700463@HPC> # HG changeset patch # User Filipe da Silva # Date 1389700279 -3600 # Tue Jan 14 12:51:19 2014 +0100 # Node ID 4c742929908a54e06516e80493a42846b9b35420 # Parent 147c57844b913f2b1a4dafb44d58e1128039ea03 Mail: add 'Not Enough Arguments' imap error message. It allow to notify some functionnal errors, instead of the generic 'Invalid Error' message diff -r 147c57844b91 -r 4c742929908a src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Tue Jan 14 12:51:12 2014 +0100 +++ b/src/mail/ngx_mail.h Tue Jan 14 12:51:19 2014 +0100 @@ -320,6 +320,7 @@ #define NGX_MAIL_PARSE_INVALID_COMMAND 20 +#define NGX_MAIL_PARSE_NOT_ENOUGH_ARGUMENTS 21 typedef void (*ngx_mail_init_session_pt)(ngx_mail_session_t *s, ngx_connection_t *c); diff -r 147c57844b91 -r 4c742929908a src/mail/ngx_mail_imap_handler.c --- a/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:51:12 2014 +0100 +++ b/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:51:19 2014 +0100 @@ -33,6 +33,7 @@ static u_char imap_password[] = "+ UGFzc3dvcmQ6" CRLF; static u_char imap_bye[] = "* BYE" CRLF; static u_char imap_invalid_command[] = "BAD invalid command" CRLF; +static u_char imap_not_enough_arguments[] = "BAD not enough arguments" CRLF; static ngx_str_t ngx_mail_imap_client_id_nil = ngx_string("ID NIL"); static ngx_str_t ngx_mail_imap_server_id_nil = ngx_string("* ID NIL" CRLF); @@ -253,6 +254,12 @@ ngx_str_set(&s->out, imap_invalid_command); s->mail_state = ngx_imap_start; break; + + case NGX_MAIL_PARSE_NOT_ENOUGH_ARGUMENTS: + s->state = 0; + ngx_str_set(&s->out, imap_not_enough_arguments); + s->mail_state = ngx_imap_start; + break; } if (tag) { @@ -311,14 +318,14 @@ arg = s->args.elts; if (s->args.nelts < 1 || arg[0].len == 0) { - return NGX_MAIL_PARSE_INVALID_COMMAND; + return NGX_MAIL_PARSE_NOT_ENOUGH_ARGUMENTS; } // Client sends ID NIL or ID ( ... 
) if (s->args.nelts == 1) { if (ngx_strncasecmp(arg[0].data, (u_char *) "NIL", 3) != 0) - return NGX_MAIL_PARSE_INVALID_COMMAND; + return NGX_MAIL_PARSE_NOT_ENOUGH_ARGUMENTS; s->imap_client_id = ngx_mail_imap_client_id_nil; -------------- next part -------------- A non-text attachment was scrubbed... Name: 005-ImapTooFewArguments.diff Type: text/x-patch Size: 2251 bytes Desc: not available URL: From fdasilvayy at gmail.com Tue Jan 14 11:54:21 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:21 +0100 Subject: [PATCH 4 of 7] Mail: add IMAP ID command response settings to customize server response In-Reply-To: References: Message-ID: <2d3ff21b5373a83dec32.1389700461@HPC> # HG changeset patch # User Filipe da Silva # Date 1389700251 -3600 # Tue Jan 14 12:50:51 2014 +0100 # Node ID 2d3ff21b5373a83dec32759062c4e04a14567c6e # Parent 56df02d0dad9e7746fed311c88787fcb3ea902d7 Mail: add IMAP ID command response settings to customize server response. diff -r 56df02d0dad9 -r 2d3ff21b5373 src/mail/ngx_mail_imap_handler.c --- a/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:50:41 2014 +0100 +++ b/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:50:51 2014 +0100 @@ -304,9 +304,10 @@ static ngx_int_t ngx_mail_imap_id(ngx_mail_session_t *s, ngx_connection_t *c) { - ngx_str_t *arg; + ngx_str_t *arg, server_id; size_t size, i; u_char *p, *data; + ngx_mail_imap_srv_conf_t *iscf; arg = s->args.elts; if (s->args.nelts < 1 || arg[0].len == 0) { @@ -346,11 +347,24 @@ } ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, - "imap client ID:\"%V%V\"", - &s->tag, &s->imap_client_id); + "imap client ID:\"%V%V\"", &s->tag, &s->imap_client_id); // Prepare server response to ID command - s->text = ngx_mail_imap_server_id_nil; + iscf = ngx_mail_get_module_srv_conf(s, ngx_mail_imap_module); + if (iscf->server_id.len > 0) { + server_id = iscf->server_id; + + } else { + s->text = ngx_mail_imap_server_id_nil; + server_id.len = 0; + } + + if (server_id.len >= 2) { + s->text = server_id; + server_id.len -= 2; // remove CRLF from log + ngx_log_debug(NGX_LOG_DEBUG_MAIL, c->log, 0, + "imap server ID:\"%V\"", &server_id); + } return NGX_OK; } diff -r 56df02d0dad9 -r 2d3ff21b5373 src/mail/ngx_mail_imap_module.c --- a/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:41 2014 +0100 +++ b/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:51 2014 +0100 @@ -80,6 +80,13 @@ offsetof(ngx_mail_imap_srv_conf_t, auth_methods), &ngx_mail_imap_auth_methods }, + { ngx_string("imap_server_id"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_2MORE, + ngx_mail_capabilities, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_imap_srv_conf_t, server_ids), + NULL }, + ngx_null_command }; @@ -129,6 +136,11 @@ return NULL; } + if (ngx_array_init(&iscf->server_ids, cf->pool, 8, sizeof(ngx_str_t)) + != NGX_OK) + { + return NULL; + } return iscf; } @@ -154,6 +166,11 @@ |NGX_MAIL_AUTH_PLAIN_ENABLED)); + if (conf->server_ids.nelts == 0) { + conf->server_ids = prev->server_ids; + } + + if (conf->capabilities.nelts == 0) { conf->capabilities = prev->capabilities; } @@ -168,6 +185,13 @@ *c = *d; } + } else if (conf->server_ids.nelts > 0) { + c = ngx_array_push(&conf->capabilities); + if (c == NULL) { + return NGX_CONF_ERROR; + } + // Push ID to initial capabilities + *c = ngx_mail_imap_default_capabilities[0]; } size = sizeof("* CAPABILITY" CRLF) - 1; @@ -250,5 +274,44 @@ sizeof(" STARTTLS LOGINDISABLED") - 1); *p++ = CR; *p = LF; + + if (conf->server_ids.nelts % 2 != 0) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "odd item count(%ui) of 
key/value pairs declared", + conf->server_ids.nelts ); + return NGX_CONF_ERROR; + } + + if (conf->server_ids.nelts > 0) { + size = sizeof("* ID (" CRLF) - 1; + + c = conf->server_ids.elts; + for (i = 0; i < conf->server_ids.nelts; i++) { + size += 1 + c[i].len + 2; + } + + p = ngx_pnalloc(cf->pool, size); + if (p == NULL) { + return NGX_CONF_ERROR; + } + + conf->server_id.len = size; + conf->server_id.data = p; + + p = ngx_cpymem(p, "* ID (", sizeof("* ID (") - 1); + + *p++ = '"'; + p = ngx_cpymem(p, c[0].data, c[0].len); + *p++ = '"'; + for (i = 1; i < conf->server_ids.nelts; i++) { + *p++ = ' '; + *p++ = '"'; + p = ngx_cpymem(p, c[i].data, c[i].len); + *p++ = '"'; + } + *p++ = ')'; + *p++ = CR; *p = LF; + } + return NGX_CONF_OK; } diff -r 56df02d0dad9 -r 2d3ff21b5373 src/mail/ngx_mail_imap_module.h --- a/src/mail/ngx_mail_imap_module.h Tue Jan 14 12:50:41 2014 +0100 +++ b/src/mail/ngx_mail_imap_module.h Tue Jan 14 12:50:51 2014 +0100 @@ -20,10 +20,12 @@ ngx_str_t capability; ngx_str_t starttls_capability; ngx_str_t starttls_only_capability; + ngx_str_t server_id; ngx_uint_t auth_methods; ngx_array_t capabilities; + ngx_array_t server_ids; } ngx_mail_imap_srv_conf_t; -------------- next part -------------- A non-text attachment was scrubbed... Name: 003-ImapID_ServerReponseSettings.diff Type: text/x-patch Size: 4851 bytes Desc: not available URL: From fdasilvayy at gmail.com Tue Jan 14 11:54:22 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:22 +0100 Subject: [PATCH 5 of 7] Mail: add support for dynamic ID field value : $version, $remote-host In-Reply-To: References: Message-ID: <147c57844b913f2b1a4d.1389700462@HPC> # HG changeset patch # User Filipe da Silva # Date 1389700272 -3600 # Tue Jan 14 12:51:12 2014 +0100 # Node ID 147c57844b913f2b1a4dafb44d58e1128039ea03 # Parent 2d3ff21b5373a83dec32759062c4e04a14567c6e Mail: add support for dynamic ID field value : $version, $remote-host. This two keyword are replaced at glance. 
diff -r 2d3ff21b5373 -r 147c57844b91 src/mail/ngx_mail_imap_handler.c --- a/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:50:51 2014 +0100 +++ b/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:51:12 2014 +0100 @@ -305,7 +305,7 @@ ngx_mail_imap_id(ngx_mail_session_t *s, ngx_connection_t *c) { ngx_str_t *arg, server_id; - size_t size, i; + size_t size, i, len; u_char *p, *data; ngx_mail_imap_srv_conf_t *iscf; @@ -351,7 +351,30 @@ // Prepare server response to ID command iscf = ngx_mail_get_module_srv_conf(s, ngx_mail_imap_module); - if (iscf->server_id.len > 0) { + if (iscf->server_id.len > 0 && iscf->server_id_fields) { + if (iscf->server_id_fields & NGX_IMAP_ID_REMOTE_HOST) { + //replace $remote-host by his value + len = iscf->server_id.len; + server_id.len = len + + c->addr_text.len - (sizeof("$remote-host") - 1); + server_id.data = ngx_pnalloc(c->pool, server_id.len); + if (server_id.data == NULL) { + return NGX_ERROR; + } + data = iscf->server_id.data; + data = ngx_strlcasestrn( data, data + len, + (u_char*) "$remote-host", 12 - 1); + size = data - iscf->server_id.data; + p = ngx_copy(server_id.data, iscf->server_id.data, size); + // push addr_text value + p = ngx_copy(p, c->addr_text.data, c->addr_text.len); + data += sizeof("$remote-host") - 1; + size = len - size - (sizeof("$remote-host") - 1); + p = ngx_copy(p, data, size); + + } + } + else if (iscf->server_id.len > 0) { server_id = iscf->server_id; } else { diff -r 2d3ff21b5373 -r 147c57844b91 src/mail/ngx_mail_imap_module.c --- a/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:51 2014 +0100 +++ b/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:51:12 2014 +0100 @@ -10,6 +10,7 @@ #include #include #include +#include static void *ngx_mail_imap_create_srv_conf(ngx_conf_t *cf); @@ -286,6 +287,29 @@ size = sizeof("* ID (" CRLF) - 1; c = conf->server_ids.elts; + + for (i = 1; i < conf->server_ids.nelts; i += 2) { + + if (c[i].data[0] != '$') + continue; + + switch (c[i].len) + { + case 12: + if (ngx_strncasecmp(c[i].data, + (u_char *) "$remote-host", 12) == 0) { + conf->server_id_fields |= NGX_IMAP_ID_REMOTE_HOST; + } + break; + case 8: + if (ngx_strncasecmp(c[i].data, (u_char *) "$version", 8) == 0){ + c[i].data = (u_char *) NGINX_VERSION; + c[i].len = sizeof(NGINX_VERSION) - 1; + } + break; + } + } + for (i = 0; i < conf->server_ids.nelts; i++) { size += 1 + c[i].len + 2; } diff -r 2d3ff21b5373 -r 147c57844b91 src/mail/ngx_mail_imap_module.h --- a/src/mail/ngx_mail_imap_module.h Tue Jan 14 12:50:51 2014 +0100 +++ b/src/mail/ngx_mail_imap_module.h Tue Jan 14 12:51:12 2014 +0100 @@ -13,6 +13,9 @@ #include #include +// Imap ID Dynamic fields +#define NGX_IMAP_ID_REMOTE_HOST 0x0001 +#define NGX_IMAP_ID_VERSION 0x0002 typedef struct { size_t client_buffer_size; @@ -21,6 +24,7 @@ ngx_str_t starttls_capability; ngx_str_t starttls_only_capability; ngx_str_t server_id; + ngx_uint_t server_id_fields; ngx_uint_t auth_methods; -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 004-ImapID_dynamicFieldValue.diff Type: text/x-patch Size: 4082 bytes Desc: not available URL: From fdasilvayy at gmail.com Tue Jan 14 11:54:24 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:24 +0100 Subject: [PATCH 7 of 7] Mail: add limits enforcement to IMAP server ID setting as per RFC2971 In-Reply-To: References: Message-ID: # HG changeset patch # User Filipe da Silva # Date 1389700314 -3600 # Tue Jan 14 12:51:54 2014 +0100 # Node ID bae811e9d65cee82d8deeaaa9cf442bde7d4e458 # Parent 4c742929908a54e06516e80493a42846b9b35420 Mail: add limits enforcement to IMAP server ID setting as per RFC2971. diff -r 4c742929908a -r bae811e9d65c src/mail/ngx_mail_imap_module.c --- a/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:51:19 2014 +0100 +++ b/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:51:54 2014 +0100 @@ -283,6 +283,13 @@ return NGX_CONF_ERROR; } + if (conf->server_ids.nelts >= 60) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "Trying to declare %ui ( more than 30 ) field-value pairs", + conf->server_ids.nelts ); + return NGX_CONF_ERROR; + } + if (conf->server_ids.nelts > 0) { size = sizeof("* ID (" CRLF) - 1; @@ -293,6 +300,13 @@ if (c[i].data[0] != '$') continue; + if (c[i].len >= 1024) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "Value is too long: %ui characters for value:\"%V\"", + c[i].len, &c[i]); + return NGX_CONF_ERROR; + } + switch (c[i].len) { case 12: @@ -310,6 +324,26 @@ } } + for (i = 0; i < conf->server_ids.nelts; i += 2) { + + if (c[i].len >= 30) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "Key name is %ui characters long: Too Long\"%V\"", + c[i].len, &c[i]); + return NGX_CONF_ERROR; + } + for (m = i + 2; m < conf->server_ids.nelts; m += 2) { + if (c[i].len == c[m].len + && ngx_strncasecmp(c[i].data, c[m].data, c[m].len) + == 0) + { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "Duplicate Name found: \"%V\"", &c[i]); + return NGX_CONF_ERROR; + } + } + } + for (i = 0; i < conf->server_ids.nelts; i++) { size += 1 + c[i].len + 2; } -------------- next part -------------- A non-text attachment was scrubbed... Name: 006-ImapId_enforcements.diff Type: text/x-patch Size: 2238 bytes Desc: not available URL: From fdasilvayy at gmail.com Tue Jan 14 11:54:18 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:18 +0100 Subject: [PATCH 1 of 7] Mail: add IMAP ID command support In-Reply-To: References: Message-ID: <0ff28c3c519125db11ae.1389700458@HPC> # HG changeset patch # User Filipe da Silva # Date 1389700210 -3600 # Tue Jan 14 12:50:10 2014 +0100 # Node ID 0ff28c3c519125db11ae3c56fbf34a7a5975a452 # Parent d049b0ea00a388c142627f10a0ee01c5b1bedc43 Mail: add IMAP ID command support. 
add parsing of IMAP ID command and his parameter list, see RFC2971 diff -r d049b0ea00a3 -r 0ff28c3c5191 src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Fri Jan 10 16:12:40 2014 +0100 +++ b/src/mail/ngx_mail.h Tue Jan 14 12:50:10 2014 +0100 @@ -215,6 +215,7 @@ unsigned quoted:1; unsigned backslash:1; unsigned no_sync_literal:1; + unsigned params_list:1; unsigned starttls:1; unsigned esmtp:1; unsigned auth_method:3; @@ -233,6 +234,7 @@ ngx_str_t smtp_helo; ngx_str_t smtp_from; ngx_str_t smtp_to; + ngx_str_t imap_client_id; ngx_str_t cmd; @@ -284,6 +286,7 @@ #define NGX_IMAP_AUTHENTICATE 7 +#define NGX_IMAP_ID 8 #define NGX_SMTP_HELO 1 #define NGX_SMTP_EHLO 2 diff -r d049b0ea00a3 -r 0ff28c3c5191 src/mail/ngx_mail_imap_handler.c --- a/src/mail/ngx_mail_imap_handler.c Fri Jan 10 16:12:40 2014 +0100 +++ b/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:50:10 2014 +0100 @@ -16,6 +16,8 @@ ngx_connection_t *c); static ngx_int_t ngx_mail_imap_authenticate(ngx_mail_session_t *s, ngx_connection_t *c); +static ngx_int_t ngx_mail_imap_id(ngx_mail_session_t *s, + ngx_connection_t *c); static ngx_int_t ngx_mail_imap_capability(ngx_mail_session_t *s, ngx_connection_t *c); static ngx_int_t ngx_mail_imap_starttls(ngx_mail_session_t *s, @@ -32,6 +34,9 @@ static u_char imap_bye[] = "* BYE" CRLF; static u_char imap_invalid_command[] = "BAD invalid command" CRLF; +static ngx_str_t ngx_mail_imap_client_id_nil = ngx_string("ID NIL"); +static ngx_str_t ngx_mail_imap_server_id_nil = ngx_string("* ID NIL" CRLF); + void ngx_mail_imap_init_session(ngx_mail_session_t *s, ngx_connection_t *c) @@ -179,6 +184,10 @@ tag = (rc != NGX_OK); break; + case NGX_IMAP_ID: + rc = ngx_mail_imap_id(s, c); + break; + case NGX_IMAP_CAPABILITY: rc = ngx_mail_imap_capability(s, c); break; @@ -292,6 +301,60 @@ ngx_mail_send(c->write); } +static ngx_int_t +ngx_mail_imap_id(ngx_mail_session_t *s, ngx_connection_t *c) +{ + ngx_str_t *arg; + size_t size, i; + u_char *p, *data; + + arg = s->args.elts; + if (s->args.nelts < 1 || arg[0].len == 0) { + return NGX_MAIL_PARSE_INVALID_COMMAND; + } + + // Client sends ID NIL or ID ( ... 
) + if (s->args.nelts == 1) { + + if (ngx_strncasecmp(arg[0].data, (u_char *) "NIL", 3) != 0) + return NGX_MAIL_PARSE_INVALID_COMMAND; + + s->imap_client_id = ngx_mail_imap_client_id_nil; + + } else { + size = sizeof("ID (") - 1; + for (i = 0; i < s->args.nelts; i++) { + size += 1 + arg[i].len + 2; // 1 space plus 2 quotes + } + + data = ngx_pnalloc(c->pool, size); + if (data == NULL) { + return NGX_ERROR; + } + + p = ngx_cpymem(data, "ID (", sizeof("ID (") - 1); + for (i = 0; i < s->args.nelts; i++) { + *p++ = '"'; + p = ngx_cpymem(p, arg[i].data, arg[i].len); + *p++ = '"'; + *p++ = ' '; + } + *--p = ')'; // replace last space + + s->imap_client_id.len = size; + s->imap_client_id.data = data; + } + + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, + "imap client ID:\"%V%V\"", + &s->tag, &s->imap_client_id); + + // Prepare server response to ID command + s->text = ngx_mail_imap_server_id_nil; + + return NGX_OK; +} + static ngx_int_t ngx_mail_imap_login(ngx_mail_session_t *s, ngx_connection_t *c) diff -r d049b0ea00a3 -r 0ff28c3c5191 src/mail/ngx_mail_parse.c --- a/src/mail/ngx_mail_parse.c Fri Jan 10 16:12:40 2014 +0100 +++ b/src/mail/ngx_mail_parse.c Tue Jan 14 12:50:10 2014 +0100 @@ -279,6 +279,16 @@ c = s->cmd_start; switch (p - c) { + case 2: + if ((c[0] == 'I' || c[0] == 'i') + && (c[1] == 'D'|| c[1] == 'd')) + { + s->command = NGX_IMAP_ID; + + } else { + goto invalid; + } + break; case 4: if ((c[0] == 'N' || c[0] == 'n') @@ -409,14 +419,31 @@ case ' ': break; case CR: + if (s->params_list == 1) + goto invalid; state = sw_almost_done; s->arg_end = p; break; case LF: + if (s->params_list == 1) + goto invalid; s->arg_end = p; goto done; + case '(': // params list begin + if (!s->params_list && s->args.nelts == 0) { + s->params_list = 1; + break; + } + goto invalid; + case ')': // params list closing + if (s->params_list == 1 && s->args.nelts > 0) { + s->params_list = 0; + state = sw_spaces_before_argument; + break; + } + goto invalid; case '"': - if (s->args.nelts <= 2) { + if (s->args.nelts <= 2 || s->params_list) { s->quoted = 1; s->arg_start = p + 1; state = sw_argument; @@ -430,7 +457,7 @@ } goto invalid; default: - if (s->args.nelts <= 2) { + if (s->args.nelts <= 2 && !s->params_list) { s->arg_start = p; state = sw_argument; break; @@ -602,6 +629,7 @@ s->quoted = 0; s->no_sync_literal = 0; s->literal_len = 0; + s->params_list = 0; } s->state = (s->command != NGX_IMAP_AUTHENTICATE) ? sw_start : sw_argument; @@ -614,6 +642,7 @@ s->quoted = 0; s->no_sync_literal = 0; s->literal_len = 0; + s->params_list = 0; return NGX_MAIL_PARSE_INVALID_COMMAND; } -------------- next part -------------- A non-text attachment was scrubbed... Name: 000-ImapID_CommandSupport.diff Type: text/x-patch Size: 6682 bytes Desc: not available URL: From fdasilvayy at gmail.com Tue Jan 14 11:54:19 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Tue, 14 Jan 2014 12:54:19 +0100 Subject: [PATCH 2 of 7] Mail: add IMAP client ID value to mail auth script In-Reply-To: References: Message-ID: # HG changeset patch # User Filipe da Silva # Date 1389700230 -3600 # Tue Jan 14 12:50:30 2014 +0100 # Node ID ece46b257e8d31a1a7a81bf5fcdd0271c1dc2318 # Parent 0ff28c3c519125db11ae3c56fbf34a7a5975a452 Mail: add IMAP client ID value to mail auth script. 
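For illustration, an auth_http request generated with this patch applied might carry a header like the one below; the surrounding headers are the usual ones sent by the mail proxy, and all values here are invented (the ID string itself is built by patch 1 from the arguments of the client's ID command):

    GET /auth HTTP/1.0
    Host: localhost
    Auth-Method: plain
    Auth-User: user
    Auth-Pass: secret
    Auth-Protocol: imap
    Auth-Login-Attempt: 1
    Client-IP: 192.0.2.1
    Client-IMAP-ID: ID ("name" "Thunderbird" "version" "24.0")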
diff -r 0ff28c3c5191 -r ece46b257e8d src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Tue Jan 14 12:50:10 2014 +0100 +++ b/src/mail/ngx_mail_auth_http_module.c Tue Jan 14 12:50:30 2014 +0100 @@ -1176,6 +1176,11 @@ + ahcf->header.len + sizeof(CRLF) - 1; + if (s->protocol == NGX_MAIL_IMAP_PROTOCOL) { + len += sizeof("Client-IMAP-ID: ") - 1 + + s->imap_client_id.len + sizeof(CRLF) - 1; + } + b = ngx_create_temp_buf(pool, len); if (b == NULL) { return NULL; @@ -1254,6 +1259,13 @@ *b->last++ = CR; *b->last++ = LF; } + if (s->protocol == NGX_MAIL_IMAP_PROTOCOL) { + b->last = ngx_cpymem(b->last, "Client-IMAP-ID: ", + sizeof("Client-IMAP-ID: ") - 1); + b->last = ngx_copy(b->last, + s->imap_client_id.data, s->imap_client_id.len); + *b->last++ = CR; *b->last++ = LF; + } if (ahcf->header.len) { b->last = ngx_copy(b->last, ahcf->header.data, ahcf->header.len); -------------- next part -------------- A non-text attachment was scrubbed... Name: 001-ImapID_AuthScriptSupport.diff Type: text/x-patch Size: 1359 bytes Desc: not available URL: From mdounin at mdounin.ru Tue Jan 14 11:57:52 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Jan 2014 11:57:52 +0000 Subject: [nginx] SSL: fixed ssl_verify_depth to take only one argument. Message-ID: details: http://hg.nginx.org/nginx/rev/8ed467553f6b branches: changeset: 5504:8ed467553f6b user: Maxim Dounin date: Tue Jan 14 15:56:40 2014 +0400 description: SSL: fixed ssl_verify_depth to take only one argument. diffstat: src/http/modules/ngx_http_ssl_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/http/modules/ngx_http_ssl_module.c b/src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -126,7 +126,7 @@ static ngx_command_t ngx_http_ssl_comma &ngx_http_ssl_verify }, { ngx_string("ssl_verify_depth"), - NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_1MORE, + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_num_slot, NGX_HTTP_SRV_CONF_OFFSET, offsetof(ngx_http_ssl_srv_conf_t, verify_depth), From mdounin at mdounin.ru Tue Jan 14 12:08:05 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Jan 2014 16:08:05 +0400 Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL client certificates In-Reply-To: <869DE836-5635-40C8-A86C-53AFD4B1FD30@ha.cki.ng> References: <87104D36-7406-4AA9-B701-9676738969D2@ha.cki.ng> <869DE836-5635-40C8-A86C-53AFD4B1FD30@ha.cki.ng> Message-ID: <20140114120805.GJ1835@mdounin.ru> Hello! On Mon, Jan 13, 2014 at 04:28:25PM +0100, Sven Peter wrote: > # HG changeset patch > # User Sven Peter > # Date 1389626052 -3600 > # Mon Jan 13 16:14:12 2014 +0100 > # Node ID a444733105e8eb96212f142533e714532a23cddf > # Parent 4aa64f6950313311e0d322a2af1788edeb7f036c > mail_{ssl,auth_http}_module: add support for SSL client certificates Better summary line would be: Mail: added support for SSL client certificate. > > This patch adds support for SSL client certificates to the mail proxy > capabilities of nginx both for STARTTLS and SSL mode. > Just like the HTTP SSL module a root CA is defined in the mail section > of the configuration file. Verification can be optional or mandatory. > Additionally, the result of the verification is exposed to the > auth http backend via the SSL-Verify, SSL-Subject-DN, SSL-Issuer-DN > and SSL-Serial HTTP headers. It would be good idea to add a list of configuration directives added. 
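For reference, a sketch of how the directives proposed by the patch would be used in the mail context (illustrative only, with made-up certificate paths; the patch is still under review):

    mail {
        server {
            listen                  993;
            protocol                imap;
            auth_http               localhost:9000/auth;

            ssl                     on;
            ssl_certificate         mail.crt;
            ssl_certificate_key     mail.key;

            # directives added by the patch
            ssl_verify_client       on;
            ssl_verify_depth        2;
            ssl_client_certificate  ca.crt;
        }
    }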
> > diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_auth_http_module.c > --- a/src/mail/ngx_mail_auth_http_module.c Sat Jan 04 03:32:22 2014 +0400 > +++ b/src/mail/ngx_mail_auth_http_module.c Mon Jan 13 16:14:12 2014 +0100 > @@ -1145,6 +1145,16 @@ > ngx_str_t login, passwd; > ngx_mail_core_srv_conf_t *cscf; > > +#if (NGX_MAIL_SSL) > + ngx_str_t ssl_client_verify; > + ngx_str_t ssl_client_raw_s_dn; > + ngx_str_t ssl_client_raw_i_dn; > + ngx_str_t ssl_client_raw_serial; > + ngx_str_t ssl_client_s_dn; > + ngx_str_t ssl_client_i_dn; > + ngx_str_t ssl_client_serial; > +#endif > + This diverges from the style used. Additionally, variable names seems to be too verbose. > if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { > return NULL; > } > @@ -1153,6 +1163,51 @@ > return NULL; > } > > +#if (NGX_MAIL_SSL) > + if (s->connection->ssl) { > + /* ssl_client_verify comes from nginx itself - no need to escape */ This comment looks obvious. > + if (ngx_ssl_get_client_verify(s->connection, pool, > + &ssl_client_verify) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_ssl_get_subject_dn(s->connection, pool, > + &ssl_client_raw_s_dn) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_ssl_get_issuer_dn(s->connection, pool, > + &ssl_client_raw_i_dn) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_ssl_get_serial_number(s->connection, pool, > + &ssl_client_raw_serial) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_s_dn, > + &ssl_client_s_dn) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_i_dn, > + &ssl_client_i_dn) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_serial, > + &ssl_client_serial) != NGX_OK) { > + return NULL; > + } On the other hand, escaping of at least client certificate serial number looks unneeded. > + } else { > + ngx_str_set(&ssl_client_verify, "NONE"); > + ssl_client_i_dn.len = 0; > + ssl_client_s_dn.len = 0; > + ssl_client_serial.len = 0; Using fake values here looks wrong. In http, nginx marks $ssl_* variables as "not found" for non-ssl connections, which is essentially equivalent to empty strings, i.e., this contradicts to the use of "NONE". > + } > +#endif > + > cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); > > len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - 1 > @@ -1173,6 +1228,16 @@ > + sizeof("Auth-SMTP-Helo: ") - 1 + s->smtp_helo.len > + sizeof("Auth-SMTP-From: ") - 1 + s->smtp_from.len > + sizeof("Auth-SMTP-To: ") - 1 + s->smtp_to.len > +#if (NGX_MAIL_SSL) > + + sizeof("SSL-Verify: ") - 1 + ssl_client_verify.len > + + sizeof(CRLF) - 1 > + + sizeof("SSL-Subject-DN: ") - 1 + ssl_client_s_dn.len > + + sizeof(CRLF) - 1 > + + sizeof("SSL-Issuer-DN: ") - 1 + ssl_client_i_dn.len > + + sizeof(CRLF) - 1 > + + sizeof("SSL-Serial: ") - 1 + ssl_client_serial.len > + + sizeof(CRLF) - 1 > +#endif Using common prefix "Auth-" might be a good idea. 
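For the record, with the patch as posted the auth request would carry headers along these lines (all values invented for illustration; the DN format is the one produced by the existing ngx_ssl_get_subject_dn()/ngx_ssl_get_issuer_dn() helpers):

    SSL-Verify: SUCCESS
    SSL-Subject-DN: /C=US/O=Example/CN=client
    SSL-Issuer-DN: /C=US/O=Example/CN=Example CA
    SSL-Serial: 0FE240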
> + ahcf->header.len > + sizeof(CRLF) - 1; > > @@ -1255,6 +1320,34 @@ > > } > > +#if (NGX_MAIL_SSL) > + if (ssl_client_verify.len) { > + b->last = ngx_cpymem(b->last, "SSL-Verify: ", > + sizeof("SSL-Verify: ") - 1); > + b->last = ngx_copy(b->last, ssl_client_verify.data, > + ssl_client_verify.len); > + *b->last++ = CR; *b->last++ = LF; > + > + b->last = ngx_cpymem(b->last, "SSL-Subject-DN: ", > + sizeof("SSL-Subject-DN: ") - 1); > + b->last = ngx_copy(b->last, ssl_client_s_dn.data, > + ssl_client_s_dn.len); > + *b->last++ = CR; *b->last++ = LF; > + > + b->last = ngx_cpymem(b->last, "SSL-Issuer-DN: ", > + sizeof("SSL-Issuer-DN: ") - 1); > + b->last = ngx_copy(b->last, ssl_client_i_dn.data, > + ssl_client_i_dn.len); > + *b->last++ = CR; *b->last++ = LF; > + > + b->last = ngx_cpymem(b->last, "SSL-Serial: ", > + sizeof("SSL-Serial: ") - 1); > + b->last = ngx_copy(b->last, ssl_client_serial.data, > + ssl_client_serial.len); > + *b->last++ = CR; *b->last++ = LF; > + } > +#endif > + I don't think that these headers should be sent if there is no SSL connection. Any empty headers should be probably ommitted, too. > if (ahcf->header.len) { > b->last = ngx_copy(b->last, ahcf->header.data, ahcf->header.len); > } > diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_handler.c > --- a/src/mail/ngx_mail_handler.c Sat Jan 04 03:32:22 2014 +0400 > +++ b/src/mail/ngx_mail_handler.c Mon Jan 13 16:14:12 2014 +0100 > @@ -236,11 +236,40 @@ > { > ngx_mail_session_t *s; > ngx_mail_core_srv_conf_t *cscf; > + ngx_mail_ssl_conf_t *sslcf; > > if (c->ssl->handshaked) { > > s = c->data; > > + sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); > + if (sslcf->verify != NGX_MAIL_SSL_VERIFY_OFF) { The use of the != check looks silly. You may want to preserve the same code as used in http, where ->verify is more or less boolean with some special true values to differentiate submodes when verify is used. > + long rc; > + rc = SSL_get_verify_result(c->ssl->connection); > + > + if (rc != X509_V_OK && > + (sslcf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA && ngx_ssl_verify_error_optional(rc))) { > + ngx_log_error(NGX_LOG_INFO, c->log, 0, > + "client SSL certificate verify error: (%l:%s)", > + rc, X509_verify_cert_error_string(rc)); > + ngx_mail_close_connection(c); > + return; > + } Minor note: a 80+ line here due to use of long names. Maror problem: you allow "optional_no_ca" here, but this is for sure not secure due to no certificate passed to a backend. > + > + if (sslcf->verify == NGX_MAIL_SSL_VERIFY_ON) { > + X509 *cert; > + cert = SSL_get_peer_certificate(c->ssl->connection); > + > + if (cert == NULL) { > + ngx_log_error(NGX_LOG_INFO, c->log, 0, > + "client sent no required SSL certificate"); > + ngx_mail_close_connection(c); > + return; > + } > + X509_free(cert); > + } > + } > + > if (s->starttls) { > cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); > > diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_ssl_module.c > --- a/src/mail/ngx_mail_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 > +++ b/src/mail/ngx_mail_ssl_module.c Mon Jan 13 16:14:12 2014 +0100 > @@ -43,6 +43,13 @@ > { ngx_null_string, 0 } > }; > > +static ngx_conf_enum_t ngx_mail_ssl_verify[] = { > + { ngx_string("off"), NGX_MAIL_SSL_VERIFY_OFF }, > + { ngx_string("on"), NGX_MAIL_SSL_VERIFY_ON }, > + { ngx_string("optional"), NGX_MAIL_SSL_VERIFY_OPTIONAL }, > + { ngx_string("optional_no_ca"), NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA }, As noted above, "optional_no_ca" makes no sense without a way to pass a certificate to some backend. 
> + { ngx_null_string, 0 } > +}; > > static ngx_command_t ngx_mail_ssl_commands[] = { You may note that previously there were 2 empty lines between blocks. With your change, there is just 1 empty line. > > @@ -130,7 +137,40 @@ > offsetof(ngx_mail_ssl_conf_t, session_timeout), > NULL }, > > - ngx_null_command > + { The change here indicate you did something wrong with style. > + ngx_string("ssl_verify_client"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_enum_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, verify), > + &ngx_mail_ssl_verify > + }, > + { The style here is certainly wrong. > + ngx_string("ssl_verify_depth"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_1MORE, Hm, "1MORE" is wrong here, should be "TAKE1". Fixed this in http ssl module. > + ngx_conf_set_num_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, verify_depth), > + NULL > + }, > + { > + ngx_string("ssl_client_certificate"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_str_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, client_certificate), > + NULL > + }, > + { > + ngx_string("ssl_trusted_certificate"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_str_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, trusted_certificate), > + NULL > + }, > + > + ngx_null_command > }; > > > @@ -184,6 +224,8 @@ > * scf->ecdh_curve = { 0, NULL }; > * scf->ciphers = { 0, NULL }; > * scf->shm_zone = NULL; > + * scf->client_certificate = { 0, NULL }; > + * scf->trusted_certificate = { 0, NULL }; > */ > > scf->enable = NGX_CONF_UNSET; > @@ -192,6 +234,8 @@ > scf->builtin_session_cache = NGX_CONF_UNSET; > scf->session_timeout = NGX_CONF_UNSET; > scf->session_ticket_keys = NGX_CONF_UNSET_PTR; > + scf->verify = NGX_CONF_UNSET_UINT; > + scf->verify_depth = NGX_CONF_UNSET_UINT; > > return scf; > } > @@ -230,6 +274,11 @@ > > ngx_conf_merge_str_value(conf->ciphers, prev->ciphers, NGX_DEFAULT_CIPHERS); > > + ngx_conf_merge_uint_value(conf->verify, prev->verify, NGX_MAIL_SSL_VERIFY_OFF); > + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); > + > + ngx_conf_merge_str_value(conf->client_certificate, prev->client_certificate, ""); > + ngx_conf_merge_str_value(conf->trusted_certificate, prev->trusted_certificate, ""); > > conf->ssl.log = cf->log; > > @@ -310,6 +359,21 @@ > return NGX_CONF_ERROR; > } > > + if (conf->verify) { > + if (conf->client_certificate.len == 0 && conf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA) { > + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, > + "no ssl_client_certificate for ssl_client_verify"); > + return NGX_CONF_ERROR; > + } > + > + if (ngx_ssl_client_certificate(cf, &conf->ssl, > + &conf->client_certificate, > + conf->verify_depth) > + != NGX_OK) { > + return NGX_CONF_ERROR; > + } > + } > + This code looks incomplete (and there are style issues). E.g., it doesn't looks like trusted certificates are loaded at all. It also lacks ssl_crl support, which is also directly related to client certificates. 
> if (conf->prefer_server_ciphers) { > SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); > } > diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_ssl_module.h > --- a/src/mail/ngx_mail_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 > +++ b/src/mail/ngx_mail_ssl_module.h Mon Jan 13 16:14:12 2014 +0100 > @@ -37,8 +37,14 @@ > ngx_str_t dhparam; > ngx_str_t ecdh_curve; > > + ngx_str_t client_certificate; > + ngx_str_t trusted_certificate; > + > ngx_str_t ciphers; > > + ngx_uint_t verify; > + ngx_uint_t verify_depth; > + > ngx_shm_zone_t *shm_zone; > > ngx_array_t *session_ticket_keys; > @@ -47,6 +53,13 @@ > ngx_uint_t line; > } ngx_mail_ssl_conf_t; > > +enum ngx_mail_ssl_verify_enum { > + NGX_MAIL_SSL_VERIFY_OFF = 0, > + NGX_MAIL_SSL_VERIFY_ON, > + NGX_MAIL_SSL_VERIFY_OPTIONAL, > + NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA, > +}; > + > > extern ngx_module_t ngx_mail_ssl_module; We usually avoid using enum types for enumerated configuration slots. While questionable, it's currently mostly style. There are also other style problems here - indentation is wrong, as well as number of empty lines. -- Maxim Dounin http://nginx.org/ From ru at nginx.com Tue Jan 14 12:29:17 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 14 Jan 2014 12:29:17 +0000 Subject: [nginx] Resolver: added support for domain names with a trailing... Message-ID: details: http://hg.nginx.org/nginx/rev/d091d16ed398 branches: changeset: 5505:d091d16ed398 user: Yichun Zhang date: Fri Jan 10 11:22:14 2014 -0800 description: Resolver: added support for domain names with a trailing dot. diffstat: src/core/ngx_resolver.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r 8ed467553f6b -r d091d16ed398 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Tue Jan 14 15:56:40 2014 +0400 +++ b/src/core/ngx_resolver.c Fri Jan 10 11:22:14 2014 -0800 @@ -356,6 +356,10 @@ ngx_resolve_name(ngx_resolver_ctx_t *ctx r = ctx->resolver; + if (ctx->name.len > 0 && ctx->name.data[ctx->name.len - 1] == '.') { + ctx->name.len--; + } + ngx_log_debug1(NGX_LOG_DEBUG_CORE, r->log, 0, "resolve: \"%V\"", &ctx->name); From ru at nginx.com Tue Jan 14 12:30:23 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 14 Jan 2014 16:30:23 +0400 Subject: [PATCH] Resolver: added support for domain names with a trailing dot In-Reply-To: <20140113135839.GU1835@mdounin.ru> References: <20140110191006.GA40401@lo0.su> <20140113135839.GU1835@mdounin.ru> Message-ID: <20140114123023.GB12886@lo0.su> On Mon, Jan 13, 2014 at 05:58:39PM +0400, Maxim Dounin wrote: > On Fri, Jan 10, 2014 at 12:13:26PM -0800, Yichun Zhang (agentzh) wrote: > > > Hello! > > > > On Fri, Jan 10, 2014 at 11:10 AM, Ruslan Ermilov wrote: > > > > > > There's no such thing as domain names with a trailing dot, > > > with one exception of the root domain name. > > > > > > > Well, they are just a fully qualified domain names. > > Well, not really. There is no need for a trailing dot for a > domain name to be fully qualified. The "example.com" domain _is_ > fully qualified. The trailing dot is just used by some software to > indicate fully qualified names. > > It looks like it's something specifically mentioned by RFC 3986 > though, http://tools.ietf.org/html/rfc3986#section-3.2.2: > > The rightmost domain > label of a fully qualified domain name in DNS may be followed by a > single "." and should be if it is necessary to distinguish between > the complete domain name and some local domain. > > So we probably should support it. I've committed the patch. 
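A small usage example (illustrative only; the resolver address and host name are made up): with this change a name written with a trailing dot resolves the same way as one without it, e.g. when the name is resolved at run time through a variable:

    server {
        resolver  127.0.0.1;

        location / {
            set         $backend  backend.example.com.;
            proxy_pass  http://$backend;
        }
    }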
From d.bussink at gmail.com Tue Jan 14 12:42:39 2014 From: d.bussink at gmail.com (Dirkjan Bussink) Date: Tue, 14 Jan 2014 13:42:39 +0100 Subject: [PATCH] SSL: ssl_session_tickets directive In-Reply-To: <20140114111315.GI1835@mdounin.ru> References: <20140114111315.GI1835@mdounin.ru> Message-ID: On 14 Jan 2014, at 12:13, Maxim Dounin wrote: > Committed, thanks. Thanks you very much! ? Dirkjan From vbart at nginx.com Tue Jan 14 12:57:56 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:57:56 +0000 Subject: [nginx] Year 2014. Message-ID: details: http://hg.nginx.org/nginx/rev/64af0f7c4dcd branches: changeset: 5506:64af0f7c4dcd user: Valentin Bartenev date: Tue Jan 14 16:24:02 2014 +0400 description: Year 2014. diffstat: docs/text/LICENSE | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (12 lines): diff -r d091d16ed398 -r 64af0f7c4dcd docs/text/LICENSE --- a/docs/text/LICENSE Fri Jan 10 11:22:14 2014 -0800 +++ b/docs/text/LICENSE Tue Jan 14 16:24:02 2014 +0400 @@ -1,6 +1,6 @@ /* - * Copyright (C) 2002-2013 Igor Sysoev - * Copyright (C) 2011-2013 Nginx, Inc. + * Copyright (C) 2002-2014 Igor Sysoev + * Copyright (C) 2011-2014 Nginx, Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without From vbart at nginx.com Tue Jan 14 12:57:57 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:57:57 +0000 Subject: [nginx] SPDY: fixed format specifiers in logging. Message-ID: details: http://hg.nginx.org/nginx/rev/a30bba3c72e8 branches: changeset: 5507:a30bba3c72e8 user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: fixed format specifiers in logging. diffstat: src/http/ngx_http_spdy.c | 12 ++++++------ 1 files changed, 6 insertions(+), 6 deletions(-) diffs (57 lines): diff -r 64af0f7c4dcd -r a30bba3c72e8 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Tue Jan 14 16:24:02 2014 +0400 +++ b/src/http/ngx_http_spdy.c Tue Jan 14 16:24:45 2014 +0400 @@ -484,7 +484,7 @@ ngx_http_spdy_send_output_queue(ngx_http out = frame; ngx_log_debug5(NGX_LOG_DEBUG_HTTP, c->log, 0, - "spdy frame out: %p sid:%ui prio:%ui bl:%ui size:%uz", + "spdy frame out: %p sid:%ui prio:%ui bl:%d size:%uz", out, out->stream ? out->stream->id : 0, out->priority, out->blocked, out->size); } @@ -525,7 +525,7 @@ ngx_http_spdy_send_output_queue(ngx_http } ngx_log_debug4(NGX_LOG_DEBUG_HTTP, c->log, 0, - "spdy frame sent: %p sid:%ui bl:%ui size:%uz", + "spdy frame sent: %p sid:%ui bl:%d size:%uz", out, out->stream ? 
out->stream->id : 0, out->blocked, out->size); } @@ -659,7 +659,7 @@ ngx_http_spdy_state_head(ngx_http_spdy_c pos += sizeof(uint32_t); ngx_log_debug3(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, - "spdy process frame head:%08Xd f:%ui l:%ui", + "spdy process frame head:%08XD f:%Xd l:%uz", head, sc->flags, sc->length); if (ngx_spdy_ctl_frame_check(head)) { @@ -1480,7 +1480,7 @@ ngx_http_spdy_state_save(ngx_http_spdy_c if (end - pos > NGX_SPDY_STATE_BUFFER_SIZE) { ngx_log_error(NGX_LOG_ALERT, sc->connection->log, 0, "spdy state buffer overflow: " - "%i bytes required", end - pos); + "%z bytes required", end - pos); return ngx_http_spdy_state_internal_error(sc); } #endif @@ -1729,7 +1729,7 @@ ngx_http_spdy_get_ctl_frame(ngx_http_spd #if (NGX_DEBUG) if (size > NGX_SPDY_CTL_FRAME_BUFFER_SIZE - NGX_SPDY_FRAME_HEADER_SIZE) { ngx_log_error(NGX_LOG_ALERT, sc->pool->log, 0, - "requested control frame is too big: %z", size); + "requested control frame is too big: %uz", size); return NULL; } @@ -2104,7 +2104,7 @@ ngx_http_spdy_alloc_large_header_buffer( } ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "spdy large header alloc: %p %uz", + "spdy large header alloc: %p %z", buf->pos, buf->end - buf->last); old = r->header_in->pos; From vbart at nginx.com Tue Jan 14 12:57:59 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:57:59 +0000 Subject: [nginx] SPDY: better name for queued frames counter. Message-ID: details: http://hg.nginx.org/nginx/rev/9053fdcea4b7 branches: changeset: 5508:9053fdcea4b7 user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: better name for queued frames counter. No functional changes. diffstat: src/http/ngx_http_spdy.c | 6 +++--- src/http/ngx_http_spdy.h | 3 ++- src/http/ngx_http_spdy_filter_module.c | 14 +++++++------- 3 files changed, 12 insertions(+), 11 deletions(-) diffs (95 lines): diff -r a30bba3c72e8 -r 9053fdcea4b7 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy.c Tue Jan 14 16:24:45 2014 +0400 @@ -2861,9 +2861,9 @@ ngx_http_spdy_finalize_connection(ngx_ht fc->error = 1; - if (stream->waiting) { - r->blocked -= stream->waiting; - stream->waiting = 0; + if (stream->queued) { + r->blocked -= stream->queued; + stream->queued = 0; ev = fc->write; ev->delayed = 0; diff -r a30bba3c72e8 -r 9053fdcea4b7 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy.h Tue Jan 14 16:24:45 2014 +0400 @@ -119,7 +119,8 @@ struct ngx_http_spdy_stream_s { ngx_http_spdy_stream_t *next; ngx_uint_t header_buffers; - ngx_uint_t waiting; + ngx_uint_t queued; + ngx_http_spdy_out_frame_t *free_frames; ngx_chain_t *free_data_headers; diff -r a30bba3c72e8 -r 9053fdcea4b7 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -607,7 +607,7 @@ ngx_http_spdy_header_filter(ngx_http_req cln->handler = ngx_http_spdy_filter_cleanup; cln->data = stream; - stream->waiting = 1; + stream->queued = 1; return ngx_http_spdy_filter_send(c, stream); } @@ -633,7 +633,7 @@ ngx_http_spdy_body_filter(ngx_http_reque if (in == NULL || r->header_only) { - if (stream->waiting) { + if (stream->queued) { return NGX_AGAIN; } @@ -695,7 +695,7 @@ ngx_http_spdy_body_filter(ngx_http_reque ngx_http_spdy_queue_frame(stream->connection, frame); - stream->waiting++; + stream->queued++; r->main->blocked++; @@ 
-800,7 +800,7 @@ ngx_http_spdy_filter_send(ngx_connection stream->blocked = 0; - if (stream->waiting) { + if (stream->queued) { fc->buffered |= NGX_SPDY_WRITE_BUFFERED; fc->write->delayed = 1; return NGX_AGAIN; @@ -932,7 +932,7 @@ ngx_http_spdy_handle_frame(ngx_http_spdy frame->free = stream->free_frames; stream->free_frames = frame; - stream->waiting--; + stream->queued--; } @@ -965,7 +965,7 @@ ngx_http_spdy_filter_cleanup(void *data) ngx_http_request_t *r; ngx_http_spdy_out_frame_t *frame, **fn; - if (stream->waiting == 0) { + if (stream->queued == 0) { return; } @@ -982,7 +982,7 @@ ngx_http_spdy_filter_cleanup(void *data) if (frame->stream == stream && !frame->blocked) { - stream->waiting--; + stream->queued--; r->blocked--; *fn = frame->next; From vbart at nginx.com Tue Jan 14 12:58:02 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:58:02 +0000 Subject: [nginx] SPDY: elimination of r->blocked counter usage for queuin... Message-ID: details: http://hg.nginx.org/nginx/rev/3ff29c30effb branches: changeset: 5510:3ff29c30effb user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: elimination of r->blocked counter usage for queuing frames. It was used to prevent destroying of request object when there are unsent frames in queue for the stream. Since it was incremented for each frame and is only 8 bits long, so it was not very hard to overflow the counter. Now the stream->queued counter is checked instead. diffstat: src/http/ngx_http_spdy.c | 16 ++++++++++------ src/http/ngx_http_spdy_filter_module.c | 10 ---------- 2 files changed, 10 insertions(+), 16 deletions(-) diffs (95 lines): diff -r 877a7bd72070 -r 3ff29c30effb src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy.c Tue Jan 14 16:24:45 2014 +0400 @@ -2642,9 +2642,16 @@ ngx_http_spdy_close_stream(ngx_http_spdy sc = stream->connection; - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, - "spdy close stream %ui, processing %ui", - stream->id, sc->processing); + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, + "spdy close stream %ui, queued %ui, processing %ui", + stream->id, stream->queued, sc->processing); + + fc = stream->request->connection; + + if (stream->queued) { + fc->write->handler = ngx_http_spdy_close_stream_handler; + return; + } if (!stream->out_closed) { if (ngx_http_spdy_send_rst_stream(sc, stream->id, @@ -2685,8 +2692,6 @@ ngx_http_spdy_close_stream(ngx_http_spdy index = &s->index; } - fc = stream->request->connection; - ngx_http_free_request(stream->request, rc); ev = fc->read; @@ -2862,7 +2867,6 @@ ngx_http_spdy_finalize_connection(ngx_ht fc->error = 1; if (stream->queued) { - r->blocked -= stream->queued; stream->queued = 0; ev = fc->write; diff -r 877a7bd72070 -r 3ff29c30effb src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -597,8 +597,6 @@ ngx_http_spdy_header_filter(ngx_http_req ngx_http_spdy_queue_blocked_frame(sc, frame); - r->blocked++; - cln = ngx_http_cleanup_add(r, 0); if (cln == NULL) { return NGX_ERROR; @@ -697,8 +695,6 @@ ngx_http_spdy_body_filter(ngx_http_reque stream->queued++; - r->main->blocked++; - return ngx_http_spdy_filter_send(r->connection, stream); } @@ -923,7 +919,6 @@ ngx_http_spdy_handle_frame(ngx_http_spdy r = stream->request; r->connection->sent += frame->size; - r->blocked--; if (frame->fin) { stream->out_closed 
= 1; @@ -962,15 +957,12 @@ ngx_http_spdy_filter_cleanup(void *data) { ngx_http_spdy_stream_t *stream = data; - ngx_http_request_t *r; ngx_http_spdy_out_frame_t *frame, **fn; if (stream->queued == 0) { return; } - r = stream->request; - fn = &stream->connection->last_out; for ( ;; ) { @@ -981,9 +973,7 @@ ngx_http_spdy_filter_cleanup(void *data) } if (frame->stream == stream && !frame->blocked) { - stream->queued--; - r->blocked--; *fn = frame->next; continue; From vbart at nginx.com Tue Jan 14 12:58:03 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:58:03 +0000 Subject: [nginx] SPDY: refactored ngx_http_spdy_body_filter(). Message-ID: details: http://hg.nginx.org/nginx/rev/dfb52d25cefb branches: changeset: 5511:dfb52d25cefb user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: refactored ngx_http_spdy_body_filter(). A local pointer to fake connection is introduced to slightly reduce further patches. No functional changes. diffstat: src/http/ngx_http_spdy_filter_module.c | 13 ++++++++----- 1 files changed, 8 insertions(+), 5 deletions(-) diffs (58 lines): diff -r 3ff29c30effb -r dfb52d25cefb src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -617,6 +617,7 @@ ngx_http_spdy_body_filter(ngx_http_reque off_t size; ngx_buf_t *b; ngx_chain_t *cl, *out, **ln; + ngx_connection_t *fc; ngx_http_spdy_stream_t *stream; ngx_http_spdy_out_frame_t *frame; @@ -626,7 +627,9 @@ ngx_http_spdy_body_filter(ngx_http_reque return ngx_http_next_body_filter(r, in); } - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + fc = r->connection; + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, fc->log, 0, "spdy body filter \"%V?%V\"", &r->uri, &r->args); if (in == NULL || r->header_only) { @@ -635,7 +638,7 @@ ngx_http_spdy_body_filter(ngx_http_reque return NGX_AGAIN; } - r->connection->buffered &= ~NGX_SPDY_WRITE_BUFFERED; + fc->buffered &= ~NGX_SPDY_WRITE_BUFFERED; return NGX_OK; } @@ -647,7 +650,7 @@ ngx_http_spdy_body_filter(ngx_http_reque b = in->buf; #if 1 if (ngx_buf_size(b) == 0 && !ngx_buf_special(b)) { - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, + ngx_log_error(NGX_LOG_ALERT, fc->log, 0, "zero size buf in spdy body filter " "t:%d r:%d f:%d %p %p-%p %p %O-%O", b->temporary, @@ -680,7 +683,7 @@ ngx_http_spdy_body_filter(ngx_http_reque } while (in); if (size > NGX_SPDY_MAX_FRAME_SIZE) { - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, + ngx_log_error(NGX_LOG_ALERT, fc->log, 0, "FIXME: chain too big in spdy filter: %O", size); return NGX_ERROR; } @@ -695,7 +698,7 @@ ngx_http_spdy_body_filter(ngx_http_reque stream->queued++; - return ngx_http_spdy_filter_send(r->connection, stream); + return ngx_http_spdy_filter_send(fc, stream); } From vbart at nginx.com Tue Jan 14 12:58:05 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:58:05 +0000 Subject: [nginx] SPDY: fixed possible premature close of stream. Message-ID: details: http://hg.nginx.org/nginx/rev/9fffc0c46e5c branches: changeset: 5512:9fffc0c46e5c user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: fixed possible premature close of stream. The "delayed" flag always should be set if there are unsent frames, but this might not be the case if ngx_http_spdy_body_filter() was called with NULL chain. As a result, the "send_timeout" timer could be set on a stream in ngx_http_writer(). 
And if the timeout occurred before all the stream data has been sent, then the request was finalized with the "client timed out" error. diffstat: src/http/ngx_http_spdy_filter_module.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r dfb52d25cefb -r 9fffc0c46e5c src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -635,6 +635,7 @@ ngx_http_spdy_body_filter(ngx_http_reque if (in == NULL || r->header_only) { if (stream->queued) { + fc->write->delayed = 1; return NGX_AGAIN; } From vbart at nginx.com Tue Jan 14 12:58:06 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:58:06 +0000 Subject: [nginx] SPDY: body filter was replaced by c->send_chain() function. Message-ID: details: http://hg.nginx.org/nginx/rev/311803b21504 branches: changeset: 5513:311803b21504 user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: body filter was replaced by c->send_chain() function. It allows to use ngx_http_write_filter() and all its rate limiting logic. diffstat: src/core/ngx_connection.h | 3 + src/http/ngx_http_spdy_filter_module.c | 90 ++++++++++++-------------------- src/http/ngx_http_write_filter_module.c | 5 +- 3 files changed, 42 insertions(+), 56 deletions(-) diffs (232 lines): diff -r 9fffc0c46e5c -r 311803b21504 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Tue Jan 14 16:24:45 2014 +0400 +++ b/src/core/ngx_connection.h Tue Jan 14 16:24:45 2014 +0400 @@ -112,6 +112,7 @@ typedef enum { #define NGX_LOWLEVEL_BUFFERED 0x0f #define NGX_SSL_BUFFERED 0x01 +#define NGX_SPDY_BUFFERED 0x02 struct ngx_connection_s { @@ -171,6 +172,8 @@ struct ngx_connection_s { unsigned tcp_nodelay:2; /* ngx_connection_tcp_nodelay_e */ unsigned tcp_nopush:2; /* ngx_connection_tcp_nopush_e */ + unsigned need_last_buf:1; + #if (NGX_HAVE_IOCP) unsigned accept_context_updated:1; #endif diff -r 9fffc0c46e5c -r 311803b21504 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -14,8 +14,6 @@ #include -#define NGX_SPDY_WRITE_BUFFERED NGX_HTTP_WRITE_BUFFERED - #define ngx_http_spdy_nv_nsize(h) (NGX_SPDY_NV_NLEN_SIZE + sizeof(h) - 1) #define ngx_http_spdy_nv_vsize(h) (NGX_SPDY_NV_VLEN_SIZE + sizeof(h) - 1) @@ -29,6 +27,10 @@ #define ngx_http_spdy_nv_write_val(p, h) \ ngx_cpymem(ngx_http_spdy_nv_write_vlen(p, sizeof(h) - 1), h, sizeof(h) - 1) + +static ngx_chain_t *ngx_http_spdy_send_chain(ngx_connection_t *fc, + ngx_chain_t *in, off_t limit); + static ngx_inline ngx_int_t ngx_http_spdy_filter_send( ngx_connection_t *fc, ngx_http_spdy_stream_t *stream); @@ -82,7 +84,6 @@ ngx_module_t ngx_http_spdy_filter_modul static ngx_http_output_header_filter_pt ngx_http_next_header_filter; -static ngx_http_output_body_filter_pt ngx_http_next_body_filter; static ngx_int_t @@ -607,41 +608,35 @@ ngx_http_spdy_header_filter(ngx_http_req stream->queued = 1; + c->send_chain = ngx_http_spdy_send_chain; + c->need_last_buf = 1; + return ngx_http_spdy_filter_send(c, stream); } -static ngx_int_t -ngx_http_spdy_body_filter(ngx_http_request_t *r, ngx_chain_t *in) +static ngx_chain_t * +ngx_http_spdy_send_chain(ngx_connection_t *fc, ngx_chain_t *in, off_t limit) { off_t size; ngx_buf_t *b; ngx_chain_t *cl, *out, **ln; - ngx_connection_t *fc; + ngx_http_request_t *r; ngx_http_spdy_stream_t 
*stream; ngx_http_spdy_out_frame_t *frame; + r = fc->data; stream = r->spdy_stream; - if (stream == NULL) { - return ngx_http_next_body_filter(r, in); - } - - fc = r->connection; - - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, fc->log, 0, - "spdy body filter \"%V?%V\"", &r->uri, &r->args); - - if (in == NULL || r->header_only) { + if (in == NULL) { if (stream->queued) { fc->write->delayed = 1; - return NGX_AGAIN; + } else { + fc->buffered &= ~NGX_SPDY_BUFFERED; } - fc->buffered &= ~NGX_SPDY_WRITE_BUFFERED; - - return NGX_OK; + return NULL; } size = 0; @@ -649,28 +644,10 @@ ngx_http_spdy_body_filter(ngx_http_reque do { b = in->buf; -#if 1 - if (ngx_buf_size(b) == 0 && !ngx_buf_special(b)) { - ngx_log_error(NGX_LOG_ALERT, fc->log, 0, - "zero size buf in spdy body filter " - "t:%d r:%d f:%d %p %p-%p %p %O-%O", - b->temporary, - b->recycled, - b->in_file, - b->start, - b->pos, - b->last, - b->file, - b->file_pos, - b->file_last); - ngx_debug_point(); - return NGX_ERROR; - } -#endif cl = ngx_alloc_chain_link(r->pool); if (cl == NULL) { - return NGX_ERROR; + return NGX_CHAIN_ERROR; } size += ngx_buf_size(b); @@ -686,20 +663,24 @@ ngx_http_spdy_body_filter(ngx_http_reque if (size > NGX_SPDY_MAX_FRAME_SIZE) { ngx_log_error(NGX_LOG_ALERT, fc->log, 0, "FIXME: chain too big in spdy filter: %O", size); - return NGX_ERROR; + return NGX_CHAIN_ERROR; } frame = ngx_http_spdy_filter_get_data_frame(stream, (size_t) size, out, cl); if (frame == NULL) { - return NGX_ERROR; + return NGX_CHAIN_ERROR; } ngx_http_spdy_queue_frame(stream->connection, frame); stream->queued++; - return ngx_http_spdy_filter_send(fc, stream); + if (ngx_http_spdy_filter_send(fc, stream) == NGX_ERROR) { + return NGX_CHAIN_ERROR; + } + + return NULL; } @@ -801,12 +782,12 @@ ngx_http_spdy_filter_send(ngx_connection stream->blocked = 0; if (stream->queued) { - fc->buffered |= NGX_SPDY_WRITE_BUFFERED; + fc->buffered |= NGX_SPDY_BUFFERED; fc->write->delayed = 1; return NGX_AGAIN; } - fc->buffered &= ~NGX_SPDY_WRITE_BUFFERED; + fc->buffered &= ~NGX_SPDY_BUFFERED; return NGX_OK; } @@ -939,20 +920,22 @@ static ngx_inline void ngx_http_spdy_handle_stream(ngx_http_spdy_connection_t *sc, ngx_http_spdy_stream_t *stream) { - ngx_connection_t *fc; - - fc = stream->request->connection; - - fc->write->delayed = 0; + ngx_event_t *wev; if (stream->handled || stream->blocked) { return; } - stream->handled = 1; + wev = stream->request->connection->write; - stream->next = sc->last_stream; - sc->last_stream = stream; + if (!wev->timer_set) { + wev->delayed = 0; + + stream->handled = 1; + + stream->next = sc->last_stream; + sc->last_stream = stream; + } } @@ -994,8 +977,5 @@ ngx_http_spdy_filter_init(ngx_conf_t *cf ngx_http_next_header_filter = ngx_http_top_header_filter; ngx_http_top_header_filter = ngx_http_spdy_header_filter; - ngx_http_next_body_filter = ngx_http_top_body_filter; - ngx_http_top_body_filter = ngx_http_spdy_body_filter; - return NGX_OK; } diff -r 9fffc0c46e5c -r 311803b21504 src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_write_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -184,7 +184,10 @@ ngx_http_write_filter(ngx_http_request_t return NGX_AGAIN; } - if (size == 0 && !(c->buffered & NGX_LOWLEVEL_BUFFERED)) { + if (size == 0 + && !(c->buffered & NGX_LOWLEVEL_BUFFERED) + && !(last && c->need_last_buf)) + { if (last || flush) { for (cl = r->out; cl; /* void */) { ln = cl; From vbart at nginx.com Tue Jan 14 12:58:07 2014 From: vbart at nginx.com (Valentin 
Bartenev) Date: Tue, 14 Jan 2014 12:58:07 +0000 Subject: [nginx] SPDY: implemented buffers chain splitting. Message-ID: details: http://hg.nginx.org/nginx/rev/b7ee1bae0ffa branches: changeset: 5514:b7ee1bae0ffa user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: implemented buffers chain splitting. It fixes "chain too big in spdy filter" alerts, and adds full support for rate limiting of SPDY streams. diffstat: src/http/ngx_http_spdy.h | 1 + src/http/ngx_http_spdy_filter_module.c | 191 ++++++++++++++++++++++++++++---- 2 files changed, 164 insertions(+), 28 deletions(-) diffs (280 lines): diff -r 311803b21504 -r b7ee1bae0ffa src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy.h Tue Jan 14 16:24:45 2014 +0400 @@ -123,6 +123,7 @@ struct ngx_http_spdy_stream_s { ngx_http_spdy_out_frame_t *free_frames; ngx_chain_t *free_data_headers; + ngx_chain_t *free_bufs; unsigned priority:2; unsigned handled:1; diff -r 311803b21504 -r b7ee1bae0ffa src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -34,6 +34,9 @@ static ngx_chain_t *ngx_http_spdy_send_c static ngx_inline ngx_int_t ngx_http_spdy_filter_send( ngx_connection_t *fc, ngx_http_spdy_stream_t *stream); +static ngx_chain_t *ngx_http_spdy_filter_get_shadow( + ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, size_t offset, + size_t size); static ngx_http_spdy_out_frame_t *ngx_http_spdy_filter_get_data_frame( ngx_http_spdy_stream_t *stream, size_t len, ngx_chain_t *first, ngx_chain_t *last); @@ -618,8 +621,8 @@ ngx_http_spdy_header_filter(ngx_http_req static ngx_chain_t * ngx_http_spdy_send_chain(ngx_connection_t *fc, ngx_chain_t *in, off_t limit) { - off_t size; - ngx_buf_t *b; + off_t size, offset; + size_t rest, frame_size; ngx_chain_t *cl, *out, **ln; ngx_http_request_t *r; ngx_http_spdy_stream_t *stream; @@ -639,48 +642,161 @@ ngx_http_spdy_send_chain(ngx_connection_ return NULL; } - size = 0; - ln = &out; + size = ngx_buf_size(in->buf); - do { - b = in->buf; - + if (in->buf->tag == (ngx_buf_tag_t) &ngx_http_spdy_filter_get_shadow) { cl = ngx_alloc_chain_link(r->pool); if (cl == NULL) { return NGX_CHAIN_ERROR; } - size += ngx_buf_size(b); - cl->buf = b; + cl->buf = in->buf; + in->buf = cl->buf->shadow; - *ln = cl; - ln = &cl->next; + offset = ngx_buf_in_memory(in->buf) + ? (cl->buf->pos - in->buf->pos) + : (cl->buf->file_pos - in->buf->file_pos); - in = in->next; + cl->next = stream->free_bufs; + stream->free_bufs = cl; - } while (in); - - if (size > NGX_SPDY_MAX_FRAME_SIZE) { - ngx_log_error(NGX_LOG_ALERT, fc->log, 0, - "FIXME: chain too big in spdy filter: %O", size); - return NGX_CHAIN_ERROR; + } else { + offset = 0; } - frame = ngx_http_spdy_filter_get_data_frame(stream, (size_t) size, - out, cl); - if (frame == NULL) { - return NGX_CHAIN_ERROR; + frame_size = (limit && limit < NGX_SPDY_MAX_FRAME_SIZE) + ? 
limit : NGX_SPDY_MAX_FRAME_SIZE; + + for ( ;; ) { + ln = &out; + rest = frame_size; + + while ((off_t) rest >= size) { + + if (offset) { + cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, + offset, size); + if (cl == NULL) { + return NGX_CHAIN_ERROR; + } + + offset = 0; + + } else { + cl = ngx_alloc_chain_link(r->pool); + if (cl == NULL) { + return NGX_CHAIN_ERROR; + } + + cl->buf = in->buf; + } + + *ln = cl; + ln = &cl->next; + + rest -= size; + in = in->next; + + if (in == NULL) { + frame_size -= rest; + rest = 0; + break; + } + + size = ngx_buf_size(in->buf); + } + + if (rest) { + cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, + offset, rest); + if (cl == NULL) { + return NGX_CHAIN_ERROR; + } + + cl->buf->flush = 0; + cl->buf->last_buf = 0; + + *ln = cl; + + offset += rest; + size -= rest; + } + + frame = ngx_http_spdy_filter_get_data_frame(stream, frame_size, + out, cl); + if (frame == NULL) { + return NGX_CHAIN_ERROR; + } + + ngx_http_spdy_queue_frame(stream->connection, frame); + + stream->queued++; + + if (in == NULL) { + break; + } + + if (limit) { + limit -= frame_size; + + if (limit == 0) { + break; + } + + if (limit < NGX_SPDY_MAX_FRAME_SIZE) { + frame_size = limit; + } + } } - ngx_http_spdy_queue_frame(stream->connection, frame); + if (offset) { + cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, offset, size); + if (cl == NULL) { + return NGX_CHAIN_ERROR; + } - stream->queued++; + in->buf = cl->buf; + ngx_free_chain(r->pool, cl); + } if (ngx_http_spdy_filter_send(fc, stream) == NGX_ERROR) { return NGX_CHAIN_ERROR; } - return NULL; + return in; +} + + +static ngx_chain_t * +ngx_http_spdy_filter_get_shadow(ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, + size_t offset, size_t size) +{ + ngx_buf_t *chunk; + ngx_chain_t *cl; + + cl = ngx_chain_get_free_buf(stream->request->pool, &stream->free_bufs); + if (cl == NULL) { + return NULL; + } + + chunk = cl->buf; + + ngx_memcpy(chunk, buf, sizeof(ngx_buf_t)); + + chunk->tag = (ngx_buf_tag_t) &ngx_http_spdy_filter_get_shadow; + chunk->shadow = buf; + + if (ngx_buf_in_memory(chunk)) { + chunk->pos += offset; + chunk->last = chunk->pos + size; + } + + if (chunk->in_file) { + chunk->file_pos += offset; + chunk->file_last = chunk->file_pos + size; + } + + return cl; } @@ -747,7 +863,7 @@ ngx_http_spdy_filter_get_data_frame(ngx_ buf->last = p; buf->end = p; - buf->tag = (ngx_buf_tag_t) &ngx_http_spdy_filter_module; + buf->tag = (ngx_buf_tag_t) &ngx_http_spdy_filter_get_data_frame; buf->memory = 1; } @@ -825,6 +941,7 @@ static ngx_int_t ngx_http_spdy_data_frame_handler(ngx_http_spdy_connection_t *sc, ngx_http_spdy_out_frame_t *frame) { + ngx_buf_t *buf; ngx_chain_t *cl, *ln; ngx_http_spdy_stream_t *stream; @@ -832,7 +949,7 @@ ngx_http_spdy_data_frame_handler(ngx_htt cl = frame->first; - if (cl->buf->tag == (ngx_buf_tag_t) &ngx_http_spdy_filter_module) { + if (cl->buf->tag == (ngx_buf_tag_t) &ngx_http_spdy_filter_get_data_frame) { if (cl->buf->pos != cl->buf->last) { ngx_log_debug2(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, @@ -855,6 +972,18 @@ ngx_http_spdy_data_frame_handler(ngx_htt } for ( ;; ) { + if (cl->buf->tag == (ngx_buf_tag_t) &ngx_http_spdy_filter_get_shadow) { + buf = cl->buf->shadow; + + if (ngx_buf_in_memory(buf)) { + buf->pos = cl->buf->pos; + } + + if (buf->in_file) { + buf->file_pos = cl->buf->file_pos; + } + } + if (ngx_buf_size(cl->buf) != 0) { if (cl != frame->first) { @@ -871,7 +1000,13 @@ ngx_http_spdy_data_frame_handler(ngx_htt ln = cl->next; - ngx_free_chain(stream->request->pool, cl); + if 
(cl->buf->tag == (ngx_buf_tag_t) &ngx_http_spdy_filter_get_shadow) { + cl->next = stream->free_bufs; + stream->free_bufs = cl; + + } else { + ngx_free_chain(stream->request->pool, cl); + } if (cl == frame->last) { goto done; From vbart at nginx.com Tue Jan 14 12:58:08 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:58:08 +0000 Subject: [nginx] SPDY: added the "spdy_chunk_size" directive. Message-ID: details: http://hg.nginx.org/nginx/rev/e5fb14e85040 branches: changeset: 5515:e5fb14e85040 user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: added the "spdy_chunk_size" directive. diffstat: src/http/ngx_http_spdy_filter_module.c | 10 +++- src/http/ngx_http_spdy_module.c | 65 ++++++++++++++++++++++++++++++++- src/http/ngx_http_spdy_module.h | 5 ++ 3 files changed, 74 insertions(+), 6 deletions(-) diffs (165 lines): diff -r b7ee1bae0ffa -r e5fb14e85040 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -626,6 +626,7 @@ ngx_http_spdy_send_chain(ngx_connection_ ngx_chain_t *cl, *out, **ln; ngx_http_request_t *r; ngx_http_spdy_stream_t *stream; + ngx_http_spdy_loc_conf_t *slcf; ngx_http_spdy_out_frame_t *frame; r = fc->data; @@ -664,8 +665,11 @@ ngx_http_spdy_send_chain(ngx_connection_ offset = 0; } - frame_size = (limit && limit < NGX_SPDY_MAX_FRAME_SIZE) - ? limit : NGX_SPDY_MAX_FRAME_SIZE; + slcf = ngx_http_get_module_loc_conf(r, ngx_http_spdy_module); + + frame_size = (limit && limit <= (off_t) slcf->chunk_size) + ? (size_t) limit + : slcf->chunk_size; for ( ;; ) { ln = &out; @@ -743,7 +747,7 @@ ngx_http_spdy_send_chain(ngx_connection_ break; } - if (limit < NGX_SPDY_MAX_FRAME_SIZE) { + if (limit < (off_t) slcf->chunk_size) { frame_size = limit; } } diff -r b7ee1bae0ffa -r e5fb14e85040 src/http/ngx_http_spdy_module.c --- a/src/http/ngx_http_spdy_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_module.c Tue Jan 14 16:24:45 2014 +0400 @@ -22,16 +22,19 @@ static ngx_int_t ngx_http_spdy_module_in static void *ngx_http_spdy_create_main_conf(ngx_conf_t *cf); static char *ngx_http_spdy_init_main_conf(ngx_conf_t *cf, void *conf); - static void *ngx_http_spdy_create_srv_conf(ngx_conf_t *cf); static char *ngx_http_spdy_merge_srv_conf(ngx_conf_t *cf, void *parent, void *child); +static void *ngx_http_spdy_create_loc_conf(ngx_conf_t *cf); +static char *ngx_http_spdy_merge_loc_conf(ngx_conf_t *cf, void *parent, + void *child); static char *ngx_http_spdy_recv_buffer_size(ngx_conf_t *cf, void *post, void *data); static char *ngx_http_spdy_pool_size(ngx_conf_t *cf, void *post, void *data); static char *ngx_http_spdy_streams_index_mask(ngx_conf_t *cf, void *post, void *data); +static char *ngx_http_spdy_chunk_size(ngx_conf_t *cf, void *post, void *data); static ngx_conf_num_bounds_t ngx_http_spdy_headers_comp_bounds = { @@ -44,6 +47,8 @@ static ngx_conf_post_t ngx_http_spdy_po { ngx_http_spdy_pool_size }; static ngx_conf_post_t ngx_http_spdy_streams_index_mask_post = { ngx_http_spdy_streams_index_mask }; +static ngx_conf_post_t ngx_http_spdy_chunk_size_post = + { ngx_http_spdy_chunk_size }; static ngx_command_t ngx_http_spdy_commands[] = { @@ -97,6 +102,13 @@ static ngx_command_t ngx_http_spdy_comm offsetof(ngx_http_spdy_srv_conf_t, headers_comp), &ngx_http_spdy_headers_comp_bounds }, + { ngx_string("spdy_chunk_size"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + 
ngx_conf_set_size_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_spdy_loc_conf_t, chunk_size), + &ngx_http_spdy_chunk_size_post }, + ngx_null_command }; @@ -111,8 +123,8 @@ static ngx_http_module_t ngx_http_spdy_ ngx_http_spdy_create_srv_conf, /* create server configuration */ ngx_http_spdy_merge_srv_conf, /* merge server configuration */ - NULL, /* create location configuration */ - NULL /* merge location configuration */ + ngx_http_spdy_create_loc_conf, /* create location configuration */ + ngx_http_spdy_merge_loc_conf /* merge location configuration */ }; @@ -296,6 +308,34 @@ ngx_http_spdy_merge_srv_conf(ngx_conf_t } +static void * +ngx_http_spdy_create_loc_conf(ngx_conf_t *cf) +{ + ngx_http_spdy_loc_conf_t *slcf; + + slcf = ngx_pcalloc(cf->pool, sizeof(ngx_http_spdy_loc_conf_t)); + if (slcf == NULL) { + return NULL; + } + + slcf->chunk_size = NGX_CONF_UNSET_SIZE; + + return slcf; +} + + +static char * +ngx_http_spdy_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child) +{ + ngx_http_spdy_loc_conf_t *prev = parent; + ngx_http_spdy_loc_conf_t *conf = child; + + ngx_conf_merge_size_value(conf->chunk_size, prev->chunk_size, 8 * 1024); + + return NGX_CONF_OK; +} + + static char * ngx_http_spdy_recv_buffer_size(ngx_conf_t *cf, void *post, void *data) { @@ -349,3 +389,22 @@ ngx_http_spdy_streams_index_mask(ngx_con return NGX_CONF_OK; } + + +static char * +ngx_http_spdy_chunk_size(ngx_conf_t *cf, void *post, void *data) +{ + size_t *sp = data; + + if (*sp == 0) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "the spdy chunk size cannot be zero"); + return NGX_CONF_ERROR; + } + + if (*sp > NGX_SPDY_MAX_FRAME_SIZE) { + *sp = NGX_SPDY_MAX_FRAME_SIZE; + } + + return NGX_CONF_OK; +} diff -r b7ee1bae0ffa -r e5fb14e85040 src/http/ngx_http_spdy_module.h --- a/src/http/ngx_http_spdy_module.h Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_module.h Tue Jan 14 16:24:45 2014 +0400 @@ -30,6 +30,11 @@ typedef struct { } ngx_http_spdy_srv_conf_t; +typedef struct { + size_t chunk_size; +} ngx_http_spdy_loc_conf_t; + + extern ngx_module_t ngx_http_spdy_module; From vbart at nginx.com Tue Jan 14 12:58:00 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 12:58:00 +0000 Subject: [nginx] SPDY: better name for flag that indicates incomplete fra... Message-ID: details: http://hg.nginx.org/nginx/rev/877a7bd72070 branches: changeset: 5509:877a7bd72070 user: Valentin Bartenev date: Tue Jan 14 16:24:45 2014 +0400 description: SPDY: better name for flag that indicates incomplete frame state. No functional changes. 
diffstat: src/http/ngx_http_spdy.c | 8 ++++---- src/http/ngx_http_spdy.h | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diffs (51 lines): diff -r 9053fdcea4b7 -r 877a7bd72070 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy.c Tue Jan 14 16:24:45 2014 +0400 @@ -346,7 +346,7 @@ ngx_http_spdy_read_handler(ngx_event_t * break; } - if (n == 0 && (sc->waiting || sc->processing)) { + if (n == 0 && (sc->incomplete || sc->processing)) { ngx_log_error(NGX_LOG_INFO, c->log, 0, "client closed prematurely connection"); } @@ -360,7 +360,7 @@ ngx_http_spdy_read_handler(ngx_event_t * end += n; sc->buffer_used = 0; - sc->waiting = 0; + sc->incomplete = 0; do { p = sc->handler(sc, p, end); @@ -567,7 +567,7 @@ ngx_http_spdy_handle_connection(ngx_http sscf = ngx_http_get_module_srv_conf(sc->http_connection->conf_ctx, ngx_http_spdy_module); - if (sc->waiting) { + if (sc->incomplete) { ngx_add_timer(c->read, sscf->recv_timeout); return; } @@ -1489,7 +1489,7 @@ ngx_http_spdy_state_save(ngx_http_spdy_c sc->buffer_used = end - pos; sc->handler = handler; - sc->waiting = 1; + sc->incomplete = 1; return end; } diff -r 9053fdcea4b7 -r 877a7bd72070 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy.h Tue Jan 14 16:24:45 2014 +0400 @@ -107,7 +107,7 @@ struct ngx_http_spdy_connection_s { ngx_uint_t last_sid; unsigned blocked:1; - unsigned waiting:1; /* FIXME better name */ + unsigned incomplete:1; }; From faskiri.devel at gmail.com Tue Jan 14 13:24:44 2014 From: faskiri.devel at gmail.com (Fasih) Date: Tue, 14 Jan 2014 18:54:44 +0530 Subject: Rewrite handling order In-Reply-To: <20140114111120.GH1835@mdounin.ru> References: <20140114111120.GH1835@mdounin.ru> Message-ID: Thanks! Could you please explain why this is done? On Tue, Jan 14, 2014 at 4:41 PM, Maxim Dounin wrote: > Hello! > > On Tue, Jan 14, 2014 at 04:15:32PM +0530, Fasih wrote: > > > Hi > > > > I have a custom plugin that handles rewrite (NGX_HTTP_REWRITE_PHASE). > There > > is another plugin compiled before my plugin that also handles rewrite > > (HttpLuaModule). I was expecting to see that my module would rewrite > after > > lua is done, however that is not the case. Some debugging showed that > > whereas my module pushed into the > > cmcf->phases[NGX_HTTP_REWRITE_PHASE].handlers after lua, the > > cmcf.phase_engine.handlers had lua *after* my module. The culprit seems > to > > be the following: > > > > static ngx_int_t > > ngx_http_init_phase_handlers(ngx_conf_t *cf, ngx_http_core_main_conf_t > > *cmcf) > > { > > .. > > ph = cmcf->phase_engine.handlers; > > .. > > n += cmcf->phases[i].handlers.nelts; > > > > for (j = cmcf->phases[i].handlers.nelts - 1; j >=0; j--) { > > ph->checker = checker; > > ph->handler = h[j]; > > ph->next = n; > > ph++; > > } > > } > > > > The order is inverted here (h[j] before h[j-1]). Is this intentional or a > > bug? > > It's intentional. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sven at ha.cki.ng Tue Jan 14 13:41:15 2014 From: sven at ha.cki.ng (Sven Peter) Date: Tue, 14 Jan 2014 14:41:15 +0100 Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL client certificates In-Reply-To: <20140114120805.GJ1835@mdounin.ru> References: <87104D36-7406-4AA9-B701-9676738969D2@ha.cki.ng> <869DE836-5635-40C8-A86C-53AFD4B1FD30@ha.cki.ng> <20140114120805.GJ1835@mdounin.ru> Message-ID: <4EADB372-AE1D-4F3D-A150-BAF3D77574CD@ha.cki.ng> Hi, Thanks for the feedback! On Jan 14, 2014, at 1:08 PM, Maxim Dounin wrote: > > Better summary line would be: > > Mail: added support for SSL client certificate. Agreed. > >> >> This patch adds support for SSL client certificates to the mail proxy >> capabilities of nginx both for STARTTLS and SSL mode. >> Just like the HTTP SSL module a root CA is defined in the mail section >> of the configuration file. Verification can be optional or mandatory. >> Additionally, the result of the verification is exposed to the >> auth http backend via the SSL-Verify, SSL-Subject-DN, SSL-Issuer-DN >> and SSL-Serial HTTP headers. > > It would be good idea to add a list of configuration directives > added. Agreed, I'll fix that. > >> >> diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_auth_http_module.c >> --- a/src/mail/ngx_mail_auth_http_module.c Sat Jan 04 03:32:22 2014 +0400 >> +++ b/src/mail/ngx_mail_auth_http_module.c Mon Jan 13 16:14:12 2014 +0100 >> @@ -1145,6 +1145,16 @@ >> ngx_str_t login, passwd; >> ngx_mail_core_srv_conf_t *cscf; >> >> +#if (NGX_MAIL_SSL) >> + ngx_str_t ssl_client_verify; >> + ngx_str_t ssl_client_raw_s_dn; >> + ngx_str_t ssl_client_raw_i_dn; >> + ngx_str_t ssl_client_raw_serial; >> + ngx_str_t ssl_client_s_dn; >> + ngx_str_t ssl_client_i_dn; >> + ngx_str_t ssl_client_serial; >> +#endif >> + > > This diverges from the style used. Additionally, variable names > seems to be too verbose. How does this diverge from the style exactly? Because of the #ifdef? Are the variable names acceptable when then the ssl_client prefix is removed? Or is the problem that there are client_raw_ and client_ variables? > >> if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { >> return NULL; >> } >> @@ -1153,6 +1163,51 @@ >> return NULL; >> } >> >> +#if (NGX_MAIL_SSL) >> + if (s->connection->ssl) { >> + /* ssl_client_verify comes from nginx itself - no need to escape */ > > This comment looks obvious. Yup, I'll remove it. > >> + if (ngx_ssl_get_client_verify(s->connection, pool, >> + &ssl_client_verify) != NGX_OK) { >> + return NULL; >> + } >> + >> + if (ngx_ssl_get_subject_dn(s->connection, pool, >> + &ssl_client_raw_s_dn) != NGX_OK) { >> + return NULL; >> + } >> + >> + if (ngx_ssl_get_issuer_dn(s->connection, pool, >> + &ssl_client_raw_i_dn) != NGX_OK) { >> + return NULL; >> + } >> + >> + if (ngx_ssl_get_serial_number(s->connection, pool, >> + &ssl_client_raw_serial) != NGX_OK) { >> + return NULL; >> + } >> + >> + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_s_dn, >> + &ssl_client_s_dn) != NGX_OK) { >> + return NULL; >> + } >> + >> + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_i_dn, >> + &ssl_client_i_dn) != NGX_OK) { >> + return NULL; >> + } >> + >> + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_serial, >> + &ssl_client_serial) != NGX_OK) { >> + return NULL; >> + } > > On the other hand, escaping of at least client certificate serial > number looks unneeded. Yes, it's unneeded. I'll remove it. 
> >> + } else { >> + ngx_str_set(&ssl_client_verify, "NONE"); >> + ssl_client_i_dn.len = 0; >> + ssl_client_s_dn.len = 0; >> + ssl_client_serial.len = 0; > > Using fake values here looks wrong. In http, nginx marks $ssl_* > variables as "not found" for non-ssl connections, which is > essentially equivalent to empty strings, i.e., this contradicts to > the use of "NONE". Yup. > >> + } >> +#endif >> + >> cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); >> >> len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - 1 >> @@ -1173,6 +1228,16 @@ >> + sizeof("Auth-SMTP-Helo: ") - 1 + s->smtp_helo.len >> + sizeof("Auth-SMTP-From: ") - 1 + s->smtp_from.len >> + sizeof("Auth-SMTP-To: ") - 1 + s->smtp_to.len >> +#if (NGX_MAIL_SSL) >> + + sizeof("SSL-Verify: ") - 1 + ssl_client_verify.len >> + + sizeof(CRLF) - 1 >> + + sizeof("SSL-Subject-DN: ") - 1 + ssl_client_s_dn.len >> + + sizeof(CRLF) - 1 >> + + sizeof("SSL-Issuer-DN: ") - 1 + ssl_client_i_dn.len >> + + sizeof(CRLF) - 1 >> + + sizeof("SSL-Serial: ") - 1 + ssl_client_serial.len >> + + sizeof(CRLF) - 1 >> +#endif > > Using common prefix "Auth-" might be a good idea. That indeed sounds like a good idea. > >> + ahcf->header.len >> + sizeof(CRLF) - 1; >> >> @@ -1255,6 +1320,34 @@ >> >> } >> >> +#if (NGX_MAIL_SSL) >> + if (ssl_client_verify.len) { >> + b->last = ngx_cpymem(b->last, "SSL-Verify: ", >> + sizeof("SSL-Verify: ") - 1); >> + b->last = ngx_copy(b->last, ssl_client_verify.data, >> + ssl_client_verify.len); >> + *b->last++ = CR; *b->last++ = LF; >> + >> + b->last = ngx_cpymem(b->last, "SSL-Subject-DN: ", >> + sizeof("SSL-Subject-DN: ") - 1); >> + b->last = ngx_copy(b->last, ssl_client_s_dn.data, >> + ssl_client_s_dn.len); >> + *b->last++ = CR; *b->last++ = LF; >> + >> + b->last = ngx_cpymem(b->last, "SSL-Issuer-DN: ", >> + sizeof("SSL-Issuer-DN: ") - 1); >> + b->last = ngx_copy(b->last, ssl_client_i_dn.data, >> + ssl_client_i_dn.len); >> + *b->last++ = CR; *b->last++ = LF; >> + >> + b->last = ngx_cpymem(b->last, "SSL-Serial: ", >> + sizeof("SSL-Serial: ") - 1); >> + b->last = ngx_copy(b->last, ssl_client_serial.data, >> + ssl_client_serial.len); >> + *b->last++ = CR; *b->last++ = LF; >> + } >> +#endif >> + > > I don't think that these headers should be sent if there is no > SSL connection. Any empty headers should be probably ommitted, > too. Agreed. > >> if (ahcf->header.len) { >> b->last = ngx_copy(b->last, ahcf->header.data, ahcf->header.len); >> } >> diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_handler.c >> --- a/src/mail/ngx_mail_handler.c Sat Jan 04 03:32:22 2014 +0400 >> +++ b/src/mail/ngx_mail_handler.c Mon Jan 13 16:14:12 2014 +0100 >> @@ -236,11 +236,40 @@ >> { >> ngx_mail_session_t *s; >> ngx_mail_core_srv_conf_t *cscf; >> + ngx_mail_ssl_conf_t *sslcf; >> >> if (c->ssl->handshaked) { >> >> s = c->data; >> >> + sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); >> + if (sslcf->verify != NGX_MAIL_SSL_VERIFY_OFF) { > > The use of the != check looks silly. You may want to preserve the > same code as used in http, where ->verify is more or less boolean > with some special true values to differentiate submodes when > verify is used. I'll convert the verify field to look just like it does in http then. 
> >> + long rc; >> + rc = SSL_get_verify_result(c->ssl->connection); >> + >> + if (rc != X509_V_OK && >> + (sslcf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA && ngx_ssl_verify_error_optional(rc))) { >> + ngx_log_error(NGX_LOG_INFO, c->log, 0, >> + "client SSL certificate verify error: (%l:%s)", >> + rc, X509_verify_cert_error_string(rc)); >> + ngx_mail_close_connection(c); >> + return; >> + } > > Minor note: a 80+ line here due to use of long names. Yup, I must've overlooked that line when fixing my 80+ ones. > > Maror problem: you allow "optional_no_ca" here, but this is for > sure not secure due to no certificate passed to a backend. > >> + >> + if (sslcf->verify == NGX_MAIL_SSL_VERIFY_ON) { >> + X509 *cert; >> + cert = SSL_get_peer_certificate(c->ssl->connection); >> + >> + if (cert == NULL) { >> + ngx_log_error(NGX_LOG_INFO, c->log, 0, >> + "client sent no required SSL certificate"); >> + ngx_mail_close_connection(c); >> + return; >> + } >> + X509_free(cert); >> + } >> + } >> + >> if (s->starttls) { >> cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); >> >> diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_ssl_module.c >> --- a/src/mail/ngx_mail_ssl_module.c Sat Jan 04 03:32:22 2014 +0400 >> +++ b/src/mail/ngx_mail_ssl_module.c Mon Jan 13 16:14:12 2014 +0100 >> @@ -43,6 +43,13 @@ >> { ngx_null_string, 0 } >> }; >> >> +static ngx_conf_enum_t ngx_mail_ssl_verify[] = { >> + { ngx_string("off"), NGX_MAIL_SSL_VERIFY_OFF }, >> + { ngx_string("on"), NGX_MAIL_SSL_VERIFY_ON }, >> + { ngx_string("optional"), NGX_MAIL_SSL_VERIFY_OPTIONAL }, >> + { ngx_string("optional_no_ca"), NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA }, > > As noted above, "optional_no_ca" makes no sense without a way to > pass a certificate to some backend. I've adapted this code from the http module (which exposes the raw certificate) without thinking about the use cases. This leaves two options imho: 1) Remove the optional_no_ca entirely and let someone else submit another patch if required :-) 2) Add an additional HTTP header for the raw certificate I'd go for the first one. Comments? > >> + { ngx_null_string, 0 } >> +}; >> >> static ngx_command_t ngx_mail_ssl_commands[] = { > > You may note that previously there were 2 empty lines between > blocks. With your change, there is just 1 empty line. > >> >> @@ -130,7 +137,40 @@ >> offsetof(ngx_mail_ssl_conf_t, session_timeout), >> NULL }, >> >> - ngx_null_command >> + { > > The change here indicate you did something wrong with style. > >> + ngx_string("ssl_verify_client"), >> + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, >> + ngx_conf_set_enum_slot, >> + NGX_MAIL_SRV_CONF_OFFSET, >> + offsetof(ngx_mail_ssl_conf_t, verify), >> + &ngx_mail_ssl_verify >> + }, >> + { > > The style here is certainly wrong. > >> + ngx_string("ssl_verify_depth"), >> + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_1MORE, > > Hm, "1MORE" is wrong here, should be "TAKE1". Fixed this in http > ssl module. Ah, yeah. Thanks. 
> >> + ngx_conf_set_num_slot, >> + NGX_MAIL_SRV_CONF_OFFSET, >> + offsetof(ngx_mail_ssl_conf_t, verify_depth), >> + NULL >> + }, >> + { >> + ngx_string("ssl_client_certificate"), >> + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, >> + ngx_conf_set_str_slot, >> + NGX_MAIL_SRV_CONF_OFFSET, >> + offsetof(ngx_mail_ssl_conf_t, client_certificate), >> + NULL >> + }, >> + { >> + ngx_string("ssl_trusted_certificate"), >> + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, >> + ngx_conf_set_str_slot, >> + NGX_MAIL_SRV_CONF_OFFSET, >> + offsetof(ngx_mail_ssl_conf_t, trusted_certificate), >> + NULL >> + }, >> + >> + ngx_null_command >> }; >> >> >> @@ -184,6 +224,8 @@ >> * scf->ecdh_curve = { 0, NULL }; >> * scf->ciphers = { 0, NULL }; >> * scf->shm_zone = NULL; >> + * scf->client_certificate = { 0, NULL }; >> + * scf->trusted_certificate = { 0, NULL }; >> */ >> >> scf->enable = NGX_CONF_UNSET; >> @@ -192,6 +234,8 @@ >> scf->builtin_session_cache = NGX_CONF_UNSET; >> scf->session_timeout = NGX_CONF_UNSET; >> scf->session_ticket_keys = NGX_CONF_UNSET_PTR; >> + scf->verify = NGX_CONF_UNSET_UINT; >> + scf->verify_depth = NGX_CONF_UNSET_UINT; >> >> return scf; >> } >> @@ -230,6 +274,11 @@ >> >> ngx_conf_merge_str_value(conf->ciphers, prev->ciphers, NGX_DEFAULT_CIPHERS); >> >> + ngx_conf_merge_uint_value(conf->verify, prev->verify, NGX_MAIL_SSL_VERIFY_OFF); >> + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); >> + >> + ngx_conf_merge_str_value(conf->client_certificate, prev->client_certificate, ""); >> + ngx_conf_merge_str_value(conf->trusted_certificate, prev->trusted_certificate, ""); >> >> conf->ssl.log = cf->log; >> >> @@ -310,6 +359,21 @@ >> return NGX_CONF_ERROR; >> } >> >> + if (conf->verify) { >> + if (conf->client_certificate.len == 0 && conf->verify != NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA) { >> + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, >> + "no ssl_client_certificate for ssl_client_verify"); >> + return NGX_CONF_ERROR; >> + } >> + >> + if (ngx_ssl_client_certificate(cf, &conf->ssl, >> + &conf->client_certificate, >> + conf->verify_depth) >> + != NGX_OK) { >> + return NGX_CONF_ERROR; >> + } >> + } >> + > > This code looks incomplete (and there are style issues). E.g., it > doesn't looks like trusted certificates are loaded at all. > > It also lacks ssl_crl support, which is also directly related to > client certificates. Possibly, I'll take a closer look at ngx_ssl_* and the http module again. I might've misunderstood something there. > >> if (conf->prefer_server_ciphers) { >> SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); >> } >> diff -r 4aa64f695031 -r a444733105e8 src/mail/ngx_mail_ssl_module.h >> --- a/src/mail/ngx_mail_ssl_module.h Sat Jan 04 03:32:22 2014 +0400 >> +++ b/src/mail/ngx_mail_ssl_module.h Mon Jan 13 16:14:12 2014 +0100 >> @@ -37,8 +37,14 @@ >> ngx_str_t dhparam; >> ngx_str_t ecdh_curve; >> >> + ngx_str_t client_certificate; >> + ngx_str_t trusted_certificate; >> + >> ngx_str_t ciphers; >> >> + ngx_uint_t verify; >> + ngx_uint_t verify_depth; >> + >> ngx_shm_zone_t *shm_zone; >> >> ngx_array_t *session_ticket_keys; >> @@ -47,6 +53,13 @@ >> ngx_uint_t line; >> } ngx_mail_ssl_conf_t; >> >> +enum ngx_mail_ssl_verify_enum { >> + NGX_MAIL_SSL_VERIFY_OFF = 0, >> + NGX_MAIL_SSL_VERIFY_ON, >> + NGX_MAIL_SSL_VERIFY_OPTIONAL, >> + NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA, >> +}; >> + >> >> extern ngx_module_t ngx_mail_ssl_module; > > We usually avoid using enum types for enumerated configuration > slots. 
While questionable, it's currently mostly style. > > There are also other style problems here - indentation is wrong, > as well as number of empty lines. I'll remove the enum and fix the style problems. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel Cheers, Sven From ru at nginx.com Tue Jan 14 14:57:41 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 14 Jan 2014 18:57:41 +0400 Subject: Rewrite handling order In-Reply-To: References: <20140114111120.GH1835@mdounin.ru> Message-ID: <20140114145741.GP40401@lo0.su> On Tue, Jan 14, 2014 at 06:54:44PM +0530, Fasih wrote: > Thanks! Could you please explain why this is done? Modules register their handlers (at different phases of request processing) one by one, by adding an element into the corresponding array of handlers. The order in which modules do this is somewhat important. For example, let's take a look at three standard "index" modules: autoindex, index, and random_index. They are listed in auto/modules in the above mentioned sequence, and thus register their handlers in this sequence too. Their handlers are called in a reverse sequence, so it's either random index, or an index file, or an automatically generated directory listing, in this order. There are also 3rd party modules that are added to the list of modules at the end, and consequently register their handlers after the standard modules. Now imagine you wrote a custom my_index module. By processing handlers in a reverse order we give better chance for the 3rd party module to run. If we did the opposite, then nginx would check the "index.html" file existence (the index module) before even calling your handler. > On Tue, Jan 14, 2014 at 4:41 PM, Maxim Dounin wrote: > > > Hello! > > > > On Tue, Jan 14, 2014 at 04:15:32PM +0530, Fasih wrote: > > > > > Hi > > > > > > I have a custom plugin that handles rewrite (NGX_HTTP_REWRITE_PHASE). > > There > > > is another plugin compiled before my plugin that also handles rewrite > > > (HttpLuaModule). I was expecting to see that my module would rewrite > > after > > > lua is done, however that is not the case. Some debugging showed that > > > whereas my module pushed into the > > > cmcf->phases[NGX_HTTP_REWRITE_PHASE].handlers after lua, the > > > cmcf.phase_engine.handlers had lua *after* my module. The culprit seems > > to > > > be the following: > > > > > > static ngx_int_t > > > ngx_http_init_phase_handlers(ngx_conf_t *cf, ngx_http_core_main_conf_t > > > *cmcf) > > > { > > > .. > > > ph = cmcf->phase_engine.handlers; > > > .. > > > n += cmcf->phases[i].handlers.nelts; > > > > > > for (j = cmcf->phases[i].handlers.nelts - 1; j >=0; j--) { > > > ph->checker = checker; > > > ph->handler = h[j]; > > > ph->next = n; > > > ph++; > > > } > > > } > > > > > > The order is inverted here (h[j] before h[j-1]). Is this intentional or a > > > bug? > > > > It's intentional. 
> > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > -- Ruslan Ermilov From mdounin at mdounin.ru Tue Jan 14 15:31:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Jan 2014 19:31:16 +0400 Subject: [PATCH] mail_{ssl, auth_http}_module: add support for SSL client certificates In-Reply-To: <4EADB372-AE1D-4F3D-A150-BAF3D77574CD@ha.cki.ng> References: <87104D36-7406-4AA9-B701-9676738969D2@ha.cki.ng> <869DE836-5635-40C8-A86C-53AFD4B1FD30@ha.cki.ng> <20140114120805.GJ1835@mdounin.ru> <4EADB372-AE1D-4F3D-A150-BAF3D77574CD@ha.cki.ng> Message-ID: <20140114153116.GL1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 02:41:15PM +0100, Sven Peter wrote: [...] > >> @@ -1145,6 +1145,16 @@ > >> ngx_str_t login, passwd; > >> ngx_mail_core_srv_conf_t *cscf; > >> > >> +#if (NGX_MAIL_SSL) > >> + ngx_str_t ssl_client_verify; > >> + ngx_str_t ssl_client_raw_s_dn; > >> + ngx_str_t ssl_client_raw_i_dn; > >> + ngx_str_t ssl_client_raw_serial; > >> + ngx_str_t ssl_client_s_dn; > >> + ngx_str_t ssl_client_i_dn; > >> + ngx_str_t ssl_client_serial; > >> +#endif > >> + > > > > This diverges from the style used. Additionally, variable names > > seems to be too verbose. > > > How does this diverge from the style exactly? Because of the #ifdef? No, because of placement, indentation of variable names, and use of types. With names unchanged, it should look like this: ngx_str_t login, passwd; #if (NGX_MAIL_SSL) ngx_str_t ssl_client_verify, ssl_client_raw_s_dn, ssl_client_raw_i_dn, ssl_client_raw_serial, ssl_client_s_dn, ssl_client_i_dn, ssl_client_serial; #endif ngx_mail_core_srv_conf_t *cscf; > Are the variable names acceptable when then the ssl_client prefix is removed? > Or is the problem that there are client_raw_ and client_ variables? With "ssl_client" removed it looks better, but I would probably use something like "subject" instead of "s_dn", "issuer" instead of "i_dn" and so on. [...] > >> + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_s_dn, > >> + &ssl_client_s_dn) != NGX_OK) { > >> + return NULL; > >> + } > >> + > >> + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_i_dn, > >> + &ssl_client_i_dn) != NGX_OK) { > >> + return NULL; > >> + } > >> + > >> + if (ngx_mail_auth_http_escape(pool, &ssl_client_raw_serial, > >> + &ssl_client_serial) != NGX_OK) { > >> + return NULL; > >> + } > > > > On the other hand, escaping of at least client certificate serial > > number looks unneeded. > > Yes, it's unneeded. I'll remove it. The question is more about whether other parts currently escaped in your code really need escaping. Unless they can contatin CR/LF (and I think they can't, even with specially crafter certificates, but it needs checking) - it would be good idea to avoid escaping altogether. [...] > >> + { ngx_string("on"), NGX_MAIL_SSL_VERIFY_ON }, > >> + { ngx_string("optional"), NGX_MAIL_SSL_VERIFY_OPTIONAL }, > >> + { ngx_string("optional_no_ca"), NGX_MAIL_SSL_VERIFY_OPTIONAL_NO_CA }, > > > > As noted above, "optional_no_ca" makes no sense without a way to > > pass a certificate to some backend. > > I've adapted this code from the http module (which exposes the raw certificate) without > thinking about the use cases. 
> > This leaves two options imho: > > 1) Remove the optional_no_ca entirely and let someone else submit another patch if required :-) > 2) Add an additional HTTP header for the raw certificate > > I'd go for the first one. Comments? I'm ok with both variants. [...] -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Jan 14 23:25:32 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 14 Jan 2014 23:25:32 +0000 Subject: [nginx] SPDY: fixed build, broken by b7ee1bae0ffa. Message-ID: details: http://hg.nginx.org/nginx/rev/439d05a037a3 branches: changeset: 5516:439d05a037a3 user: Valentin Bartenev date: Wed Jan 15 01:44:52 2014 +0400 description: SPDY: fixed build, broken by b7ee1bae0ffa. False positive warning about the "cl" variable may be uninitialized in the ngx_http_spdy_filter_get_data_frame() call was suppressed. It is always initialized either in the "while" cycle or in the following "if" condition since frame_size cannot be zero. diffstat: src/http/ngx_http_spdy_filter_module.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r e5fb14e85040 -r 439d05a037a3 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 01:44:52 2014 +0400 @@ -665,6 +665,10 @@ ngx_http_spdy_send_chain(ngx_connection_ offset = 0; } +#if (NGX_SUPPRESS_WARN) + cl = NULL; +#endif + slcf = ngx_http_get_module_loc_conf(r, ngx_http_spdy_module); frame_size = (limit && limit <= (off_t) slcf->chunk_size) From vbart at nginx.com Wed Jan 15 09:32:03 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 15 Jan 2014 09:32:03 +0000 Subject: [nginx] SPDY: fixed off_t/size_t type conversions on 32 bits pla... Message-ID: details: http://hg.nginx.org/nginx/rev/9d1479234f3c branches: changeset: 5517:9d1479234f3c user: Valentin Bartenev date: Wed Jan 15 13:23:31 2014 +0400 description: SPDY: fixed off_t/size_t type conversions on 32 bits platforms. Parameters of ngx_http_spdy_filter_get_shadow() are changed from size_t to off_t since the last call of the function may get size and offset from the rest of a file buffer. This fixes possible data loss rightfully complained by MSVC on 32 bits systems where off_t is 8 bytes long while size_t is only 4 bytes. The other two type casts are needed just to suppress warnings about possible data loss also complained by MSVC but false positive in these cases. 
diffstat: src/http/ngx_http_spdy_filter_module.c | 9 ++++----- 1 files changed, 4 insertions(+), 5 deletions(-) diffs (40 lines): diff -r 439d05a037a3 -r 9d1479234f3c src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 01:44:52 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 13:23:31 2014 +0400 @@ -35,8 +35,7 @@ static ngx_inline ngx_int_t ngx_http_spd ngx_connection_t *fc, ngx_http_spdy_stream_t *stream); static ngx_chain_t *ngx_http_spdy_filter_get_shadow( - ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, size_t offset, - size_t size); + ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, off_t offset, off_t size); static ngx_http_spdy_out_frame_t *ngx_http_spdy_filter_get_data_frame( ngx_http_spdy_stream_t *stream, size_t len, ngx_chain_t *first, ngx_chain_t *last); @@ -702,7 +701,7 @@ ngx_http_spdy_send_chain(ngx_connection_ *ln = cl; ln = &cl->next; - rest -= size; + rest -= (size_t) size; in = in->next; if (in == NULL) { @@ -752,7 +751,7 @@ ngx_http_spdy_send_chain(ngx_connection_ } if (limit < (off_t) slcf->chunk_size) { - frame_size = limit; + frame_size = (size_t) limit; } } } @@ -777,7 +776,7 @@ ngx_http_spdy_send_chain(ngx_connection_ static ngx_chain_t * ngx_http_spdy_filter_get_shadow(ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, - size_t offset, size_t size) + off_t offset, off_t size) { ngx_buf_t *chunk; ngx_chain_t *cl; From faskiri.devel at gmail.com Wed Jan 15 12:00:54 2014 From: faskiri.devel at gmail.com (Fasih) Date: Wed, 15 Jan 2014 17:30:54 +0530 Subject: Rewrite handling order In-Reply-To: <20140114145741.GP40401@lo0.su> References: <20140114111120.GH1835@mdounin.ru> <20140114145741.GP40401@lo0.su> Message-ID: I see, thanks for the explanation. On Tue, Jan 14, 2014 at 8:27 PM, Ruslan Ermilov wrote: > On Tue, Jan 14, 2014 at 06:54:44PM +0530, Fasih wrote: > > Thanks! Could you please explain why this is done? > > Modules register their handlers (at different phases > of request processing) one by one, by adding an element > into the corresponding array of handlers. The order > in which modules do this is somewhat important. > > For example, let's take a look at three standard "index" > modules: autoindex, index, and random_index. They are > listed in auto/modules in the above mentioned sequence, > and thus register their handlers in this sequence too. > Their handlers are called in a reverse sequence, so it's > either random index, or an index file, or an automatically > generated directory listing, in this order. > > There are also 3rd party modules that are added to the > list of modules at the end, and consequently register > their handlers after the standard modules. Now imagine > you wrote a custom my_index module. By processing > handlers in a reverse order we give better chance for > the 3rd party module to run. If we did the opposite, > then nginx would check the "index.html" file existence > (the index module) before even calling your handler. > > > On Tue, Jan 14, 2014 at 4:41 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Tue, Jan 14, 2014 at 04:15:32PM +0530, Fasih wrote: > > > > > > > Hi > > > > > > > > I have a custom plugin that handles rewrite (NGX_HTTP_REWRITE_PHASE). > > > There > > > > is another plugin compiled before my plugin that also handles rewrite > > > > (HttpLuaModule). I was expecting to see that my module would rewrite > > > after > > > > lua is done, however that is not the case. 
Some debugging showed that > > > > whereas my module pushed into the > > > > cmcf->phases[NGX_HTTP_REWRITE_PHASE].handlers after lua, the > > > > cmcf.phase_engine.handlers had lua *after* my module. The culprit > seems > > > to > > > > be the following: > > > > > > > > static ngx_int_t > > > > ngx_http_init_phase_handlers(ngx_conf_t *cf, > ngx_http_core_main_conf_t > > > > *cmcf) > > > > { > > > > .. > > > > ph = cmcf->phase_engine.handlers; > > > > .. > > > > n += cmcf->phases[i].handlers.nelts; > > > > > > > > for (j = cmcf->phases[i].handlers.nelts - 1; j >=0; j--) { > > > > ph->checker = checker; > > > > ph->handler = h[j]; > > > > ph->next = n; > > > > ph++; > > > > } > > > > } > > > > > > > > The order is inverted here (h[j] before h[j-1]). Is this intentional > or a > > > > bug? > > > > > > It's intentional. > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > -- > Ruslan Ermilov > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Jan 15 17:34:28 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 15 Jan 2014 17:34:28 +0000 Subject: [nginx] SPDY: fixed possible uninitialized memory access. Message-ID: details: http://hg.nginx.org/nginx/rev/ec9e9da4c1fb branches: changeset: 5518:ec9e9da4c1fb user: Valentin Bartenev date: Wed Jan 15 17:16:38 2014 +0400 description: SPDY: fixed possible uninitialized memory access. The frame->stream pointer should always be initialized for control frames since the check against it can be performed in ngx_http_spdy_filter_cleanup(). diffstat: src/http/ngx_http_spdy.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (29 lines): diff -r 9d1479234f3c -r ec9e9da4c1fb src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 15 13:23:31 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 15 17:16:38 2014 +0400 @@ -1633,8 +1633,8 @@ ngx_http_spdy_send_settings(ngx_http_spd frame->first = cl; frame->last = cl; frame->handler = ngx_http_spdy_settings_frame_handler; + frame->stream = NULL; #if (NGX_DEBUG) - frame->stream = NULL; frame->size = NGX_SPDY_FRAME_HEADER_SIZE + NGX_SPDY_SETTINGS_NUM_SIZE + NGX_SPDY_SETTINGS_PAIR_SIZE; @@ -1722,6 +1722,7 @@ ngx_http_spdy_get_ctl_frame(ngx_http_spd frame->first = cl; frame->last = cl; frame->handler = ngx_http_spdy_ctl_frame_handler; + frame->stream = NULL; } frame->free = NULL; @@ -1733,7 +1734,6 @@ ngx_http_spdy_get_ctl_frame(ngx_http_spd return NULL; } - frame->stream = NULL; frame->size = size; #endif From vbart at nginx.com Wed Jan 15 17:34:30 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 15 Jan 2014 17:34:30 +0000 Subject: [nginx] SPDY: the SETTINGS frame should be allocated from sc->pool. Message-ID: details: http://hg.nginx.org/nginx/rev/22c249dac7c1 branches: changeset: 5519:22c249dac7c1 user: Valentin Bartenev date: Wed Jan 15 17:16:38 2014 +0400 description: SPDY: the SETTINGS frame should be allocated from sc->pool. There is no reason to allocate it from connection pool that more like just a bug especially since ngx_http_spdy_settings_frame_handler() already uses sc->pool to free a chain. 
diffstat: src/http/ngx_http_spdy.c | 13 +++++-------- 1 files changed, 5 insertions(+), 8 deletions(-) diffs (38 lines): diff -r ec9e9da4c1fb -r 22c249dac7c1 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 15 17:16:38 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 15 17:16:38 2014 +0400 @@ -1598,7 +1598,6 @@ ngx_http_spdy_send_settings(ngx_http_spd { u_char *p; ngx_buf_t *buf; - ngx_pool_t *pool; ngx_chain_t *cl; ngx_http_spdy_srv_conf_t *sscf; ngx_http_spdy_out_frame_t *frame; @@ -1606,21 +1605,19 @@ ngx_http_spdy_send_settings(ngx_http_spd ngx_log_debug0(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, "spdy create SETTINGS frame"); - pool = sc->connection->pool; - - frame = ngx_palloc(pool, sizeof(ngx_http_spdy_out_frame_t)); + frame = ngx_palloc(sc->pool, sizeof(ngx_http_spdy_out_frame_t)); if (frame == NULL) { return NGX_ERROR; } - cl = ngx_alloc_chain_link(pool); + cl = ngx_alloc_chain_link(sc->pool); if (cl == NULL) { return NGX_ERROR; } - buf = ngx_create_temp_buf(pool, NGX_SPDY_FRAME_HEADER_SIZE - + NGX_SPDY_SETTINGS_NUM_SIZE - + NGX_SPDY_SETTINGS_PAIR_SIZE); + buf = ngx_create_temp_buf(sc->pool, NGX_SPDY_FRAME_HEADER_SIZE + + NGX_SPDY_SETTINGS_NUM_SIZE + + NGX_SPDY_SETTINGS_PAIR_SIZE); if (buf == NULL) { return NGX_ERROR; } From vbart at nginx.com Wed Jan 15 17:34:31 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 15 Jan 2014 17:34:31 +0000 Subject: [nginx] SPDY: send output queue after processing of read event. Message-ID: details: http://hg.nginx.org/nginx/rev/a336cbc3dd44 branches: changeset: 5520:a336cbc3dd44 user: Valentin Bartenev date: Wed Jan 15 17:16:38 2014 +0400 description: SPDY: send output queue after processing of read event. During the processing of input some control frames can be added to the queue. And if there were no writing streams at the moment, these control frames might be left unsent for a long time (or even forever). This long delay is especially critical for PING replies since a client can consider connection as broken and then resend exactly the same request over a new connection, which is not safe in case of non-idempotent HTTP methods. diffstat: src/http/ngx_http_spdy.c | 5 +++++ 1 files changed, 5 insertions(+), 0 deletions(-) diffs (15 lines): diff -r 22c249dac7c1 -r a336cbc3dd44 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 15 17:16:38 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 15 17:16:38 2014 +0400 @@ -378,6 +378,11 @@ ngx_http_spdy_read_handler(ngx_event_t * return; } + if (sc->last_out && ngx_http_spdy_send_output_queue(sc) == NGX_ERROR) { + ngx_http_spdy_finalize_connection(sc, NGX_HTTP_CLIENT_CLOSED_REQUEST); + return; + } + sc->blocked = 0; if (sc->processing) { From ravivsn at gmail.com Thu Jan 16 22:56:53 2014 From: ravivsn at gmail.com (Ravi Chunduru) Date: Thu, 16 Jan 2014 14:56:53 -0800 Subject: question on ngx_reset_pool() 'current' pointer not reset Message-ID: Hi Nginx experts, I am new to nginx and started looking into the code to understand the architecture. Currently, I am looking into nginx pool implementation. I have a question on ngx_reset_pool(). It seems to set back 'last' to the location as expected. But why 'current' and 'failed' are not reset. Does it not make those memory blocks which are not no more referenced by parsing from 'current' made useless? Thanks, -- Ravi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Jan 17 00:07:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 04:07:24 +0400 Subject: question on ngx_reset_pool() 'current' pointer not reset In-Reply-To: References: Message-ID: <20140117000724.GK1835@mdounin.ru> Hello! On Thu, Jan 16, 2014 at 02:56:53PM -0800, Ravi Chunduru wrote: > Hi Nginx experts, > I am new to nginx and started looking into the code to understand the > architecture. > > Currently, I am looking into nginx pool implementation. I have a question > on ngx_reset_pool(). > It seems to set back 'last' to the location as expected. But why 'current' > and 'failed' are not reset. > > Does it not make those memory blocks which are not no more referenced by > parsing from 'current' made useless? It looks like a bug. Mostly harmless though - the ngx_reset_pool() is a function introduced specifically to save some memory while reading big geo databases. Since introduction it's only used in the geo module, and I don't think it's possible to hit p->current != pool case there. -- Maxim Dounin http://nginx.org/ From ravivsn at gmail.com Fri Jan 17 00:21:12 2014 From: ravivsn at gmail.com (Ravi Chunduru) Date: Thu, 16 Jan 2014 16:21:12 -0800 Subject: question on ngx_reset_pool() 'current' pointer not reset In-Reply-To: <20140117000724.GK1835@mdounin.ru> References: <20140117000724.GK1835@mdounin.ru> Message-ID: Hi Maxim, Thanks for the reply. If any module consumes lot of memory, resets and still goes on consume more memory than earlier we could end up with lots of wasted memory. You are right that we would not end with current as NULL as long as memory available. Thanks, -Ravi. On Thu, Jan 16, 2014 at 4:07 PM, Maxim Dounin wrote: > Hello! > > On Thu, Jan 16, 2014 at 02:56:53PM -0800, Ravi Chunduru wrote: > > > Hi Nginx experts, > > I am new to nginx and started looking into the code to understand the > > architecture. > > > > Currently, I am looking into nginx pool implementation. I have a question > > on ngx_reset_pool(). > > It seems to set back 'last' to the location as expected. But why > 'current' > > and 'failed' are not reset. > > > > Does it not make those memory blocks which are not no more referenced by > > parsing from 'current' made useless? > > It looks like a bug. Mostly harmless though - the > ngx_reset_pool() is a function introduced specifically to save > some memory while reading big geo databases. Since introduction > it's only used in the geo module, and I don't think it's possible > to hit p->current != pool case there. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Ravi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ravivsn at gmail.com Fri Jan 17 02:22:58 2014 From: ravivsn at gmail.com (Ravi Chunduru) Date: Thu, 16 Jan 2014 18:22:58 -0800 Subject: nginx array utility pool usage Message-ID: Hi Nginx experts, Thanks for the prompt reply to my earlier email on ngx_reset_pool() Now, I am looking into ngx_array.c. I found an issue ngx_array_push(). Here are the details. nginx will check if number of elements is equal to capacity of the array. If there is no space in the memory block, it allocates a new memory block with twice the size of array and copies over the elements. So far so good. Assume that pool utility got entirely new memory block then a->pool is not updated with that of 'pool->current'. 
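[For readers following along: the push logic being discussed looks roughly like this -- a condensed sketch of ngx_array_push() from src/core/ngx_array.c of that era, assuming the usual nginx headers; minor details may differ from the exact revision under discussion.]

    void *
    ngx_array_push(ngx_array_t *a)
    {
        void        *elt, *new;
        size_t       size;
        ngx_pool_t  *p;

        if (a->nelts == a->nalloc) {

            /* the array is full */

            size = a->size * a->nalloc;

            p = a->pool;

            if ((u_char *) a->elts + size == p->d.last
                && p->d.last + a->size <= p->d.end)
            {
                /* the array is the last allocation in this pool block:
                 * grow it in place by one element */
                p->d.last += a->size;
                a->nalloc++;

            } else {
                /* allocate a new array twice as large and copy the elements */
                new = ngx_palloc(p, 2 * size);
                if (new == NULL) {
                    return NULL;
                }

                ngx_memcpy(new, a->elts, size);
                a->elts = new;
                a->nalloc *= 2;
            }
        }

        elt = (u_char *) a->elts + a->size * a->nelts;
        a->nelts++;

        return elt;
    }

Note that a->pool here is always the pool passed to ngx_array_create()/ngx_array_init(), i.e. the head block of the pool, which is what the in-place growth check above compares against -- exactly the point being discussed in this thread.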
I got an assumption from the code that a->pool is always the memory block that has the array elements by seeing the code in ngx_array_push(), ngx_array_push_n() or ngx_array_destroy() where checks were always done with pool pointer in array. Functionalities issues would come up once there is an array overflow. I think for every new push of element after first crossing/overflow of the capacity, nginx will keep on creating new array. Thus it results in wastage of memory. Please let me know if its a issue or correct my understanding. Thanks, -- Ravi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ravivsn at gmail.com Fri Jan 17 02:38:51 2014 From: ravivsn at gmail.com (Ravi Chunduru) Date: Thu, 16 Jan 2014 18:38:51 -0800 Subject: nginx array utility pool usage In-Reply-To: References: Message-ID: A little correction, a->pool is nothing to do with pool->current. But array need to have a pointer to pool data of that of memory block that holds the array elements. Then all checks done in ngx_array_push() etc., will be correct. On Thu, Jan 16, 2014 at 6:22 PM, Ravi Chunduru wrote: > Hi Nginx experts, > Thanks for the prompt reply to my earlier email on ngx_reset_pool() > > Now, I am looking into ngx_array.c. I found an issue ngx_array_push(). > Here are the details. > nginx will check if number of elements is equal to capacity of the array. > If there is no space in the memory block, it allocates a new memory block > with twice the size of array and copies over the elements. So far so good. > Assume that pool utility got entirely new memory block then a->pool is not > updated with that of 'pool->current'. > > I got an assumption from the code that a->pool is always the memory block > that has the array elements by seeing the code in ngx_array_push(), > ngx_array_push_n() or ngx_array_destroy() where checks were always done > with pool pointer in array. > > Functionalities issues would come up once there is an array overflow. I > think for every new push of element after first crossing/overflow of the > capacity, nginx will keep on creating new array. Thus it results in wastage > of memory. > > Please let me know if its a issue or correct my understanding. > > Thanks, > -- > Ravi > -- Ravi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jan 17 02:40:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 06:40:27 +0400 Subject: nginx array utility pool usage In-Reply-To: References: Message-ID: <20140117024026.GL1835@mdounin.ru> Hello! On Thu, Jan 16, 2014 at 06:22:58PM -0800, Ravi Chunduru wrote: > Hi Nginx experts, > Thanks for the prompt reply to my earlier email on ngx_reset_pool() > > Now, I am looking into ngx_array.c. I found an issue ngx_array_push(). Here > are the details. > nginx will check if number of elements is equal to capacity of the array. > If there is no space in the memory block, it allocates a new memory block > with twice the size of array and copies over the elements. So far so good. > Assume that pool utility got entirely new memory block then a->pool is not > updated with that of 'pool->current'. > > I got an assumption from the code that a->pool is always the memory block > that has the array elements by seeing the code in ngx_array_push(), > ngx_array_push_n() or ngx_array_destroy() where checks were always done > with pool pointer in array. > > Functionalities issues would come up once there is an array overflow. 
I > think for every new push of element after first crossing/overflow of the > capacity, nginx will keep on creating new array. Thus it results in wastage > of memory. > > Please let me know if its a issue or correct my understanding. That's expected behaviour. Arrays are implemented in a way that allocates additional memory on overflows, and it's expected to happen. There is a ngx_list_t structure to be used if such additional memory allocations are undesired. Optimization of allocations which uses pool internals is just an optimization and it's not expected to always succeed. -- Maxim Dounin http://nginx.org/ From ravivsn at gmail.com Fri Jan 17 03:16:50 2014 From: ravivsn at gmail.com (Ravi Chunduru) Date: Thu, 16 Jan 2014 19:16:50 -0800 Subject: nginx array utility pool usage In-Reply-To: <20140117024026.GL1835@mdounin.ru> References: <20140117024026.GL1835@mdounin.ru> Message-ID: Hi Maxim, I understand on array overflow nginx creates a new memory block. That is perfectly alright. Say I have a Pool and array is allocated from the first memory block and it happend such a way that array elements end at pool->d.last. Now, say the pool is used for some other purposes such a way that pool->current is now pointing to a different memory block say '2' And if we want to allocate a few more array elements, nginx has to use second memory block. Now the elements are moved to second memory block. At this stage, if any new element is requested that results in overflow, nginx does the below checks if ((u_char *) a->elts + size == p->d.last && p->d.last + a->size <= p->d.end) In the above code, p->d.last was actually pointing to the end of first memory block but not second memory block. Hence *even though there is memory available in second memory block* it will go ahead and create a new memory block. This will repeat on each overflow. And, the code in ngx_array_destroy() will actually set the pointer(p->d.last) wrongly once there is a overflow. This is a critical issue. First memory block would have say 'n' elements but after overflow number of elements become 2 times of n. Lets say after second overflow, I destroyed the array, then p->d.last will be set backwards by 2 times in the first memory block. But, in actuality it was size 'n'. Nginx never faces that situation because, once a memory block is set to 'failed', it wont be used for allocation any more. But, if the 'failed' count is less than 4 then we may have issue and also pool destroy may have potential issues. Sorry for long email, but I wanted to explain that in detail. Thanks, -Ravi. On Thu, Jan 16, 2014 at 6:40 PM, Maxim Dounin wrote: > Hello! > > On Thu, Jan 16, 2014 at 06:22:58PM -0800, Ravi Chunduru wrote: > > > Hi Nginx experts, > > Thanks for the prompt reply to my earlier email on ngx_reset_pool() > > > > Now, I am looking into ngx_array.c. I found an issue ngx_array_push(). > Here > > are the details. > > nginx will check if number of elements is equal to capacity of the array. > > If there is no space in the memory block, it allocates a new memory block > > with twice the size of array and copies over the elements. So far so > good. > > Assume that pool utility got entirely new memory block then a->pool is > not > > updated with that of 'pool->current'. > > > > I got an assumption from the code that a->pool is always the memory block > > that has the array elements by seeing the code in ngx_array_push(), > > ngx_array_push_n() or ngx_array_destroy() where checks were always done > > with pool pointer in array. 
> > > > Functionalities issues would come up once there is an array overflow. I > > think for every new push of element after first crossing/overflow of the > > capacity, nginx will keep on creating new array. Thus it results in > wastage > > of memory. > > > > Please let me know if its a issue or correct my understanding. > > That's expected behaviour. Arrays are implemented in a way that > allocates additional memory on overflows, and it's expected to > happen. There is a ngx_list_t structure to be used if such > additional memory allocations are undesired. Optimization of > allocations which uses pool internals is just an optimization and > it's not expected to always succeed. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Ravi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jan 17 03:54:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 07:54:27 +0400 Subject: [PATCH 1 of 7] Mail: add IMAP ID command support In-Reply-To: <0ff28c3c519125db11ae.1389700458@HPC> References: <0ff28c3c519125db11ae.1389700458@HPC> Message-ID: <20140117035427.GM1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 12:54:18PM +0100, Filipe da Silva wrote: > # HG changeset patch > # User Filipe da Silva > # Date 1389700210 -3600 > # Tue Jan 14 12:50:10 2014 +0100 > # Node ID 0ff28c3c519125db11ae3c56fbf34a7a5975a452 > # Parent d049b0ea00a388c142627f10a0ee01c5b1bedc43 > Mail: add IMAP ID command support. > add parsing of IMAP ID command and his parameter list, see RFC2971 Summary line. Detailed description, if needed, with proper capitalization and punctuation, please. > > diff -r d049b0ea00a3 -r 0ff28c3c5191 src/mail/ngx_mail.h > --- a/src/mail/ngx_mail.h Fri Jan 10 16:12:40 2014 +0100 > +++ b/src/mail/ngx_mail.h Tue Jan 14 12:50:10 2014 +0100 > @@ -215,6 +215,7 @@ > unsigned quoted:1; > unsigned backslash:1; > unsigned no_sync_literal:1; > + unsigned params_list:1; > unsigned starttls:1; > unsigned esmtp:1; > unsigned auth_method:3; > @@ -233,6 +234,7 @@ > ngx_str_t smtp_helo; > ngx_str_t smtp_from; > ngx_str_t smtp_to; > + ngx_str_t imap_client_id; Shouldn't it be "imap_id" instead? > > ngx_str_t cmd; > > @@ -284,6 +286,7 @@ > > #define NGX_IMAP_AUTHENTICATE 7 > > +#define NGX_IMAP_ID 8 > Probably both NGX_IMAP_AUTHENTICATE and NGX_IMAP_ID should be moved to other IMAP commands, with only NGX_IMAP_NEXT preserved distinct - as it's a special case of fake command to handle literals. 
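[Illustration, not part of the original review: one grouping consistent with this suggestion -- and the one the reworked patch later in this thread ends up using -- would be:]

    #define NGX_IMAP_CAPABILITY    3
    #define NGX_IMAP_NOOP          4
    #define NGX_IMAP_STARTTLS      5
    #define NGX_IMAP_AUTHENTICATE  6
    #define NGX_IMAP_ID            7

    #define NGX_IMAP_NEXT          8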
> #define NGX_SMTP_HELO 1 > #define NGX_SMTP_EHLO 2 > diff -r d049b0ea00a3 -r 0ff28c3c5191 src/mail/ngx_mail_imap_handler.c > --- a/src/mail/ngx_mail_imap_handler.c Fri Jan 10 16:12:40 2014 +0100 > +++ b/src/mail/ngx_mail_imap_handler.c Tue Jan 14 12:50:10 2014 +0100 > @@ -16,6 +16,8 @@ > ngx_connection_t *c); > static ngx_int_t ngx_mail_imap_authenticate(ngx_mail_session_t *s, > ngx_connection_t *c); > +static ngx_int_t ngx_mail_imap_id(ngx_mail_session_t *s, > + ngx_connection_t *c); > static ngx_int_t ngx_mail_imap_capability(ngx_mail_session_t *s, > ngx_connection_t *c); > static ngx_int_t ngx_mail_imap_starttls(ngx_mail_session_t *s, > @@ -32,6 +34,9 @@ > static u_char imap_bye[] = "* BYE" CRLF; > static u_char imap_invalid_command[] = "BAD invalid command" CRLF; > > +static ngx_str_t ngx_mail_imap_client_id_nil = ngx_string("ID NIL"); > +static ngx_str_t ngx_mail_imap_server_id_nil = ngx_string("* ID NIL" CRLF); > + > Using strings in a same way as the other code here may be a good idea. > void > ngx_mail_imap_init_session(ngx_mail_session_t *s, ngx_connection_t *c) > @@ -179,6 +184,10 @@ Please use [diff] showfunc=1 in your hgrc. Having function names in diffs greatly simplifies reading patches. > tag = (rc != NGX_OK); > break; > > + case NGX_IMAP_ID: > + rc = ngx_mail_imap_id(s, c); > + break; > + > case NGX_IMAP_CAPABILITY: > rc = ngx_mail_imap_capability(s, c); > break; I would rather recommend moving this after the CAPABILITY command handling. > @@ -292,6 +301,60 @@ > ngx_mail_send(c->write); > } > > +static ngx_int_t Two empty lines between functions, please. http://nginx.org/en/docs/contributing_changes.html > +ngx_mail_imap_id(ngx_mail_session_t *s, ngx_connection_t *c) > +{ This is certainly a wrong place to put this function. Both prototype and switch suggests it should be after ngx_mail_imap_authenticate(), but for some reason you've placed the function body in an arbitrary place. > + ngx_str_t *arg; > + size_t size, i; > + u_char *p, *data; Proper order would be: u_char *p, *data; size_t size, i; ngx_str_t *arg; > + > + arg = s->args.elts; > + if (s->args.nelts < 1 || arg[0].len == 0) { > + return NGX_MAIL_PARSE_INVALID_COMMAND; > + } The arg[0].len == 0 check seems unneeded. > + > + // Client sends ID NIL or ID ( ... ) No C99 comments, please. > + if (s->args.nelts == 1) { > + > + if (ngx_strncasecmp(arg[0].data, (u_char *) "NIL", 3) != 0) > + return NGX_MAIL_PARSE_INVALID_COMMAND; Never ever use if() without curly brackets, please. It's not only a style issue but also may easily result in wrong code as there are macros which aren't single statements. > + > + s->imap_client_id = ngx_mail_imap_client_id_nil; > + > + } else { > + size = sizeof("ID (") - 1; > + for (i = 0; i < s->args.nelts; i++) { > + size += 1 + arg[i].len + 2; // 1 space plus 2 quotes > + } > + > + data = ngx_pnalloc(c->pool, size); > + if (data == NULL) { > + return NGX_ERROR; > + } > + > + p = ngx_cpymem(data, "ID (", sizeof("ID (") - 1); > + for (i = 0; i < s->args.nelts; i++) { > + *p++ = '"'; > + p = ngx_cpymem(p, arg[i].data, arg[i].len); > + *p++ = '"'; > + *p++ = ' '; > + } > + *--p = ')'; // replace last space > + > + s->imap_client_id.len = size; > + s->imap_client_id.data = data; This doesn't look correct. E.g., arguments may contain quotes. 
> + } > + > + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "imap client ID:\"%V%V\"", > + &s->tag, &s->imap_client_id); > + > + // Prepare server response to ID command > + s->text = ngx_mail_imap_server_id_nil; > + > + return NGX_OK; > +} > + > > static ngx_int_t > ngx_mail_imap_login(ngx_mail_session_t *s, ngx_connection_t *c) > diff -r d049b0ea00a3 -r 0ff28c3c5191 src/mail/ngx_mail_parse.c > --- a/src/mail/ngx_mail_parse.c Fri Jan 10 16:12:40 2014 +0100 > +++ b/src/mail/ngx_mail_parse.c Tue Jan 14 12:50:10 2014 +0100 > @@ -279,6 +279,16 @@ > c = s->cmd_start; > > switch (p - c) { > + case 2: > + if ((c[0] == 'I' || c[0] == 'i') > + && (c[1] == 'D'|| c[1] == 'd')) > + { > + s->command = NGX_IMAP_ID; > + > + } else { > + goto invalid; > + } > + break; > > case 4: > if ((c[0] == 'N' || c[0] == 'n') The empty line between "switch" and "case" was here intentionally. > @@ -409,14 +419,31 @@ > case ' ': > break; > case CR: > + if (s->params_list == 1) > + goto invalid; > state = sw_almost_done; > s->arg_end = p; > break; > case LF: > + if (s->params_list == 1) > + goto invalid; > s->arg_end = p; > goto done; > + case '(': // params list begin > + if (!s->params_list && s->args.nelts == 0) { > + s->params_list = 1; > + break; > + } > + goto invalid; It's good idea to use one form of tests, either "s->params_list / !s->params_list", or "s->params_list == 0 / s->params_list == 1". > + case ')': // params list closing > + if (s->params_list == 1 && s->args.nelts > 0) { The ID command allows an empty list as per it's formal syntax. > + s->params_list = 0; > + state = sw_spaces_before_argument; > + break; > + } > + goto invalid; > case '"': > - if (s->args.nelts <= 2) { > + if (s->args.nelts <= 2 || s->params_list) { Completely unlimited number of arguments looks wrong, especially considering "Implementations MUST NOT send more than 30 field-value pairs" clause in RFC 2971. > s->quoted = 1; > s->arg_start = p + 1; > state = sw_argument; > @@ -430,7 +457,7 @@ > } > goto invalid; > default: > - if (s->args.nelts <= 2) { > + if (s->args.nelts <= 2 && !s->params_list) { This effectively forbids atoms within lists, and also forbids NIL as there is no special handling. The NIL is perfectly allowed even in ID command parameters list, not even talking about lists in general. > s->arg_start = p; > state = sw_argument; > break; > @@ -602,6 +629,7 @@ > s->quoted = 0; > s->no_sync_literal = 0; > s->literal_len = 0; > + s->params_list = 0; > } > > s->state = (s->command != NGX_IMAP_AUTHENTICATE) ? sw_start : sw_argument; > @@ -614,6 +642,7 @@ > s->quoted = 0; > s->no_sync_literal = 0; > s->literal_len = 0; > + s->params_list = 0; > > return NGX_MAIL_PARSE_INVALID_COMMAND; > } -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 17 04:04:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 08:04:45 +0400 Subject: nginx array utility pool usage In-Reply-To: References: <20140117024026.GL1835@mdounin.ru> Message-ID: <20140117040445.GN1835@mdounin.ru> Hello! On Thu, Jan 16, 2014 at 07:16:50PM -0800, Ravi Chunduru wrote: > Hi Maxim, > I understand on array overflow nginx creates a new memory block. That is > perfectly alright. > > Say I have a Pool and array is allocated from the first memory block and it > happend such a way that array elements end at pool->d.last. 
> > Now, say the pool is used for some other purposes such a way that > pool->current is now pointing to a different memory block say '2' > > And if we want to allocate a few more array elements, nginx has to use > second memory block. Now the elements are moved to second memory block. > > At this stage, if any new element is requested that results in overflow, > nginx does the below checks > > if ((u_char *) a->elts + size == p->d.last > > && p->d.last + a->size <= p->d.end) > > In the above code, p->d.last was actually pointing to the end of first > memory block but not second memory block. Hence *even though there is > memory available in second memory block* it will go ahead and create a new > memory block. This will repeat on each overflow. Yes, I understand what you are talking about. As array never knows from which exactly block the memory was allocated, and doesn't want to dig into pool internals, it only checks an obvious case - if the memory was allocated from the first block, and if there is a room there. > And, the code in ngx_array_destroy() will actually set the > pointer(p->d.last) wrongly once there is a overflow. This is a critical > issue. > First memory block would have say 'n' elements but after overflow number of > elements become 2 times of n. > Lets say after second overflow, I destroyed the array, then p->d.last will > be set backwards by 2 times in the first memory block. But, in actuality it > was size 'n'. The code in ngx_array_destroy() does the following: if ((u_char *) a->elts + a->size * a->nalloc == p->d.last) { p->d.last -= a->size * a->nalloc; } If a memory was allocated from another block, the "a->elts + ... == p->d.last" check will fail and p->d.last will not be moved. > > Nginx never faces that situation because, once a memory block is set to > 'failed', it wont be used for allocation any more. But, if the 'failed' > count is less than 4 then we may have issue and also pool destroy may have > potential issues. > > Sorry for long email, but I wanted to explain that in detail. > > Thanks, > -Ravi. > > > > On Thu, Jan 16, 2014 at 6:40 PM, Maxim Dounin wrote: > > > Hello! > > > > On Thu, Jan 16, 2014 at 06:22:58PM -0800, Ravi Chunduru wrote: > > > > > Hi Nginx experts, > > > Thanks for the prompt reply to my earlier email on ngx_reset_pool() > > > > > > Now, I am looking into ngx_array.c. I found an issue ngx_array_push(). > > Here > > > are the details. > > > nginx will check if number of elements is equal to capacity of the array. > > > If there is no space in the memory block, it allocates a new memory block > > > with twice the size of array and copies over the elements. So far so > > good. > > > Assume that pool utility got entirely new memory block then a->pool is > > not > > > updated with that of 'pool->current'. > > > > > > I got an assumption from the code that a->pool is always the memory block > > > that has the array elements by seeing the code in ngx_array_push(), > > > ngx_array_push_n() or ngx_array_destroy() where checks were always done > > > with pool pointer in array. > > > > > > Functionalities issues would come up once there is an array overflow. I > > > think for every new push of element after first crossing/overflow of the > > > capacity, nginx will keep on creating new array. Thus it results in > > wastage > > > of memory. > > > > > > Please let me know if its a issue or correct my understanding. > > > > That's expected behaviour. 
Arrays are implemented in a way that > > allocates additional memory on overflows, and it's expected to > > happen. There is a ngx_list_t structure to be used if such > > additional memory allocations are undesired. Optimization of > > allocations which uses pool internals is just an optimization and > > it's not expected to always succeed. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > -- > Ravi > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 17 04:07:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 08:07:39 +0400 Subject: [PATCH 2 of 7] Mail: add IMAP client ID value to mail auth script In-Reply-To: References: Message-ID: <20140117040739.GO1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 12:54:19PM +0100, Filipe da Silva wrote: > # HG changeset patch > # User Filipe da Silva > # Date 1389700230 -3600 > # Tue Jan 14 12:50:30 2014 +0100 > # Node ID ece46b257e8d31a1a7a81bf5fcdd0271c1dc2318 > # Parent 0ff28c3c519125db11ae3c56fbf34a7a5975a452 > Mail: add IMAP client ID value to mail auth script. > > diff -r 0ff28c3c5191 -r ece46b257e8d src/mail/ngx_mail_auth_http_module.c > --- a/src/mail/ngx_mail_auth_http_module.c Tue Jan 14 12:50:10 2014 +0100 > +++ b/src/mail/ngx_mail_auth_http_module.c Tue Jan 14 12:50:30 2014 +0100 > @@ -1176,6 +1176,11 @@ > + ahcf->header.len > + sizeof(CRLF) - 1; > > + if (s->protocol == NGX_MAIL_IMAP_PROTOCOL) { > + len += sizeof("Client-IMAP-ID: ") - 1 > + + s->imap_client_id.len + sizeof(CRLF) - 1; > + } > + Auth-IMAP-ID would be more in-line with other names used. > b = ngx_create_temp_buf(pool, len); > if (b == NULL) { > return NULL; > @@ -1254,6 +1259,13 @@ > *b->last++ = CR; *b->last++ = LF; > > } > + if (s->protocol == NGX_MAIL_IMAP_PROTOCOL) { > + b->last = ngx_cpymem(b->last, "Client-IMAP-ID: ", > + sizeof("Client-IMAP-ID: ") - 1); > + b->last = ngx_copy(b->last, > + s->imap_client_id.data, s->imap_client_id.len); > + *b->last++ = CR; *b->last++ = LF; > + } This will create a security hole, as ID parameters may contain anything. > > if (ahcf->header.len) { > b->last = ngx_copy(b->last, ahcf->header.data, ahcf->header.len); -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 17 04:11:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 08:11:28 +0400 Subject: [PATCH 3 of 7] Mail: add ID to the 'ngx_mail_imap_default_capabilities' list In-Reply-To: <56df02d0dad9e7746fed.1389700460@HPC> References: <56df02d0dad9e7746fed.1389700460@HPC> Message-ID: <20140117041127.GP1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 12:54:20PM +0100, Filipe da Silva wrote: > # HG changeset patch > # User Filipe da Silva > # Date 1389700241 -3600 > # Tue Jan 14 12:50:41 2014 +0100 > # Node ID 56df02d0dad9e7746fed311c88787fcb3ea902d7 > # Parent ece46b257e8d31a1a7a81bf5fcdd0271c1dc2318 > Mail: add ID to the 'ngx_mail_imap_default_capabilities' list. 
> > diff -r ece46b257e8d -r 56df02d0dad9 src/mail/ngx_mail_imap_module.c > --- a/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:30 2014 +0100 > +++ b/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:41 2014 +0100 > @@ -18,6 +18,7 @@ > > > static ngx_str_t ngx_mail_imap_default_capabilities[] = { > + ngx_string("ID"), > ngx_string("IMAP4"), > ngx_string("IMAP4rev1"), > ngx_string("UIDPLUS"), > @@ -122,7 +123,7 @@ > > iscf->client_buffer_size = NGX_CONF_UNSET_SIZE; > > - if (ngx_array_init(&iscf->capabilities, cf->pool, 4, sizeof(ngx_str_t)) > + if (ngx_array_init(&iscf->capabilities, cf->pool, 5, sizeof(ngx_str_t)) > != NGX_OK) > { > return NULL; > # HG changeset patch > # User Filipe da Silva > # Date 1389700241 -3600 > # Tue Jan 14 12:50:41 2014 +0100 > # Node ID 56df02d0dad9e7746fed311c88787fcb3ea902d7 > # Parent ece46b257e8d31a1a7a81bf5fcdd0271c1dc2318 > Mail: add ID to the 'ngx_mail_imap_default_capabilities' list. > > diff -r ece46b257e8d -r 56df02d0dad9 src/mail/ngx_mail_imap_module.c > --- a/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:30 2014 +0100 > +++ b/src/mail/ngx_mail_imap_module.c Tue Jan 14 12:50:41 2014 +0100 > @@ -18,6 +18,7 @@ > > > static ngx_str_t ngx_mail_imap_default_capabilities[] = { > + ngx_string("ID"), > ngx_string("IMAP4"), > ngx_string("IMAP4rev1"), > ngx_string("UIDPLUS"), I don't think that "ID" should be listed first. > @@ -122,7 +123,7 @@ > > iscf->client_buffer_size = NGX_CONF_UNSET_SIZE; > > - if (ngx_array_init(&iscf->capabilities, cf->pool, 4, sizeof(ngx_str_t)) > + if (ngx_array_init(&iscf->capabilities, cf->pool, 5, sizeof(ngx_str_t)) > != NGX_OK) > { > return NULL; > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 17 04:14:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 08:14:20 +0400 Subject: [PATCH 4 of 7] Mail: add IMAP ID command response settings to customize server response In-Reply-To: <2d3ff21b5373a83dec32.1389700461@HPC> References: <2d3ff21b5373a83dec32.1389700461@HPC> Message-ID: <20140117041420.GQ1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 12:54:21PM +0100, Filipe da Silva wrote: > # HG changeset patch > # User Filipe da Silva > # Date 1389700251 -3600 > # Tue Jan 14 12:50:51 2014 +0100 > # Node ID 2d3ff21b5373a83dec32759062c4e04a14567c6e > # Parent 56df02d0dad9e7746fed311c88787fcb3ea902d7 > Mail: add IMAP ID command response settings to customize server response. I don't think that ID response should be customizeable. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 17 04:15:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 08:15:43 +0400 Subject: [PATCH 5 of 7] Mail: add support for dynamic ID field value : $version, $remote-host In-Reply-To: <147c57844b913f2b1a4d.1389700462@HPC> References: <147c57844b913f2b1a4d.1389700462@HPC> Message-ID: <20140117041543.GR1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 12:54:22PM +0100, Filipe da Silva wrote: > # HG changeset patch > # User Filipe da Silva > # Date 1389700272 -3600 > # Tue Jan 14 12:51:12 2014 +0100 > # Node ID 147c57844b913f2b1a4dafb44d58e1128039ea03 > # Parent 2d3ff21b5373a83dec32759062c4e04a14567c6e > Mail: add support for dynamic ID field value : $version, $remote-host. > This two keyword are replaced at glance. For sure no. These are too easily confused with variables as currently available in http only. 
-- Maxim Dounin http://nginx.org/ From tattipravins at gmail.com Fri Jan 17 08:27:12 2014 From: tattipravins at gmail.com (Pravin Tatti) Date: Fri, 17 Jan 2014 13:57:12 +0530 Subject: Question on pre-read under 'ngx_http_read_client_request_body' Message-ID: I am beginner to nginx and trying to understand it can any one help to understand the below query: In ngx_http_read_client_request_body() there is a pre-read part of the request body this is basically used to populate the request->body with body filters right? why we are not doing the same in case of spdy any specific reason. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdasilvayy at gmail.com Fri Jan 17 09:57:39 2014 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Fri, 17 Jan 2014 10:57:39 +0100 Subject: [PATCH 1 of 7] Mail: add IMAP ID command support Message-ID: Hi ! Many thanks for the reviews. I will rework on my patch quickly. I've just one question. 2014/1/17 : > Message: 1 > Date: Fri, 17 Jan 2014 07:54:27 +0400 > From: Maxim Dounin > To: nginx-devel at nginx.org > Subject: Re: [PATCH 1 of 7] Mail: add IMAP ID command support > Message-ID: <20140117035427.GM1835 at mdounin.ru> > Content-Type: text/plain; charset=us-ascii > > Hello! > > On Tue, Jan 14, 2014 at 12:54:18PM +0100, Filipe da Silva wrote: > >> # HG changeset patch > > Please use > > [diff] > showfunc=1 > > in your hgrc. Having function names in diffs greatly simplifies > reading patches. > As I read it, i remember seeing it in some previous threads. You better add this point to http://nginx.org/en/docs/contributing_changes.html >> + >> + s->imap_client_id = ngx_mail_imap_client_id_nil; >> + >> + } else { >> + size = sizeof("ID (") - 1; >> + for (i = 0; i < s->args.nelts; i++) { >> + size += 1 + arg[i].len + 2; // 1 space plus 2 quotes >> + } >> + >> + data = ngx_pnalloc(c->pool, size); >> + if (data == NULL) { >> + return NGX_ERROR; >> + } >> + >> + p = ngx_cpymem(data, "ID (", sizeof("ID (") - 1); >> + for (i = 0; i < s->args.nelts; i++) { >> + *p++ = '"'; >> + p = ngx_cpymem(p, arg[i].data, arg[i].len); >> + *p++ = '"'; >> + *p++ = ' '; >> + } >> + *--p = ')'; // replace last space >> + >> + s->imap_client_id.len = size; >> + s->imap_client_id.data = data; > > This doesn't look correct. E.g., arguments may contain quotes. > I think I better add @@ -249,6 +249,8 @@ typedef struct { u_char *cmd_start; u_char *arg_start; u_char *arg_end; + u_char *list_start; + u_char *list_end; ngx_uint_t literal_len; } ngx_mail_session_t; And use this point pointers to extract the whole parameter list at once . But I'm worried about the fact that a very long client command could be split/sparse in separated buffers ? Am I wrong ? ' cause I haven't dig enough into nginx internals, to answer to this point. > >> + case ')': // params list closing >> + if (s->params_list == 1 && s->args.nelts > 0) { > > The ID command allows an empty list as per it's formal syntax. > >> + s->params_list = 0; >> + state = sw_spaces_before_argument; >> + break; >> + } >> + goto invalid; >> case '"': >> - if (s->args.nelts <= 2) { >> + if (s->args.nelts <= 2 || s->params_list) { > > Completely unlimited number of arguments looks wrong, especially > considering "Implementations MUST NOT send more than 30 > field-value pairs" clause in RFC 2971. 
> >> s->quoted = 1; >> s->arg_start = p + 1; >> state = sw_argument; >> @@ -430,7 +457,7 @@ >> } >> goto invalid; >> default: >> - if (s->args.nelts <= 2) { >> + if (s->args.nelts <= 2 && !s->params_list) { > > This effectively forbids atoms within lists, and also forbids NIL > as there is no special handling. The NIL is perfectly allowed > even in ID command parameters list, not even talking about lists > in general. > I'm agree, I have to extend a bit more the IMAP parsing, as I made some tests on gmail, and these commands are valid : dd ID ( KEY VALUE ) dd ID ( KEY NIL ) dd ID ("KEY" NIL ) dd ID ("KEY" NIL) but these are'nt : dd ID ( NIL NIL ) dd ID ( NIL NIL) tag ID ( KEY VALUE NIL ) neither : ddd ID ( ) -> ddd BAD Invalid argument supplied to ID. zzzzzzzzzzzzzz.25 >> } > > > -- > Maxim Dounin > http://nginx.org/ > --- Filipe da Silva From mdounin at mdounin.ru Fri Jan 17 11:19:15 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 15:19:15 +0400 Subject: [PATCH 1 of 7] Mail: add IMAP ID command support In-Reply-To: References: Message-ID: <20140117111915.GT1835@mdounin.ru> Hello! On Fri, Jan 17, 2014 at 10:57:39AM +0100, Filipe Da Silva wrote: > Hi ! > > Many thanks for the reviews. > I will rework on my patch quickly. > > I've just one question. > > 2014/1/17 : > > Message: 1 > > Date: Fri, 17 Jan 2014 07:54:27 +0400 > > From: Maxim Dounin > > To: nginx-devel at nginx.org > > Subject: Re: [PATCH 1 of 7] Mail: add IMAP ID command support > > Message-ID: <20140117035427.GM1835 at mdounin.ru> > > Content-Type: text/plain; charset=us-ascii > > > > Hello! > > > > On Tue, Jan 14, 2014 at 12:54:18PM +0100, Filipe da Silva wrote: > > > >> # HG changeset patch > > > > > Please use > > > > [diff] > > showfunc=1 > > > > in your hgrc. Having function names in diffs greatly simplifies > > reading patches. > > > > As I read it, i remember seeing it in some previous threads. > You better add this point to http://nginx.org/en/docs/contributing_changes.html Sure, we'll consider it. On the other hand, we try to keep this page short to simplify reading, and this is already implicitly suggested by the patch provided as an example. > >> + > >> + s->imap_client_id = ngx_mail_imap_client_id_nil; > >> + > >> + } else { > >> + size = sizeof("ID (") - 1; > >> + for (i = 0; i < s->args.nelts; i++) { > >> + size += 1 + arg[i].len + 2; // 1 space plus 2 quotes > >> + } > >> + > >> + data = ngx_pnalloc(c->pool, size); > >> + if (data == NULL) { > >> + return NGX_ERROR; > >> + } > >> + > >> + p = ngx_cpymem(data, "ID (", sizeof("ID (") - 1); > >> + for (i = 0; i < s->args.nelts; i++) { > >> + *p++ = '"'; > >> + p = ngx_cpymem(p, arg[i].data, arg[i].len); > >> + *p++ = '"'; > >> + *p++ = ' '; > >> + } > >> + *--p = ')'; // replace last space > >> + > >> + s->imap_client_id.len = size; > >> + s->imap_client_id.data = data; > > > > This doesn't look correct. E.g., arguments may contain quotes. > > > > I think I better add > @@ -249,6 +249,8 @@ typedef struct { > u_char *cmd_start; > u_char *arg_start; > u_char *arg_end; > + u_char *list_start; > + u_char *list_end; > ngx_uint_t literal_len; > } ngx_mail_session_t; > > And use this point pointers to extract the whole parameter list at once . > > But I'm worried about the fact that a very long client command could > be split/sparse in separated buffers ? > Am I wrong ? ' cause I haven't dig enough into nginx internals, to > answer to this point. There is only one buffer, so using whole list should be fine. 
On the other hand, there is no need to add additional pointers - the s->args should be enough, see e.g. how ngx_mail_smtp_rcpt() does it. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Fri Jan 17 11:38:34 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 17 Jan 2014 15:38:34 +0400 Subject: Question on pre-read under 'ngx_http_read_client_request_body' In-Reply-To: References: Message-ID: <2343180.sTxbCx2eZP@vbart-laptop> On Friday 17 January 2014 13:57:12 Pravin Tatti wrote: > I am beginner to nginx and trying to understand it can any one help to > understand the below query: > > In ngx_http_read_client_request_body() there is a pre-read part of the > request body this is basically used to populate the request->body with body > filters right? why we are not doing the same in case of spdy any specific > reason. This "pre-read part" is just a part of body that accidentally was read into headers_in buffer, because in http the size of request header is unknown. It's not the case in spdy, where headers and body comes in different frames with known length. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Fri Jan 17 15:53:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 15:53:17 +0000 Subject: [nginx] Core: improved ngx_reset_pool() (ticket #490). Message-ID: details: http://hg.nginx.org/nginx/rev/320abeb364e6 branches: changeset: 5521:320abeb364e6 user: Maxim Dounin date: Fri Jan 17 06:24:53 2014 +0400 description: Core: improved ngx_reset_pool() (ticket #490). Previously pool->current wasn't moved back to pool, resulting in blocks not used for further allocations if pool->current was already moved at the time of ngx_reset_pool(). Additionally, to preserve logic of moving pool->current, the p->d.failed counters are now properly cleared. While here, pool->chain is also cleared. This change is essentially a nop with current code, but generally improves things. diffstat: src/core/ngx_palloc.c | 7 +++++-- 1 files changed, 5 insertions(+), 2 deletions(-) diffs (20 lines): diff --git a/src/core/ngx_palloc.c b/src/core/ngx_palloc.c --- a/src/core/ngx_palloc.c +++ b/src/core/ngx_palloc.c @@ -105,11 +105,14 @@ ngx_reset_pool(ngx_pool_t *pool) } } - pool->large = NULL; - for (p = pool; p; p = p->d.next) { p->d.last = (u_char *) p + sizeof(ngx_pool_t); + p->d.failed = 0; } + + pool->current = pool; + pool->chain = NULL; + pool->large = NULL; } From ru at nginx.com Fri Jan 17 18:10:47 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 17 Jan 2014 18:10:47 +0000 Subject: [nginx] Mail: fixed passing of IPv6 client address in XCLIENT. Message-ID: details: http://hg.nginx.org/nginx/rev/bb3dc21c89ef branches: changeset: 5522:bb3dc21c89ef user: Ruslan Ermilov date: Fri Jan 17 22:06:04 2014 +0400 description: Mail: fixed passing of IPv6 client address in XCLIENT. 
diffstat: src/mail/ngx_mail_proxy_module.c | 33 ++++++++++++++++++++++++++++----- 1 files changed, 28 insertions(+), 5 deletions(-) diffs (49 lines): diff -r 320abeb364e6 -r bb3dc21c89ef src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c Fri Jan 17 06:24:53 2014 +0400 +++ b/src/mail/ngx_mail_proxy_module.c Fri Jan 17 22:06:04 2014 +0400 @@ -542,17 +542,40 @@ ngx_mail_proxy_smtp_handler(ngx_event_t CRLF) - 1 + s->connection->addr_text.len + s->login.len + s->host.len; +#if (NGX_HAVE_INET6) + if (s->connection->sockaddr->sa_family == AF_INET6) { + line.len += sizeof("IPV6:") - 1; + } +#endif + line.data = ngx_pnalloc(c->pool, line.len); if (line.data == NULL) { ngx_mail_proxy_internal_server_error(s); return; } - line.len = ngx_sprintf(line.data, - "XCLIENT ADDR=%V%s%V NAME=%V" CRLF, - &s->connection->addr_text, - (s->login.len ? " LOGIN=" : ""), &s->login, &s->host) - - line.data; + p = ngx_cpymem(line.data, "XCLIENT ADDR=", sizeof("XCLIENT ADDR=") - 1); + +#if (NGX_HAVE_INET6) + if (s->connection->sockaddr->sa_family == AF_INET6) { + p = ngx_cpymem(p, "IPV6:", sizeof("IPV6:") - 1); + } +#endif + + p = ngx_copy(p, s->connection->addr_text.data, + s->connection->addr_text.len); + + if (s->login.len) { + p = ngx_cpymem(p, " LOGIN=", sizeof(" LOGIN=") - 1); + p = ngx_copy(p, s->login.data, s->login.len); + } + + p = ngx_cpymem(p, " NAME=", sizeof(" NAME=") - 1); + p = ngx_copy(p, s->host.data, s->host.len); + + *p++ = CR; *p++ = LF; + + line.len = p - line.data; if (s->smtp_helo.len) { s->mail_state = ngx_smtp_xclient_helo; From reeteshr at outlook.com Fri Jan 17 18:35:49 2014 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Sat, 18 Jan 2014 00:05:49 +0530 Subject: Open Text Summarizer Upstream Module 1.0 Release Message-ID: Hi, I have developed a highly efficient version of OTS - the popular open source text summarizer s/w. For few documents, while OTS takes about 40ms to produce text summary, my version takes around 8ms only. I created a service using my version that listens to summary requests and provide summaries. I have developed an nginx upstream module for this service. You can use it in web sites that involve showing summaries of documents and would be thinking about performance due to scale and other features. Performance note: the service uses select and non-blocking socket I/O for communicating to client. Nginx upstream module for Summarizer:https://github.com/reeteshranjan/summarizer-nginx-module Highly efficient version of OTS:https://github.com/reeteshranjan/summarizer Original OTS:https://github.com/neopunisher/Open-Text-Summarizerhttp://libots.sourceforge.net/ Regards,Reetesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From kipcoul at gmail.com Fri Jan 17 20:13:57 2014 From: kipcoul at gmail.com (Kip Coul) Date: Fri, 17 Jan 2014 21:13:57 +0100 Subject: Nginx + asynchronous ZeroMQ backend Message-ID: <796E25D2-DB58-4D8C-8A1D-ABBDB78EC712@gmail.com> Hello everyone! I am currently trying to write a massively concurrent web application using Nginx and a message-based approach. Basically, what I intend to do is the following: - let Nginx receive the HTTP request, parse it and extract info - pass this info to a Message queue - let a backend consume the message queue and send the response to the message queue - let Nginx receive the response asynchronously and send it to the client. I intend to use ZeroMQ with a push/pull approach, though I'm not sure this is the right thing to do. Do you have any thoughts on this? 
How would you proceed? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ravivsn at gmail.com Fri Jan 17 21:19:06 2014 From: ravivsn at gmail.com (Ravi Chunduru) Date: Fri, 17 Jan 2014 13:19:06 -0800 Subject: nginx array utility pool usage In-Reply-To: <20140117040445.GN1835@mdounin.ru> References: <20140117024026.GL1835@mdounin.ru> <20140117040445.GN1835@mdounin.ru> Message-ID: Thanks Maxim. I did not notice that check of a->elts. I understand to not to fix the memory optimization as you do not want to bring in pool intelligence into array utility. Still, I think we used pool data to some extend. If there is advantage with memory optimization I think we should think about handling it. Thanks, -Ravi. On Thu, Jan 16, 2014 at 8:04 PM, Maxim Dounin wrote: > Hello! > > On Thu, Jan 16, 2014 at 07:16:50PM -0800, Ravi Chunduru wrote: > > > Hi Maxim, > > I understand on array overflow nginx creates a new memory block. That > is > > perfectly alright. > > > > Say I have a Pool and array is allocated from the first memory block and > it > > happend such a way that array elements end at pool->d.last. > > > > Now, say the pool is used for some other purposes such a way that > > pool->current is now pointing to a different memory block say '2' > > > > And if we want to allocate a few more array elements, nginx has to use > > second memory block. Now the elements are moved to second memory block. > > > > At this stage, if any new element is requested that results in overflow, > > nginx does the below checks > > > > if ((u_char *) a->elts + size == p->d.last > > > > && p->d.last + a->size <= p->d.end) > > > > In the above code, p->d.last was actually pointing to the end of first > > memory block but not second memory block. Hence *even though there is > > memory available in second memory block* it will go ahead and create a > new > > memory block. This will repeat on each overflow. > > Yes, I understand what you are talking about. As array never > knows from which exactly block the memory was allocated, and > doesn't want to dig into pool internals, it only checks an obvious > case - if the memory was allocated from the first block, and if > there is a room there. > > > And, the code in ngx_array_destroy() will actually set the > > pointer(p->d.last) wrongly once there is a overflow. This is a critical > > issue. > > First memory block would have say 'n' elements but after overflow number > of > > elements become 2 times of n. > > Lets say after second overflow, I destroyed the array, then p->d.last > will > > be set backwards by 2 times in the first memory block. But, in actuality > it > > was size 'n'. > > The code in ngx_array_destroy() does the following: > > if ((u_char *) a->elts + a->size * a->nalloc == p->d.last) { > p->d.last -= a->size * a->nalloc; > } > > If a memory was allocated from another block, the "a->elts + ... > == p->d.last" check will fail and p->d.last will not be moved. > > > > > Nginx never faces that situation because, once a memory block is set to > > 'failed', it wont be used for allocation any more. But, if the 'failed' > > count is less than 4 then we may have issue and also pool destroy may > have > > potential issues. > > > > Sorry for long email, but I wanted to explain that in detail. > > > > Thanks, > > -Ravi. > > > > > > > > On Thu, Jan 16, 2014 at 6:40 PM, Maxim Dounin > wrote: > > > > > Hello! 
> > > > > > On Thu, Jan 16, 2014 at 06:22:58PM -0800, Ravi Chunduru wrote: > > > > > > > Hi Nginx experts, > > > > Thanks for the prompt reply to my earlier email on ngx_reset_pool() > > > > > > > > Now, I am looking into ngx_array.c. I found an issue > ngx_array_push(). > > > Here > > > > are the details. > > > > nginx will check if number of elements is equal to capacity of the > array. > > > > If there is no space in the memory block, it allocates a new memory > block > > > > with twice the size of array and copies over the elements. So far so > > > good. > > > > Assume that pool utility got entirely new memory block then a->pool > is > > > not > > > > updated with that of 'pool->current'. > > > > > > > > I got an assumption from the code that a->pool is always the memory > block > > > > that has the array elements by seeing the code in ngx_array_push(), > > > > ngx_array_push_n() or ngx_array_destroy() where checks were always > done > > > > with pool pointer in array. > > > > > > > > Functionalities issues would come up once there is an array > overflow. I > > > > think for every new push of element after first crossing/overflow of > the > > > > capacity, nginx will keep on creating new array. Thus it results in > > > wastage > > > > of memory. > > > > > > > > Please let me know if its a issue or correct my understanding. > > > > > > That's expected behaviour. Arrays are implemented in a way that > > > allocates additional memory on overflows, and it's expected to > > > happen. There is a ngx_list_t structure to be used if such > > > additional memory allocations are undesired. Optimization of > > > allocations which uses pool internals is just an optimization and > > > it's not expected to always succeed. > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > > > > > -- > > Ravi > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Ravi -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Fri Jan 17 21:29:03 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 17 Jan 2014 13:29:03 -0800 Subject: Nginx + asynchronous ZeroMQ backend In-Reply-To: <796E25D2-DB58-4D8C-8A1D-ABBDB78EC712@gmail.com> References: <796E25D2-DB58-4D8C-8A1D-ABBDB78EC712@gmail.com> Message-ID: Hey, it's not really feature-complete, but take a look at: https://github.com/FRiCKLE/ngx_zeromq Best regards, Piotr Sikora From kipcoul at gmail.com Fri Jan 17 21:31:20 2014 From: kipcoul at gmail.com (Kip Coul) Date: Fri, 17 Jan 2014 22:31:20 +0100 Subject: Nginx + asynchronous ZeroMQ backend In-Reply-To: References: <796E25D2-DB58-4D8C-8A1D-ABBDB78EC712@gmail.com> Message-ID: Great, thanks! What features is it missing? 
Cheers On Friday, January 17, 2014, Piotr Sikora wrote: > Hey, > it's not really feature-complete, but take a look at: > https://github.com/FRiCKLE/ngx_zeromq > > Best regards, > Piotr Sikora > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From indtiny at gmail.com Sat Jan 18 04:58:02 2014 From: indtiny at gmail.com (Indtiny S) Date: Fri, 17 Jan 2014 23:58:02 -0500 Subject: nginx size after including openssl Message-ID: Hi, I need to compile nginx with openssl , I am able to compile the nginx with ssl using the steps given in the http://wiki.nginx.org/InstallOptions . it compiles well , but only the problem I am facing its size , i.e nginx binary coming to almost 6MB . Since I have to run the nginx on the embedded target , I tried cross compiling with for arm based board . I am to cross compile and the size is coming to 1.3MB for the target. If I disable openssl and compile its coming to 400KB , I think nginx is statically linking the openssl . if so how to dynamically include the openssl to nginx , because my target libs already has the support for crypto and ssl libs. I tried all the option but could not succeeded . Rgds Indra -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdasilvayy at gmail.com Sun Jan 19 11:10:55 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Sun, 19 Jan 2014 12:10:55 +0100 Subject: [PATCH 1 of 3] Mail: add IMAP ID command support (RFC2971) In-Reply-To: References: Message-ID: <3ad4498760c6fcd2ba24.1390129855@HPC> # HG changeset patch # User Filipe da Silva # Date 1390129333 -3600 # Sun Jan 19 12:02:13 2014 +0100 # Node ID 3ad4498760c6fcd2ba24ae84f6d924b3a1a35a31 # Parent bb3dc21c89efa8cfd1b9f661fcfd2f155687b99a Mail: add IMAP ID command support (RFC2971). Parse the ID command and its arguments. Handle the server response to ID command. 
diff -r bb3dc21c89ef -r 3ad4498760c6 src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Fri Jan 17 22:06:04 2014 +0400 +++ b/src/mail/ngx_mail.h Sun Jan 19 12:02:13 2014 +0100 @@ -215,6 +215,7 @@ typedef struct { unsigned quoted:1; unsigned backslash:1; unsigned no_sync_literal:1; + unsigned params_list:1; unsigned starttls:1; unsigned esmtp:1; unsigned auth_method:3; @@ -233,6 +234,7 @@ typedef struct { ngx_str_t smtp_helo; ngx_str_t smtp_from; ngx_str_t smtp_to; + ngx_str_t imap_id; ngx_str_t cmd; @@ -279,10 +281,10 @@ typedef struct { #define NGX_IMAP_CAPABILITY 3 #define NGX_IMAP_NOOP 4 #define NGX_IMAP_STARTTLS 5 +#define NGX_IMAP_AUTHENTICATE 6 +#define NGX_IMAP_ID 7 -#define NGX_IMAP_NEXT 6 - -#define NGX_IMAP_AUTHENTICATE 7 +#define NGX_IMAP_NEXT 8 #define NGX_SMTP_HELO 1 diff -r bb3dc21c89ef -r 3ad4498760c6 src/mail/ngx_mail_imap_handler.c --- a/src/mail/ngx_mail_imap_handler.c Fri Jan 17 22:06:04 2014 +0400 +++ b/src/mail/ngx_mail_imap_handler.c Sun Jan 19 12:02:13 2014 +0100 @@ -18,6 +18,8 @@ static ngx_int_t ngx_mail_imap_authentic ngx_connection_t *c); static ngx_int_t ngx_mail_imap_capability(ngx_mail_session_t *s, ngx_connection_t *c); +static ngx_int_t ngx_mail_imap_id(ngx_mail_session_t *s, + ngx_connection_t *c); static ngx_int_t ngx_mail_imap_starttls(ngx_mail_session_t *s, ngx_connection_t *c); @@ -31,6 +33,7 @@ static u_char imap_username[] = "+ VXNl static u_char imap_password[] = "+ UGFzc3dvcmQ6" CRLF; static u_char imap_bye[] = "* BYE" CRLF; static u_char imap_invalid_command[] = "BAD invalid command" CRLF; +static u_char imap_server_id_nil[] = "* ID NIL" CRLF; void @@ -183,6 +186,10 @@ ngx_mail_imap_auth_state(ngx_event_t *re rc = ngx_mail_imap_capability(s, c); break; + case NGX_IMAP_ID: + rc = ngx_mail_imap_id(s, c); + break; + case NGX_IMAP_LOGOUT: s->quit = 1; ngx_str_set(&s->text, imap_bye); @@ -438,6 +445,86 @@ ngx_mail_imap_capability(ngx_mail_sessio static ngx_int_t +ngx_mail_imap_id(ngx_mail_session_t *s, ngx_connection_t *c) +{ + ngx_uint_t i; + ngx_str_t *arg, cmd; + + if (s->args.nelts == 0) { + return NGX_MAIL_PARSE_INVALID_COMMAND; + } + + arg = s->args.elts; + cmd.data = s->tag.data + s->tag.len; + cmd.len = s->arg_end - cmd.data; + + /* Client may send ID NIL or ID ( "key" "value" ... 
) */ + if (s->args.nelts == 1) { + if (cmd.len != 6 + || ngx_strncasecmp(cmd.data, (u_char *) "ID NIL", 6) != 0) { + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, + "ID invalid argument:\"%V\"", + &cmd); + return NGX_MAIL_PARSE_INVALID_COMMAND; + } + + goto valid; + } + + /* more than one and not an even item count */ + if (s->args.nelts % 2 != 0) { + return NGX_MAIL_PARSE_INVALID_COMMAND; + } + + for (i = 0; i < s->args.nelts; i += 2) { + + switch (arg[i].len) { + + case 0: + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, + "ID empty key #%ui name : \"\"", i ); + return NGX_MAIL_PARSE_INVALID_COMMAND; + + case 3: + if (ngx_strncasecmp(arg[i].data, (u_char *) "NIL", 3) == 0) { + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, + "ID NIL Key #%ui name \"%V\"", i, + &arg[i]); + return NGX_MAIL_PARSE_INVALID_COMMAND; + } + break; + + default: + if (arg[i].len > 30) { + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, + "ID Key #%ui name \"%V\" is too long", + i, &arg[i]); + return NGX_MAIL_PARSE_INVALID_COMMAND; + } + break; + } + } + +valid: + s->imap_id.len = cmd.len; + s->imap_id.data = ngx_pnalloc(c->pool, cmd.len); + if (s->imap_id.data == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(s->imap_id.data, cmd.data, cmd.len); + + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, + "imap client ID:\"%V%V\"", &s->tag, &s->imap_id); + + /* Prepare server response to ID command */ + ngx_str_set(&s->text, imap_server_id_nil); + + return NGX_OK; +} + + +static ngx_int_t ngx_mail_imap_starttls(ngx_mail_session_t *s, ngx_connection_t *c) { #if (NGX_MAIL_SSL) diff -r bb3dc21c89ef -r 3ad4498760c6 src/mail/ngx_mail_parse.c --- a/src/mail/ngx_mail_parse.c Fri Jan 17 22:06:04 2014 +0400 +++ b/src/mail/ngx_mail_parse.c Sun Jan 19 12:02:13 2014 +0100 @@ -280,6 +280,17 @@ ngx_mail_imap_parse_command(ngx_mail_ses switch (p - c) { + case 2: + if ((c[0] == 'I' || c[0] == 'i') + && (c[1] == 'D'|| c[1] == 'd')) + { + s->command = NGX_IMAP_ID; + + } else { + goto invalid; + } + break; + case 4: if ((c[0] == 'N' || c[0] == 'n') && (c[1] == 'O'|| c[1] == 'o') @@ -409,14 +420,32 @@ ngx_mail_imap_parse_command(ngx_mail_ses case ' ': break; case CR: + if (s->params_list) + goto invalid; state = sw_almost_done; s->arg_end = p; break; case LF: + if (s->params_list) + goto invalid; s->arg_end = p; goto done; + case '(': // params list begin + if (!s->params_list && s->args.nelts == 0) { + s->params_list = 1; + break; + } + goto invalid; + case ')': // params list closing + if (s->params_list && s->args.nelts > 0) { + s->params_list = 0; + state = sw_spaces_before_argument; + break; + } + goto invalid; case '"': - if (s->args.nelts <= 2) { + if (s->args.nelts <= 2 + || (s->params_list && s->args.nelts < 60)) { s->quoted = 1; s->arg_start = p + 1; state = sw_argument; @@ -430,7 +459,8 @@ ngx_mail_imap_parse_command(ngx_mail_ses } goto invalid; default: - if (s->args.nelts <= 2) { + if (s->args.nelts <= 2 + || (s->params_list && s->args.nelts < 60)) { s->arg_start = p; state = sw_argument; break; @@ -443,6 +473,9 @@ ngx_mail_imap_parse_command(ngx_mail_ses if (ch == ' ' && s->quoted) { break; } + if (ch == ')' && s->quoted) { + break; + } switch (ch) { case '"': @@ -451,6 +484,7 @@ ngx_mail_imap_parse_command(ngx_mail_ses } s->quoted = 0; /* fall through */ + case ')': case ' ': case CR: case LF: @@ -463,14 +497,25 @@ ngx_mail_imap_parse_command(ngx_mail_ses s->arg_start = NULL; switch (ch) { + case ')': + if (!s->params_list) { + goto invalid; + } + s->params_list = 0; case '"': case ' ': state = sw_spaces_before_argument; break; 
case CR: + if (s->params_list) + goto invalid; state = sw_almost_done; + s->arg_end = p; break; case LF: + if (s->params_list) + goto invalid; + s->arg_end = p; goto done; } break; @@ -602,6 +647,7 @@ done: s->quoted = 0; s->no_sync_literal = 0; s->literal_len = 0; + s->params_list = 0; } s->state = (s->command != NGX_IMAP_AUTHENTICATE) ? sw_start : sw_argument; @@ -614,6 +660,7 @@ invalid: s->quoted = 0; s->no_sync_literal = 0; s->literal_len = 0; + s->params_list = 0; return NGX_MAIL_PARSE_INVALID_COMMAND; } -------------- next part -------------- A non-text attachment was scrubbed... Name: 000-ImapID_CommandSupport.diff Type: text/x-patch Size: 9414 bytes Desc: not available URL: From fdasilvayy at gmail.com Sun Jan 19 11:10:56 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Sun, 19 Jan 2014 12:10:56 +0100 Subject: [PATCH 2 of 3] Mail: add IMAP client ID value to mail auth script In-Reply-To: References: Message-ID: # HG changeset patch # User Filipe da Silva # Date 1390129340 -3600 # Sun Jan 19 12:02:20 2014 +0100 # Node ID cb11ea53da365c2debed88d8d72864cdc3ae07d2 # Parent 3ad4498760c6fcd2ba24ae84f6d924b3a1a35a31 Mail: add IMAP client ID value to mail auth script. Push the ID command argument received from client to auth script. So this client information is available for custom auth script. diff -r 3ad4498760c6 -r cb11ea53da36 src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Sun Jan 19 12:02:13 2014 +0100 +++ b/src/mail/ngx_mail_auth_http_module.c Sun Jan 19 12:02:20 2014 +0100 @@ -1142,7 +1142,7 @@ ngx_mail_auth_http_create_request(ngx_ma { size_t len; ngx_buf_t *b; - ngx_str_t login, passwd; + ngx_str_t login, passwd, imap_id; ngx_mail_core_srv_conf_t *cscf; if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { @@ -1153,6 +1153,10 @@ ngx_mail_auth_http_create_request(ngx_ma return NULL; } + if (ngx_mail_auth_http_escape(pool, &s->imap_id, &imap_id) != NGX_OK) { + return NULL; + } + cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - 1 @@ -1176,6 +1180,11 @@ ngx_mail_auth_http_create_request(ngx_ma + ahcf->header.len + sizeof(CRLF) - 1; + if (s->protocol == NGX_MAIL_IMAP_PROTOCOL) { + len += sizeof("Auth-IMAP-Id: ") - 1 + + imap_id.len + sizeof(CRLF) - 1; + } + b = ngx_create_temp_buf(pool, len); if (b == NULL) { return NULL; @@ -1255,6 +1264,13 @@ ngx_mail_auth_http_create_request(ngx_ma } + if (s->protocol == NGX_MAIL_IMAP_PROTOCOL) { + b->last = ngx_cpymem(b->last, "Auth-IMAP-Id: ", + sizeof("Auth-IMAP-Id: ") - 1); + b->last = ngx_copy(b->last, imap_id.data, imap_id.len); + *b->last++ = CR; *b->last++ = LF; + } + if (ahcf->header.len) { b->last = ngx_copy(b->last, ahcf->header.data, ahcf->header.len); } -------------- next part -------------- A non-text attachment was scrubbed... Name: 001-ImapID_AuthScriptSupport.diff Type: text/x-patch Size: 2177 bytes Desc: not available URL: From fdasilvayy at gmail.com Sun Jan 19 11:10:57 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Sun, 19 Jan 2014 12:10:57 +0100 Subject: [PATCH 3 of 3] Mail: add ID to imap capability list In-Reply-To: References: Message-ID: <6754a7a65b68dd40b1a3.1390129857@HPC> # HG changeset patch # User Filipe da Silva # Date 1390129340 -3600 # Sun Jan 19 12:02:20 2014 +0100 # Node ID 6754a7a65b68dd40b1a37542d359211f4c63a004 # Parent cb11ea53da365c2debed88d8d72864cdc3ae07d2 Mail: add ID to imap capability list. Allow to declared the IMAP ID command support. 
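With this change the extension is advertised to clients; with the default imap_capabilities the CAPABILITY response would look roughly like

    S: * CAPABILITY IMAP4 IMAP4rev1 UPLUS ID AUTH=PLAIN

(the exact line still depends on the configured imap_capabilities, starttls and auth methods).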
diff -r cb11ea53da36 -r 6754a7a65b68 src/mail/ngx_mail_imap_module.c --- a/src/mail/ngx_mail_imap_module.c Sun Jan 19 12:02:20 2014 +0100 +++ b/src/mail/ngx_mail_imap_module.c Sun Jan 19 12:02:20 2014 +0100 @@ -176,6 +176,8 @@ ngx_mail_imap_merge_srv_conf(ngx_conf_t size += 1 + c[i].len; } + size += sizeof(" ID") - 1; + for (m = NGX_MAIL_AUTH_PLAIN_ENABLED, i = 0; m <= NGX_MAIL_AUTH_CRAM_MD5_ENABLED; m <<= 1, i++) @@ -200,6 +202,8 @@ ngx_mail_imap_merge_srv_conf(ngx_conf_t p = ngx_cpymem(p, c[i].data, c[i].len); } + p = ngx_cpymem(p, " ID", sizeof(" ID") - 1); + auth = p; for (m = NGX_MAIL_AUTH_PLAIN_ENABLED, i = 0; -------------- next part -------------- A non-text attachment was scrubbed... Name: 002-ImapID_AsDefaultCapability.diff Type: text/x-patch Size: 1019 bytes Desc: not available URL: From fdasilvayy at gmail.com Sun Jan 19 11:10:54 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Sun, 19 Jan 2014 12:10:54 +0100 Subject: [PATCH 0 of 3 ] [PATCH] Support of IMAP ID command in mail proxy module In-Reply-To: <20140117111915.GT1835@mdounin.ru> References: <20140117111915.GT1835@mdounin.ru> Message-ID: Hello, I've rework my patches following the review comments. Please find attached the result. --- Filipe Da Silva From steve at stevemorin.com Sun Jan 19 22:14:07 2014 From: steve at stevemorin.com (Steve Morin) Date: Sun, 19 Jan 2014 14:14:07 -0800 Subject: Nginx + asynchronous ZeroMQ backend In-Reply-To: References: <796E25D2-DB58-4D8C-8A1D-ABBDB78EC712@gmail.com> Message-ID: Kip, What application are you using it for? -Steve On Fri, Jan 17, 2014 at 1:31 PM, Kip Coul wrote: > Great, thanks! > What features is it missing? > > Cheers > > > On Friday, January 17, 2014, Piotr Sikora wrote: > >> Hey, >> it's not really feature-complete, but take a look at: >> https://github.com/FRiCKLE/ngx_zeromq >> >> Best regards, >> Piotr Sikora >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reeteshr at outlook.com Mon Jan 20 07:59:37 2014 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Mon, 20 Jan 2014 13:29:37 +0530 Subject: How to implement handshake in an upstream module? In-Reply-To: References: , , <20131204223835.GZ93176@mdounin.ru>, , , , , , , Message-ID: Hi, This is about using keepalive. As I worked on another nginx module, I saw more about 'keepalive'. I see how you have used 'keepalive=1' on 'length == 0' and similar stuff can be seen in memcached module as well. When I was doing the sphinx module I got away with working behavior with the default nginx unbuffered filter because the sphinx service closed the socket after the response and I did not have to play with upstream in_headers to say how much is remaining to read. So it seems if I had to get the advantage of 'keepalive', I need to modify the sphinx service behavior. :-( Regards,Reetesh From: reeteshr at outlook.com To: nginx-devel at nginx.org Subject: RE: How to implement handshake in an upstream module? Date: Fri, 6 Dec 2013 11:56:41 +0530 Hi! > Date: Thu, 5 Dec 2013 13:23:55 -0800 > Subject: Re: How to implement handshake in an upstream module? 
> From: agentzh at gmail.com > To: nginx-devel at nginx.org > > You're essentially doing pipelined requests here and you will run into > the following limitation in the nginx core: > > http://mailman.nginx.org/pipermail/nginx-devel/2012-March/002040.html > > I ran into this when testing my ngx_redis2 module's pipelined request > feature and my patch in that thread will help you :) Thanks for the info. :) > Also, you may also want to deal with keepalive connections in your > module in the future and you will want to save the handshake for > reused connections (from the connection pool). Yeah, this is on the table for obvious performance reasons. I saw how you have put that upfront for redis, and that is really good. :) > > Regards, > -agentzh > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From kipcoul at gmail.com Mon Jan 20 10:25:15 2014 From: kipcoul at gmail.com (Kip Coul) Date: Mon, 20 Jan 2014 11:25:15 +0100 Subject: Nginx + asynchronous ZeroMQ backend In-Reply-To: References: <796E25D2-DB58-4D8C-8A1D-ABBDB78EC712@gmail.com> Message-ID: Hello Steve, It would be for a parallel computation service, giving you access to many calculations of mathematical formulas and the like. The results would be cached in a Memcache, but what I really want is the ability to launch these computations from Nginx and get the results asynchronously Thanks On Sun, Jan 19, 2014 at 11:14 PM, Steve Morin wrote: > Kip, > What application are you using it for? > -Steve > > > On Fri, Jan 17, 2014 at 1:31 PM, Kip Coul wrote: >> >> Great, thanks! >> What features is it missing? >> >> Cheers >> >> >> On Friday, January 17, 2014, Piotr Sikora wrote: >>> >>> Hey, >>> it's not really feature-complete, but take a look at: >>> https://github.com/FRiCKLE/ngx_zeromq >>> >>> Best regards, >>> Piotr Sikora >>> >>> _______________________________________________ >>> nginx-devel mailing list >>> nginx-devel at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Tue Jan 21 13:40:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Jan 2014 13:40:17 +0000 Subject: [nginx] Typo fixed. Message-ID: details: http://hg.nginx.org/nginx/rev/905841c461fa branches: changeset: 5523:905841c461fa user: Maxim Dounin date: Tue Jan 21 17:39:49 2014 +0400 description: Typo fixed. diffstat: docs/xml/nginx/changes.xml | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -9174,7 +9174,7 @@ Thanks to Maxim Dounin. -??? ????????????? SMPT nginx ??????? ????????? +??? ????????????? SMTP nginx ??????? ????????? "250 2.0.0 OK" ?????? "235 2.0.0 OK"; ?????? ????????? ? 0.7.22.
??????? ??????? ??????. From rand at sent.com Tue Jan 21 18:55:58 2014 From: rand at sent.com (rand at sent.com) Date: Tue, 21 Jan 2014 10:55:58 -0800 Subject: proxy_redirect can't be def'd in an include file; ok in main config? Message-ID: <1390330558.8638.73589457.42CFA935@webmail.messagingengine.com> i've nginx 1.5.8 If I check a config containing cat sites/test.conf ... location / { proxy_pass http://VARNISH; include includes/varnish.conf; } ... cat includes/varnish.conf proxy_redirect default; proxy_connect_timeout 600s; proxy_read_timeout 600s; ... I get an error nginx: [emerg] "proxy_redirect default" should be placed after the "proxy_pass" directive in //etc/nginx/includes/varnish.conf:1 nginx: configuration file //etc/nginx/nginx.conf test failed but if I change to, cat sites/test.conf ... location / { proxy_pass http://VARNISH; + proxy_redirect default; include includes/varnish.conf; } ... cat includes/varnish.conf - proxy_redirect default; + #proxy_redirect default; proxy_connect_timeout 600s; proxy_read_timeout 600s; ... then config check returns nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Why isn't the proxy_redirect viable in the include file? Intended, or a bug? From mdounin at mdounin.ru Tue Jan 21 19:10:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Jan 2014 23:10:48 +0400 Subject: proxy_redirect can't be def'd in an include file; ok in main config? In-Reply-To: <1390330558.8638.73589457.42CFA935@webmail.messagingengine.com> References: <1390330558.8638.73589457.42CFA935@webmail.messagingengine.com> Message-ID: <20140121191048.GC1835@mdounin.ru> Hello! On Tue, Jan 21, 2014 at 10:55:58AM -0800, rand at sent.com wrote: > i've nginx 1.5.8 > > If I check a config containing > > cat sites/test.conf > ... > location / { > proxy_pass http://VARNISH; > include includes/varnish.conf; > } > ... > cat includes/varnish.conf > proxy_redirect default; > proxy_connect_timeout 600s; > proxy_read_timeout 600s; > ... > > I get an error > > nginx: [emerg] "proxy_redirect default" should be placed after > the "proxy_pass" directive in > //etc/nginx/includes/varnish.conf:1 > nginx: configuration file //etc/nginx/nginx.conf test failed > > but if I change to, > > cat sites/test.conf > ... > location / { > proxy_pass http://VARNISH; > + proxy_redirect default; > include includes/varnish.conf; > } > ... > cat includes/varnish.conf > - proxy_redirect default; > + #proxy_redirect default; > proxy_connect_timeout 600s; > proxy_read_timeout 600s; > ... > > then config check returns > > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is > successful > > Why isn't the proxy_redirect viable in the include file? Intended, or a > bug? Most likely, there are other uses of the "includes/varnish.conf" include file in your config, and they cause error reported. And it's certainly a wrong list to write such questions. Thank you for cooperation. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Jan 22 03:10:08 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 22 Jan 2014 03:10:08 +0000 Subject: [nginx] SPDY: fixed possible segfault. Message-ID: details: http://hg.nginx.org/nginx/rev/03c198bb2acf branches: changeset: 5524:03c198bb2acf user: Valentin Bartenev date: Wed Jan 22 04:58:19 2014 +0400 description: SPDY: fixed possible segfault. 
While processing a DATA frame, the link to related stream is stored in spdy connection object as part of connection state. But this stream can be closed between receiving parts of the frame. diffstat: src/http/ngx_http_spdy.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r 905841c461fa -r 03c198bb2acf src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Tue Jan 21 17:39:49 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 @@ -2665,6 +2665,10 @@ ngx_http_spdy_close_stream(ngx_http_spdy } } + if (sc->stream == stream) { + sc->stream = NULL; + } + if (stream->handled) { for (s = sc->last_stream; s; s = s->next) { if (s->next == stream) { From vbart at nginx.com Wed Jan 22 03:10:09 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 22 Jan 2014 03:10:09 +0000 Subject: [nginx] SPDY: better name for frame entries counter. Message-ID: details: http://hg.nginx.org/nginx/rev/206c56e23a96 branches: changeset: 5525:206c56e23a96 user: Valentin Bartenev date: Wed Jan 22 04:58:19 2014 +0400 description: SPDY: better name for frame entries counter. The "headers" is not a good term, since it is used not only to count name/value pairs in the HEADERS block but to count SETTINGS entries too. Moreover, one name/value pair in HEADERS can contain multiple http headers with the same name. No functional changes. diffstat: src/http/ngx_http_spdy.c | 25 +++++++++++++------------ src/http/ngx_http_spdy.h | 2 +- 2 files changed, 14 insertions(+), 13 deletions(-) diffs (94 lines): diff -r 03c198bb2acf -r 206c56e23a96 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 @@ -862,14 +862,15 @@ ngx_http_spdy_state_headers(ngx_http_spd ngx_http_spdy_state_headers); } - sc->headers = ngx_spdy_frame_parse_uint16(buf->pos); + sc->entries = ngx_spdy_frame_parse_uint16(buf->pos); buf->pos += NGX_SPDY_NV_NUM_SIZE; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "spdy headers count: %ui", sc->headers); - - if (ngx_list_init(&r->headers_in.headers, r->pool, sc->headers + 3, + "spdy HEADERS block consists of %ui entries", + sc->entries); + + if (ngx_list_init(&r->headers_in.headers, r->pool, sc->entries + 3, sizeof(ngx_table_elt_t)) != NGX_OK) { @@ -888,14 +889,14 @@ ngx_http_spdy_state_headers(ngx_http_spd } } - while (sc->headers) { + while (sc->entries) { rc = ngx_http_spdy_parse_header(r); switch (rc) { case NGX_DONE: - sc->headers--; + sc->entries--; case NGX_OK: break; @@ -1401,35 +1402,35 @@ ngx_http_spdy_state_settings(ngx_http_sp ngx_uint_t v; ngx_http_spdy_srv_conf_t *sscf; - if (sc->headers == 0) { + if (sc->entries == 0) { if (end - pos < NGX_SPDY_SETTINGS_NUM_SIZE) { return ngx_http_spdy_state_save(sc, pos, end, ngx_http_spdy_state_settings); } - sc->headers = ngx_spdy_frame_parse_uint32(pos); + sc->entries = ngx_spdy_frame_parse_uint32(pos); pos += NGX_SPDY_SETTINGS_NUM_SIZE; sc->length -= NGX_SPDY_SETTINGS_NUM_SIZE; - if (sc->length < sc->headers * NGX_SPDY_SETTINGS_PAIR_SIZE) { + if (sc->length < sc->entries * NGX_SPDY_SETTINGS_PAIR_SIZE) { /* TODO logging */ return ngx_http_spdy_state_protocol_error(sc); } ngx_log_debug1(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, "spdy SETTINGS frame consists of %ui entries", - sc->headers); + sc->entries); } - while (sc->headers) { + while (sc->entries) { if (end - pos < NGX_SPDY_SETTINGS_PAIR_SIZE) { return ngx_http_spdy_state_save(sc, pos, end, ngx_http_spdy_state_settings); } - sc->headers--; + 
sc->entries--; if (pos[0] != NGX_SPDY_SETTINGS_MAX_STREAMS) { pos += NGX_SPDY_SETTINGS_PAIR_SIZE; diff -r 03c198bb2acf -r 206c56e23a96 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.h Wed Jan 22 04:58:19 2014 +0400 @@ -100,7 +100,7 @@ struct ngx_http_spdy_connection_s { ngx_http_spdy_stream_t *stream; - ngx_uint_t headers; + ngx_uint_t entries; size_t length; u_char flags; From vbart at nginx.com Wed Jan 22 03:10:10 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 22 Jan 2014 03:10:10 +0000 Subject: [nginx] SPDY: removed state to check first SETTINGS frame. Message-ID: details: http://hg.nginx.org/nginx/rev/2c6f82c0cec2 branches: changeset: 5526:2c6f82c0cec2 user: Valentin Bartenev date: Wed Jan 22 04:58:19 2014 +0400 description: SPDY: removed state to check first SETTINGS frame. That code was based on misunderstanding of spdy specification about configuration applicability in the SETTINGS frames. The original interpretation was that configuration is assigned for the whole SPDY connection, while it is only for the endpoint. Moreover, the strange thing is that specification forbids multiple entries in the SETTINGS frame with the same ID even if flags are different. As a result, Chrome sends two SETTINGS frames: one with its own configuration, and another one with configuration stored for a server (when the FLAG_SETTINGS_PERSIST_VALUE flags were used by the server). To simplify implementation we refuse to use the persistent settings feature and thereby avoid all the complexity related with its proper support. diffstat: src/http/ngx_http_spdy.c | 73 +++++++++-------------------------------------- 1 files changed, 15 insertions(+), 58 deletions(-) diffs (143 lines): diff -r 206c56e23a96 -r 2c6f82c0cec2 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 @@ -81,8 +81,6 @@ static void ngx_http_spdy_read_handler(n static void ngx_http_spdy_write_handler(ngx_event_t *wev); static void ngx_http_spdy_handle_connection(ngx_http_spdy_connection_t *sc); -static u_char *ngx_http_spdy_state_detect_settings( - ngx_http_spdy_connection_t *sc, u_char *pos, u_char *end); static u_char *ngx_http_spdy_state_head(ngx_http_spdy_connection_t *sc, u_char *pos, u_char *end); static u_char *ngx_http_spdy_state_syn_stream(ngx_http_spdy_connection_t *sc, @@ -101,8 +99,10 @@ static u_char *ngx_http_spdy_state_ping( u_char *pos, u_char *end); static u_char *ngx_http_spdy_state_skip(ngx_http_spdy_connection_t *sc, u_char *pos, u_char *end); +#if 0 static u_char *ngx_http_spdy_state_settings(ngx_http_spdy_connection_t *sc, u_char *pos, u_char *end); +#endif static u_char *ngx_http_spdy_state_noop(ngx_http_spdy_connection_t *sc, u_char *pos, u_char *end); static u_char *ngx_http_spdy_state_complete(ngx_http_spdy_connection_t *sc, @@ -235,7 +235,7 @@ ngx_http_spdy_init(ngx_event_t *rev) sc->connection = c; sc->http_connection = hc; - sc->handler = ngx_http_spdy_state_detect_settings; + sc->handler = ngx_http_spdy_state_head; sc->zstream_in.zalloc = ngx_http_spdy_zalloc; sc->zstream_in.zfree = ngx_http_spdy_zfree; @@ -297,6 +297,11 @@ ngx_http_spdy_init(ngx_event_t *rev) return; } + if (ngx_http_spdy_send_settings(sc) == NGX_ERROR) { + ngx_http_close_connection(c); + return; + } + c->data = sc; rev->handler = ngx_http_spdy_read_handler; @@ -610,38 +615,6 @@ ngx_http_spdy_handle_connection(ngx_http static u_char * 
-ngx_http_spdy_state_detect_settings(ngx_http_spdy_connection_t *sc, - u_char *pos, u_char *end) -{ - if (end - pos < NGX_SPDY_FRAME_HEADER_SIZE) { - return ngx_http_spdy_state_save(sc, pos, end, - ngx_http_spdy_state_detect_settings); - } - - /* - * Since this is the first frame in a buffer, - * then it is properly aligned - */ - - if (*(uint32_t *) pos == htonl(ngx_spdy_ctl_frame_head(NGX_SPDY_SETTINGS))) - { - sc->length = ngx_spdy_frame_length(htonl(((uint32_t *) pos)[1])); - - ngx_log_debug1(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, - "spdy SETTINGS frame received, size: %uz", sc->length); - - pos += NGX_SPDY_FRAME_HEADER_SIZE; - - return ngx_http_spdy_state_settings(sc, pos, end); - } - - ngx_http_spdy_send_settings(sc); - - return ngx_http_spdy_state_head(sc, pos, end); -} - - -static u_char * ngx_http_spdy_state_head(ngx_http_spdy_connection_t *sc, u_char *pos, u_char *end) { @@ -1395,13 +1368,12 @@ ngx_http_spdy_state_skip(ngx_http_spdy_c } +#if 0 + static u_char * ngx_http_spdy_state_settings(ngx_http_spdy_connection_t *sc, u_char *pos, u_char *end) { - ngx_uint_t v; - ngx_http_spdy_srv_conf_t *sscf; - if (sc->entries == 0) { if (end - pos < NGX_SPDY_SETTINGS_NUM_SIZE) { @@ -1432,29 +1404,15 @@ ngx_http_spdy_state_settings(ngx_http_sp sc->entries--; - if (pos[0] != NGX_SPDY_SETTINGS_MAX_STREAMS) { - pos += NGX_SPDY_SETTINGS_PAIR_SIZE; - sc->length -= NGX_SPDY_SETTINGS_PAIR_SIZE; - continue; - } - - v = ngx_spdy_frame_parse_uint32(pos + NGX_SPDY_SETTINGS_IDF_SIZE); - - sscf = ngx_http_get_module_srv_conf(sc->http_connection->conf_ctx, - ngx_http_spdy_module); - - if (v != sscf->concurrent_streams) { - ngx_http_spdy_send_settings(sc); - } - - return ngx_http_spdy_state_skip(sc, pos, end); + pos += NGX_SPDY_SETTINGS_PAIR_SIZE; + sc->length -= NGX_SPDY_SETTINGS_PAIR_SIZE; } - ngx_http_spdy_send_settings(sc); - return ngx_http_spdy_state_complete(sc, pos, end); } +#endif + static u_char * ngx_http_spdy_state_noop(ngx_http_spdy_connection_t *sc, u_char *pos, @@ -1654,8 +1612,7 @@ ngx_http_spdy_send_settings(ngx_http_spd p = ngx_spdy_frame_aligned_write_uint32(p, 1); p = ngx_spdy_frame_aligned_write_uint32(p, - NGX_SPDY_SETTINGS_MAX_STREAMS << 24 - | NGX_SPDY_SETTINGS_FLAG_PERSIST); + NGX_SPDY_SETTINGS_MAX_STREAMS << 24); sscf = ngx_http_get_module_srv_conf(sc->http_connection->conf_ctx, ngx_http_spdy_module); From vbart at nginx.com Wed Jan 22 03:10:12 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 22 Jan 2014 03:10:12 +0000 Subject: [nginx] SPDY: proper handling of all RST_STREAM statuses. Message-ID: details: http://hg.nginx.org/nginx/rev/f3f7b72ca6e9 branches: changeset: 5527:f3f7b72ca6e9 user: Valentin Bartenev date: Wed Jan 22 04:58:19 2014 +0400 description: SPDY: proper handling of all RST_STREAM statuses. Previously, only stream CANCEL and INTERNAL_ERROR were handled right. 
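For reference, the status values being switched on are the RST_STREAM codes from the SPDY/2 draft, roughly:

    1  PROTOCOL_ERROR
    2  INVALID_STREAM
    3  REFUSED_STREAM
    4  UNSUPPORTED_VERSION
    5  CANCEL
    6  INTERNAL_ERROR
    7  FLOW_CONTROL_ERROR

The NGX_SPDY_* macros in the case labels below correspond to these codes.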
diffstat: src/http/ngx_http_spdy.c | 71 ++++++++++++++++++++--------------------------- 1 files changed, 30 insertions(+), 41 deletions(-) diffs (105 lines): diff -r 2c6f82c0cec2 -r f3f7b72ca6e9 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 @@ -1227,7 +1227,6 @@ ngx_http_spdy_state_rst_stream(ngx_http_ ngx_uint_t sid, status; ngx_event_t *ev; ngx_connection_t *fc; - ngx_http_request_t *r; ngx_http_spdy_stream_t *stream; if (end - pos < NGX_SPDY_RST_STREAM_SIZE) { @@ -1236,7 +1235,10 @@ ngx_http_spdy_state_rst_stream(ngx_http_ } if (sc->length != NGX_SPDY_RST_STREAM_SIZE) { - /* TODO logging */ + ngx_log_error(NGX_LOG_INFO, sc->connection->log, 0, + "client sent RST_STREAM frame with incorrect length %uz", + sc->length); + return ngx_http_spdy_state_protocol_error(sc); } @@ -1251,55 +1253,42 @@ ngx_http_spdy_state_rst_stream(ngx_http_ ngx_log_debug2(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, "spdy RST_STREAM sid:%ui st:%ui", sid, status); + stream = ngx_http_spdy_get_stream_by_id(sc, sid); + if (stream == NULL) { + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, + "unknown stream, probably it has been closed already"); + return ngx_http_spdy_state_complete(sc, pos, end); + } + + stream->in_closed = 1; + stream->out_closed = 1; + + fc = stream->request->connection; + fc->error = 1; switch (status) { - case NGX_SPDY_PROTOCOL_ERROR: - /* TODO logging */ - return ngx_http_spdy_state_protocol_error(sc); - - case NGX_SPDY_INVALID_STREAM: - /* TODO */ + case NGX_SPDY_CANCEL: + ngx_log_error(NGX_LOG_INFO, fc->log, 0, + "client canceled stream %ui", sid); break; - case NGX_SPDY_REFUSED_STREAM: - /* TODO */ + case NGX_SPDY_INTERNAL_ERROR: + ngx_log_error(NGX_LOG_INFO, fc->log, 0, + "client terminated stream %ui because of internal error", + sid); break; - case NGX_SPDY_UNSUPPORTED_VERSION: - /* TODO logging */ - return ngx_http_spdy_state_protocol_error(sc); - - case NGX_SPDY_CANCEL: - case NGX_SPDY_INTERNAL_ERROR: - stream = ngx_http_spdy_get_stream_by_id(sc, sid); - if (stream == NULL) { - /* TODO false cancel */ - break; - } - - stream->in_closed = 1; - stream->out_closed = 1; - - r = stream->request; - - fc = r->connection; - fc->error = 1; - - ev = fc->read; - ev->handler(ev); - + default: + ngx_log_error(NGX_LOG_INFO, fc->log, 0, + "client terminated stream %ui with status %ui", + sid, status); break; - - case NGX_SPDY_FLOW_CONTROL_ERROR: - /* TODO logging */ - return ngx_http_spdy_state_protocol_error(sc); - - default: - /* TODO */ - return ngx_http_spdy_state_protocol_error(sc); } + ev = fc->read; + ev->handler(ev); + return ngx_http_spdy_state_complete(sc, pos, end); } From vbart at nginx.com Wed Jan 22 03:10:13 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 22 Jan 2014 03:10:13 +0000 Subject: [nginx] SPDY: use frame->next pointer to chain free frames. Message-ID: details: http://hg.nginx.org/nginx/rev/d5de6c25b759 branches: changeset: 5528:d5de6c25b759 user: Valentin Bartenev date: Wed Jan 22 04:58:19 2014 +0400 description: SPDY: use frame->next pointer to chain free frames. There is no need in separate "free" pointer and like it is for ngx_chain_t the "next" pointer can be used. But after this change successfully handled frame should not be accessed, so the frame handling cycle was improved to store pointer to the next frame before processing. Also worth noting that initializing "free" pointer to NULL in the original code was surplus. 
diffstat: src/http/ngx_http_spdy.c | 10 +++++----- src/http/ngx_http_spdy.h | 2 -- src/http/ngx_http_spdy_filter_module.c | 6 ++---- 3 files changed, 7 insertions(+), 11 deletions(-) diffs (90 lines): diff -r f3f7b72ca6e9 -r d5de6c25b759 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 @@ -527,7 +527,9 @@ ngx_http_spdy_send_output_queue(ngx_http } } - for ( /* void */ ; out; out = out->next) { + for ( /* void */ ; out; out = fn) { + fn = out->next; + if (out->handler(sc, out) != NGX_OK) { out->blocked = 1; out->priority = NGX_SPDY_HIGHEST_PRIORITY; @@ -1644,7 +1646,7 @@ ngx_http_spdy_get_ctl_frame(ngx_http_spd frame = sc->free_ctl_frames; if (frame) { - sc->free_ctl_frames = frame->free; + sc->free_ctl_frames = frame->next; cl = frame->first; cl->buf->pos = cl->buf->start; @@ -1674,8 +1676,6 @@ ngx_http_spdy_get_ctl_frame(ngx_http_spd frame->stream = NULL; } - frame->free = NULL; - #if (NGX_DEBUG) if (size > NGX_SPDY_CTL_FRAME_BUFFER_SIZE - NGX_SPDY_FRAME_HEADER_SIZE) { ngx_log_error(NGX_LOG_ALERT, sc->pool->log, 0, @@ -1705,7 +1705,7 @@ ngx_http_spdy_ctl_frame_handler(ngx_http return NGX_AGAIN; } - frame->free = sc->free_ctl_frames; + frame->next = sc->free_ctl_frames; sc->free_ctl_frames = frame; return NGX_OK; diff -r f3f7b72ca6e9 -r d5de6c25b759 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.h Wed Jan 22 04:58:19 2014 +0400 @@ -141,8 +141,6 @@ struct ngx_http_spdy_out_frame_s { ngx_int_t (*handler)(ngx_http_spdy_connection_t *sc, ngx_http_spdy_out_frame_t *frame); - ngx_http_spdy_out_frame_t *free; - ngx_http_spdy_stream_t *stream; size_t size; diff -r f3f7b72ca6e9 -r d5de6c25b759 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Wed Jan 22 04:58:19 2014 +0400 @@ -587,7 +587,6 @@ ngx_http_spdy_header_filter(ngx_http_req frame->first = cl; frame->last = cl; frame->handler = ngx_http_spdy_syn_frame_handler; - frame->free = NULL; frame->stream = stream; frame->size = len; frame->priority = stream->priority; @@ -821,7 +820,7 @@ ngx_http_spdy_filter_get_data_frame(ngx_ frame = stream->free_frames; if (frame) { - stream->free_frames = frame->free; + stream->free_frames = frame->next; } else { frame = ngx_palloc(stream->request->pool, @@ -881,7 +880,6 @@ ngx_http_spdy_filter_get_data_frame(ngx_ frame->first = first; frame->last = last; frame->handler = ngx_http_spdy_data_frame_handler; - frame->free = NULL; frame->stream = stream; frame->size = NGX_SPDY_FRAME_HEADER_SIZE + len; frame->priority = stream->priority; @@ -1051,7 +1049,7 @@ ngx_http_spdy_handle_frame(ngx_http_spdy stream->out_closed = 1; } - frame->free = stream->free_frames; + frame->next = stream->free_frames; stream->free_frames = frame; stream->queued--; From vbart at nginx.com Wed Jan 22 03:10:15 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 22 Jan 2014 03:10:15 +0000 Subject: [nginx] SPDY: store the length of frame instead of its whole size. Message-ID: details: http://hg.nginx.org/nginx/rev/e4adaa47af65 branches: changeset: 5529:e4adaa47af65 user: Valentin Bartenev date: Wed Jan 22 04:58:19 2014 +0400 description: SPDY: store the length of frame instead of its whole size. The "length" value better corresponds with the specification and reduces confusion about whether frame's header is included in "size" or not. 
Also this change simplifies some parts of code, since in more cases the length of frame is more useful than its actual size, especially considering that the size of frame header is constant. diffstat: src/http/ngx_http_spdy.c | 20 +++++++++----------- src/http/ngx_http_spdy.h | 2 +- src/http/ngx_http_spdy_filter_module.c | 19 ++++++++++--------- 3 files changed, 20 insertions(+), 21 deletions(-) diffs (134 lines): diff -r d5de6c25b759 -r e4adaa47af65 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 @@ -494,9 +494,9 @@ ngx_http_spdy_send_output_queue(ngx_http out = frame; ngx_log_debug5(NGX_LOG_DEBUG_HTTP, c->log, 0, - "spdy frame out: %p sid:%ui prio:%ui bl:%d size:%uz", + "spdy frame out: %p sid:%ui prio:%ui bl:%d len:%uz", out, out->stream ? out->stream->id : 0, out->priority, - out->blocked, out->size); + out->blocked, out->length); } cl = c->send_chain(c, cl, 0); @@ -537,9 +537,9 @@ ngx_http_spdy_send_output_queue(ngx_http } ngx_log_debug4(NGX_LOG_DEBUG_HTTP, c->log, 0, - "spdy frame sent: %p sid:%ui bl:%d size:%uz", + "spdy frame sent: %p sid:%ui bl:%d len:%uz", out, out->stream ? out->stream->id : 0, - out->blocked, out->size); + out->blocked, out->length); } frame = NULL; @@ -1587,9 +1587,7 @@ ngx_http_spdy_send_settings(ngx_http_spd frame->handler = ngx_http_spdy_settings_frame_handler; frame->stream = NULL; #if (NGX_DEBUG) - frame->size = NGX_SPDY_FRAME_HEADER_SIZE - + NGX_SPDY_SETTINGS_NUM_SIZE - + NGX_SPDY_SETTINGS_PAIR_SIZE; + frame->length = NGX_SPDY_SETTINGS_NUM_SIZE + NGX_SPDY_SETTINGS_PAIR_SIZE; #endif frame->priority = NGX_SPDY_HIGHEST_PRIORITY; frame->blocked = 0; @@ -1637,7 +1635,7 @@ ngx_http_spdy_settings_frame_handler(ngx static ngx_http_spdy_out_frame_t * -ngx_http_spdy_get_ctl_frame(ngx_http_spdy_connection_t *sc, size_t size, +ngx_http_spdy_get_ctl_frame(ngx_http_spdy_connection_t *sc, size_t length, ngx_uint_t priority) { ngx_chain_t *cl; @@ -1677,13 +1675,13 @@ ngx_http_spdy_get_ctl_frame(ngx_http_spd } #if (NGX_DEBUG) - if (size > NGX_SPDY_CTL_FRAME_BUFFER_SIZE - NGX_SPDY_FRAME_HEADER_SIZE) { + if (length > NGX_SPDY_CTL_FRAME_BUFFER_SIZE - NGX_SPDY_FRAME_HEADER_SIZE) { ngx_log_error(NGX_LOG_ALERT, sc->pool->log, 0, - "requested control frame is too big: %uz", size); + "requested control frame is too big: %uz", length); return NULL; } - frame->size = size; + frame->length = length; #endif frame->priority = priority; diff -r d5de6c25b759 -r e4adaa47af65 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.h Wed Jan 22 04:58:19 2014 +0400 @@ -142,7 +142,7 @@ struct ngx_http_spdy_out_frame_s { ngx_http_spdy_out_frame_t *frame); ngx_http_spdy_stream_t *stream; - size_t size; + size_t length; ngx_uint_t priority; unsigned blocked:1; diff -r d5de6c25b759 -r e4adaa47af65 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Wed Jan 22 04:58:19 2014 +0400 @@ -560,13 +560,14 @@ ngx_http_spdy_header_filter(ngx_http_req r->header_size = len; + len -= NGX_SPDY_FRAME_HEADER_SIZE; + if (r->header_only) { b->last_buf = 1; - p = ngx_spdy_frame_write_flags_and_len(p, NGX_SPDY_FLAG_FIN, - len - NGX_SPDY_FRAME_HEADER_SIZE); + p = ngx_spdy_frame_write_flags_and_len(p, NGX_SPDY_FLAG_FIN, len); + } else { - p = ngx_spdy_frame_write_flags_and_len(p, 0, - len - NGX_SPDY_FRAME_HEADER_SIZE); + p = 
ngx_spdy_frame_write_flags_and_len(p, 0, len); } (void) ngx_spdy_frame_write_sid(p, stream->id); @@ -588,14 +589,14 @@ ngx_http_spdy_header_filter(ngx_http_req frame->last = cl; frame->handler = ngx_http_spdy_syn_frame_handler; frame->stream = stream; - frame->size = len; + frame->length = len; frame->priority = stream->priority; frame->blocked = 1; frame->fin = r->header_only; ngx_log_debug3(NGX_LOG_DEBUG_HTTP, stream->request->connection->log, 0, - "spdy:%ui create SYN_REPLY frame %p: size:%uz", - stream->id, frame, frame->size); + "spdy:%ui create SYN_REPLY frame %p: len:%uz", + stream->id, frame, frame->length); ngx_http_spdy_queue_blocked_frame(sc, frame); @@ -881,7 +882,7 @@ ngx_http_spdy_filter_get_data_frame(ngx_ frame->last = last; frame->handler = ngx_http_spdy_data_frame_handler; frame->stream = stream; - frame->size = NGX_SPDY_FRAME_HEADER_SIZE + len; + frame->length = len; frame->priority = stream->priority; frame->blocked = 0; frame->fin = last->buf->last_buf; @@ -1043,7 +1044,7 @@ ngx_http_spdy_handle_frame(ngx_http_spdy r = stream->request; - r->connection->sent += frame->size; + r->connection->sent += NGX_SPDY_FRAME_HEADER_SIZE + frame->length; if (frame->fin) { stream->out_closed = 1; From vbart at nginx.com Wed Jan 22 03:10:16 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 22 Jan 2014 03:10:16 +0000 Subject: [nginx] SPDY: use ngx_queue_t to queue streams for post processing. Message-ID: details: http://hg.nginx.org/nginx/rev/827e53c136b0 branches: changeset: 5530:827e53c136b0 user: Valentin Bartenev date: Mon Jan 20 20:56:49 2014 +0400 description: SPDY: use ngx_queue_t to queue streams for post processing. It simplifies the code and allows easy reuse the same queue pointer to store streams in various queues with different requirements. Future implementation of SPDY/3.1 will take advantage of this quality. 
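Condensed from the hunks below, the queue pattern the code switches to is:

    ngx_queue_init(&sc->posted);                          /* set up the empty queue */

    stream->handled = 1;
    ngx_queue_insert_tail(&sc->posted, &stream->queue);   /* post a stream */

    while (!ngx_queue_empty(&sc->posted)) {               /* drain it on write events */
        q = ngx_queue_head(&sc->posted);
        ngx_queue_remove(q);

        stream = ngx_queue_data(q, ngx_http_spdy_stream_t, queue);
        stream->handled = 0;
        /* ... run the stream's write event handler ... */
    }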
diffstat: src/http/ngx_http_spdy.c | 38 ++++++++++++++------------------- src/http/ngx_http_spdy.h | 6 +++- src/http/ngx_http_spdy_filter_module.c | 4 +-- 3 files changed, 21 insertions(+), 27 deletions(-) diffs (121 lines): diff -r e4adaa47af65 -r 827e53c136b0 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.c Mon Jan 20 20:56:49 2014 +0400 @@ -302,6 +302,8 @@ ngx_http_spdy_init(ngx_event_t *rev) return; } + ngx_queue_init(&sc->posted); + c->data = sc; rev->handler = ngx_http_spdy_read_handler; @@ -405,8 +407,9 @@ static void ngx_http_spdy_write_handler(ngx_event_t *wev) { ngx_int_t rc; + ngx_queue_t *q; ngx_connection_t *c; - ngx_http_spdy_stream_t *stream, *s, *sn; + ngx_http_spdy_stream_t *stream; ngx_http_spdy_connection_t *sc; c = wev->data; @@ -430,18 +433,13 @@ ngx_http_spdy_write_handler(ngx_event_t return; } - stream = NULL; - - for (s = sc->last_stream; s; s = sn) { - sn = s->next; - s->next = stream; - stream = s; - } - - sc->last_stream = NULL; - - for ( /* void */ ; stream; stream = sn) { - sn = stream->next; + while (!ngx_queue_empty(&sc->posted)) { + q = ngx_queue_head(&sc->posted); + + ngx_queue_remove(q); + + stream = ngx_queue_data(q, ngx_http_spdy_stream_t, queue); + stream->handled = 0; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, @@ -2593,6 +2591,11 @@ ngx_http_spdy_close_stream(ngx_http_spdy "spdy close stream %ui, queued %ui, processing %ui", stream->id, stream->queued, sc->processing); + if (stream->handled) { + stream->handled = 0; + ngx_queue_remove(&stream->queue); + } + fc = stream->request->connection; if (stream->queued) { @@ -2614,15 +2617,6 @@ ngx_http_spdy_close_stream(ngx_http_spdy sc->stream = NULL; } - if (stream->handled) { - for (s = sc->last_stream; s; s = s->next) { - if (s->next == stream) { - s->next = stream->next; - break; - } - } - } - sscf = ngx_http_get_module_srv_conf(sc->http_connection->conf_ctx, ngx_http_spdy_module); diff -r e4adaa47af65 -r 827e53c136b0 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy.h Mon Jan 20 20:56:49 2014 +0400 @@ -96,7 +96,8 @@ struct ngx_http_spdy_connection_s { ngx_http_spdy_stream_t **streams_index; ngx_http_spdy_out_frame_t *last_out; - ngx_http_spdy_stream_t *last_stream; + + ngx_queue_t posted; ngx_http_spdy_stream_t *stream; @@ -116,7 +117,6 @@ struct ngx_http_spdy_stream_s { ngx_http_request_t *request; ngx_http_spdy_connection_t *connection; ngx_http_spdy_stream_t *index; - ngx_http_spdy_stream_t *next; ngx_uint_t header_buffers; ngx_uint_t queued; @@ -125,6 +125,8 @@ struct ngx_http_spdy_stream_s { ngx_chain_t *free_data_headers; ngx_chain_t *free_bufs; + ngx_queue_t queue; + unsigned priority:2; unsigned handled:1; unsigned blocked:1; diff -r e4adaa47af65 -r 827e53c136b0 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Wed Jan 22 04:58:19 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Mon Jan 20 20:56:49 2014 +0400 @@ -1073,9 +1073,7 @@ ngx_http_spdy_handle_stream(ngx_http_spd wev->delayed = 0; stream->handled = 1; - - stream->next = sc->last_stream; - sc->last_stream = stream; + ngx_queue_insert_tail(&sc->posted, &stream->queue); } } From wandenberg at gmail.com Wed Jan 22 03:39:54 2014 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Wed, 22 Jan 2014 01:39:54 -0200 Subject: Help with shared memory usage In-Reply-To: References: <20130701113629.GO20717@mdounin.ru> <20130729171109.GA2130@mdounin.ru> 
<20130730100931.GD2130@mdounin.ru> <20131006093708.GY62063@mdounin.ru> <20131220164923.GK95113@mdounin.ru> Message-ID: Hello Maxim, did you have opportunity to take a look on this last patch? Regards, Wandenberg On Thu, Dec 26, 2013 at 10:12 PM, Wandenberg Peixoto wrote: > Hello Maxim, > > I changed the patch to check only the p->next pointer. > And checking if the page is in an address less than the (pool->pages + > pages). > > + ngx_slab_page_t *prev, *p; > + ngx_uint_t pages; > + size_t size; > + > + size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t); > + pages = (ngx_uint_t) (size / (ngx_pagesize + > sizeof(ngx_slab_page_t))); > + > + p = &page[page->slab]; > + > + if ((p < pool->pages + pages) && > + (p->next != NULL) && > + ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) { > > > I hope that now I did the right checks. > > Regards, > Wandenberg > > > On Fri, Dec 20, 2013 at 2:49 PM, Maxim Dounin wrote: > >> Hello! >> >> On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote: >> >> > Hi Maxim, >> > >> > sorry for the long delay. I hope you remember my issue. >> > In attach is the new patch with the changes you suggest. >> > Can you check it again? I hope it can be applied to nginx code now. >> > >> > About this point "2. There is probably no need to check both prev and >> > next.", >> > I check both pointers to avoid a segmentation fault, since in some >> > situations the next can be NULL and the prev may point to pool->free. >> > As far as I could follow the code seems to me that could happen one of >> this >> > pointers, next or prev, point to a NULL. >> > I just made a double check to protect. >> >> As far as I see, all pages in the pool->free list are expected to >> have both p->prev and p->next set. And all pages with type >> NGX_SLAB_PAGE will be either on the pool->free list, or will have >> p->next set to NULL. >> >> [...] >> >> > > > +{ >> > > > + ngx_slab_page_t *neighbour = &page[page->slab]; >> > > >> > > Here "neighbour" may point outside of the allocated page >> > > structures if we are freeing the last page in the pool. >> >> It looks like you've tried to address this problem with the >> following check: >> >> > +static ngx_int_t >> > +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page) >> > +{ >> > + ngx_slab_page_t *prev, *p; >> > + >> > + p = &page[page->slab]; >> > + if ((u_char *) p >= pool->end) { >> > + return NGX_DECLINED; >> > + } >> >> This looks wrong, as pool->end points to the end of the pool, not >> the end of allocated page structures. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jan 22 12:05:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 12:05:23 +0000 Subject: [nginx] SSL: fixed $ssl_session_id variable. Message-ID: details: http://hg.nginx.org/nginx/rev/97e3769637a7 branches: changeset: 5531:97e3769637a7 user: Maxim Dounin date: Wed Jan 22 16:05:06 2014 +0400 description: SSL: fixed $ssl_session_id variable. Previously, it used to contain full session serialized instead of just a session id, making it almost impossible to use the variable in a safe way. Thanks to Ivan Risti?. 
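With the variable now reduced to the hex-encoded session id, it becomes practical to log or compare it directly; a minimal, illustrative configuration (the log format name and file path are arbitrary):

    log_format  sslid  '$remote_addr [$time_local] "$request" sess=$ssl_session_id';
    access_log  /var/log/nginx/ssl-access.log  sslid;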
diffstat: src/event/ngx_event_openssl.c | 16 +++------------- 1 files changed, 3 insertions(+), 13 deletions(-) diffs (39 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2504,32 +2504,22 @@ ngx_int_t ngx_ssl_get_session_id(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) { int len; - u_char *p, *buf; + u_char *buf; SSL_SESSION *sess; sess = SSL_get0_session(c->ssl->connection); - len = i2d_SSL_SESSION(sess, NULL); - - buf = ngx_alloc(len, c->log); - if (buf == NULL) { - return NGX_ERROR; - } + buf = sess->session_id; + len = sess->session_id_length; s->len = 2 * len; s->data = ngx_pnalloc(pool, 2 * len); if (s->data == NULL) { - ngx_free(buf); return NGX_ERROR; } - p = buf; - i2d_SSL_SESSION(sess, &p); - ngx_hex_dump(s->data, buf, len); - ngx_free(buf); - return NGX_OK; } From mdounin at mdounin.ru Wed Jan 22 12:05:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 12:05:24 +0000 Subject: [nginx] Upstream: reading from a client after connection upgrade. Message-ID: details: http://hg.nginx.org/nginx/rev/17134d29782e branches: changeset: 5532:17134d29782e user: Maxim Dounin date: Wed Jan 22 16:05:07 2014 +0400 description: Upstream: reading from a client after connection upgrade. Read event on a client connection might have been disabled during previous processing, and we at least need to handle events. Calling ngx_http_upstream_process_upgraded() is a simpliest way to do it. Notably this change is needed for select, poll and /dev/poll event methods. Previous version of this patch was posted here: http://mailman.nginx.org/pipermail/nginx/2014-January/041839.html diffstat: src/http/ngx_http_upstream.c | 6 +----- 1 files changed, 1 insertions(+), 5 deletions(-) diffs (16 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2560,11 +2560,7 @@ ngx_http_upstream_upgrade(ngx_http_reque ngx_http_upstream_process_upgraded(r, 1, 1); } - if (c->read->ready - || r->header_in->pos != r->header_in->last) - { - ngx_http_upstream_process_upgraded(r, 0, 1); - } + ngx_http_upstream_process_upgraded(r, 0, 1); } From mdounin at mdounin.ru Wed Jan 22 12:10:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 12:10:33 +0000 Subject: [nginx] Updated OpenSSL used for win32 builds. Message-ID: details: http://hg.nginx.org/nginx/rev/24afe114adeb branches: changeset: 5533:24afe114adeb user: Maxim Dounin date: Wed Jan 22 16:10:13 2014 +0400 description: Updated OpenSSL used for win32 builds. 
diffstat: misc/GNUmakefile | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -5,7 +5,7 @@ NGINX = nginx-$(VER) TEMP = tmp OBJS = objs.msvc8 -OPENSSL = openssl-1.0.1e +OPENSSL = openssl-1.0.1f ZLIB = zlib-1.2.8 PCRE = pcre-8.33 From mdounin at mdounin.ru Wed Jan 22 13:58:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 13:58:01 +0000 Subject: [nginx] nginx-1.5.9-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/5a1759f33b7f branches: changeset: 5534:5a1759f33b7f user: Maxim Dounin date: Wed Jan 22 17:42:59 2014 +0400 description: nginx-1.5.9-RELEASE diffstat: docs/xml/nginx/changes.xml | 142 +++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 142 insertions(+), 0 deletions(-) diffs (152 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,148 @@ + + + + +?????? ? ????????? X-Accel-Redirect nginx ??????? ?????????????? URI. + + +now nginx expects escaped URIs in "X-Accel-Redirect" headers. + + + + + +????????? ssl_buffer_size. + + +the "ssl_buffer_size" directive. + + + + + +????????? limit_rate ?????? ????? ???????????? ??? +??????????? ???????? ???????? ??????? ??????? ? SPDY-???????????. + + +the "limit_rate" directive can now be used to +rate limit responses sent in SPDY connections. + + + + + +????????? spdy_chunk_size. + + +the "spdy_chunk_size" directive. + + + + + +????????? ssl_session_tickets.
+??????? Dirkjan Bussink. +
+ +the "ssl_session_tickets" directive.
+Thanks to Dirkjan Bussink. +
+
+ + + +?????????? $ssl_session_id ????????? ??? ?????? ? ??????????????? ???? +?????? ?? ??????????????.
+??????? Ivan Risti?. +
+ +the $ssl_session_id variable contained full session serialized +instead of just a session id.
+Thanks to Ivan Risti?. +
+
+ + + +nginx ??????????? ??????????? ?????????????? ?????? "?" ? ??????? SSI include. + + +nginx incorrectly handled escaped "?" character in the "include" SSI command. + + + + + +?????? ngx_http_dav_module ?? ???????????? ??????? URI ??? +????????? ??????? COPY ? MOVE. + + +the ngx_http_dav_module did not unescape destination URI +of the COPY and MOVE methods. + + + + + +resolver ?? ??????? ???????? ????? ? ?????? ? ?????. +??????? Yichun Zhang. + + +resolver did not understand domain names with a trailing dot. +Thanks to Yichun Zhang. + + + + + +??? ????????????? ? ????? ????? ?????????? ????????? "zero size buf in output"; +?????? ????????? ? 1.3.9. + + +alerts "zero size buf in output" might appear in logs while proxying; +the bug had appeared in 1.3.9. + + + + + +? ??????? ???????? ??? ????????? segmentation fault, +???? ????????????? ?????? ngx_http_spdy_module. + + +a segmentation fault might occur in a worker process +if the ngx_http_spdy_module was used. + + + + + +??? ????????????? ??????? ????????? ?????????? select, poll ? /dev/poll +???????????? WebSocket-?????????? ????? ???????? ????? ????? ????????. + + +proxied WebSocket connections might hang right after handshake +if the select, poll, or /dev/poll methods were used. + + + + + +????????? xclient ????????? ??????-??????? +??????????? ?????????? IPv6-??????. + + +the "xclient" directive of the mail proxy module +incorrectly handled IPv6 client addresses. + + + +
+ + From mdounin at mdounin.ru Wed Jan 22 13:58:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 13:58:02 +0000 Subject: [nginx] release-1.5.9 tag Message-ID: details: http://hg.nginx.org/nginx/rev/fff0f73673c3 branches: changeset: 5535:fff0f73673c3 user: Maxim Dounin date: Wed Jan 22 17:42:59 2014 +0400 description: release-1.5.9 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -364,3 +364,4 @@ 60e0409b9ec7ee194c6d8102f0656598cc4a6cfe 70c5cd3a61cb476c2afb3a61826e59c7cda0b7a7 release-1.5.6 9ba2542d75bf62a3972278c63561fc2ef5ec573a release-1.5.7 eaa76f24975948b0ce8be01838d949122d44ed67 release-1.5.8 +5a1759f33b7fa6270e1617c08d7e655b7b127f26 release-1.5.9 From mdounin at mdounin.ru Wed Jan 22 16:51:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 20:51:50 +0400 Subject: Help with shared memory usage In-Reply-To: References: <20130729171109.GA2130@mdounin.ru> <20130730100931.GD2130@mdounin.ru> <20131006093708.GY62063@mdounin.ru> <20131220164923.GK95113@mdounin.ru> Message-ID: <20140122165150.GP1835@mdounin.ru> Hello! On Wed, Jan 22, 2014 at 01:39:54AM -0200, Wandenberg Peixoto wrote: > Hello Maxim, > > did you have opportunity to take a look on this last patch? It looks more or less correct, though I don't happy with the checks done, and there are various style issues. I'm planning to look into it and build a better version as time permits. > > > Regards, > Wandenberg > > > On Thu, Dec 26, 2013 at 10:12 PM, Wandenberg Peixoto > wrote: > > > Hello Maxim, > > > > I changed the patch to check only the p->next pointer. > > And checking if the page is in an address less than the (pool->pages + > > pages). > > > > + ngx_slab_page_t *prev, *p; > > + ngx_uint_t pages; > > + size_t size; > > + > > + size = pool->end - (u_char *) pool - sizeof(ngx_slab_pool_t); > > + pages = (ngx_uint_t) (size / (ngx_pagesize + > > sizeof(ngx_slab_page_t))); > > + > > + p = &page[page->slab]; > > + > > + if ((p < pool->pages + pages) && > > + (p->next != NULL) && > > + ((p->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) { > > > > > > I hope that now I did the right checks. > > > > Regards, > > Wandenberg > > > > > > On Fri, Dec 20, 2013 at 2:49 PM, Maxim Dounin wrote: > > > >> Hello! > >> > >> On Tue, Dec 17, 2013 at 09:05:16PM -0200, Wandenberg Peixoto wrote: > >> > >> > Hi Maxim, > >> > > >> > sorry for the long delay. I hope you remember my issue. > >> > In attach is the new patch with the changes you suggest. > >> > Can you check it again? I hope it can be applied to nginx code now. > >> > > >> > About this point "2. There is probably no need to check both prev and > >> > next.", > >> > I check both pointers to avoid a segmentation fault, since in some > >> > situations the next can be NULL and the prev may point to pool->free. > >> > As far as I could follow the code seems to me that could happen one of > >> this > >> > pointers, next or prev, point to a NULL. > >> > I just made a double check to protect. > >> > >> As far as I see, all pages in the pool->free list are expected to > >> have both p->prev and p->next set. And all pages with type > >> NGX_SLAB_PAGE will be either on the pool->free list, or will have > >> p->next set to NULL. > >> > >> [...] 
> >> > >> > > > +{ > >> > > > + ngx_slab_page_t *neighbour = &page[page->slab]; > >> > > > >> > > Here "neighbour" may point outside of the allocated page > >> > > structures if we are freeing the last page in the pool. > >> > >> It looks like you've tried to address this problem with the > >> following check: > >> > >> > +static ngx_int_t > >> > +ngx_slab_merge_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page) > >> > +{ > >> > + ngx_slab_page_t *prev, *p; > >> > + > >> > + p = &page[page->slab]; > >> > + if ((u_char *) p >= pool->end) { > >> > + return NGX_DECLINED; > >> > + } > >> > >> This looks wrong, as pool->end points to the end of the pool, not > >> the end of allocated page structures. > >> > >> -- > >> Maxim Dounin > >> http://nginx.org/ > >> > >> _______________________________________________ > >> nginx-devel mailing list > >> nginx-devel at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > >> > > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jan 22 18:10:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 22:10:58 +0400 Subject: [PATCH 1 of 3] Mail: add IMAP ID command support (RFC2971) In-Reply-To: <3ad4498760c6fcd2ba24.1390129855@HPC> References: <3ad4498760c6fcd2ba24.1390129855@HPC> Message-ID: <20140122181057.GQ1835@mdounin.ru> Hello! On Sun, Jan 19, 2014 at 12:10:55PM +0100, Filipe da Silva wrote: > # HG changeset patch > # User Filipe da Silva > # Date 1390129333 -3600 > # Sun Jan 19 12:02:13 2014 +0100 > # Node ID 3ad4498760c6fcd2ba24ae84f6d924b3a1a35a31 > # Parent bb3dc21c89efa8cfd1b9f661fcfd2f155687b99a > Mail: add IMAP ID command support (RFC2971). > > Parse the ID command and its arguments. > Handle the server response to ID command. 
> > diff -r bb3dc21c89ef -r 3ad4498760c6 src/mail/ngx_mail.h > --- a/src/mail/ngx_mail.h Fri Jan 17 22:06:04 2014 +0400 > +++ b/src/mail/ngx_mail.h Sun Jan 19 12:02:13 2014 +0100 > @@ -215,6 +215,7 @@ typedef struct { > unsigned quoted:1; > unsigned backslash:1; > unsigned no_sync_literal:1; > + unsigned params_list:1; > unsigned starttls:1; > unsigned esmtp:1; > unsigned auth_method:3; > @@ -233,6 +234,7 @@ typedef struct { > ngx_str_t smtp_helo; > ngx_str_t smtp_from; > ngx_str_t smtp_to; > + ngx_str_t imap_id; > > ngx_str_t cmd; > > @@ -279,10 +281,10 @@ typedef struct { > #define NGX_IMAP_CAPABILITY 3 > #define NGX_IMAP_NOOP 4 > #define NGX_IMAP_STARTTLS 5 > +#define NGX_IMAP_AUTHENTICATE 6 > +#define NGX_IMAP_ID 7 > > -#define NGX_IMAP_NEXT 6 > - > -#define NGX_IMAP_AUTHENTICATE 7 > +#define NGX_IMAP_NEXT 8 > > > #define NGX_SMTP_HELO 1 > diff -r bb3dc21c89ef -r 3ad4498760c6 src/mail/ngx_mail_imap_handler.c > --- a/src/mail/ngx_mail_imap_handler.c Fri Jan 17 22:06:04 2014 +0400 > +++ b/src/mail/ngx_mail_imap_handler.c Sun Jan 19 12:02:13 2014 +0100 > @@ -18,6 +18,8 @@ static ngx_int_t ngx_mail_imap_authentic > ngx_connection_t *c); > static ngx_int_t ngx_mail_imap_capability(ngx_mail_session_t *s, > ngx_connection_t *c); > +static ngx_int_t ngx_mail_imap_id(ngx_mail_session_t *s, > + ngx_connection_t *c); > static ngx_int_t ngx_mail_imap_starttls(ngx_mail_session_t *s, > ngx_connection_t *c); > > @@ -31,6 +33,7 @@ static u_char imap_username[] = "+ VXNl > static u_char imap_password[] = "+ UGFzc3dvcmQ6" CRLF; > static u_char imap_bye[] = "* BYE" CRLF; > static u_char imap_invalid_command[] = "BAD invalid command" CRLF; > +static u_char imap_server_id_nil[] = "* ID NIL" CRLF; > > > void > @@ -183,6 +186,10 @@ ngx_mail_imap_auth_state(ngx_event_t *re > rc = ngx_mail_imap_capability(s, c); > break; > > + case NGX_IMAP_ID: > + rc = ngx_mail_imap_id(s, c); > + break; > + > case NGX_IMAP_LOGOUT: > s->quit = 1; > ngx_str_set(&s->text, imap_bye); > @@ -438,6 +445,86 @@ ngx_mail_imap_capability(ngx_mail_sessio > > > static ngx_int_t > +ngx_mail_imap_id(ngx_mail_session_t *s, ngx_connection_t *c) > +{ > + ngx_uint_t i; > + ngx_str_t *arg, cmd; > + > + if (s->args.nelts == 0) { > + return NGX_MAIL_PARSE_INVALID_COMMAND; > + } > + > + arg = s->args.elts; > + cmd.data = s->tag.data + s->tag.len; > + cmd.len = s->arg_end - cmd.data; > + > + /* Client may send ID NIL or ID ( "key" "value" ... ) */ Comment style looks inconsistent (either don't use capitalization, or properly use trailing dots). And the comment itself is misleading as syntax provided is contradicts one from RFC. > + if (s->args.nelts == 1) { > + if (cmd.len != 6 > + || ngx_strncasecmp(cmd.data, (u_char *) "ID NIL", 6) != 0) { Style: the "{" should be on it's own line. > + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "ID invalid argument:\"%V\"", > + &cmd); The message looks not very in-line with other debug messages, and it probably only needs two lines. > + return NGX_MAIL_PARSE_INVALID_COMMAND; > + } > + > + goto valid; > + } > + > + /* more than one and not an even item count */ > + if (s->args.nelts % 2 != 0) { > + return NGX_MAIL_PARSE_INVALID_COMMAND; > + } The comment seems to be obvious enough to be confusing. 
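(For reference, the client side of the exchange in RFC 2971 looks roughly like

   a023 ID ("name" "sodr" "version" "19.34")

or

   a042 ID NIL

i.e. the argument is either NIL or a parenthesized, possibly empty, list of string key/value pairs. Quoting from memory here, see the RFC for the exact grammar and examples.)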
> + > + for (i = 0; i < s->args.nelts; i += 2) { > + > + switch (arg[i].len) { > + > + case 0: > + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "ID empty key #%ui name : \"\"", i ); > + return NGX_MAIL_PARSE_INVALID_COMMAND; > + > + case 3: > + if (ngx_strncasecmp(arg[i].data, (u_char *) "NIL", 3) == 0) { > + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "ID NIL Key #%ui name \"%V\"", i, > + &arg[i]); > + return NGX_MAIL_PARSE_INVALID_COMMAND; > + } > + break; > + > + default: > + if (arg[i].len > 30) { > + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "ID Key #%ui name \"%V\" is too long", > + i, &arg[i]); > + return NGX_MAIL_PARSE_INVALID_COMMAND; > + } > + break; > + } > + } This code looks unneeded and incorrect. E.g., it will reject something like 't ID ("nil" "foo")'. > + > +valid: > + s->imap_id.len = cmd.len; > + s->imap_id.data = ngx_pnalloc(c->pool, cmd.len); > + if (s->imap_id.data == NULL) { > + return NGX_ERROR; > + } > + > + ngx_memcpy(s->imap_id.data, cmd.data, cmd.len); > + > + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "imap client ID:\"%V%V\"", &s->tag, &s->imap_id); > + > + /* Prepare server response to ID command */ > + ngx_str_set(&s->text, imap_server_id_nil); See above. > + > + return NGX_OK; > +} > + > + > +static ngx_int_t > ngx_mail_imap_starttls(ngx_mail_session_t *s, ngx_connection_t *c) > { > #if (NGX_MAIL_SSL) > diff -r bb3dc21c89ef -r 3ad4498760c6 src/mail/ngx_mail_parse.c > --- a/src/mail/ngx_mail_parse.c Fri Jan 17 22:06:04 2014 +0400 > +++ b/src/mail/ngx_mail_parse.c Sun Jan 19 12:02:13 2014 +0100 > @@ -280,6 +280,17 @@ ngx_mail_imap_parse_command(ngx_mail_ses > > switch (p - c) { > > + case 2: > + if ((c[0] == 'I' || c[0] == 'i') > + && (c[1] == 'D'|| c[1] == 'd')) > + { > + s->command = NGX_IMAP_ID; > + > + } else { > + goto invalid; > + } > + break; > + > case 4: > if ((c[0] == 'N' || c[0] == 'n') > && (c[1] == 'O'|| c[1] == 'o') > @@ -409,14 +420,32 @@ ngx_mail_imap_parse_command(ngx_mail_ses > case ' ': > break; > case CR: > + if (s->params_list) > + goto invalid; I believe I already wrote about ifs without curly brackets in previous review. > state = sw_almost_done; > s->arg_end = p; > break; > case LF: > + if (s->params_list) > + goto invalid; > s->arg_end = p; > goto done; > + case '(': // params list begin > + if (!s->params_list && s->args.nelts == 0) { > + s->params_list = 1; > + break; > + } > + goto invalid; As well as about C99-style comments. > + case ')': // params list closing > + if (s->params_list && s->args.nelts > 0) { > + s->params_list = 0; > + state = sw_spaces_before_argument; > + break; > + } > + goto invalid; And about empty parameters lists allowed by RFC 2971. It may be a good idea to pay a bit more attention to comments already got, as it's quickly becomes boring to repeat the same comments again. [... rest of the patch skipped without actual review ...] -- Maxim Dounin http://nginx.org/ From agentzh at gmail.com Wed Jan 22 18:47:09 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 22 Jan 2014 10:47:09 -0800 Subject: Help with shared memory usage In-Reply-To: <20140122165150.GP1835@mdounin.ru> References: <20130729171109.GA2130@mdounin.ru> <20130730100931.GD2130@mdounin.ru> <20131006093708.GY62063@mdounin.ru> <20131220164923.GK95113@mdounin.ru> <20140122165150.GP1835@mdounin.ru> Message-ID: Hello Maxim! On Wed, Jan 22, 2014 at 8:51 AM, Maxim Dounin wrote: > > It looks more or less correct, though I don't happy with the > checks done, and there are various style issues. 
I'm planning to > look into it and build a better version as time permits. > We're also having this issue. Hopefully this patch can get updated and merged soon :) Thanks for your time! Best regards, -agentzh From jimpop at gmail.com Wed Jan 22 19:56:38 2014 From: jimpop at gmail.com (Jim Popovitch) Date: Wed, 22 Jan 2014 14:56:38 -0500 Subject: v1.5.9 compiled size is 8x larger than v1.5.8 Message-ID: Hello, I'm seeing a strange problem with v1.5.9, the compiled binary is 8 times larger than it was on v1.5.8. My build environment (debian) is the same that I've used for v1.5.8. Production: ~$ nginx -V nginx version: nginx/1.5.8 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-http_gzip_static_module --with-http_ssl_module --with-http_spdy_module --with-ipv6 --without-http_browser_module --without-http_geo_module --without-http_limit_req_module --without-http_limit_zone_module --without-http_memcached_module --without-http_referer_module --without-http_scgi_module --without-http_split_clients_module --with-http_stub_status_module --without-http_ssi_module --without-http_userid_module --without-http_uwsgi_module ~$ ls -al /usr/sbin/nginx -rwxr-xr-x 1 root root 613080 Dec 30 19:15 /usr/sbin/nginx Testbed: ~$ nginx -V nginx version: nginx/1.5.9 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-http_gzip_static_module --with-http_ssl_module --with-http_spdy_module --with-ipv6 --without-http_browser_module --without-http_geo_module --without-http_limit_req_module --without-http_limit_zone_module --without-http_memcached_module --without-http_referer_module --without-http_scgi_module --without-http_split_clients_module --with-http_stub_status_module --without-http_ssi_module --without-http_userid_module --without-http_uwsgi_module ~$ ls -al /usr/sbin/nginx -rwxr-xr-x 1 root root 5226396 Jan 22 16:11 /usr/sbin/nginx What could be causing the increased binary size? Thank you, -Jim P. From ru at nginx.com Wed Jan 22 20:23:25 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 23 Jan 2014 00:23:25 +0400 Subject: v1.5.9 compiled size is 8x larger than v1.5.8 In-Reply-To: References: Message-ID: <20140122202325.GH70529@lo0.su> On Wed, Jan 22, 2014 at 02:56:38PM -0500, Jim Popovitch wrote: > I'm seeing a strange problem with v1.5.9, the compiled binary is 8 > times larger than it was on v1.5.8. My build environment (debian) is > the same that I've used for v1.5.8. 
> > Production: > ~$ nginx -V > nginx version: nginx/1.5.8 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-log-path=/var/log/nginx/access.log > --http-proxy-temp-path=/var/lib/nginx/proxy > --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid > --with-pcre-jit --with-http_gzip_static_module --with-http_ssl_module > --with-http_spdy_module --with-ipv6 --without-http_browser_module > --without-http_geo_module --without-http_limit_req_module > --without-http_limit_zone_module --without-http_memcached_module > --without-http_referer_module --without-http_scgi_module > --without-http_split_clients_module --with-http_stub_status_module > --without-http_ssi_module --without-http_userid_module > --without-http_uwsgi_module > ~$ ls -al /usr/sbin/nginx > -rwxr-xr-x 1 root root 613080 Dec 30 19:15 /usr/sbin/nginx > > > Testbed: > ~$ nginx -V > nginx version: nginx/1.5.9 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-log-path=/var/log/nginx/access.log > --http-proxy-temp-path=/var/lib/nginx/proxy > --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid > --with-pcre-jit --with-http_gzip_static_module --with-http_ssl_module > --with-http_spdy_module --with-ipv6 --without-http_browser_module > --without-http_geo_module --without-http_limit_req_module > --without-http_limit_zone_module --without-http_memcached_module > --without-http_referer_module --without-http_scgi_module > --without-http_split_clients_module --with-http_stub_status_module > --without-http_ssi_module --without-http_userid_module > --without-http_uwsgi_module > ~$ ls -al /usr/sbin/nginx > -rwxr-xr-x 1 root root 5226396 Jan 22 16:11 /usr/sbin/nginx > > > What could be causing the increased binary size? > > Thank you, Could it be that you lost some shared library that nginx uses? For the starters, compare the outputs of "file /usr/sbin/nginx" (are both binaries are stripped) and "ldd /usr/sbin/nginx" (do they show the same shared libs)? From jimpop at gmail.com Wed Jan 22 20:42:52 2014 From: jimpop at gmail.com (Jim Popovitch) Date: Wed, 22 Jan 2014 15:42:52 -0500 Subject: v1.5.9 compiled size is 8x larger than v1.5.8 In-Reply-To: <20140122202325.GH70529@lo0.su> References: <20140122202325.GH70529@lo0.su> Message-ID: On Wed, Jan 22, 2014 at 3:23 PM, Ruslan Ermilov wrote: > Could it be that you lost some shared library that nginx uses? > > For the starters, compare the outputs of "file /usr/sbin/nginx" > (are both binaries are stripped) and "ldd /usr/sbin/nginx" (do > they show the same shared libs)? Hmmm, the 2 environments (Production and Testbed) are identical, Debian 7.3 x86_64. 
Production (v1.5.8): ~$ file /usr/sbin/nginx /usr/sbin/nginx: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0xe63406c0f961166e7b17abd320290fb2c8e976ee, stripped ~$ ldd /usr/sbin/nginx linux-vdso.so.1 => (0x00007fffc67ff000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9e6d744000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f9e6d50d000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f9e6d2cf000) libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f9e6d06f000) libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f9e6cc8b000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9e6ca86000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f9e6c86f000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9e6c4e5000) /lib64/ld-linux-x86-64.so.2 (0x00007f9e6d965000) Testbed (v1.5.9) ~$ file /usr/sbin/nginx /usr/sbin/nginx: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0x38b6c11381d456bce11faaa5c5178f68886816a5, not stripped ~$ ldd /usr/sbin/nginx linux-vdso.so.1 => (0x00007fff24d4a000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb3a7424000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fb3a71ed000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fb3a6faf000) libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fb3a6d4f000) libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fb3a696b000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fb3a6766000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fb3a654f000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb3a61c5000) /lib64/ld-linux-x86-64.so.2 (0x00007fb3a7645000) -Jim P. From jimpop at gmail.com Wed Jan 22 20:50:22 2014 From: jimpop at gmail.com (Jim Popovitch) Date: Wed, 22 Jan 2014 15:50:22 -0500 Subject: v1.5.9 compiled size is 8x larger than v1.5.8 In-Reply-To: References: <20140122202325.GH70529@lo0.su> Message-ID: On Wed, Jan 22, 2014 at 3:42 PM, Jim Popovitch wrote: > not stripped Oh man, i feel like an idiot. I didn't see that the first time I read it. Sorry for the noise. -Jim P. From cubicdaiya at gmail.com Thu Jan 23 13:30:40 2014 From: cubicdaiya at gmail.com (cubicdaiya) Date: Thu, 23 Jan 2014 22:30:40 +0900 Subject: [PATCH]fix typo in ngx_http_range_filter_module.c Message-ID: # HG changeset patch # User Tatsuhiko Kubo # Date 1390482599 -32400 # Thu Jan 23 22:09:59 2014 +0900 # Node ID 5fe4d5140563b51a6530847e7e3a5d2a2031bd8a # Parent fff0f73673c3dbe09348913b61925994881bf72e fix typo diff -r fff0f73673c3 -r 5fe4d5140563 src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Wed Jan 22 17:42:59 2014 +0400 +++ b/src/http/modules/ngx_http_range_filter_module.c Thu Jan 23 22:09:59 2014 +0900 @@ -22,7 +22,7 @@ * ... data ... * * - * the mutlipart format: + * the multipart format: * * "HTTP/1.0 206 Partial Content" CRLF * ... header ... -- Tatsuhiko Kubo E-Mail: cubicdaiya at gmail.com Profile: http://cccis.jp/index_en.html Github:https://github.com/cubicdaiya -------------- next part -------------- A non-text attachment was scrubbed... 
Name: fix_typo.patch Type: application/octet-stream Size: 654 bytes Desc: not available URL: From mdounin at mdounin.ru Thu Jan 23 14:33:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Jan 2014 14:33:24 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/2a81949cbd7c branches: changeset: 5536:2a81949cbd7c user: Maxim Dounin date: Thu Jan 23 18:32:25 2014 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1005009 -#define NGINX_VERSION "1.5.9" +#define nginx_version 1005010 +#define NGINX_VERSION "1.5.10" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From mdounin at mdounin.ru Thu Jan 23 14:33:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Jan 2014 14:33:25 +0000 Subject: [nginx] SSL: fixed $ssl_session_id possible segfault after 97e37... Message-ID: details: http://hg.nginx.org/nginx/rev/49b1ad48b55c branches: changeset: 5537:49b1ad48b55c user: Maxim Dounin date: Thu Jan 23 18:32:26 2014 +0400 description: SSL: fixed $ssl_session_id possible segfault after 97e3769637a7. Even during execution of a request it is possible that there will be no session available, notably in case of renegotiation. As a result logging of $ssl_session_id in some cases caused NULL pointer dereference after revision 97e3769637a7 (1.5.9). The check added returns an empty string if there is no session available. diffstat: src/event/ngx_event_openssl.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2508,6 +2508,10 @@ ngx_ssl_get_session_id(ngx_connection_t SSL_SESSION *sess; sess = SSL_get0_session(c->ssl->connection); + if (sess == NULL) { + s->len = 0; + return NGX_OK; + } buf = sess->session_id; len = sess->session_id_length; From vbart at nginx.com Thu Jan 23 15:27:54 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 23 Jan 2014 15:27:54 +0000 Subject: [nginx] Typo fixed. Message-ID: details: http://hg.nginx.org/nginx/rev/a387ce36744a branches: changeset: 5538:a387ce36744a user: Tatsuhiko Kubo date: Thu Jan 23 22:09:59 2014 +0900 description: Typo fixed. diffstat: src/http/modules/ngx_http_range_filter_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 49b1ad48b55c -r a387ce36744a src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Thu Jan 23 18:32:26 2014 +0400 +++ b/src/http/modules/ngx_http_range_filter_module.c Thu Jan 23 22:09:59 2014 +0900 @@ -22,7 +22,7 @@ * ... data ... * * - * the mutlipart format: + * the multipart format: * * "HTTP/1.0 206 Partial Content" CRLF * ... header ... From vbart at nginx.com Thu Jan 23 15:32:40 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Thu, 23 Jan 2014 19:32:40 +0400 Subject: [PATCH]fix typo in ngx_http_range_filter_module.c In-Reply-To: References: Message-ID: <2499190.3cuUrqkCr9@vbart-laptop> On Thursday 23 January 2014 22:30:40 cubicdaiya wrote: > # HG changeset patch > # User Tatsuhiko Kubo > # Date 1390482599 -32400 > # Thu Jan 23 22:09:59 2014 +0900 > # Node ID 5fe4d5140563b51a6530847e7e3a5d2a2031bd8a > # Parent fff0f73673c3dbe09348913b61925994881bf72e > fix typo > [..] Thanks, committed with modified commit message. wbr, Valentin V. Bartenev From albertcasademont at gmail.com Thu Jan 23 17:22:40 2014 From: albertcasademont at gmail.com (Albert Casademont Filella) Date: Thu, 23 Jan 2014 18:22:40 +0100 Subject: An "error" that maybe should be an "info" or "debug" Message-ID: Hi all, We have a media server and we protect certain files from being accessed by external ips, we just allow access from our static ip. Everything works ok and a 403 is thrown to those who try to access the files. server { listen ***; root ***; {some other settings} # protect original image files location ~* "/[0-9]+\.(jpg|jpeg|tif|tiff|png|gif)$" { allow ***; deny all; } } However, an entry like this is logged in the error.log 2014/01/23 17:44:44 [error] 31071#0: *2701529 access forbidden by rule, client: ***, server: ***, request: "GET /29531.png HTTP/1.1", host: "***" Is this really an nginx error? Couldn't this be lowered to an "info" or "debug" level message? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Thu Jan 23 20:16:48 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 23 Jan 2014 12:16:48 -0800 Subject: [PATH] Variable: setting $args should invalidate unparsed uri. Message-ID: Hello! A user of mine, rvsw, reported an issue in the $args builtin variable. Setting $args does not change the r->valid_unparsed_uri flag so modules like ngx_proxy might still use the unparsed URI even when the $args has been assigned to a new value. A minimal example that can reproduce this is as follows: server { listen 54123; location = /t { return 200 "args: $args"; } } server { listen 8080; location = /t { set $args "foo=1&bar=2"; proxy_pass http://127.0.0.1:54123; } } Querying localhost:8080/t should give args: foo=1&bar=2 But we're getting args: with the current nginx core. The patch attached to this email fixes this issue. Thanks! -agentzh # HG changeset patch # User Yichun Zhang # Date 1390506359 28800 # Node ID 17186b98c235c07e94c64e5853689f790f173756 # Parent 4b50d1f299d8a69f3e3f7975132e1490352642fe Variable: setting $args should invalidate unparsed uri. 
diff -r 4b50d1f299d8 -r 17186b98c235 src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Fri Jan 10 11:22:14 2014 -0800 +++ b/src/http/ngx_http_variables.c Thu Jan 23 11:45:59 2014 -0800 @@ -15,6 +15,8 @@ ngx_http_variable_value_t *v, uintptr_t data); static void ngx_http_variable_request_set(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static void ngx_http_variable_request_args_set(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_request_get_size(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static void ngx_http_variable_request_set_size(ngx_http_request_t *r, @@ -218,7 +220,7 @@ NGX_HTTP_VAR_NOCACHEABLE, 0 }, { ngx_string("args"), - ngx_http_variable_request_set, + ngx_http_variable_request_args_set, ngx_http_variable_request, offsetof(ngx_http_request_t, args), NGX_HTTP_VAR_CHANGEABLE|NGX_HTTP_VAR_NOCACHEABLE, 0 }, @@ -647,6 +649,15 @@ static void +ngx_http_variable_request_args_set(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + r->valid_unparsed_uri = 0; + ngx_http_variable_request_set(r, v, data); +} + + +static void ngx_http_variable_request_set(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.5.8-setting_args_invalidates_uri.patch Type: text/x-patch Size: 1621 bytes Desc: not available URL: From mdounin at mdounin.ru Fri Jan 24 15:46:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 Jan 2014 19:46:22 +0400 Subject: An "error" that maybe should be an "info" or "debug" In-Reply-To: References: Message-ID: <20140124154622.GO1835@mdounin.ru> Hello! On Thu, Jan 23, 2014 at 06:22:40PM +0100, Albert Casademont Filella wrote: > Hi all, > > We have a media server and we protect certain files from being accessed by > external ips, we just allow access from our static ip. Everything works ok > and a 403 is thrown to those who try to access the files. > > server { > listen ***; > root ***; > > {some other settings} > > # protect original image files > location ~* "/[0-9]+\.(jpg|jpeg|tif|tiff|png|gif)$" { > allow ***; > deny all; > } > } > > However, an entry like this is logged in the error.log > > 2014/01/23 17:44:44 [error] 31071#0: *2701529 access forbidden by rule, > client: ***, server: ***, request: "GET /29531.png HTTP/1.1", host: "***" > > Is this really an nginx error? Couldn't this be lowered to an "info" or > "debug" level message? While this isn't an nginx error, it's either user or a configuration error, and lowering the logging level will likely result in confusion if the access is forbidden unintentionally and/or for other reasons it's not clear why the error is returned. Currently, the error_log with a higher level can be used in a particular location if such errors are common and shouldn't be logged. If it doesn't work for you for some reason, than adding something similar to log_not_found and/or limit_req_log_level directives can be considered. 
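For example, something along these lines should work (untested here, and the log path is just a placeholder for whatever you use):

location ~* "/[0-9]+\.(jpg|jpeg|tif|tiff|png|gif)$" {
    # log only crit and above for this location, so the expected
    # "access forbidden by rule" messages are not written
    error_log /var/log/nginx/error.log crit;

    allow ***;
    deny all;
}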
-- Maxim Dounin http://nginx.org/ From flevionnois at gmail.com Fri Jan 24 20:40:32 2014 From: flevionnois at gmail.com (flevionnois at gmail.com) Date: Fri, 24 Jan 2014 21:40:32 +0100 Subject: [PATCH 0 of 1] Mail: add support for SSL client certificate Message-ID: Add support for mail SSL client auth Take into account Sven Peter patch http://forum.nginx.org/read.php?29,246309,246328#msg-246328 and transmit the client certificate to the backend server From flevionnois at gmail.com Fri Jan 24 20:40:33 2014 From: flevionnois at gmail.com (flevionnois at gmail.com) Date: Fri, 24 Jan 2014 21:40:33 +0100 Subject: [PATCH 1 of 1] Mail: added support for SSL client certificate In-Reply-To: References: Message-ID: # HG changeset patch # User Franck Levionnois # Date 1390577176 -3600 # Fri Jan 24 16:26:16 2014 +0100 # Node ID d7b8381c200e300c2b6729574f4c2ab537804f56 # Parent a387ce36744aa36b50e8171dbf01ef716748327e Mail: added support for SSL client certificate Add support for SSL module like HTTP. Added mail configuration directives (like http): ssl_verify_client, ssl_verify_depth, ssl_client_certificate, ssl_trusted_certificate, ssl_crl Added headers: Auth-Certificate, Auth-Certificate-Verify, Auth-Issuer-DN, Auth-Subject-DN, Auth-Subject-Serial diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_auth_http_module.c Fri Jan 24 16:26:16 2014 +0100 @@ -1135,6 +1135,32 @@ "mail auth http dummy handler"); } +#if (NGX_MAIL_SSL) +static ngx_int_t +ngx_ssl_get_certificate_oneline(ngx_connection_t *c, ngx_pool_t *pool, + ngx_str_t *b64_cert) +{ + ngx_str_t pemCert; + if (ngx_ssl_get_raw_certificate(c, pool, &pemCert) != NGX_OK) { + return NGX_ERROR; + } + + if (pemCert.len == 0) { + b64_cert->len = 0; + return NGX_OK; + } + + b64_cert->len = ngx_base64_encoded_length(pemCert.len); + b64_cert->data = ngx_palloc( pool, b64_cert->len); + if (b64_cert->data == NULL) { + b64_cert->len = 0; + return NGX_ERROR; + } + ngx_encode_base64(b64_cert, &pemCert); + + return NGX_OK; +} +#endif static ngx_buf_t * ngx_mail_auth_http_create_request(ngx_mail_session_t *s, ngx_pool_t *pool, @@ -1142,7 +1168,9 @@ { size_t len; ngx_buf_t *b; - ngx_str_t login, passwd; + ngx_str_t login, passwd, client_cert, client_verify, + client_subject, client_issuer, + client_serial; ngx_mail_core_srv_conf_t *cscf; if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { @@ -1155,6 +1183,42 @@ cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); +#if (NGX_MAIL_SSL) + if (s->connection->ssl) { + if (ngx_ssl_get_client_verify(s->connection, pool, + &client_verify) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_subject_dn(s->connection, pool, + &client_subject) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_issuer_dn(s->connection, pool, + &client_issuer) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_serial_number(s->connection, pool, + &client_serial) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_certificate_oneline(s->connection, pool, + &client_cert) != NGX_OK) { + return NULL; + } + } else { + client_verify.len = 0; + client_issuer.len = 0; + client_subject.len = 0; + client_serial.len = 0; + client_cert.len = 0; + } + +#endif + len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - 1 + sizeof("Host: ") - 1 + ahcf->host_header.len + sizeof(CRLF) - 1 + sizeof("Auth-Method: ") - 1 @@ -1163,6 +1227,18 @@ + sizeof("Auth-User: ") - 1 + login.len + sizeof(CRLF) - 1 + 
sizeof("Auth-Pass: ") - 1 + passwd.len + sizeof(CRLF) - 1 + sizeof("Auth-Salt: ") - 1 + s->salt.len +#if (NGX_MAIL_SSL) + + sizeof("Auth-Certificate: ") - 1 + client_cert.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Certificate-Verify: ") - 1 + client_verify.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Issuer-DN: ") - 1 + client_issuer.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Subject-DN: ") - 1 + client_subject.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Subject-Serial: ") - 1 + client_serial.len + + sizeof(CRLF) - 1 +#endif + sizeof("Auth-Protocol: ") - 1 + cscf->protocol->name.len + sizeof(CRLF) - 1 + sizeof("Auth-Login-Attempt: ") - 1 + NGX_INT_T_LEN @@ -1212,7 +1288,43 @@ s->passwd.data = NULL; } - +#if (NGX_MAIL_SSL) + if ( client_cert.len ) + { + b->last = ngx_cpymem(b->last, "Auth-Certificate: ", + sizeof("Auth-Certificate: ") - 1); + b->last = ngx_copy(b->last, client_cert.data, client_cert.len); + *b->last++ = CR; *b->last++ = LF; + } + if ( client_verify.len ) + { + b->last = ngx_cpymem(b->last, "Auth-Certificate-Verify: ", + sizeof("Auth-Certificate-Verify: ") - 1); + b->last = ngx_copy(b->last, client_verify.data, client_verify.len); + *b->last++ = CR; *b->last++ = LF; + } + if ( client_issuer.len ) + { + b->last = ngx_cpymem(b->last, "Auth-Issuer-DN: ", + sizeof("Auth-Issuer-DN: ") - 1); + b->last = ngx_copy(b->last, client_issuer.data, client_issuer.len); + *b->last++ = CR; *b->last++ = LF; + } + if ( client_subject.len ) + { + b->last = ngx_cpymem(b->last, "Auth-Subject-DN: ", + sizeof("Auth-Subject-DN: ") - 1); + b->last = ngx_copy(b->last, client_subject.data, client_subject.len); + *b->last++ = CR; *b->last++ = LF; + } + if ( client_serial.len ) + { + b->last = ngx_cpymem(b->last, "Auth-Subject-Serial: ", + sizeof("Auth-Subject-Serial: ") - 1); + b->last = ngx_copy(b->last, client_serial.data, client_serial.len); + *b->last++ = CR; *b->last++ = LF; + } +#endif b->last = ngx_cpymem(b->last, "Auth-Protocol: ", sizeof("Auth-Protocol: ") - 1); b->last = ngx_cpymem(b->last, cscf->protocol->name.data, diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_handler.c Fri Jan 24 16:26:16 2014 +0100 @@ -236,11 +236,59 @@ { ngx_mail_session_t *s; ngx_mail_core_srv_conf_t *cscf; +#if (NGX_MAIL_SSL) + ngx_mail_ssl_conf_t *sslcf; +#endif + + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, + "ngx_mail_ssl_handshake_handler handshaked: %d ", + c->ssl->handshaked ); if (c->ssl->handshaked) { s = c->data; +#if (NGX_MAIL_SSL) + sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); + if (sslcf->verify) { + long rc; + + rc = SSL_get_verify_result(c->ssl->connection); + + if (rc != X509_V_OK + && (sslcf->verify != 3 || !ngx_ssl_verify_error_optional(rc))) + { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client SSL certificate verify error: (%l:%s)", + rc, X509_verify_cert_error_string(rc)); + + ngx_ssl_remove_cached_session(sslcf->ssl.ctx, + (SSL_get0_session(c->ssl->connection))); + + ngx_mail_close_connection(c); + return; + } + + if (sslcf->verify == 1) { + X509 *cert; + cert = SSL_get_peer_certificate(c->ssl->connection); + + if (cert == NULL) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client sent no required SSL certificate"); + + ngx_ssl_remove_cached_session(sslcf->ssl.ctx, + (SSL_get0_session(c->ssl->connection))); + + ngx_mail_close_connection(c); + return; + } + + X509_free(cert); + } + } +#endif + if (s->starttls) { cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); @@ 
-276,6 +324,10 @@ s->protocol = cscf->protocol->type; + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, + "ngx_mail_init_session protocol: %d ", + cscf->protocol->type ); + s->ctx = ngx_pcalloc(c->pool, sizeof(void *) * ngx_mail_max_module); if (s->ctx == NULL) { ngx_mail_session_internal_server_error(s); diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_ssl_module.c Fri Jan 24 16:26:16 2014 +0100 @@ -43,6 +43,13 @@ { ngx_null_string, 0 } }; +static ngx_conf_enum_t ngx_mail_ssl_verify[] = { + { ngx_string("off"), 0 }, + { ngx_string("on"), 1 }, + { ngx_string("optional"), 2 }, + { ngx_string("optional_no_ca"), 3 }, + { ngx_null_string, 0 } +}; static ngx_command_t ngx_mail_ssl_commands[] = { @@ -102,6 +109,34 @@ offsetof(ngx_mail_ssl_conf_t, ciphers), NULL }, + { ngx_string("ssl_verify_client"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_enum_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify), + &ngx_mail_ssl_verify }, + + { ngx_string("ssl_verify_depth"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_1MORE, + ngx_conf_set_num_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify_depth), + NULL }, + + { ngx_string("ssl_client_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, client_certificate), + NULL }, + + { ngx_string("ssl_trusted_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, trusted_certificate), + NULL }, + { ngx_string("ssl_prefer_server_ciphers"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -137,6 +172,13 @@ offsetof(ngx_mail_ssl_conf_t, session_timeout), NULL }, + { ngx_string("ssl_crl"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, crl), + NULL }, + ngx_null_command }; @@ -196,6 +238,8 @@ scf->enable = NGX_CONF_UNSET; scf->starttls = NGX_CONF_UNSET_UINT; scf->prefer_server_ciphers = NGX_CONF_UNSET; + scf->verify = NGX_CONF_UNSET_UINT; + scf->verify_depth = NGX_CONF_UNSET_UINT; scf->builtin_session_cache = NGX_CONF_UNSET; scf->session_timeout = NGX_CONF_UNSET; scf->session_tickets = NGX_CONF_UNSET; @@ -228,11 +272,20 @@ (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); + ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); + ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); ngx_conf_merge_str_value(conf->certificate_key, prev->certificate_key, ""); ngx_conf_merge_str_value(conf->dhparam, prev->dhparam, ""); + ngx_conf_merge_str_value(conf->client_certificate, prev->client_certificate, + ""); + ngx_conf_merge_str_value(conf->trusted_certificate, + prev->trusted_certificate, ""); + ngx_conf_merge_str_value(conf->crl, prev->crl, ""); + ngx_conf_merge_str_value(conf->ecdh_curve, prev->ecdh_curve, NGX_DEFAULT_ECDH_CURVE); @@ -318,6 +371,35 @@ return NGX_CONF_ERROR; } + if (conf->verify) { + + if (conf->client_certificate.len == 0 && conf->verify != 3) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no ssl_client_certificate for ssl_client_verify"); + return NGX_CONF_ERROR; + } + + if (ngx_ssl_client_certificate(cf, &conf->ssl, + 
&conf->client_certificate, + conf->verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + } + + if (ngx_ssl_trusted_certificate(cf, &conf->ssl, + &conf->trusted_certificate, + conf->verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + + if (ngx_ssl_crl(cf, &conf->ssl, &conf->crl) != NGX_OK) { + return NGX_CONF_ERROR; + } + if (conf->prefer_server_ciphers) { SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); } diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_ssl_module.h Fri Jan 24 16:26:16 2014 +0100 @@ -28,6 +28,8 @@ ngx_uint_t starttls; ngx_uint_t protocols; + ngx_uint_t verify; + ngx_uint_t verify_depth; ssize_t builtin_session_cache; time_t session_timeout; @@ -36,6 +38,9 @@ ngx_str_t certificate_key; ngx_str_t dhparam; ngx_str_t ecdh_curve; + ngx_str_t client_certificate; + ngx_str_t trusted_certificate; + ngx_str_t crl; ngx_str_t ciphers; From fdasilvayy at gmail.com Sat Jan 25 08:47:09 2014 From: fdasilvayy at gmail.com (Filipe da Silva) Date: Sat, 25 Jan 2014 09:47:09 +0100 Subject: [PATCH] Mail: added support for SSL client certificate In-Reply-To: <[PATCH 1 of 1] Mail: added support for SSL client certificate> References: <[PATCH 1 of 1] Mail: added support for SSL client certificate> Message-ID: <9dc48eeb8e5cb022676d.1390639629@HPC> # HG changeset patch # User Franck Levionnois # Date 1390577176 -3600 # Fri Jan 24 16:26:16 2014 +0100 # Node ID 9dc48eeb8e5cb022676dbbe56e3435d20e822ab3 # Parent a387ce36744aa36b50e8171dbf01ef716748327e Mail: added support for SSL client certificate. Add support for SSL Mutual Authentification like in HTTP module. Added mail configuration directives (like http): ssl_verify_client, ssl_verify_depth, ssl_client_certificate, ssl_trusted_certificate, ssl_crl Added headers: Auth-Certificate, Auth-Certificate-Verify, Auth-Issuer-DN, Auth-Subject-DN, Auth-Subject-Serial diff -r a387ce36744a -r 9dc48eeb8e5c src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_auth_http_module.c Fri Jan 24 16:26:16 2014 +0100 @@ -1135,6 +1135,35 @@ ngx_mail_auth_http_dummy_handler(ngx_eve "mail auth http dummy handler"); } +#if (NGX_MAIL_SSL) + +static ngx_int_t +ngx_ssl_get_certificate_oneline(ngx_connection_t *c, ngx_pool_t *pool, + ngx_str_t *b64_cert) +{ + ngx_str_t pem_cert; + if (ngx_ssl_get_raw_certificate(c, pool, &pem_cert) != NGX_OK) { + return NGX_ERROR; + } + + if (pem_cert.len == 0) { + b64_cert->len = 0; + return NGX_OK; + } + + b64_cert->len = ngx_base64_encoded_length(pem_cert.len); + b64_cert->data = ngx_palloc(pool, b64_cert->len); + if (b64_cert->data == NULL) { + b64_cert->len = 0; + return NGX_ERROR; + } + ngx_encode_base64(b64_cert, &pem_cert); + + return NGX_OK; +} + +#endif + static ngx_buf_t * ngx_mail_auth_http_create_request(ngx_mail_session_t *s, ngx_pool_t *pool, @@ -1143,6 +1172,11 @@ ngx_mail_auth_http_create_request(ngx_ma size_t len; ngx_buf_t *b; ngx_str_t login, passwd; +#if (NGX_MAIL_SSL) + ngx_str_t client_cert, client_verify, + client_subject, client_issuer, + client_serial; +#endif ngx_mail_core_srv_conf_t *cscf; if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { @@ -1155,6 +1189,41 @@ ngx_mail_auth_http_create_request(ngx_ma cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); +#if (NGX_MAIL_SSL) + if (s->connection->ssl) { + if (ngx_ssl_get_client_verify(s->connection, pool, + 
&client_verify) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_subject_dn(s->connection, pool, + &client_subject) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_issuer_dn(s->connection, pool, + &client_issuer) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_serial_number(s->connection, pool, + &client_serial) != NGX_OK) { + return NULL; + } + + if (ngx_ssl_get_certificate_oneline(s->connection, pool, + &client_cert) != NGX_OK) { + return NULL; + } + } else { + client_verify.len = 0; + client_issuer.len = 0; + client_subject.len = 0; + client_serial.len = 0; + client_cert.len = 0; + } +#endif + len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - 1 + sizeof("Host: ") - 1 + ahcf->host_header.len + sizeof(CRLF) - 1 + sizeof("Auth-Method: ") - 1 @@ -1163,6 +1232,18 @@ ngx_mail_auth_http_create_request(ngx_ma + sizeof("Auth-User: ") - 1 + login.len + sizeof(CRLF) - 1 + sizeof("Auth-Pass: ") - 1 + passwd.len + sizeof(CRLF) - 1 + sizeof("Auth-Salt: ") - 1 + s->salt.len +#if (NGX_MAIL_SSL) + + sizeof("Auth-Certificate: ") - 1 + client_cert.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Certificate-Verify: ") - 1 + client_verify.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Issuer-DN: ") - 1 + client_issuer.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Subject-DN: ") - 1 + client_subject.len + + sizeof(CRLF) - 1 + + sizeof("Auth-Subject-Serial: ") - 1 + client_serial.len + + sizeof(CRLF) - 1 +#endif + sizeof("Auth-Protocol: ") - 1 + cscf->protocol->name.len + sizeof(CRLF) - 1 + sizeof("Auth-Login-Attempt: ") - 1 + NGX_INT_T_LEN @@ -1213,6 +1294,44 @@ ngx_mail_auth_http_create_request(ngx_ma s->passwd.data = NULL; } +#if (NGX_MAIL_SSL) + if (client_cert.len) { + b->last = ngx_cpymem(b->last, "Auth-Certificate: ", + sizeof("Auth-Certificate: ") - 1); + b->last = ngx_copy(b->last, client_cert.data, client_cert.len); + *b->last++ = CR; *b->last++ = LF; + } + + if (client_verify.len) { + b->last = ngx_cpymem(b->last, "Auth-Certificate-Verify: ", + sizeof("Auth-Certificate-Verify: ") - 1); + b->last = ngx_copy(b->last, client_verify.data, client_verify.len); + *b->last++ = CR; *b->last++ = LF; + } + + if (client_issuer.len) { + b->last = ngx_cpymem(b->last, "Auth-Issuer-DN: ", + sizeof("Auth-Issuer-DN: ") - 1); + b->last = ngx_copy(b->last, client_issuer.data, client_issuer.len); + *b->last++ = CR; *b->last++ = LF; + } + + if (client_subject.len) { + b->last = ngx_cpymem(b->last, "Auth-Subject-DN: ", + sizeof("Auth-Subject-DN: ") - 1); + b->last = ngx_copy(b->last, client_subject.data, client_subject.len); + *b->last++ = CR; *b->last++ = LF; + } + + if (client_serial.len) { + b->last = ngx_cpymem(b->last, "Auth-Subject-Serial: ", + sizeof("Auth-Subject-Serial: ") - 1); + b->last = ngx_copy(b->last, client_serial.data, client_serial.len); + *b->last++ = CR; *b->last++ = LF; + } + +#endif + b->last = ngx_cpymem(b->last, "Auth-Protocol: ", sizeof("Auth-Protocol: ") - 1); b->last = ngx_cpymem(b->last, cscf->protocol->name.data, diff -r a387ce36744a -r 9dc48eeb8e5c src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_handler.c Fri Jan 24 16:26:16 2014 +0100 @@ -236,11 +236,61 @@ ngx_mail_ssl_handshake_handler(ngx_conne { ngx_mail_session_t *s; ngx_mail_core_srv_conf_t *cscf; +#if (NGX_MAIL_SSL) + ngx_mail_ssl_conf_t *sslcf; +#endif + + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, + "ngx_mail_ssl_handshake_handler handshaked: %d", + c->ssl->handshaked); if (c->ssl->handshaked) { s = c->data; +#if (NGX_MAIL_SSL) + + sslcf = 
ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); + if (sslcf->verify) { + long rc; + + rc = SSL_get_verify_result(c->ssl->connection); + + if (rc != X509_V_OK + && (sslcf->verify != 3 || !ngx_ssl_verify_error_optional(rc))) + { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client SSL certificate verify error: (%l:%s)", + rc, X509_verify_cert_error_string(rc)); + + ngx_ssl_remove_cached_session(sscf->ssl.ctx, + (SSL_get0_session(c->ssl->connection))); + + ngx_mail_close_connection(c); + return; + } + + if (sslcf->verify == 1) { + X509 *cert; + cert = SSL_get_peer_certificate(c->ssl->connection); + + if (cert == NULL) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "client sent no required SSL certificate"); + + ngx_ssl_remove_cached_session(sscf->ssl.ctx, + (SSL_get0_session(c->ssl->connection))); + + ngx_mail_close_connection(c); + return; + } + + X509_free(cert); + } + } + +#endif + if (s->starttls) { cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); diff -r a387ce36744a -r 9dc48eeb8e5c src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_ssl_module.c Fri Jan 24 16:26:16 2014 +0100 @@ -43,6 +43,14 @@ static ngx_conf_bitmask_t ngx_mail_ssl_ { ngx_null_string, 0 } }; +static ngx_conf_enum_t ngx_mail_ssl_verify[] = { + { ngx_string("off"), 0 }, + { ngx_string("on"), 1 }, + { ngx_string("optional"), 2 }, + { ngx_string("optional_no_ca"), 3 }, + { ngx_null_string, 0 } +}; + static ngx_command_t ngx_mail_ssl_commands[] = { @@ -102,6 +110,34 @@ static ngx_command_t ngx_mail_ssl_comma offsetof(ngx_mail_ssl_conf_t, ciphers), NULL }, + { ngx_string("ssl_verify_client"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_enum_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify), + &ngx_mail_ssl_verify }, + + { ngx_string("ssl_verify_depth"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, verify_depth), + NULL }, + + { ngx_string("ssl_client_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, client_certificate), + NULL }, + + { ngx_string("ssl_trusted_certificate"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, trusted_certificate), + NULL }, + { ngx_string("ssl_prefer_server_ciphers"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -137,6 +173,13 @@ static ngx_command_t ngx_mail_ssl_comma offsetof(ngx_mail_ssl_conf_t, session_timeout), NULL }, + { ngx_string("ssl_crl"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_ssl_conf_t, crl), + NULL }, + ngx_null_command }; @@ -189,6 +232,9 @@ ngx_mail_ssl_create_conf(ngx_conf_t *cf) * scf->certificate_key = { 0, NULL }; * scf->dhparam = { 0, NULL }; * scf->ecdh_curve = { 0, NULL }; + * scf->client_certificate = { 0, NULL }; + * scf->trusted_certificate = { 0, NULL }; + * scf->crl = { 0, NULL }; * scf->ciphers = { 0, NULL }; * scf->shm_zone = NULL; */ @@ -196,6 +242,8 @@ ngx_mail_ssl_create_conf(ngx_conf_t *cf) scf->enable = NGX_CONF_UNSET; scf->starttls = NGX_CONF_UNSET_UINT; scf->prefer_server_ciphers = NGX_CONF_UNSET; + scf->verify = NGX_CONF_UNSET_UINT; + scf->verify_depth = NGX_CONF_UNSET_UINT; scf->builtin_session_cache = 
NGX_CONF_UNSET; scf->session_timeout = NGX_CONF_UNSET; scf->session_tickets = NGX_CONF_UNSET; @@ -228,11 +276,20 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); + ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); + ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); ngx_conf_merge_str_value(conf->certificate_key, prev->certificate_key, ""); ngx_conf_merge_str_value(conf->dhparam, prev->dhparam, ""); + ngx_conf_merge_str_value(conf->client_certificate, prev->client_certificate, + ""); + ngx_conf_merge_str_value(conf->trusted_certificate, + prev->trusted_certificate, ""); + ngx_conf_merge_str_value(conf->crl, prev->crl, ""); + ngx_conf_merge_str_value(conf->ecdh_curve, prev->ecdh_curve, NGX_DEFAULT_ECDH_CURVE); @@ -318,6 +375,35 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, return NGX_CONF_ERROR; } + if (conf->verify) { + + if (conf->client_certificate.len == 0 && conf->verify != 3) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no ssl_client_certificate for ssl_client_verify"); + return NGX_CONF_ERROR; + } + + if (ngx_ssl_client_certificate(cf, &conf->ssl, + &conf->client_certificate, + conf->verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + } + + if (ngx_ssl_trusted_certificate(cf, &conf->ssl, + &conf->trusted_certificate, + conf->verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + + if (ngx_ssl_crl(cf, &conf->ssl, &conf->crl) != NGX_OK) { + return NGX_CONF_ERROR; + } + if (conf->prefer_server_ciphers) { SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); } diff -r a387ce36744a -r 9dc48eeb8e5c src/mail/ngx_mail_ssl_module.h --- a/src/mail/ngx_mail_ssl_module.h Thu Jan 23 22:09:59 2014 +0900 +++ b/src/mail/ngx_mail_ssl_module.h Fri Jan 24 16:26:16 2014 +0100 @@ -28,6 +28,8 @@ typedef struct { ngx_uint_t starttls; ngx_uint_t protocols; + ngx_uint_t verify; + ngx_uint_t verify_depth; ssize_t builtin_session_cache; time_t session_timeout; @@ -36,6 +38,9 @@ typedef struct { ngx_str_t certificate_key; ngx_str_t dhparam; ngx_str_t ecdh_curve; + ngx_str_t client_certificate; + ngx_str_t trusted_certificate; + ngx_str_t crl; ngx_str_t ciphers; -------------- next part -------------- A non-text attachment was scrubbed... Name: Mail-SSL-MutualAuthentification.patch Type: text/x-patch Size: 14356 bytes Desc: not available URL: From fdasilvayy at gmail.com Sat Jan 25 08:54:27 2014 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Sat, 25 Jan 2014 09:54:27 +0100 Subject: : [PATCH 0 of 1] Mail: add support for SSL client certificate Message-ID: Hi, and Salut Franck ;) I just fix the typo, indentation, white space, empty lines mistakes, I have seen. I'have been working on this patch with Franck, as part of my last job. ---- Filipe Da Silva 2014/1/25 : > ------------------------------ > > Message: 2 > Date: Fri, 24 Jan 2014 21:40:32 +0100 > From: flevionnois at gmail.com > To: nginx-devel at nginx.org > Subject: [PATCH 0 of 1] Mail: add support for SSL client certificate > Message-ID: > Content-Type: text/plain; charset="us-ascii" > > Add support for mail SSL client auth > > Take into account Sven Peter patch > http://forum.nginx.org/read.php?29,246309,246328#msg-246328 > > and transmit the client certificate to the backend server > > From vbart at nginx.com Mon Jan 27 20:02:19 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 28 Jan 2014 00:02:19 +0400 Subject: [PATCH] SPDY/3.1 protocol implementation Message-ID: <1799699.0Wo46U2hKl@vbart-laptop> http://nginx.org/patches/patch.spdy-v31.txt This patch upgrades implementation of SPDY protocol in the ngx_http_spdy_module from draft 2 to draft 3.1. I am going to commit it at the end of this week. Till then, testing, review and suggestions are very much appreciated. How-to for newbies: 1) Make sure that you have OpenSSL 1.0.1 or later. 2) Download nginx/1.5.9: % wget http://nginx.org/download/nginx-1.5.9.tar.gz 3) Unpack it: % tar xvfz nginx-1.5.9.tar.gz % cd nginx-1.5.9 4) Download and apply the patch: % wget http://nginx.org/patches/patch.spdy-v31.txt % patch -p1 < patch.spdy-v31.txt 5) Configure and build nginx: % ./configure --with-http_ssl_module --with-http_spdy_module % make Hint: have a look at http://nginx.org/en/docs/configure.html and try "./configure --help" for more useful options. If you are already using spdy/2, then no configuration changes are required. Otherwise, please check the documentation: http://nginx.org/en/docs/http/ngx_http_spdy_module.html Thanks for testing. wbr, Valentin V. Bartenev From ru at nginx.com Mon Jan 27 20:33:37 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 27 Jan 2014 20:33:37 +0000 Subject: [nginx] Configure: enabled -Werror for clang. Message-ID: details: http://hg.nginx.org/nginx/rev/c86dd32573c0 branches: changeset: 5539:c86dd32573c0 user: Ruslan Ermilov date: Tue Jan 28 00:31:31 2014 +0400 description: Configure: enabled -Werror for clang. Modern clang versions seem to no longer produce warnings for system headers on Linux (at least clang 3.3 works), hence the change. For older versions --with-cc-opt="-Wno-error" can be used as a workaround. diffstat: auto/cc/clang | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r a387ce36744a -r c86dd32573c0 auto/cc/clang --- a/auto/cc/clang Thu Jan 23 22:09:59 2014 +0900 +++ b/auto/cc/clang Tue Jan 28 00:31:31 2014 +0400 @@ -89,7 +89,7 @@ CFLAGS="$CFLAGS -Wconditional-uninitiali CFLAGS="$CFLAGS -Wno-unused-parameter" # stop on warning -#CFLAGS="$CFLAGS -Werror" +CFLAGS="$CFLAGS -Werror" # debug CFLAGS="$CFLAGS -g" From jimpop at gmail.com Mon Jan 27 21:12:16 2014 From: jimpop at gmail.com (Jim Popovitch) Date: Mon, 27 Jan 2014 16:12:16 -0500 Subject: [PATCH] SPDY/3.1 protocol implementation In-Reply-To: <1799699.0Wo46U2hKl@vbart-laptop> References: <1799699.0Wo46U2hKl@vbart-laptop> Message-ID: On Mon, Jan 27, 2014 at 3:02 PM, Valentin V. Bartenev wrote: > http://nginx.org/patches/patch.spdy-v31.txt > > This patch upgrades implementation of SPDY protocol in the > ngx_http_spdy_module from draft 2 to draft 3.1. > > I am going to commit it at the end of this week. Till then, > testing, review and suggestions are very much appreciated. Woohoo! Thank you. Successfully applies to 1.5.9 and works as expected. Thanks! -Jim P. From piotr at cloudflare.com Mon Jan 27 21:53:09 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 27 Jan 2014 13:53:09 -0800 Subject: [PATCH] SPDY/3.1 protocol implementation In-Reply-To: <1799699.0Wo46U2hKl@vbart-laptop> References: <1799699.0Wo46U2hKl@vbart-laptop> Message-ID: Hey Valentin, shouldn't SETTINGS_INITIAL_WINDOW_SIZE point to sc->init_window? Or at least be hardcoded to NGX_SPDY_INIT_STREAM_WINDOW and not NGX_SPDY_STREAM_WINDOW? 
--- a/src/http/ngx_http_spdy.c +++ b/src/http/ngx_http_spdy.c @@ -2062,7 +2062,7 @@ ngx_http_spdy_send_settings(ngx_http_spdy_connection_t *sc) p = ngx_spdy_frame_aligned_write_uint32(p, sscf->concurrent_streams); p = ngx_spdy_frame_write_flags_and_id(p, 0, NGX_SPDY_SETTINGS_INIT_WINDOW); - p = ngx_spdy_frame_aligned_write_uint32(p, NGX_SPDY_STREAM_WINDOW); + p = ngx_spdy_frame_aligned_write_uint32(p, sc->init_window); buf->last = p; Also, it seems that receiving window size is hardcoded to 2GBs, which makes flow control (which main point was to protect against single stream, for example big POST upload, taking over whole SPDY connection) totally useless. This value should be configureable or at the very least set to something much more reasonable than 2GBs. Best regards, Piotr Sikora From vbart at nginx.com Mon Jan 27 22:37:12 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 28 Jan 2014 02:37:12 +0400 Subject: [PATCH] SPDY/3.1 protocol implementation In-Reply-To: References: <1799699.0Wo46U2hKl@vbart-laptop> Message-ID: <1475169.JyAx7dIWdy@vbart-laptop> On Monday 27 January 2014 13:53:09 Piotr Sikora wrote: > Hey Valentin, > shouldn't SETTINGS_INITIAL_WINDOW_SIZE point to sc->init_window? sc->init_window is intended to store the initial send window for new streams that client tells us by sending SETTINGS frame (how much data we can send to client from the start). It's not the same window that we specify in our SETTINGS for client, which in this case is the receive window (how much data client is allowed to send to us). > Or at least be hardcoded to NGX_SPDY_INIT_STREAM_WINDOW and not > NGX_SPDY_STREAM_WINDOW? Current receiving flow control implementation is pretty simple and effective: we allow browser to send as much data as it wants. That's why it is hardcoded to the maximum value. [..] > Also, it seems that receiving window size is hardcoded to 2GBs, which > makes flow control (which main point was to protect against single > stream, for example big POST upload, taking over whole SPDY > connection) totally useless. This value should be configureable or at > the very least set to something much more reasonable than 2GBs. No, it's actually browser's will to properly prioritize POST requests. The receiving flow control has two uses for server: 1. Preventing buffer bloat. But it's not our case since nginx currently supports only buffered uploads, and buffers the whole request body anyway. 2. It rather subcase of 1: preventing client from sending data till the moment when we actually need the body, (i.e. ngx_http_read_client_request_body() is called). The last optimization can be useful for nginx, but it unnecessary complicates implementation, what I would like to avoid at least for a while. wbr, Valentin V. Bartenev From vbart at nginx.com Mon Jan 27 23:21:45 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 28 Jan 2014 03:21:45 +0400 Subject: [PATCH] SPDY/3.1 protocol implementation In-Reply-To: <1475169.JyAx7dIWdy@vbart-laptop> References: <1799699.0Wo46U2hKl@vbart-laptop> <1475169.JyAx7dIWdy@vbart-laptop> Message-ID: <6184979.DUky2VvHqM@vbart-laptop> On Tuesday 28 January 2014 02:37:12 Valentin V. Bartenev wrote: [..] > [..] > > Also, it seems that receiving window size is hardcoded to 2GBs, which > > makes flow control (which main point was to protect against single > > stream, for example big POST upload, taking over whole SPDY > > connection) totally useless. 
This value should be configureable or at > > the very least set to something much more reasonable than 2GBs. > > No, it's actually browser's will to properly prioritize POST requests. > > The receiving flow control has two uses for server: > > 1. Preventing buffer bloat. But it's not our case since > nginx currently supports only buffered uploads, and > buffers the whole request body anyway. > > 2. It rather subcase of 1: preventing client from sending > data till the moment when we actually need the body, > (i.e. ngx_http_read_client_request_body() is called). [..] Well, probably the third use case is upload rate limiting, but nginx currently also doesn't have one. wbr, Valentin V. Bartenev From piotr at cloudflare.com Mon Jan 27 23:42:26 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 27 Jan 2014 15:42:26 -0800 Subject: [PATCH] SPDY/3.1 protocol implementation In-Reply-To: <1475169.JyAx7dIWdy@vbart-laptop> References: <1799699.0Wo46U2hKl@vbart-laptop> <1475169.JyAx7dIWdy@vbart-laptop> Message-ID: Hey Valentin, > Current receiving flow control implementation is pretty simple and effective: > we allow browser to send as much data as it wants. That's why it is hardcoded > to the maximum value. > > (...) > > No, it's actually browser's will to properly prioritize POST requests. But now you're relying on the browser to do the right thing vs forcing the correct behavior via SPDY's flow control. > The receiving flow control has two uses for server: I'd argue that making sure that requests are multiplexed is also a valid use case ;) In any case, I'd prefer if this would be configureable value. Also, it seems that we should be forcing minimum value for the client's window size, otherwise client can set window size to 2 bytes and make nginx return thousands of DATA frames and use way too many resources to serve a small static page (same is true for Google's & Twitter's web servers). This could be a huge (D)DoS-vector. Best regards, Piotr Sikora From piotr at cloudflare.com Tue Jan 28 00:31:20 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 27 Jan 2014 16:31:20 -0800 Subject: [PATCH] SPDY/3.1 protocol implementation In-Reply-To: References: <1799699.0Wo46U2hKl@vbart-laptop> <1475169.JyAx7dIWdy@vbart-laptop> Message-ID: Hey, > Also, it seems that we should be forcing minimum value for the > client's window size, otherwise client can set window size to 2 bytes > and make nginx return thousands of DATA frames and use way too many > resources to serve a small static page (same is true for Google's & > Twitter's web servers). This could be a huge (D)DoS-vector. ...or worse, 1 byte (for some reason I thought the window size was defined as 2^n bytes, not n bytes). Best regards, Piotr Sikora From vbart at nginx.com Tue Jan 28 00:33:05 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 28 Jan 2014 04:33:05 +0400 Subject: [PATCH] SPDY/3.1 protocol implementation In-Reply-To: References: <1799699.0Wo46U2hKl@vbart-laptop> <1475169.JyAx7dIWdy@vbart-laptop> Message-ID: <1701610.k7i1vBZgTh@vbart-laptop> On Monday 27 January 2014 15:42:26 Piotr Sikora wrote: > Hey Valentin, > > > Current receiving flow control implementation is pretty simple and effective: > > we allow browser to send as much data as it wants. That's why it is hardcoded > > to the maximum value. > > > > (...) > > > > No, it's actually browser's will to properly prioritize POST requests. > > But now you're relying on the browser to do the right thing vs forcing > the correct behavior via SPDY's flow control. 
From vbart at nginx.com Tue Jan 28 00:33:05 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 28 Jan 2014 04:33:05 +0400
Subject: [PATCH] SPDY/3.1 protocol implementation
In-Reply-To: 
References: <1799699.0Wo46U2hKl@vbart-laptop> <1475169.JyAx7dIWdy@vbart-laptop>
Message-ID: <1701610.k7i1vBZgTh@vbart-laptop>

On Monday 27 January 2014 15:42:26 Piotr Sikora wrote:
> Hey Valentin,
> 
> > Current receiving flow control implementation is pretty simple and effective:
> > we allow browser to send as much data as it wants. That's why it is hardcoded
> > to the maximum value.
> > 
> > (...)
> > 
> > No, it's actually the browser's will to properly prioritize POST requests.
> 
> But now you're relying on the browser to do the right thing vs forcing
> the correct behavior via SPDY's flow control.
[..]

The browser is the only one who can do the right thing in this case.
We just don't have enough information on the server side, e.g. we don't
know when the browser wants to make another request and open a new
stream, or whether uploading this POST is the most important thing at
the moment.

It is up to the browser to pack another bunch of data into a DATA
frame, or to create a SYN_STREAM. The only decision we can make is
whether or not to limit it.

> > The receiving flow control has two uses for the server:
> 
> I'd argue that making sure that requests are multiplexed is also a
> valid use case ;)
> 
> In any case, I'd prefer if this were a configurable value.
> 
> Also, it seems that we should be forcing a minimum value for the
> client's window size, otherwise a client can set the window size to
> 2 bytes and make nginx return thousands of DATA frames and use way too
> many resources to serve a small static page (same is true for Google's
> & Twitter's web servers). This could be a huge (D)DoS vector.
[..]

Not that simple: for each such frame the client has to send a
WINDOW_UPDATE for another two bytes.

There are a lot of absolutely legal ways in SPDY to force a server to
do useless work, e.g. you can send hundreds of SYN_STREAMs followed by
RST_STREAMs with CANCEL status.

And the one you mentioned seems to me like a drop in the ocean.

Currently there is no way to protect against all of the possible cases
without occasionally breaking some clients. This is one of the cons
that users should consider when they decide whether to enable spdy or
not.

  wbr, Valentin V. Bartenev

From alex at zeitgeist.se Tue Jan 28 02:11:39 2014
From: alex at zeitgeist.se (Alex)
Date: Tue, 28 Jan 2014 03:11:39 +0100
Subject: [PATCH] SPDY/3.1 protocol implementation
In-Reply-To: <1799699.0Wo46U2hKl@vbart-laptop>
References: <1799699.0Wo46U2hKl@vbart-laptop>
Message-ID: <66C28E3F-1B19-49D1-A97F-47E90F31C2AB@postfach.slogh.com>

On 2014-01-27 21:02, Valentin V. Bartenev wrote:

Hi Valentin,

> http://nginx.org/patches/patch.spdy-v31.txt
> 
> This patch upgrades implementation of SPDY protocol in the
> ngx_http_spdy_module from draft 2 to draft 3.1.

Thanks for the patch! Already installed and it looks great so far.

As Piotr mentioned, the large send window size of 2GB seems a bit off
(Google servers advertise 1MB), but heh, if that's a concern right now,
fine with me. ;)

From piotr at cloudflare.com Tue Jan 28 03:05:39 2014
From: piotr at cloudflare.com (Piotr Sikora)
Date: Mon, 27 Jan 2014 19:05:39 -0800
Subject: [PATCH] SPDY/3.1 protocol implementation
In-Reply-To: <1701610.k7i1vBZgTh@vbart-laptop>
References: <1799699.0Wo46U2hKl@vbart-laptop> <1475169.JyAx7dIWdy@vbart-laptop> <1701610.k7i1vBZgTh@vbart-laptop>
Message-ID: 

Hey Valentin,

> Not that simple: for each such frame the client has to send a
> WINDOW_UPDATE for another two bytes.
> 
> There are a lot of absolutely legal ways in SPDY to force a server to
> do useless work, e.g. you can send hundreds of SYN_STREAMs followed by
> RST_STREAMs with CANCEL status.
> 
> And the one you mentioned seems to me like a drop in the ocean.
> 
> Currently there is no way to protect against all of the possible cases
> without occasionally breaking some clients. This is one of the cons
> that users should consider when they decide whether to enable spdy or
> not.

Agreed, but it's the web server's job to protect itself from such
abuses. And I'm not saying that this is something that needs to be done
right away or included with this change, but something that should be
added down the road...

Best regards,
Piotr Sikora
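(One possible shape for the kind of self-protection discussed above: a hypothetical floor on the peer's advertised window, below which the session would simply be refused rather than served frame by frame. This is not something nginx does; the function name and the 1024-byte constant are made up for the sketch.)

/*
 * Hypothetical guard, not nginx behaviour: if the peer's
 * SETTINGS_INITIAL_WINDOW_SIZE is absurdly small, treat it as abuse
 * and let the caller close the session (e.g. with GOAWAY) instead of
 * emitting thousands of tiny DATA frames.
 */

#include <stdio.h>

#define DEMO_MIN_PEER_WINDOW  1024

static int
demo_peer_window_acceptable(long announced)
{
    return announced >= DEMO_MIN_PEER_WINDOW;
}

int
main(void)
{
    printf("window 1: %s\n",
           demo_peer_window_acceptable(1) ? "accept" : "reject session");
    printf("window 65536: %s\n",
           demo_peer_window_acceptable(65536) ? "accept" : "reject session");
    return 0;
}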
From mdounin at mdounin.ru Tue Jan 28 10:05:53 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 28 Jan 2014 14:05:53 +0400
Subject: [PATCH] SPDY/3.1 protocol implementation
In-Reply-To: 
References: <1799699.0Wo46U2hKl@vbart-laptop> <1475169.JyAx7dIWdy@vbart-laptop>
Message-ID: <20140128100553.GU1835@mdounin.ru>

Hello!

On Mon, Jan 27, 2014 at 03:42:26PM -0800, Piotr Sikora wrote:

> Hey Valentin,
> 
> > Current receiving flow control implementation is pretty simple and effective:
> > we allow browser to send as much data as it wants. That's why it is hardcoded
> > to the maximum value.
> > 
> > (...)
> > 
> > No, it's actually the browser's will to properly prioritize POST requests.
> 
> But now you're relying on the browser to do the right thing vs forcing
> the correct behavior via SPDY's flow control.
> 
> > The receiving flow control has two uses for the server:
> 
> I'd argue that making sure that requests are multiplexed is also a
> valid use case ;)
> 
> In any case, I'd prefer if this were a configurable value.
> 
> Also, it seems that we should be forcing a minimum value for the
> client's window size, otherwise a client can set the window size to
> 2 bytes and make nginx return thousands of DATA frames and use way too
> many resources to serve a small static page (same is true for Google's
> & Twitter's web servers). This could be a huge (D)DoS vector.

It's believed that SPDY is a huge DDoS vector by itself.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Jan 28 12:01:58 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 28 Jan 2014 12:01:58 +0000
Subject: [nginx] SSI: fixed $date_local and $date_gmt without SSI (ticket...
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/3a8e19528b30
branches:  
changeset: 5540:3a8e19528b30
user:      Maxim Dounin
date:      Tue Jan 28 15:40:45 2014 +0400
description:
SSI: fixed $date_local and $date_gmt without SSI (ticket #230).

If there is no SSI context in a given request at a given time, the
$date_local and $date_gmt variables used "%s" format, instead of
"%A, %d-%b-%Y %H:%M:%S %Z" documented as the default and used if there
is SSI module context and timefmt wasn't modified using the "config"
SSI command.

While use of these variables outside of the SSI evaluation isn't
strictly valid, previous behaviour is certainly inconsistent, hence
the fix.

diffstat:

 src/http/modules/ngx_http_ssi_filter_module.c |  13 ++++++++-----
 1 files changed, 8 insertions(+), 5 deletions(-)

diffs (51 lines):

diff --git a/src/http/modules/ngx_http_ssi_filter_module.c b/src/http/modules/ngx_http_ssi_filter_module.c
--- a/src/http/modules/ngx_http_ssi_filter_module.c
+++ b/src/http/modules/ngx_http_ssi_filter_module.c
@@ -213,6 +213,7 @@ static ngx_http_output_body_filter_pt
 static u_char ngx_http_ssi_string[] = "