From agentzh at gmail.com Tue Sep 1 05:58:11 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 1 Sep 2015 13:58:11 +0800 Subject: [nginx] Decreased the NGX_HTTP_MAX_SUBREQUESTS limit. In-Reply-To: References: Message-ID: Hello! On Tue, Sep 1, 2015 at 4:29 AM, Valentin Bartenev wrote: > #define NGX_HTTP_MAX_URI_CHANGES 10 > -#define NGX_HTTP_MAX_SUBREQUESTS 200 > +#define NGX_HTTP_MAX_SUBREQUESTS 50 > Hmm, this change makes me sad. In our ngx_lua module, for example, we allow programmatic parallel subrequests via the ngx.location.capture_multi() Lua API: https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi We'd better provide larger values in such hard-coded limits rather than smaller (unless we provide a way to allow 3rd-party nginx C modules to override it). Regards, -agentzh From steven.hartland at multiplay.co.uk Tue Sep 1 08:18:17 2015 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Tue, 1 Sep 2015 09:18:17 +0100 Subject: [nginx] Decreased the NGX_HTTP_MAX_SUBREQUESTS limit. In-Reply-To: References: Message-ID: <55E55F49.2080200@multiplay.co.uk> On 01/09/2015 06:58, Yichun Zhang (agentzh) wrote: > Hello! > > On Tue, Sep 1, 2015 at 4:29 AM, Valentin Bartenev wrote: >> #define NGX_HTTP_MAX_URI_CHANGES 10 >> -#define NGX_HTTP_MAX_SUBREQUESTS 200 >> +#define NGX_HTTP_MAX_SUBREQUESTS 50 >> > Hmm, this change makes me sad. In our ngx_lua module, for example, we > allow programatic parallel subrequests via the > ngx.location.capture_multi() Lua API: > > https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi > > We'd better provide larger values in such hard-coded limits rather > than smaller (unless we provide a way to allow 3rd-party nginx C > modules to override it). > Same here: we have a module which uses dynamic sub-requests to split huge file requests into large numbers of smaller ranged requests, which this change would likely break. 
Regards, Steve From vbart at nginx.com Tue Sep 1 10:14:28 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 01 Sep 2015 13:14:28 +0300 Subject: [nginx] Decreased the NGX_HTTP_MAX_SUBREQUESTS limit. In-Reply-To: <55E55F49.2080200@multiplay.co.uk> References: <55E55F49.2080200@multiplay.co.uk> Message-ID: <1632302.ATFzkj8KRH@vbart-laptop> On Tuesday 01 September 2015 09:18:17 Steven Hartland wrote: > On 01/09/2015 06:58, Yichun Zhang (agentzh) wrote: > > Hello! > > > > On Tue, Sep 1, 2015 at 4:29 AM, Valentin Bartenev wrote: > >> #define NGX_HTTP_MAX_URI_CHANGES 10 > >> -#define NGX_HTTP_MAX_SUBREQUESTS 200 > >> +#define NGX_HTTP_MAX_SUBREQUESTS 50 > >> > > Hmm, this change makes me sad. In our ngx_lua module, for example, we > > allow programatic parallel subrequests via the > > ngx.location.capture_multi() Lua API: > > > > https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi > > > > We'd better provide larger values in such hard-coded limits rather > > than smaller (unless we provide a way to allow 3rd-party nginx C > > modules to override it). > > > Same here we have a module which uses dynamic sub-requests to split huge > file requests in large amounts of smaller ranged requests which this > change would likely break. > [..] Why do you guys use *recursive* subrequests() for that? Please note that this constant now limits recursion (not parallelism) of subrequests: when one subrequest creates another subrequest, the depth of that chain is limited. wbr, Valentin V. Bartenev From agentzh at gmail.com Tue Sep 1 15:02:31 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 1 Sep 2015 23:02:31 +0800 Subject: [nginx] Decreased the NGX_HTTP_MAX_SUBREQUESTS limit. In-Reply-To: <1632302.ATFzkj8KRH@vbart-laptop> References: <55E55F49.2080200@multiplay.co.uk> <1632302.ATFzkj8KRH@vbart-laptop> Message-ID: Hello! On Tue, Sep 1, 2015 at 6:14 PM, Valentin V. 
Bartenev wrote: > Why do you guys use *recursive* subrequests() for that? > > Please note, that this constant now limits recursion (not parallelism) > of subrequests, when one subrequest creates another subrequest and the > depth of this subsequent chain is limited. > Oh, I see. Sorry for the confusion! Best regards, -agentzh From acharles at outlook.com Wed Sep 2 15:17:36 2015 From: acharles at outlook.com (Ahmed Charles) Date: Wed, 2 Sep 2015 08:17:36 -0700 Subject: nginx dockerfile. In-Reply-To: References: Message-ID: On 8/25/2015 11:26 PM, Atul Sowani wrote: > Hi, > > I checked nginx code repository as well as Internet to see if I can > get a Dockerfile to build nginx. I got a few references (like > https://github.com/dockerfile/nginx) but those are essentially to > _run_ nginx, not _build_ it. > > I am looking to build different versions of nginx (say > top-of-the-tree, latest-stable etc.) easily. It would be very > convenient if a Dockerfile is presented with the source code which > will build one of the versions mentioned above. If required, a slight > modification can then build any version of nginx. > > I would highly appreciate if somebody could point me to a source where > I can get a Dockerfile which builds nginx. The official Dockerfile is here: https://github.com/nginxinc/docker-nginx/blob/master/Dockerfile You can replace the apt-get commands with commands which build nginx, just like you would on a normal linux system. Or you could even build a custom .deb and use that. From vbart at nginx.com Wed Sep 2 16:28:57 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 02 Sep 2015 16:28:57 +0000 Subject: [nginx] Writing to some file systems can be interrupted. Message-ID: details: http://hg.nginx.org/nginx/rev/6fce16b1fc10 branches: changeset: 6240:6fce16b1fc10 user: Valentin Bartenev date: Wed Sep 02 19:26:40 2015 +0300 description: Writing to some file systems can be interrupted. 
At least such behavior was observed with CephFS, see: http://mailman.nginx.org/pipermail/nginx/2015-July/048188.html. diffstat: src/os/unix/ngx_files.c | 13 ++++++++++++- 1 files changed, 12 insertions(+), 1 deletions(-) diffs (33 lines): diff -r 281863981d0b -r 6fce16b1fc10 src/os/unix/ngx_files.c --- a/src/os/unix/ngx_files.c Mon Aug 31 23:26:33 2015 +0300 +++ b/src/os/unix/ngx_files.c Wed Sep 02 19:26:40 2015 +0300 @@ -264,6 +264,7 @@ ngx_write_chain_to_file(ngx_file_t *file u_char *prev; size_t size; ssize_t total, n; + ngx_err_t err; ngx_array_t vec; struct iovec *iov, iovs[NGX_IOVS]; @@ -335,10 +336,20 @@ ngx_write_chain_to_file(ngx_file_t *file file->sys_offset = offset; } +eintr: + n = writev(file->fd, vec.elts, vec.nelts); if (n == -1) { - ngx_log_error(NGX_LOG_CRIT, file->log, ngx_errno, + err = ngx_errno; + + if (err == NGX_EINTR) { + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, + "writev() was interrupted"); + goto eintr; + } + + ngx_log_error(NGX_LOG_CRIT, file->log, err, "writev() \"%s\" failed", file->name.data); return NGX_ERROR; } From vbart at nginx.com Wed Sep 2 20:33:47 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 02 Sep 2015 20:33:47 +0000 Subject: [nginx] Fixed building --with-debug, broken by 6fce16b1fc10. Message-ID: details: http://hg.nginx.org/nginx/rev/387696b36c29 branches: changeset: 6241:387696b36c29 user: Valentin Bartenev date: Wed Sep 02 19:45:40 2015 +0300 description: Fixed building --with-debug, broken by 6fce16b1fc10. 
diffstat: src/os/unix/ngx_files.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 6fce16b1fc10 -r 387696b36c29 src/os/unix/ngx_files.c --- a/src/os/unix/ngx_files.c Wed Sep 02 19:26:40 2015 +0300 +++ b/src/os/unix/ngx_files.c Wed Sep 02 19:45:40 2015 +0300 @@ -344,7 +344,7 @@ eintr: err = ngx_errno; if (err == NGX_EINTR) { - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, + ngx_log_debug0(NGX_LOG_DEBUG_CORE, file->log, err, "writev() was interrupted"); goto eintr; } From sowani at gmail.com Thu Sep 3 07:12:44 2015 From: sowani at gmail.com (Atul Sowani) Date: Thu, 3 Sep 2015 12:42:44 +0530 Subject: nginx dockerfile. In-Reply-To: References: Message-ID: Hi Ahmed, Thanks for the dockerfile pointer. So once I implement my dockerfile for ppc64le, can I commit the file at nginxinc/docker-nginx? It looks like nginx mainstream repository is not accepting any dockerfiles. Thanks, Atul. On Wed, Sep 2, 2015 at 8:47 PM, Ahmed Charles wrote: > On 8/25/2015 11:26 PM, Atul Sowani wrote: > > Hi, > > > > I checked nginx code repository as well as Internet to see if I can > > get a Dockerfile to build nginx. I got a few references (like > > https://github.com/dockerfile/nginx) but those are essentially to > > _run_ nginx, not _build_ it. > > > > I am looking to build different versions of nginx (say > > top-of-the-tree, latest-stable etc.) easily. It would be very > > convenient if a Dockerfile is presented with the source code which > > will build one of the versions mentioned above. If required, a slight > > modification can then build any version of nginx. > > > > I would highly appreciate if somebody could point me to a source where > > I can get a Dockerfile which builds nginx. > > The official Dockerfile is here: > > https://github.com/nginxinc/docker-nginx/blob/master/Dockerfile > > You can replace the apt-get commands with commands which build nginx, > just like you would on a normal linux system. 
Or you could even build a > custom .deb and use that. > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From arut at nginx.com Thu Sep 3 12:13:16 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 03 Sep 2015 12:13:16 +0000 Subject: [nginx] Upstream: fixed cache send error handling. Message-ID: details: http://hg.nginx.org/nginx/rev/0e3a45ec2a3a branches: changeset: 6242:0e3a45ec2a3a user: Roman Arutyunyan date: Thu Sep 03 15:09:21 2015 +0300 description: Upstream: fixed cache send error handling. The value of NGX_ERROR, returned from filter handlers, was treated as a generic upstream error and changed to NGX_HTTP_INTERNAL_SERVER_ERROR before calling ngx_http_finalize_request(). This resulted in "header already sent" alert if header was already sent in filter handlers. The problem appeared in 54e9b83d00f0 (1.7.5). 
diffstat: src/http/ngx_http_upstream.c | 25 ++++++++++++++----------- 1 files changed, 14 insertions(+), 11 deletions(-) diffs (47 lines): diff -r 387696b36c29 -r 0e3a45ec2a3a src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Sep 02 19:45:40 2015 +0300 +++ b/src/http/ngx_http_upstream.c Thu Sep 03 15:09:21 2015 +0300 @@ -534,15 +534,24 @@ ngx_http_upstream_init_request(ngx_http_ r->write_event_handler = ngx_http_request_empty_handler; - if (rc == NGX_DONE) { - return; - } - if (rc == NGX_ERROR) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); return; } + if (rc == NGX_OK) { + rc = ngx_http_upstream_cache_send(r, u); + + if (rc == NGX_DONE) { + return; + } + + if (rc == NGX_HTTP_UPSTREAM_INVALID_HEADER) { + rc = NGX_DECLINED; + r->cached = 0; + } + } + if (rc != NGX_DECLINED) { ngx_http_finalize_request(r, rc); return; @@ -837,13 +846,7 @@ ngx_http_upstream_cache(ngx_http_request case NGX_OK: - rc = ngx_http_upstream_cache_send(r, u); - - if (rc != NGX_HTTP_UPSTREAM_INVALID_HEADER) { - return rc; - } - - break; + return NGX_OK; case NGX_HTTP_CACHE_STALE: From thresh at nginx.com Thu Sep 3 12:27:18 2015 From: thresh at nginx.com (Konstantin Pavlov) Date: Thu, 3 Sep 2015 15:27:18 +0300 Subject: nginx dockerfile. In-Reply-To: References: Message-ID: <55E83CA6.9010205@nginx.com> Hello Atul, On 03/09/2015 10:12, Atul Sowani wrote: > Hi Ahmed, > > Thanks for the dockerfile pointer. So once I implement my dockerfile for > ppc64le, can I commit the file at nginxinc/docker-nginx**? It looks like > nginx mainstream repository is not accepting any dockerfiles. Please see https://github.com/nginxinc/docker-nginx/issues/23. -- Konstantin Pavlov From shuxinyang.oss at gmail.com Fri Sep 4 01:39:49 2015 From: shuxinyang.oss at gmail.com (Shuxin Yang) Date: Thu, 03 Sep 2015 18:39:49 -0700 Subject: How does Nginx look-up cached resource? Message-ID: <55E8F665.5010806@gmail.com> Hi, There: I'm Nginx newbie. 
I have a question regarding how nginx looks up a cached resource. As far as I can tell, given a cache key k, Nginx uses crc32(k) as the key to look up the cached resource in an RB tree, and uses md5(k) to verify whether a conflict takes place; the key k per se is not used for looking up the resource. Is my understanding correct? If so, how can we guarantee that crc32 and md5 combined can uniquely identify a resource? Thanks Shuxin From sowani at gmail.com Fri Sep 4 06:36:37 2015 From: sowani at gmail.com (Atul Sowani) Date: Fri, 4 Sep 2015 12:06:37 +0530 Subject: nginx dockerfile. In-Reply-To: <55E83CA6.9010205@nginx.com> References: <55E83CA6.9010205@nginx.com> Message-ID: Hi Konstantin, Thanks for the link! So it seems a ppc64le Dockerfile will never be available in the primary nginx source repository unless the platform is supported officially. Will check elsewhere. Thanks, Atul. On Thu, Sep 3, 2015 at 5:57 PM, Konstantin Pavlov wrote: > Hello Atul, > > On 03/09/2015 10:12, Atul Sowani wrote: >> Hi Ahmed, >> >> Thanks for the dockerfile pointer. So once I implement my dockerfile for >> ppc64le, can I commit the file at nginxinc/docker-nginx**? It looks like >> nginx mainstream repository is not accepting any dockerfiles. > > Please see https://github.com/nginxinc/docker-nginx/issues/23. > > -- > Konstantin Pavlov From jn.kim at navercorp.com Fri Sep 4 07:58:28 2015 From: jn.kim at navercorp.com (JinNyung Kim) Date: Fri, 4 Sep 2015 16:58:28 +0900 (KST) Subject: nginx patch pull request about spaces in headers In-Reply-To: <68d595af8847a6d84bf01892aaafe42@cweb01.nmdf.nhnsystem.com> References: <68d595af8847a6d84bf01892aaafe42@cweb01.nmdf.nhnsystem.com> Message-ID: <125e44e24a62867ff130d1626c39a10@cweb03.nmdf.nhnsystem.com> Dear nginx developers, This is JinNyung Kim, a developer at Naver Korea corporation. 
(http://www.naver.com) I'm writing in regard to spaces in header fields in nginx. When I migrated from apache-tomcat to nginx-tomcat last Tuesday, I had trouble with spaces in the Authorization header. I know that a field name including spaces is not permitted, but the problem is that apache didn't ignore this and passed it to tomcat. So, I need a spaces-in-headers option and think that this will be helpful for people who migrate from apache to nginx like me. Also, I developed a logging option for error.log. spaces_in_headers would act like underscores_in_headers. spaces_in_headers default off log_spaces_in_headers default off When I run these two commands, nginx ignores the Authorization header in the second one. But this patch trims trailing spaces in header fields. (I couldn't check the spdy request) > curl -H "Authorization: test" http://x.x.x/SimpleHttp; > curl -H "Authorization : test" http://x.x.x/SimpleHttp; When the log_spaces_in_headers option is set to on, you can see this log in error.log. 2015/09/03 16:30:24 [error] 9107#0: *18 [Spaces in Header] Authorization=test while reading client request headers, client: 0.0.0.0, server: x.x.x., request: "GET /SimpleHttp HTTP/1.1", host: "1.1.1.1" I look forward to hearing from you about whether the patch is accepted or not. Thank you for reading my email. Yours Sincerely JinNyung Kim -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.patch Type: application/octet-stream Size: 9290 bytes Desc: not available From mdounin at mdounin.ru Fri Sep 4 13:23:30 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Sep 2015 16:23:30 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <55E8F665.5010806@gmail.com> References: <55E8F665.5010806@gmail.com> Message-ID: <20150904132330.GA72232@mdounin.ru> Hello! 
On Thu, Sep 03, 2015 at 06:39:49PM -0700, Shuxin Yang wrote: > I'm Nginx newbie. I have a question regarding how nginx lookup a cached > resource. > > As far as I can tell, given a cache-key k. Nginx uses crc32(k) as as the > key to lookup > the cached resource in a RB tree, and use md5(k) verify if conflict take > place; the > key k per se is not used for looking up resource. Is my understanding > correct? No, vice versa. MD5 is used to identify a resource, and CRC32 is used to additionally verify if there are any collisions. If any collision is detected, nginx will complain loudly. As of now, in the only case when the message about a collision was seen, it was the result of a bug, not a collision. > If so, how can we guarantee that crc32 and md5 combined can uniquely > identify a resource? We can't. Collisions are unavoidable if you use a hash function with more inputs than outputs. The question is how often collisions are observed in practice. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Sep 4 14:08:48 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Sep 2015 17:08:48 +0300 Subject: nginx patch pull request about spaces in headers In-Reply-To: <125e44e24a62867ff130d1626c39a10@cweb03.nmdf.nhnsystem.com> References: <68d595af8847a6d84bf01892aaafe42@cweb01.nmdf.nhnsystem.com> <125e44e24a62867ff130d1626c39a10@cweb03.nmdf.nhnsystem.com> Message-ID: <20150904140848.GC72232@mdounin.ru> Hello! On Fri, Sep 04, 2015 at 04:58:28PM +0900, JinNyung Kim wrote: > Dear nginx developers, > > This is JinNyung Kim, a developer at Naver Korea corporation. (http://www.naver.com) > I'm writing in regards to spaces in header field in nginx. > Actually when I modified use from apache-tomcat to nginx-tomcat last Tuesday, I had a trouble with spaces in authorization header. > I know the field name including spaces is not permitted, but it's problem that apache didn't ignore this and passed to tomcat. 
> So, I need spaces in headers option and think that this will be helpful for the people who modify use from apache to nginx like me. > Also, I developed the logging option in error.log. > > spaces_in_headers will likely act underscores_in_headers. [...] Try "ignore_invalid_headers off" instead. See http://nginx.org/r/ignore_invalid_headers for details. [...] > I look forward to hearing from you about whether patch is accepted or not. Not. -- Maxim Dounin http://nginx.org/ From serg.brester at sebres.de Fri Sep 4 15:37:14 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Fri, 04 Sep 2015 17:37:14 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150904132330.GA72232@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> Message-ID: Hi, On 04.09.2015 15:23, Maxim Dounin wrote: > Hello! > > On Thu, Sep 03, 2015 at 06:39:49PM -0700, Shuxin Yang wrote: > > ... >> If so, how can we guarantee that crc32 and md5 combined can uniquely >> identify a resource? > > We can't. Collisions are unavoidable if you use a hash function > with more inputs than outputs. The question is how often > collisions are observed in practice. Well, but we can (I hope): the original key (not the hash of it, but the key itself, as set with `proxy_cache_key`, `fastcgi_cache_key`, etc.) is saved in the header of each cached file (see "KEY: ..."). So it can also be validated directly after the entry for the hash is found (compare the original key whenever a hash entry is found). In this case, if a collision exists for both hash values (the original key does not match), it should just report "not cached" (and later overwrite the "wrong" cache entry with the colliding one; this will happen very rarely). In this case it is really safe (but a little bit slower, because the original key is compared each time). But I hope it already works exactly like this (I must review the source code), because if not, it's very VERY evil. Regards, sebres. 
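The verification scheme sebres describes can be modeled in a few lines. The sketch below is a toy in-memory model (plain Python, not nginx's actual data structures or code): entries are indexed by md5 of the key, a crc32 of the key is kept as an additional check, and the full original key is stored with the entry so it can be compared on every hit; a mismatch at any step is simply treated as "not cached".

```python
import hashlib
import zlib

class ToyCache:
    """Toy model of a hash-indexed cache with full-key verification."""

    def __init__(self):
        # md5 digest -> (crc32 of key, original key, cached value)
        self.entries = {}

    def store(self, key, value):
        k = key.encode()
        self.entries[hashlib.md5(k).digest()] = (zlib.crc32(k), key, value)

    def lookup(self, key):
        k = key.encode()
        entry = self.entries.get(hashlib.md5(k).digest())
        if entry is None:
            return None                 # no entry for this hash
        crc, stored_key, value = entry
        if crc != zlib.crc32(k):
            return None                 # md5 collision caught by crc32
        if stored_key != key:
            return None                 # md5+crc32 collision caught by full key
        return value

cache = ToyCache()
cache.store("GET|example.com|/index.html", b"page body")
```

With the full-key comparison in place, even a simultaneous md5 and crc32 collision degrades to a harmless cache miss instead of serving the wrong resource.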
From melezhik at gmail.com Fri Sep 4 17:48:07 2015 From: melezhik at gmail.com (Alexey Melezhik) Date: Fri, 4 Sep 2015 20:48:07 +0300 Subject: automated portable sanity tests for nginx Message-ID: Hi all! I have created a simple test suite for nginx sanity checks: - install nginx from source code - install/run fast cgi service - configure/run nginx - check if nginx proxies the fastcgi server correctly - check nginx landing page is accessible This is an automated test suite written with SWAT, a bash/perl DSL for web application smoke testing. Installing and running the checks is as simple as installing curl and then the swat::nginx cpan module: sudo apt-get install curl sudo cpanm swat::nginx swat swat::nginx # run tests For a real integration example please take a look at the travis job - https://travis-ci.org/melezhik/nginx-swat-test/builds SWAT home page - https://github.com/melezhik/swat SWAT/nginx test suite page - https://github.com/melezhik/swat-packages/tree/master/nginx PS If someone finds this interesting, we could talk about further project evolution (more test cases, etc.) Regards. Alexey From mdounin at mdounin.ru Fri Sep 4 18:10:15 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Sep 2015 21:10:15 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> Message-ID: <20150904181015.GF72232@mdounin.ru> Hello! On Fri, Sep 04, 2015 at 05:37:14PM +0200, Sergey Brester wrote: > On 04.09.2015 15:23, Maxim Dounin wrote: > > >On Thu, Sep 03, 2015 at 06:39:49PM -0700, Shuxin Yang wrote: > > > >... > >>If so, how can we guarantee that crc32 and md5 combined can uniquely > >>identify a resource? > > > >We can't. Collisions are unavoidable if you use a hash function > >with more inputs than outputs. The question is how often > >collisions are observed in practice. 
> Well, but we can (I hope): the original key (not the hash of it, the key > self, that will be set with `proxy_cache_key`, `fastcgi_cache_key` etc) will > be saved in header of each cached file (see KEY: ...). > So it can be validated also direct after entry for hash was found (compare > original key if hash entry was found). > In this case if collision for both hash values exists (original key does not > match) - it should just say - not cached (and later overwrite an "wrong" > resp. cache entry with "collision" - will very rarely do it). > > In this case it is really safe (but a little bit slower, because each time > will compare original key also). > But I hope that work exactly so (I must review the source code), because if > not - it's very VERY evil. For sure this is something that can be done. The question remains though: how often collisions are observed in practice, does it make sense to do anything additional to protect from collisions and spend resources on it? Even considering only md5, without the crc32 check, no practical cases were reported so far. -- Maxim Dounin http://nginx.org/ From serg.brester at sebres.de Fri Sep 4 18:56:23 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Fri, 04 Sep 2015 20:56:23 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150904181015.GF72232@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> Message-ID: On 04.09.2015 20:10, Maxim Dounin wrote: > For sure this is something that can be done. The question remains > though: how often collisions are observed in practice, is it make > sense to do anything additional to protect from collisions and > spend resources on it? Even considering only md5, without the > crc32 check, no practical cases were reported so far. What? That SHOULD be done! Once is already too much! 
nginx can cache pages from different users (the key contains the username), so imagine the case of such a collision: - user 1 will suddenly receive information belonging to user 2; - if authorisation uses "auth_request" (via fastcgi) and it is cached (for performance, e.g. persistent handshake-like authorisation), then user 1 will even act as user 2 (with his rights and authority), etc. I can write here a hundred situations that should never ever occur! Never. From mdounin at mdounin.ru Fri Sep 4 19:43:51 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Sep 2015 22:43:51 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> Message-ID: <20150904194351.GJ72232@mdounin.ru> Hello! On Fri, Sep 04, 2015 at 08:56:23PM +0200, Sergey Brester wrote: > On 04.09.2015 20:10, Maxim Dounin wrote: > > >For sure this is something that can be done. The question remains > >though: how often collisions are observed in practice, is it make > >sense to do anything additional to protect from collisions and > >spend resources on it? Even considering only md5, without the > >crc32 check, no practical cases were reported so far. > > What? > That SHOULD be done! Once is already too much! None has happened yet. And likely none ever will, as md5 is a good hash function 128 bits wide, and it took many years to find even a single collision of md5. And even if one happens, we have the crc32 check in place to protect us. -- Maxim Dounin http://nginx.org/ From serg.brester at sebres.de Fri Sep 4 21:00:58 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Fri, 04 Sep 2015 23:00:58 +0200 Subject: How does Nginx look-up cached resource? 
In-Reply-To: <20150904194351.GJ72232@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> Message-ID: <432ac14ed0489c14cce98a83a9772f7d@sebres.de> On 04.09.2015 21:43, Maxim Dounin wrote: > No one yet happened. And likely won't ever happen, as md5 is a > good hash function 128 bits wide, and it took many years to find > even a single collision of md5. You confuse "good" for collision-search algorithms with "good" in the sense of the probability that a collision can occur. An estimation of collisions in the sense of collision-search algorithms and co. implies that the hashed string is unknown, and for example it estimates attacks to find it (like brute force, chosen-prefix, etc.). I'm talking about the probability of incidence of the same hash for two different cache keys. In addition, because of the so-called birthday problem (https://en.wikipedia.org/wiki/Birthday_problem), this probability increases to something at least comparable to a 64-bit hash for truly random data (of different lengths). Don't forget that our keys, which will be hashed, are not really "random" data - most of the time they contain only certain characters and/or have a certain length. So the probability that a collision will occur is still significantly larger (a billion billion times larger). > And even if it'll happen, we have > crc32 check in place to protect us. Very funny... On what do you base such conclusions? So last but not least, if you still haven't seen a collision of md5 "protected" by crc32, how can you be sure that it has not already occurred? For example, how large would you estimate the probability that a collision occurs if my keys contain exactly 32 characters in the range [0-9A-Za-z]? And its frequency? Just the approximate order of magnitude... Regards, sebres. 
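The birthday-problem estimate sebres refers to can be put into numbers with the standard approximation p ≈ 1 − exp(−n²/2^(b+1)) for n uniformly distributed b-bit hash values. A quick sketch (plain Python, independent of nginx; the function names are mine):

```python
import math

def collision_probability(n, bits):
    # Birthday approximation: chance of at least one collision among
    # n uniformly distributed hash values of the given bit width.
    return -math.expm1(-n * n / 2.0 ** (bits + 1))

def keys_for_probability(p, bits):
    # Approximate number of distinct keys needed to reach collision
    # probability p for a hash of the given bit width.
    return math.sqrt(2.0 ** (bits + 1) * -math.log1p(-p))

# For a 128-bit hash and a billion keys the probability is ~1.5e-21;
# reaching even a 1% collision chance takes ~2.6e18 distinct keys.
print(collision_probability(10**9, 128))
print(keys_for_probability(0.01, 128))
```

Note the estimate assumes the hash *output* is uniform, which is a property of the hash function rather than of the keys, so structured cache keys (fixed length, limited alphabet) do not by themselves change these orders of magnitude.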
From gmm at csdoc.com Fri Sep 4 21:20:06 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Sat, 5 Sep 2015 00:20:06 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150904194351.GJ72232@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> Message-ID: <55EA0B06.7040808@csdoc.com> On 04.09.2015 22:43, Maxim Dounin wrote: >>> For sure this is something that can be done. The question remains >>> though: how often collisions are observed in practice, is it make >>> sense to do anything additional to protect from collisions and >>> spend resources on it? Even considering only md5, without the >>> crc32 check, no practical cases were reported so far. >> >> What? >> That SHOULD be done! Once is already too much! > > No one yet happened. And likely won't ever happen, as md5 is a > good hash function 128 bits wide, and it took many years to find > even a single collision of md5. And even if it'll happen, we have > crc32 check in place to protect us. "and it took many years to find even a single collision of md5" This is not true: The security of the MD5 hash function is severely compromised. A collision attack exists that can find collisions within seconds on a computer with a 2.6 GHz Pentium 4 processor (complexity of 2**24.1) - https://en.wikipedia.org/wiki/MD5#Security ============================================ Vulnerability Note VU#836068: Do not use the MD5 algorithm Software developers, Certification Authorities, website owners, and users should avoid using the MD5 algorithm in any capacity. As previous research has demonstrated, it should be considered cryptographically broken and unsuitable for further use. 
- http://www.kb.cert.org/vuls/id/836068 ============================================ For comparison: The variable-length hash function SHAKE128 from the SHA-3 standard, for message M and output length 128 bits - SHAKE128(M, 128) - has high collision resistance; its security is 64 bits. Also, using SHA-3 SHAKE128 instead of MD5 will be good for marketing purposes and for nginx compliance with any existing security standards and recommendations, which forbid or advise against any usage of MD5. Theoretically, it is possible that some potential customers of NGINX Plus can't use NGINX Plus because it internally uses MD5, which is broken. ============================================ Or: [...] While MD5 is known to be fast, it is also known to be broken, allowing a malicious user to craft colliding inputs. zbackup uses SHA1 instead. The cost of SHA1 calculations on modern machines is actually less than that of MD5 (run openssl speed md5 sha1 on yours), so it's a win-win situation. We only keep the first 128 bits of the SHA1 output [...] - http://zbackup.org/ ============================================ -- Best regards, Gena From hack988 at 163.com Sat Sep 5 13:20:17 2015 From: hack988 at 163.com (hack988) Date: Sat, 5 Sep 2015 21:20:17 +0800 Subject: How to get Original response before gzip or another module filter rewrite response Message-ID: <2015090521201511145750@163.com> Dear All: I'm a beginner in nginx development. Although I have been reading Emiller's Guide and other articles about nginx development for several days, I still don't know how to copy the whole original response buffer chain into my own module's temporary buffer before another module (gzip, gunzip, gzip_static) rewrites the buffers. How can I check whether another module has written the output buffer with a different content type? I want to copy the buffer to a file without compression or chunking. I'm sorry for my poor English, thanks. hack988 
From mdounin at mdounin.ru Sun Sep 6 00:08:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 6 Sep 2015 03:08:03 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <432ac14ed0489c14cce98a83a9772f7d@sebres.de> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <432ac14ed0489c14cce98a83a9772f7d@sebres.de> Message-ID: <20150906000803.GA52312@mdounin.ru> Hello! On Fri, Sep 04, 2015 at 11:00:58PM +0200, Sergey Brester wrote: > On 04.09.2015 21:43, Maxim Dounin wrote: > > >No one yet happened. And likely won't ever happen, as md5 is a > >good hash function 128 bits wide, and it took many years to find > >even a single collision of md5. > > You confuse good for "collision-search algorithms" with a good in the sense > of the "probability the collision can occur". A estimation of collision in > sence of "collision-search algorithm" and co. implies the hashed string is > unknown and for example it estimates attacks to find that (like brute, > chosen-prefix etc). > > I'm talking about the probability of incidence the same hash for two > different cache keys. > In addition, because of so-called birthday problem > (https://en.wikipedia.org/wiki/Birthday_problem) we can increase this > probability with at least comparable 64 bit for real random data (different > length). Well, no, I don't confuse anything. For sure, a brute-force attack on a 128 bit hash requires approximately 2^64 attempts. That is, a single nginx instance with 2^64 cached resources will likely show a collision. But that's not a number of resources you'd be able to store on a single node - in particular, because 64-bit address space wouldn't be enough to address that many cached items. To obtain a collision of a 128-bit hash with at least 1% probability, you'll need more than 10^18 resources cached on a single node, which is not even close to possible either. 
Assuming 1 billion of keys (which is way more than a single nginx node can handle now, and will require about 125G of memory for a cache keys zone), probability of a collision is less than 10^(-20).

Quoting https://en.wikipedia.org/wiki/Birthday_attack:

: For comparison, 10^(-18) to 10^(-15) is the uncorrectable bit
: error rate of a typical hard disk.

That is, you are trying to fight a problem which is less probable than the chance that you'll get wrong data from your hard disk.

> Don't forget our keys, that will be hashed, are not really any "random" data
> - most of the time it contains only specified characters and/or has
> specified length.
>
> So the probability that the collision will occur is still significant larger
> (a billion billion times larger).

This is not true as long as you are using a good enough hash function. See https://en.wikipedia.org/wiki/Hash_function#Uniformity for details.

> >And even if it'll happen, we have
> >crc32 check in place to protect us.
>
> Very funny... You make such conclusions based on what?
> So last but not least, if you still haven't seen the collision in sence of
> md5 "protected" crc32, how can you be sure, that this is still not occurred?
> For example, how large you will estimate the probability that the collision
> will occur, if my keys will contain only exact 32 characters in range
> [0-9A-Za-z]? And it frequency? Just approximately dimension...

See above for detailed numbers.

-- Maxim Dounin http://nginx.org/

From mdounin at mdounin.ru Sun Sep 6 01:56:30 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 6 Sep 2015 04:56:30 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <55EA0B06.7040808@csdoc.com> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> Message-ID: <20150906015629.GB52312@mdounin.ru>

Hello!
On Sat, Sep 05, 2015 at 12:20:06AM +0300, Gena Makhomed wrote: > On 04.09.2015 22:43, Maxim Dounin wrote: > > >>>For sure this is something that can be done. The question remains > >>>though: how often collisions are observed in practice, is it make > >>>sense to do anything additional to protect from collisions and > >>>spend resources on it? Even considering only md5, without the > >>>crc32 check, no practical cases were reported so far. > >> > >>What? > >>That SHOULD be done! Once is already too much! > > > >No one yet happened. And likely won't ever happen, as md5 is a > >good hash function 128 bits wide, and it took many years to find > >even a single collision of md5. And even if it'll happen, we have > >crc32 check in place to protect us. > > "and it took many years to find even a single collision of md5" > > This is not true: > > The security of the MD5 hash function is severely compromised. > A collision attack exists that can find collisions within seconds > on a computer with a 2.6 GHz Pentium 4 processor (complexity of 2**24.1) > - https://en.wikipedia.org/wiki/MD5#Security I said "took", not "takes now". The MD5 hash function was introduced in 1991, and the first collision was found in 2004. Also, it's important to understand that, while collision attacks now exists, it doesn't really make MD5 bad for various non-security uses. [...] > Variable-length hash function SHAKE128 from SHA-3 standard, > for message M and output length 128 bit - SHAKE128(M, 128) > have high collision resistance, its security is 64 bits. > > Also, using SHA-3 SHAKE128 instead of MD5 will be good > for marketing purposes and for nginx compliance with > any existing security standards and recommendations, > which forbid and not recommend any usage of MD5. > > Theoretically, it is possible situation, what some of > potential customers of NGINX Plus can't use NGINX Plus > because NGINX Plus internally use MD5, which is broken. 
We can't really avoid using MD5 anyway, as we support some things that require md5 (like $apr1$ passwords). Also, in this particular case keeping keys 128 bits wide isn't really required, and we can switch to any other function if needed. And, while SHA-3 is certainly interesting, I would rather prefer something more common. But I don't really think cache keys hash need to be changed. -- Maxim Dounin http://nginx.org/ From serg.brester at sebres.de Mon Sep 7 13:34:27 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Mon, 07 Sep 2015 15:34:27 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150906000803.GA52312@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <432ac14ed0489c14cce98a83a9772f7d@sebres.de> <20150906000803.GA52312@mdounin.ru> Message-ID: On 06.09.2015 02:08, Maxim Dounin wrote: > Well, not, I don't confuse anything. For sure, brute force attack > on a 128 bit hash requires approximately 2^64 attempts. > That is, a single nginx instance with 2^64 cached resources will > likely show up a collision. But that's not a number of resources > you'll be able to store on a single node - in particular, because > 64-bit address space wouldn't be enough to address that many > cached items. > To obtain a collision of a 128-bit hash with at least 1% > probability, you'll need more than 10^18 resources cached on a > single node, which is not even close to a something possible as > well. > Assuming 1 billion of keys (which is way more than a single nginx > node can handle now, and will require about 125G of memory for a > cache keys zone), probability of a collision is less than 10^(-20). > Quoting https://en.wikipedia.org/wiki/Birthday_attack [2]: > For comparison, 10^(-18) to 10^(-15) is the uncorrectable bit > error rate of a typical hard disk. 
1) I will try to explain you, that is not quite true with a small approximation: let our hash value be exact one byte large (8 bit), it can obtain 2^8 = 256 different hash values (and let it be perfect distributed). The relative frequency to encounter a collision - same hash for any random another inside this interval (Hv0 - Hv255) will be also 256, because circa each 256-th char sequence will obtain the same hash value (Hv0). Will be such hash safe? Of course not, never. But if we will hash any character sequences with max length 16 bytes, we will have 256^16 (~= 10^38) different variations of binary string (keys). The relation (and the probability (Pc) to have a collision for two any random strings) would be only (10^38/256 - 1)/10^38 * (10^38/256 - 2)/(10^38 - 1) ~= 0.000015. Small, right? But the relative frequency (Hrc) is still 256! This can be explained with really large count of different sequences, and so with large count of hash values (Hv1-Hv255) that are not equal with (Hv0). But let us resolve the approximation: the hash value obtain 2^128 (~= 10^38), but how many values maximum should be hashed? It's unknown. Let our keys contain maximum 100 bytes, the count of variations of all possible strings will be 256^100 (~= 10^240). The probability to encounter of a collision and the relative frequency to encounter a collision will be a several order smaller (1e-78), but the relation between Hrc and Pc is comparable to example above (in the sense of relation between of both). And in this relation is similar (un)safe. Yes less likely (well 8 vs 128 bits) but still "unsafe". And we can have keys with the length of 500 bytes... And don't compare the probability of error rate in hard disks with probability of a collision for hashes of any *pseudo-random* two strings (stress mark to "pseudo-random"). This is in about the same as to compare a warm with soft. I can write here larger two pages formulas to prove it. But... 
forget the probabilities and this approximation... we come to point 2.

2) For the *caching* it's at all not required to have such "safe" hash functions:

- The hash function should create reasonably perfect distributed values;
- The hash function should be fast as possible (we can get MurmurHash or something similar, significantly faster than md5);
- We should always compare the keys, after cache entry with hash value was found, to be sure exact the same key was found; But that does not make our cache slower, because the generation of hash value can be faster through algorithms faster as md5 (for example the MMH3 is up to 90% faster as MD5);
- In the very less likely case of collision we will just forget the cached entry with previous key or save it as array for serial access (not really expected by caching and large hash value, because rare and that is a cache - not a database that should always hold an entry).

I want implement that and post a PR, and can make it configurable (something like `fastcgi_cache_safe = on`) then you can compare the performance of it - would be faster as your md5/crc32 implementation.

Regards, sebres.

From hack988 at 163.com Mon Sep 7 14:07:20 2015 From: hack988 at 163.com (hack988) Date: Mon, 7 Sep 2015 22:07:20 +0800 Subject: How to get Original response before gzip or another module filter rewrite response References: <2015090521201511145750@163.com> Message-ID: <201509072207192016118@163.com>

Hello everybody: Can anyone answer my question? Thanks very much. hack988

From: hack988 Date: 2015-09-05 21:20 To: nginx-devel Subject: How to get Original response before gzip or another module filter rewrite response

Dear All: I'm a beginner at nginx development. Although I have been reading Emiller's Guide and other articles about nginx development for several days, I still don't know how to copy the whole original response buffer chain into my own module's temporary buffer before another module (gzip, gunzip, gzip_static) rewrites it.
Also, how can I check whether another module has already written the output buffer with a different content type? I want to copy the buffer to a file without compression or chunking. I'm sorry for my poor English, thanks. hack988

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From gmm at csdoc.com Mon Sep 7 14:44:49 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Mon, 7 Sep 2015 17:44:49 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150906015629.GB52312@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> Message-ID: <55EDA2E1.8050101@csdoc.com>

On 06.09.2015 4:56, Maxim Dounin wrote:

>> The security of the MD5 hash function is severely compromised.
>> A collision attack exists that can find collisions within seconds
>> on a computer with a 2.6 GHz Pentium 4 processor (complexity of 2**24.1)
>> - https://en.wikipedia.org/wiki/MD5#Security
>
> I said "took", not "takes now". The MD5 hash function was
> introduced in 1991, and the first collision was found in 2004.
>
> Also, it's important to understand that, while collision attacks
> now exists, it doesn't really make MD5 bad for various
> non-security uses.

The nginx cache is a security use too.

If a user configures a common shared cache for all virtual servers, and the config has two servers: the first protected by the access, auth_basic or auth_request modules from unauthorized use, and the second serving publicly available content.

If the attacker knows the proxy_cache_key, for example $scheme$host$request_uri, and knows a $request_uri on the protected site, he can create an MD5/crc32 collision by building a specific $request_uri for the second server, and he will get unauthorized access to protected content from the first, protected web site.

This looks like a vulnerability.
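The scenario works because nginx addresses a cache entry purely by the MD5 of proxy_cache_key: the cache file name is the hex digest, with directory levels taken from its trailing characters. A minimal sketch of that mapping (the root path and levels=1:2 are chosen here just for illustration):

```python
import hashlib

def cache_path(key, root="/var/cache/nginx", levels=(1, 2)):
    """Sketch of nginx's cache file naming: the file name is the hex
    MD5 of proxy_cache_key; directory levels come from its tail."""
    h = hashlib.md5(key.encode()).hexdigest()
    dirs, pos = [], len(h)
    for n in levels:
        dirs.append(h[pos - n:pos])
        pos -= n
    return "/".join([root] + dirs + [h])

# The vhost protects nothing by itself: two keys map to the same file
# exactly when their MD5s collide, regardless of which server block
# produced them.
print(cache_path("httpsexample.com/private/report"))
```

Nothing in the path encodes which server block the entry came from, so a colliding key crafted on the public vhost lands on exactly the same file as the protected one.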
And this vulnerability can be fixed as Sergey Brester proposes:

We should always compare the keys after a cache entry with the matching hash value is found.

Or the vulnerability can be minimized by using a secure hash function instead of the current, cryptographically broken MD5.

-- Best regards, Gena

From mdounin at mdounin.ru Mon Sep 7 16:17:59 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Sep 2015 19:17:59 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <432ac14ed0489c14cce98a83a9772f7d@sebres.de> <20150906000803.GA52312@mdounin.ru> Message-ID: <20150907161759.GC52312@mdounin.ru>

Hello!
> 1) I will try to explain you, that is not quite true with a small
> approximation: let our hash value be exact one byte large (8 bit), it can
> obtain 2^8 = 256 different hash values (and let it be perfect distributed).
> The relative frequency to encounter a collision - same hash for any random
> another inside this interval (Hv0 - Hv255) will be also 256, because circa
> each 256-th char sequence will obtain the same hash value (Hv0).

Sure.

> Will be such hash safe? Of course not, never.

That depends on the use case, actually, but it's certainly not a good hash to use for identification of multiple documents.

> But if we will hash any character sequences with max length 16 bytes, we
> will have 256^16 (~= 10^38) different variations of binary string (keys).
> The relation (and the probability (Pc) to have a collision for two any
> random strings) would be only (10^38/256 - 1)/10^38 * (10^38/256 - 2)/(10^38
> - 1) ~= 0.000015.

As long as you have two random 16-byte (128-bit) strings, they will collide with probability 1/(2^128): one string can't collide with itself, and the other one will collide if it happens to be the same as the first one. Hash values of these two strings will collide with probability 1/256. It only depends on the size of the hash output, not the sizes of the input strings. I have no idea how you got the numbers you are claiming; they look wrong.

> Small, right? But the relative frequency (Hrc) is still 256! This can be
> explained with really large count of different sequences, and so with large
> count of hash values (Hv1-Hv255) that are not equal with (Hv0).

See above, hash values of two random strings will collide with probability of 1/256.

> But let us resolve the approximation: the hash value obtain 2^128 (~=
> 10^38), but how many values maximum should be hashed? It's unknown. Let our
> keys contain maximum 100 bytes, the count of variations of all possible
> strings will be 256^100 (~= 10^240).
The probability to encounter of a > collision and the relative frequency to encounter a collision will be a > several order smaller (1e-78), but the relation between Hrc and Pc is > comparable to example above (in the sense of relation between of both). And > in this relation is similar (un)safe. Yes less likely (well 8 vs 128 bits) > but still "unsafe". The Wikipedia article about "Birthday attack" I linked explains how to calculate probabilities of collisions depending on hash output size and number of values hashed. Please read it carefully: https://en.wikipedia.org/wiki/Birthday_attack Trying to say that numbers calculated are "unknown" isn't constructive. [...] > 2) For the *caching* it's at all not required to have such "safe" hash > functions: > - The hash function should create reasonably perfect distributed values; > - The hash function should be fast as possible (we can get MurmurHash or > something similar, significantly faster than md5); > - We should always compare the keys, after cache entry with hash value was > found, to be sure exact the same key was found; But that does not make our > cache slower, because the generation of hash value can be faster through > algorithms faster as md5 (for example the MMH3 is up to 90% faster as MD5); > - In the very less likely case of collision we will just forget the cached > entry with previous key or save it as array for serial access (not really > expected by caching and large hash value, because rare and that is a cache - > not a database that should always hold an entry). > > I want implement that and post a PR, and can make it configurable (something > like `fastcgi_cache_safe = on`) then you can compare the performance of it - > would be faster as your md5/crc32 implementation. Apart from changing md5 to "something faster", this approach is identical to just adding the full key check. I already explained why this isn't needed. 
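Maxim's point, that the pairwise collision chance depends only on the hash output width and not on the input length, can be checked empirically with a deliberately tiny hash (an 8-bit truncation of MD5, used here purely as a toy):

```python
import hashlib
import os

def tiny_hash(data):
    """Toy 8-bit hash (first byte of MD5) -- small enough that
    collisions show up at a measurable rate."""
    return hashlib.md5(data).digest()[0]

def pair_collision_rate(length, trials=200_000):
    """Fraction of pairs of distinct random `length`-byte strings
    whose 8-bit hashes collide."""
    hits = 0
    for _ in range(trials):
        a, b = os.urandom(length), os.urandom(length)
        while b == a:                 # insist on distinct inputs
            b = os.urandom(length)
        hits += tiny_hash(a) == tiny_hash(b)
    return hits / trials

# Both rates hover around 1/256 ~= 0.0039; the input length (16 vs
# 100 bytes) makes no difference, only the hash output width does.
print(pair_collision_rate(16), pair_collision_rate(100))
```

The same experiment scaled to a 128-bit output is what makes the 1/(2^128) figure in the reply plausible: the input key space never enters the calculation.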
-- Maxim Dounin http://nginx.org/ From serg.brester at sebres.de Mon Sep 7 16:32:25 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Mon, 07 Sep 2015 18:32:25 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150907161759.GC52312@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <432ac14ed0489c14cce98a83a9772f7d@sebres.de> <20150906000803.GA52312@mdounin.ru> <20150907161759.GC52312@mdounin.ru> Message-ID: <1be0d3af4e63e96ddc3b2b1ae8f23e93@sebres.de> I have tried - I give up (it makes no sense), I have a my own fork (to make everything right there). From mdounin at mdounin.ru Mon Sep 7 16:58:11 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Sep 2015 19:58:11 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <55EDA2E1.8050101@csdoc.com> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> Message-ID: <20150907165811.GD52312@mdounin.ru> Hello! On Mon, Sep 07, 2015 at 05:44:49PM +0300, Gena Makhomed wrote: > On 06.09.2015 4:56, Maxim Dounin wrote: > > >>The security of the MD5 hash function is severely compromised. > >>A collision attack exists that can find collisions within seconds > >>on a computer with a 2.6 GHz Pentium 4 processor (complexity of 2**24.1) > >>- https://en.wikipedia.org/wiki/MD5#Security > > > >I said "took", not "takes now". The MD5 hash function was > >introduced in 1991, and the first collision was found in 2004. > > > >Also, it's important to understand that, while collision attacks > >now exists, it doesn't really make MD5 bad for various > >non-security uses. > > nginx cache is security use too. 
> If user configure common shared cache for all virtual servers,
> and config have two servers: first, protected by access,
> auth_basic or auth_request modules from unauthorized use,
> and second server with publicly available content.
>
> If attacker know proxy_cache_key, for example $scheme$host$request_uri
> and know $request_uri from protected site - he can create MD5/crc32
> collision by building specific $request_uri for second server,
> and he will got unauthorized access to protected content
> from the first, protected web site.
>
> This is looks like vulnerability.

Yes, this looks like a valid example of a potentially affected configuration. Though as far as I know, it is not currently possible to construct a value (with a chosen prefix) that maps to a given md5 value.

> And this vulnerability can be fixed as Sergey Brester propose:
>
> We should always compare the keys,
> after cache entry with hash value was found.
>
> Or vulnerability can be minimized by using secure hash
> function instead of current cryptographically broken MD5.

I think moving away from MD5 is the right way to go.

-- Maxim Dounin http://nginx.org/

From amdeich at gmail.com Mon Sep 7 17:18:29 2015 From: amdeich at gmail.com (Andrey Kulikov) Date: Mon, 7 Sep 2015 20:18:29 +0300 Subject: [PATCH] Add ssl_client_not_before and ssl_client_not_after request Message-ID:

Hello,

The nginx SSL module allows the use of some variables: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables But sometimes they are not enough.
Please find attached patch, adding two more: $ssl_client_not_before - Validity date from client certificate 'Not Before' $ssl_client_not_after - Validity date from client certificate 'Not After' After applying changes you may use them in configuration along with other variables: location /test_headers/ { proxy_set_header X-ClientCert-SubjectSerial $ssl_client_serial; proxy_set_header X-ClientCert-NotBefore $ssl_client_not_before; proxy_set_header X-ClientCert-NotAfter $ssl_client_not_after; proxy_pass http://192.168.88.156/; } And it will appears in (in this case) in proxied content in the following form: X-ClientCert-SubjectSerial: 120005C82FBE782D06D89FF14800000005C82F X-ClientCert-NotBefore: Jul 9 22:20:31 2015 GMT X-ClientCert-NotAfter: Oct 9 22:30:31 2015 GMT Tested on 1.8.0, tested that it can be cleanly applied to 1.9.4. Feel free to ask any questions regarding this matter. Best wishes, Andrey -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: add_client_not_before_not_aster_var.patch Type: text/x-patch Size: 3964 bytes Desc: not available URL: From kajtzu at a51.org Mon Sep 7 17:42:42 2015 From: kajtzu at a51.org (Kaj Niemi) Date: Mon, 7 Sep 2015 17:42:42 +0000 Subject: [PATCH] Add ssl_client_not_before and ssl_client_not_after request In-Reply-To: References: Message-ID: <322438CFF168E297.6ED3DB4C-87F1-4028-A922-A8A0F6669225@mail.outlook.com> Wouldn't it be easier to parse and compare if the not before/after values were written as a UNIX timestamp instead of in human readable format? Just a thought :) Kaj Sent from my iPad _____________________________ From: Andrey Kulikov > Sent: Monday, September 7, 2015 8:18 PM Subject: [PATCH] Add ssl_client_not_before and ssl_client_not_after request To: > Hello, Nginx SSL module allow to use some variables: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables But sometimes tey are not enough. 
Please find attached patch, adding two more: $ssl_client_not_before - Validity date from client certificate 'Not Before' $ssl_client_not_after - Validity date from client certificate 'Not After' After applying changes you may use them in configuration along with other variables: location /test_headers/ { proxy_set_header X-ClientCert-SubjectSerial $ssl_client_serial; proxy_set_header X-ClientCert-NotBefore $ssl_client_not_before; proxy_set_header X-ClientCert-NotAfter $ssl_client_not_after; proxy_pass http://192.168.88.156/; } And it will appears in (in this case) in proxied content in the following form: X-ClientCert-SubjectSerial: 120005C82FBE782D06D89FF14800000005C82F X-ClientCert-NotBefore: Jul 9 22:20:31 2015 GMT X-ClientCert-NotAfter: Oct 9 22:30:31 2015 GMT Tested on 1.8.0, tested that it can be cleanly applied to 1.9.4. Feel free to ask any questions regarding this matter. Best wishes, Andrey -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 7 18:04:32 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Sep 2015 21:04:32 +0300 Subject: [PATCH] Add ssl_client_not_before and ssl_client_not_after request In-Reply-To: References: Message-ID: <20150907180432.GF52312@mdounin.ru> Hello! On Mon, Sep 07, 2015 at 08:18:29PM +0300, Andrey Kulikov wrote: > Hello, > > Nginx SSL module allow to use some variables: > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables > But sometimes tey are not enough. 
> > Please find attached patch, adding two more: > $ssl_client_not_before - Validity date from client certificate 'Not Before' > $ssl_client_not_after - Validity date from client certificate 'Not After' > > After applying changes you may use them in configuration along with other > variables: > > location /test_headers/ { > proxy_set_header X-ClientCert-SubjectSerial $ssl_client_serial; > proxy_set_header X-ClientCert-NotBefore $ssl_client_not_before; > proxy_set_header X-ClientCert-NotAfter $ssl_client_not_after; > proxy_pass http://192.168.88.156/; > } > > And it will appears in (in this case) in proxied content in the following > form: > > X-ClientCert-SubjectSerial: 120005C82FBE782D06D89FF14800000005C82F > X-ClientCert-NotBefore: Jul 9 22:20:31 2015 GMT > X-ClientCert-NotAfter: Oct 9 22:30:31 2015 GMT > > > Tested on 1.8.0, tested that it can be cleanly applied to 1.9.4. > > Feel free to ask any questions regarding this matter. How do you expect these variables to be used? For some form of warning like "your certificate will expire soon, please update it"? Note that validity of the certificate was already checked at this point, these fields in particular, and that's not something a backend server needs to test. See also http://nginx.org/en/docs/contributing_changes.html for some hints on how we would prefer submissions to be done. [...] > + return NGX_OK; > +} > + > +ngx_int_t > +ngx_ssl_get_client_not_after(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) Two empty lines between functions, please. [...] > + return NGX_OK; > +} > + > +ngx_int_t > ngx_ssl_get_fingerprint(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) Same here. [...] 
> --- a/src/http/modules/ngx_http_ssl_module.c
> +++ b/src/http/modules/ngx_http_ssl_module.c
> @@ -307,6 +307,12 @@ static ngx_http_variable_t ngx_http_ssl_vars[] = {
>     { ngx_string("ssl_client_verify"), NULL, ngx_http_ssl_variable,
>       (uintptr_t) ngx_ssl_get_client_verify, NGX_HTTP_VAR_CHANGEABLE, 0 },
>
> +    { ngx_string("ssl_client_not_before"), NULL, ngx_http_ssl_variable,
> +      (uintptr_t) ngx_ssl_get_client_not_before, NGX_HTTP_VAR_CHANGEABLE, 0 },
> +
> +    { ngx_string("ssl_client_not_after"), NULL, ngx_http_ssl_variable,
> +      (uintptr_t) ngx_ssl_get_client_not_after, NGX_HTTP_VAR_CHANGEABLE, 0 },
> +
>     { ngx_null_string, NULL, NULL, 0, 0, 0 }
> };

It should be better to put these variables after $ssl_client_serial, much like the functions themselves.

-- Maxim Dounin http://nginx.org/

From amdeich at gmail.com Mon Sep 7 18:23:08 2015 From: amdeich at gmail.com (Andrey Kulikov) Date: Mon, 7 Sep 2015 21:23:08 +0300 Subject: [PATCH] Add ssl_client_not_before and ssl_client_not_after request In-Reply-To: <20150907180432.GF52312@mdounin.ru> References: <20150907180432.GF52312@mdounin.ru> Message-ID:

Hello Maxim,

Thanks for the comments! Please find the amended patch attached.

As to an example of usage: it's a real-world use case - one of our customers wants to see these values on the backend server for their own purposes. But your example may also be applicable at times.

Best wishes, Andrey

On 7 September 2015 at 21:04, Maxim Dounin wrote:
> Hello!
>
> On Mon, Sep 07, 2015 at 08:18:29PM +0300, Andrey Kulikov wrote:
>
> > Hello,
> >
> > Nginx SSL module allow to use some variables:
> > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables
> > But sometimes tey are not enough.
> > > > Please find attached patch, adding two more: > > $ssl_client_not_before - Validity date from client certificate 'Not > Before' > > $ssl_client_not_after - Validity date from client certificate 'Not > After' > > > > After applying changes you may use them in configuration along with other > > variables: > > > > location /test_headers/ { > > proxy_set_header X-ClientCert-SubjectSerial $ssl_client_serial; > > proxy_set_header X-ClientCert-NotBefore > $ssl_client_not_before; > > proxy_set_header X-ClientCert-NotAfter > $ssl_client_not_after; > > proxy_pass http://192.168.88.156/; > > } > > > > And it will appears in (in this case) in proxied content in the following > > form: > > > > X-ClientCert-SubjectSerial: 120005C82FBE782D06D89FF14800000005C82F > > X-ClientCert-NotBefore: Jul 9 22:20:31 2015 GMT > > X-ClientCert-NotAfter: Oct 9 22:30:31 2015 GMT > > > > > > Tested on 1.8.0, tested that it can be cleanly applied to 1.9.4. > > > > Feel free to ask any questions regarding this matter. > > How do you expect these variables to be used? For some form of > warning like "your certificate will expire soon, please update > it"? Note that validity of the certificate was already checked at > this point, these fields in particular, and that's not something a > backend server needs to test. > > See also http://nginx.org/en/docs/contributing_changes.html for > some hints on how we would prefer submissions to be done. > > [...] > > > + return NGX_OK; > > +} > > + > > +ngx_int_t > > +ngx_ssl_get_client_not_after(ngx_connection_t *c, ngx_pool_t *pool, > ngx_str_t *s) > > Two empty lines between functions, please. > > [...] > > > + return NGX_OK; > > +} > > + > > +ngx_int_t > > ngx_ssl_get_fingerprint(ngx_connection_t *c, ngx_pool_t *pool, > ngx_str_t *s) > > Same here. > > [...] 
> > > --- a/src/http/modules/ngx_http_ssl_module.c > > +++ b/src/http/modules/ngx_http_ssl_module.c > > @@ -307,6 +307,12 @@ static ngx_http_variable_t ngx_http_ssl_vars[] = { > > { ngx_string("ssl_client_verify"), NULL, ngx_http_ssl_variable, > > (uintptr_t) ngx_ssl_get_client_verify, NGX_HTTP_VAR_CHANGEABLE, 0 > }, > > > > + { ngx_string("ssl_client_not_before"), NULL, ngx_http_ssl_variable, > > + (uintptr_t) ngx_ssl_get_client_not_before, > NGX_HTTP_VAR_CHANGEABLE, 0 }, > > + > > + { ngx_string("ssl_client_not_after"), NULL, ngx_http_ssl_variable, > > + (uintptr_t) ngx_ssl_get_client_not_after, > NGX_HTTP_VAR_CHANGEABLE, 0 }, > > + > > { ngx_null_string, NULL, NULL, 0, 0, 0 } > > }; > > It should be better to put these variables after $ssl_client_serial, > much like the functions itself. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: add_client_not_before_not_aster_var.patch Type: text/x-patch Size: 4066 bytes Desc: not available URL: From gmm at csdoc.com Mon Sep 7 19:29:42 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Mon, 7 Sep 2015 22:29:42 +0300 Subject: How does Nginx look-up cached resource? 
In-Reply-To: <20150907165811.GD52312@mdounin.ru> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> Message-ID: <55EDE5A6.4060208@csdoc.com> On 07.09.2015 19:58, Maxim Dounin wrote: >>> Also, it's important to understand that, while collision attacks >>> now exists, it doesn't really make MD5 bad for various >>> non-security uses. >> >> nginx cache is security use too. >> >> If user configure common shared cache for all virtual servers, >> and config have two servers: first, protected by access, >> auth_basic or auth_request modules from unauthorized use, >> and second server with publicly available content. >> >> If attacker know proxy_cache_key, for example $scheme$host$request_uri >> and know $request_uri from protected site - he can create MD5/crc32 >> collision by building specific $request_uri for second server, >> and he will got unauthorized access to protected content >> from the first, protected web site. >> >> This is looks like vulnerability. > > Yes, this looks like a valid example of a potentially affected > configuration. Though as far as I know, it is not currently > possible to construct a value (which choosen prefix) that maps to > a given md5 value. It is possible and already was used to create forged certificates. In 2007, a chosen-prefix collision attack was found against MD5, requiring roughly 2**50 evaluations of the MD5 function. The paper also demonstrates two X.509 certificates for different domain names, with colliding hash values. This means that a certificate authority could be asked to sign a certificate for one domain, and then that certificate could be used to impersonate another domain. 
- https://en.wikipedia.org/wiki/Collision_attack#Chosen-prefix_collision_attack

Details: http://www.win.tue.nl/hashclash/ChosenPrefixCollisions/

Current nVidia GPU hardware can process hundreds of thousands of MD5 hashes per second, and multiple GPUs can be used in a cluster with linear scalability, as I understand it.

>> And this vulnerability can be fixed as Sergey Brester proposes:
>>
>> We should always compare the keys
>> after a cache entry with a matching hash value is found.
>>
>> Or the vulnerability can be mitigated by using a secure hash
>> function instead of the currently cryptographically broken MD5.
>
> I think moving away from MD5 is a right way to go.

160-bit SHA1?

...we present an identical-prefix collision attack and a chosen-prefix collision attack on SHA-1 with complexities equivalent to approximately 2**61 and 2**77.1 SHA-1 compressions, respectively.
- https://marc-stevens.nl/research/papers/EC13-S.pdf

SHA-1 has also been deemed insecure for SSL certificates, and everyone is now being forced to migrate from SHA-1 to at least SHA-256:
http://googleonlinesecurity.blogspot.co.uk/2014/09/gradually-sunsetting-sha-1.html

Collision attacks against SHA-1 are already too affordable:
https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html

Content cached by nginx can also be very valuable, even more valuable than the SSL certificates of other sites.

====================================================

Using SHA-256, SHA-512, SHA3-256 or SHA3-512 is secure right now, but it requires more CPU power and more memory.

A more secure and robust way is to store the proxy_cache_key value in the cache file on disk and check this value before sending a cached response to the client. That way we can be sure cache misuse is not possible, and maybe even fast 128-bit secure hash functions can be used, to minimize memory usage and CPU requirements.
SHA1 truncated to 128 bits, or something better than SHA1, or even the current MD5 left as is - to retain backward compatibility with existing installations around the world.

If retaining backward compatibility is not mandatory, maybe SHAKE128(M, 128) can be used as a 128-bit hash to save server memory; but checking the proxy_cache_key value is still required to prevent information disclosure attacks, because a 128-bit hash can be brute-forced in the near future, as described for SHA1 on Bruce Schneier's site.

P.S. Using MurmurHash is not a good idea, because an attacker can easily construct collisions and invalidate popular entries in the cache, and this technique can be used for DDoS attacks (even if only one site exists on the server with the nginx cache).

Using a secure hash function for the nginx cache is a strong requirement, even if a full proxy_cache_key value check is added.

--
Best regards,
 Gena

From serg.brester at sebres.de  Mon Sep  7 21:22:25 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Mon, 07 Sep 2015 23:22:25 +0200
Subject: How does Nginx look-up cached resource?
In-Reply-To: <55EDE5A6.4060208@csdoc.com>
References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com>
Message-ID: <5fda28cb29358dcb4208c345e482e39f@sebres.de>

On 07.09.2015 21:29, Gena Makhomed wrote:

> Using MurmurHash is not a good idea, because an attacker can easily
> construct collisions and invalidate popular entries in the cache,
> and this technique can be used for DDoS attacks (even if only one
> site exists on the server with the nginx cache).
>
> Using a secure hash function for the nginx cache is a strong
> requirement, even if a full proxy_cache_key value check is added.
That is not correct, because something like that would be called "security through obscurity"!

The hash value should be used only for fast look-up of the hash key, not to identify the cached resources!

If your entry should be secure, the key (not its hash) should contain part of a security token, authentication, salt, etc.

So again: the hash bears no security function, and if the whole key is always compared, it does not matter at all which hash function is used, or how secure it is. To "crack" (i.e. retrieve) a cache entry, you always have to "crack" the complete key, not its hash. As I already said, the hash value is only a way to quickly search for - or even directly access - the entry with the hashed key.

I know systems where the hash values are 32 bits and use the simplest algorithms, like Ci << 3 + Ci+1. But as already said, the whole keys are compared afterwards, and that is very safe.

So, for example, if you need to cache pages per user, put the authenticated user name into the cache key (the value given with proxy_cache_key). Then the attacker would have to crack the nginx authentication as well. Everything else would be, as already stated, security through obscurity.

Regards,
sebres.

From gmm at csdoc.com  Mon Sep  7 23:17:59 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Tue, 8 Sep 2015 02:17:59 +0300
Subject: How does Nginx look-up cached resource?
In-Reply-To: <5fda28cb29358dcb4208c345e482e39f@sebres.de> References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <5fda28cb29358dcb4208c345e482e39f@sebres.de> Message-ID: <55EE1B27.4000005@csdoc.com> On 08.09.2015 0:22, Sergey Brester wrote: >> Using MurmurHash is not good idea, because attacker >> can easy make collisions and invalidate popular entries >> from cache, and this technology can be used for DDoS attacks. >> (even in case if only one site exists on server with nginx cache) >> >> Using secure hash function for nginx cache is strong requirement, >> even in case then full proxy_cache_key value check will be added. > > It's not correct, because something like that > will be called "security through obscurity"! There is no obscurity here. Value of proxy_cache_key is known, hash function is known, nginx sources is open and available. > Hash value should be used only for fast searching of hash key. > Not to identify the cached resources! You remember proposed solution from your message? http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007286.html > - In the very less likely case of collision we will just forget > the cached entry with previous key or save it as array for serial > access (not really expected by caching and large hash value, > because rare and that is a cache - not a database that > should always hold an entry). Attacker easily can provide DDoS attack against nginx in this case: http://www.securityweek.com/hash-table-collision-attacks-could-trigger-ddos-massive-scale Hash Table Vulnerability Enables Wide-Scale DDoS Attacks > If your entry should be secure, the key (not it hash) should contain > part of security token, authentication, salt etc. 
This is "security through obscurity", and you say, what this is bad thing. > So again: hash bears no security function, and if the whole key would be > always compared - it would be at all not important which hash function > will be used, and how secure it is. And to "crack" resp. return the > cache entry you should always "crack" it completely key, not it hash. If site is under high load, and, for example contains many pages, which are very popular (for example, 100 req/sec of each) and backend need many time for generating such page, for example, 2-3 seconds - attacker can using MurmurHash create requestst to non-existend pages, with same MurmurHash hash value. And 404 responces from backend will replace very popular pages with different uri but with same MurmurHash hash value. And backend will be DDoS`ed by many requests for different popular pages. So, attacker easily can disable nginx cache for any known uri. And if backend can't process all client requests without cache - it will be overloaded and access to site will be denied for all new users. You can make workaround for this bug caused by MurmurHash, by appending and prepending proxy_cache_key value with some secret tokens, but management of these tokens will be headache of each nginx user, and many of these users don't do such "security through obscurity" things, and leave proxy_cache_key in config as is, in form $scheme$host$request_uri or even it default value. So, many thousands of nginx configurations will be in vulnerable state. Also, such workaround for bugs caused by using MurmurHash is very bad from usability point of view, we will force users to do the stupid things, and manually add workaround. Automatically add workaround to config - also is not good thing, because nginx only read config and never write it. Even more, config may be available to nginx only in read only mode. Storing such tokens inside cache files is also bad thing: file with these secure tokens will be single point of failure. 
Delete or modify that file, and the entire nginx cache becomes useless and invalid after an nginx restart. Also, storing the value of these secret tokens inside each cache file is just a waste of space; and if we don't store them, the cache can easily be exploded and will return a mess: unmatched proxy_cache_keys and cache content.

The better approach, from my point of view, is to just use a secure hash function; then the attacker can't mount DDoS attacks against the nginx cache this way.

I agree with you that comparing the entire keys on equal hash values is the safer solution:

: More secure and robust way is to store proxy_cache_key
: value into cache file on disk and check this value
: before sending cached response to client. In such way
: we can be ensured, what cache misuse is not possible

> I know systems, where the hash values are 32 bits and uses simplest
> algorithm like Ci << 3 + Ci+1. But as already said, hereafter the whole
> keys will be compared and it's very safe.

https://www.kb.cert.org/vuls/id/903934
Vulnerability Note VU#903934
Hash table implementations vulnerable to algorithmic complexity attacks

--
Best regards,
 Gena

From serg.brester at sebres.de  Tue Sep  8 00:29:36 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Tue, 08 Sep 2015 02:29:36 +0200
Subject: How does Nginx look-up cached resource?
In-Reply-To: <55EE1B27.4000005@csdoc.com>
References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <5fda28cb29358dcb4208c345e482e39f@sebres.de> <55EE1B27.4000005@csdoc.com>
Message-ID:

On 08.09.2015 01:17, Gena Makhomed wrote:

> There is no obscurity here. Value of proxy_cache_key is known,
> hash function is known, nginx sources is open and available.
If the value of proxy_cache_key is known and attackers can generate it, what do you want to protect with a hash value at all?

If the attacker can use any key, it does not matter which hash algorithm you use (the attacker can get the entry). If the attacker can't use any key (it's protected with some internal nginx variable), it also does not matter which hash is used (the attacker cannot get the entry, because the keys will be compared as well).

>> Hash value should be used only for fast searching of hash key. Not to
>> identify the cached resources!
>
> You remember proposed solution from your message?
> http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007286.html
> [1]
> Attacker easily can provide DDoS attack against nginx in this case:
> http://www.securityweek.com/hash-table-collision-attacks-could-trigger-ddos-massive-scale
> [2]
> Hash Table Vulnerability Enables Wide-Scale DDoS Attacks

And what's stopping him from doing the same with a much safer hash function? On the contrary, don't forget that generating such hash values is also CPU-greedy.

>> If your entry should be secure, the key (not it hash) should contain
>> part of security token, authentication, salt etc.
>
> This is "security through obscurity",
> and you say, what this is bad thing.

Wrong! Because these secure parts of the key are internal nginx values/variables (authenticated user name, salt, etc.) that the attacker can never use! He can theoretically use a variable part of the key to generate or brute-force some expected hash equal to the searched one, but the key comparison makes all his attempts void.

If that is not the case - the key contains nothing internal, and the attacker can use any key (e.g. a URI part) - then see above: it does not matter which hash algorithm you use (the attacker can get the entry because he can use the key). The cache is then insecure.

>> So again: hash bears no security function, and if the whole key would
>> be always compared - it would be at all not important which hash
>> function will be used, and how secure it is.
And to "crack" resp. >> return the cache entry you should always "crack" it completely key, >> not it hash. > If site is under high load, and, for example contains many pages, > which are very popular (for example, 100 req/sec of each) and backend > need many time for generating such page, for example, 2-3 seconds - > attacker can using MurmurHash create requestst to non-existend > pages, with same MurmurHash hash value. And 404 responces > from backend will replace very popular pages with different uri > but with same MurmurHash hash value. And backend will be DDoS`ed > by many requests for different popular pages. So, attacker easily > can disable nginx cache for any known uri. And if backend can't > process all client requests without cache - it will be overloaded > and access to site will be denied for all new users. 1) big error - non-existend pages can be not placed in cache (no cache for 404); 2) attacker does not uses MurmurHash - it does nginx over a valid cache key generated nginx side, just to find the entry faster (for nothing else)! 3) the attacker almost always cannot control the key used for caching (except only if it a part of uri - insecure cache); 4) what do you want to do against it? If I understood it correct, instead of intensive load by page generation, you want shift the load into hash generation? Well, very clever. :) 5) DDOS prevention is in any event not the role of caching and can be made with other instruments (also using nginx); > So, many thousands of nginx configurations will be in vulnerable state. Wrong. Not a single one, and if only, then not vulnerable - because it would be then exactly so "vulnerable" with any other hash function - the configuration for key or something else is blame, not the hashing self. 
Either the cache is insecure (any key can be used) - no matter which hash function, Or the cache is secure (protected with any internal nginx variable as part of cache key, that attacker can't use) - fail by keys compare - no matter which hash function also. > Hash table implementations vulnerable to algorithmic complexity attacks That means the pure hashing, without keys comparison - that is safe if keys would be compared. What I was already said and wrote 100 times. From mdounin at mdounin.ru Tue Sep 8 01:41:27 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Sep 2015 04:41:27 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <55EDE5A6.4060208@csdoc.com> References: <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> Message-ID: <20150908014127.GG52312@mdounin.ru> Hello! On Mon, Sep 07, 2015 at 10:29:42PM +0300, Gena Makhomed wrote: > On 07.09.2015 19:58, Maxim Dounin wrote: > > >>>Also, it's important to understand that, while collision attacks > >>>now exists, it doesn't really make MD5 bad for various > >>>non-security uses. > >> > >>nginx cache is security use too. > >> > >>If user configure common shared cache for all virtual servers, > >>and config have two servers: first, protected by access, > >>auth_basic or auth_request modules from unauthorized use, > >>and second server with publicly available content. > >> > >>If attacker know proxy_cache_key, for example $scheme$host$request_uri > >>and know $request_uri from protected site - he can create MD5/crc32 > >>collision by building specific $request_uri for second server, > >>and he will got unauthorized access to protected content > >>from the first, protected web site. > >> > >>This is looks like vulnerability. 
> > > >Yes, this looks like a valid example of a potentially affected > >configuration. Though as far as I know, it is not currently > >possible to construct a value (which choosen prefix) that maps to > >a given md5 value. > > It is possible and already was used to create forged certificates. > > In 2007, a chosen-prefix collision attack was found against MD5, > requiring roughly 2**50 evaluations of the MD5 function. The paper > also demonstrates two X.509 certificates for different domain names, > with colliding hash values. This means that a certificate authority > could be asked to sign a certificate for one domain, and then that > certificate could be used to impersonate another domain. > - > https://en.wikipedia.org/wiki/Collision_attack#Chosen-prefix_collision_attack For the attack you described chosen prefix collision attack is not enough, as it is about finding two colliding values with chosen prefixes. But to mount the attack described one needs to find a value that maps to a given md5 hash - essentially, this is a preimage attack with additional requirements. On the other hand, it might be possible to simplify requirements of the attack by forcing some authenticated user to load data under a given key and then retrieve this key contents using a choosen prefix collision previously calculated. [...] > More secure and robust way is to store proxy_cache_key > value into cache file on disk and check this value > before sending cached response to client. In such way > we can be ensured, what cache misuse is not possible, > and may be even fast 128-bit secure hash functions > can be used, to minimize memory usage and CPU requirements. > SHA1 truncated to 128 bits or something better than SHA1, > or even leave current MD5 as is - for retaining backward > compatibility with existing installations around the world. 
Maybe you are right, and checking the full key value would be the most secure and efficient solution after all, especially keeping in mind backward compatibility.

[...]

> Using MurmurHash is not good idea, because attacker
> can easy make collisions and invalidate popular entries
> from cache, and this technology can be used for DDoS attacks.
> (even in case if only one site exists on server with nginx cache)
>
> Using secure hash function for nginx cache is strong requirement,
> even in case then full proxy_cache_key value check will be added.

Agreed: using intentionally non-secure hashes isn't a good idea.

--
Maxim Dounin
http://nginx.org/

From hungnv at opensource.com.vn  Tue Sep  8 03:01:22 2015
From: hungnv at opensource.com.vn (Hung Nguyen)
Date: Tue, 8 Sep 2015 10:01:22 +0700
Subject: How to get Original response before gzip or another module filter rewrite response
In-Reply-To: <201509072207192016118@163.com>
References: <2015090521201511145750@163.com> <201509072207192016118@163.com>
Message-ID: <37424156-8AA1-4E5D-862F-04E5E814615D@opensource.com.vn>

Hello,

I don't know which context you are working in, but in order to write a chain buffer (this is what contains nginx's response) to a file, you can use the ngx_write_chain_to_file or ngx_write_chain_to_temp_file function.
Many nginx modules already use these functions; take a look at ngx_http_proxy_module for more detail.

--
Hung

> On Sep 7, 2015, at 9:07 PM, hack988 wrote:
>
> Hello everybody:
> Can anyone answer my question? Thanks very much.
>
> hack988
>
> From: hack988
> Date: 2015-09-05 21:20
> To: nginx-devel
> Subject: How to get Original response before gzip or another module filter rewrite response
> Dear All:
> I'm a beginner at nginx development. Although I've read Emiller's guide and other articles about nginx development for several days, I still don't know how to copy the whole original response buffer chain into my own module's temporary buffer before another module (gzip, gunzip, gzip_static) rewrites the buffers.
> How can I check whether another module has written the output buffer with a different content type?
> I want to copy the buffer to a file without compression or chunking.
> I'm sorry for my poor English, thanks.
>
> hack988

From hack988 at 163.com  Tue Sep  8 15:43:34 2015
From: hack988 at 163.com (hack988)
Date: Tue, 8 Sep 2015 23:43:34 +0800
Subject: How to get Original response before gzip or another module filter rewrite response
References: <2015090521201511145750@163.com>, <201509072207192016118@163.com>, <37424156-8AA1-4E5D-862F-04E5E814615D@opensource.com.vn>
Message-ID: <2015090823433328449580@163.com>

Hello,
I have already read ngx_http_proxy_module.c twice (source code version 1.9.4); sadly, I couldn't find the ngx_write_chain_to_file or ngx_write_chain_to_temp_file function in that source file.
I found other code:
=================================================================================================
line 1551

    for ( ;; ) {
        ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                       "proxy output chunk: %d", ngx_buf_size(cl->buf));

        size += ngx_buf_size(cl->buf);

        if (cl->buf->flush
            || cl->buf->sync
            || ngx_buf_in_memory(cl->buf)
            || cl->buf->in_file)
        {
            tl = ngx_alloc_chain_link(r->pool);
            if (tl == NULL) {
                return NGX_ERROR;
            }

            tl->buf = cl->buf;
            *ll = tl;
            ll = &tl->next;
        }

        if (cl->next == NULL) {
            break;
        }

        cl = cl->next;
    }

line 1653

out:

    rc = ngx_chain_writer(&r->upstream->writer, out);

I guess that nginx takes the response from this buffer chain and rewrites the output buffers directly.
===============================================================================================
line 459

    { ngx_string("proxy_cache_path"),
      NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE,
      ngx_http_file_cache_set_slot,
      NGX_HTTP_MAIN_CONF_OFFSET,
      offsetof(ngx_http_proxy_main_conf_t, caches),
      &ngx_http_proxy_module },

proxy_cache_path is stored in the ngx_http_proxy_main_conf_t->caches variable, but it isn't used in the proxy module's source code to write the cache file.
=================================================================================================
What I want to do:
nginx calls the header and body handlers like this:
header handler1 -> header handler2 -> body handler1 -> body handler2

My module should sit here:
header handler1 -> header handler2 -> body handler1 -> my body handler -> body handler2

I want to save the response buffers, as rewritten by body handler1, to a file.

hack988

From: Hung Nguyen
Date: 2015-09-08 11:01
To: nginx-devel
Subject: Re: How to get Original response before gzip or another module filter rewrite response
Hello,

I don't know which context you are working in, but in order to write a chain buffer (this is what contains nginx's response) to a file, you can use the ngx_write_chain_to_file or ngx_write_chain_to_temp_file function.
Many nginx modules already use these functions; take a look at ngx_http_proxy_module for more detail.

--
Hung

On Sep 7, 2015, at 9:07 PM, hack988 wrote:

Hello everybody:
Can anyone answer my question? Thanks very much.

hack988

From: hack988
Date: 2015-09-05 21:20
To: nginx-devel
Subject: How to get Original response before gzip or another module filter rewrite response
Dear All:
I'm a beginner at nginx development. Although I've read Emiller's guide and other articles about nginx development for several days, I still don't know how to copy the whole original response buffer chain into my own module's temporary buffer before another module (gzip, gunzip, gzip_static) rewrites the buffers.
How can I check whether another module has written the output buffer with a different content type?
I want to copy the buffer to a file without compression or chunking.
I'm sorry for my poor English, thanks.

hack988

From sorin.v.manole at gmail.com  Tue Sep  8 16:32:30 2015
From: sorin.v.manole at gmail.com (Sorin Manole)
Date: Tue, 8 Sep 2015 19:32:30 +0300
Subject: How to get Original response before gzip or another module filter rewrite response
In-Reply-To: <2015090823433328449580@163.com>
References: <2015090521201511145750@163.com> <201509072207192016118@163.com> <37424156-8AA1-4E5D-862F-04E5E814615D@opensource.com.vn> <2015090823433328449580@163.com>
Message-ID:

Hello,

You need to create a response body filter module. Your filter handler is called every time some response data is received from the upstream. In that handler you can use the ngx_write_chain_to_file or ngx_write_chain_to_temp_file functions to write the data to your file.

An example of such a module is:
https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_sub_filter_module.c#L282
http://www.evanmiller.org/nginx-modules-guide.html#filters-body

2015-09-08 18:43 GMT+03:00 hack988 :
> Hello,
> I has already read ngx_http_proxy_module.c twice(source code version
> 1.9.4),Sadness that i'm hadn't find ngx_write_chain_to_file or
> I found other code: > > ================================================================================================= > line 1551 > for ( ;; ) { > ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > "proxy output chunk: %d", ngx_buf_size(cl->buf)); > > size += ngx_buf_size(cl->buf); > > if (cl->buf->flush > || cl->buf->sync > || ngx_buf_in_memory(cl->buf) > || cl->buf->in_file) > { > tl = ngx_alloc_chain_link(r->pool); > if (tl == NULL) { > return NGX_ERROR; > } > > tl->buf = cl->buf; > *ll = tl; > ll = &tl->next; > } > > if (cl->next == NULL) { > break; > } > > cl = cl->next; > } > line 1653 > out: > > rc = ngx_chain_writer(&r->upstream->writer, out); > > I guss that nginx get response from buffer and rewrite output buffer > directly. > > =============================================================================================== > line 459 > { ngx_string("proxy_cache_path"), > NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE, > ngx_http_file_cache_set_slot, > NGX_HTTP_MAIN_CONF_OFFSET, > offsetof(ngx_http_proxy_main_conf_t, caches), > &ngx_http_proxy_module }, > > proxy_cache_path is set to ngx_http_proxy_main_conf_t->caches varible,but > it doesn't be used in proxy moudle's source code to write cache file. > > ================================================================================================= > What I want to do: > Nginx Call header,body handler like this: > header handle1->header handler2->body handler1->body hander2 > > Myself module is in: > header handle1->header handler2->body handler1->myself body handle->body > hander2 > > I want to save response buffer which body handler1 rewrite to a file. 
> > > > ------------------------------ > hack988 > > > *From:* Hung Nguyen > *Date:* 2015-09-08 11:01 > *To:* nginx-devel > *Subject:* Re: How to get Original response before gzip or another module > filter rewrite response > Hello, > > I don?t know which context you are trying to do, but in order to write a > chain buffer (this?s what contain nginx?s response) to file, you can use > ngx_write_chain_to_file or ngx_write_chain_to_temp_file function. > Many nginx modules already use these function, take a look at > ngx_http_proxy_module for more detail. > > ? > H?ng > > > On Sep 7, 2015, at 9:07 PM, hack988 wrote: > > Hello everybody: > Is Anyone can anwer my question? thanks very mach; > > ------------------------------ > hack988 > > > *From:* hack988 > *Date:* 2015-09-05 21:20 > *To:* nginx-devel > *Subject:* How to get Original response before gzip or another module > filter rewrite response > Dear All: > I'm a beginner for nginx development,Although i'm read Emiller's Guide and > another article about Nginx develop for several days,I still don't know > how to read whole original response buffer chain copy to myself module's > temporary buffer,before another module(gzip,gunzip,gzip_static) rewrite > buffer. > How to check another module is written output buffer to diffrent content > type? > I want to copy buffer to a file that no compress or chunked. > I'm sorry for my poor english ,thx. > > ------------------------------ > hack988 > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
From gmm at csdoc.com  Tue Sep  8 21:07:36 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Wed, 9 Sep 2015 00:07:36 +0300
Subject: How does Nginx look-up cached resource?
In-Reply-To: <20150908014127.GG52312@mdounin.ru>
References: <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru>
Message-ID: <55EF4E18.5020702@csdoc.com>

On 08.09.2015 4:41, Maxim Dounin wrote:

> On the other hand, it might be possible to simplify requirements
> of the attack by forcing some authenticated user to load data
> under a given key and then retrieve this key contents using a
> choosen prefix collision previously calculated.

Yes: $request_uri is the full original request URI (with arguments), and most backends ignore unknown request arguments without errors.

>> More secure and robust way is to store proxy_cache_key
>> value into cache file on disk and check this value
>> before sending cached response to client. In such way
>> we can be ensured, what cache misuse is not possible,
>> and may be even fast 128-bit secure hash functions
>> can be used, to minimize memory usage and CPU requirements.
>> SHA1 truncated to 128 bits or something better than SHA1,
>> or even leave current MD5 as is - for retaining backward
>> compatibility with existing installations around the world.
>
> May be you are right and checking full key value would be the most
> secure and efficient solution after all, especially keeping in
> mind backward compatibility.

Checking the full key is not my idea; the author of this idea is Sergey Brester. The overhead of such an additional full-key check should be minimal, but it protects nginx users from any future bugs in hash functions.
Using a 256-bit or 512-bit secure hash function requires more memory and more CPU power, and is therefore not a very good solution. But I am still not sure which 128-bit secure hash function would be the best choice for the nginx cache key hash.

For legacy CPUs MD5 is faster, but for new CPUs SHA1 is faster (this can be checked with the "openssl speed md5 sha1" command).

The chosen-prefix collision attack on MD5 has complexity 2**50, and 128-bit SHA-1 is more secure than MD5 against such an attack. SHA-1 truncated to 128 bits is always better than MD5 on new CPUs; maybe this hash should be used for the nginx cache?

But maybe SHAKE128 from SHA-3 is even faster and more secure? Currently there are no known collision/preimage attacks against SHAKE128.

https://godoc.org/golang.org/x/crypto/sha3

The SHAKE functions are recommended for most new uses. They can produce output of arbitrary length. SHAKE256, with an output length of at least 64 bytes, provides 256-bit security against all attacks. The Keccak team recommends it for most applications upgrading from SHA2-512. (NIST chose a much stronger, but much slower, sponge instance for SHA3-512.)

=======================================================================

Replacing MD5 with another hash function will invalidate all old caches, but that is only a one-time performance degradation after an nginx upgrade. The choice between always using a weak "secure" hash function and a one-time cache invalidation should, IMHO, be resolved by replacing the hash function. IMHO, MD5 is the worst, SHA1 is better, and SHAKE128 is the best candidate.

============================
Do not use the MD5 algorithm

Software developers, Certification Authorities, website owners, and users should avoid using the MD5 algorithm in any capacity. As previous research has demonstrated, it should be considered cryptographically broken and unsuitable for further use.
- http://www.kb.cert.org/vuls/id/836068

=======================================

--
Best regards,
 Gena

From gmm at csdoc.com Tue Sep 8 23:09:28 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Wed, 9 Sep 2015 02:09:28 +0300
Subject: How does Nginx look-up cached resource?
In-Reply-To: 
References: <55E8F665.5010806@gmail.com> <20150904132330.GA72232@mdounin.ru> <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <5fda28cb29358dcb4208c345e482e39f@sebres.de> <55EE1B27.4000005@csdoc.com>
Message-ID: <55EF6AA8.40609@csdoc.com>

On 08.09.2015 3:29, Sergey Brester wrote:

>> There is no obscurity here. The value of proxy_cache_key is known,
>> the hash function is known, and the nginx sources are open and available.

> If value of proxy_cache_key is known and attackers can generate it,
> what do you want to protect with some hash value?

I want to protect the backend from a DDoS attack caused by nginx cache
poisoning via easily discoverable collisions of MurmurHash. So, using
MurmurHash in nginx is a bad idea, because it allows any attacker to
virtually turn off the nginx cache for any cache entries.

> If attacker can use any key - it's no matter which hash algorithm you
> have used (attacker can get entry).

An attacker getting an entry from the nginx cache, not from the backend,
is OK: the cache is fast, and the backend will not be overloaded in this
use case. For example, if the site www.examle.com has a popular page
http://www.examle.com/very-popular-page/ frequently served to users from
the cache - the site works fine.
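The "turn the cache off" effect described here can be simulated in a few lines of Python (a toy model, not nginx code: one slot per hash value, a deliberately weak truncated hash standing in for a colliding non-cryptographic hash). The full-key check prevents serving the wrong body, but every collision still evicts the popular entry, so legitimate requests fall through to the backend.

```python
import hashlib

def weak_hash(key):
    # 8-bit truncation: collisions are trivial to brute-force.
    return hashlib.md5(key).digest()[0]

cache = {}          # one slot per hash value, like a hash-keyed cache
backend_hits = 0    # how often the (slow) backend must render a page

def fetch(key):
    global backend_hits
    entry = cache.get(weak_hash(key))
    if entry is not None and entry[0] == key:   # full-key check
        return entry[1]                         # served from cache
    backend_hits += 1                           # miss: backend renders
    body = b"body-for:" + key
    cache[weak_hash(key)] = (key, body)
    return body

popular = b"/very-popular-page/"
fetch(popular)  # warm the cache

# Find a URL whose weak hash collides with the popular page.
collider = next(b"/other-page/?x=%d" % i for i in range(100000)
                if weak_hash(b"/other-page/?x=%d" % i) == weak_hash(popular))

before = backend_hits
for _ in range(100):
    fetch(collider)   # evicts the popular entry from its slot
    fetch(popular)    # legitimate request now misses -> backend load
print("backend renders during attack:", backend_hits - before)
```

Every attacker request plus every legitimate request ends up at the backend, even though no wrong content is ever served.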
But if an attacker generates a request to another page, for example
http://www.examle.com/other-page/?some-text-wkjhwhgfwjefwje, whose
proxy_cache_key hash is the same as the proxy_cache_key hash of the page
http://www.examle.com/very-popular-page/ - then, as you said previously,
the nginx cache entry will be replaced with the new page content.

And if any other user then requests http://www.examle.com/very-popular-page/,
nginx can't serve this request from the cache and must send it to the
backend, because the full-key comparison says that the cache entry
contains a different page, not the requested one.

In the same way the nginx cache can be turned off for any set of the
most popular pages on a site: the backend will be under DDoS, and the
nginx cache can't help, because it is poisoned by collisions. The root
cause of this DDoS attack is the use of the insecure MurmurHash.

>>> Hash value should be used only for fast searching of hash key. Not to
>>> identify the cached resources!

>> You remember the proposed solution from your message?
>> http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007286.html [1]
>> An attacker can easily mount a DDoS attack against nginx in this case:
>> http://www.securityweek.com/hash-table-collision-attacks-could-trigger-ddos-massive-scale [2]
>> Hash Table Vulnerability Enables Wide-Scale DDoS Attacks

> And what's stopping him from doing the same with a much safer hash function?

"Collision resistance is a property of cryptographic hash functions."
- https://en.wikipedia.org/wiki/Collision_resistance

> On the contrary, don't forget that generating such hash values is also
> CPU-intensive. You can check it by benchmark: openssl speed md5 sha1

A slow backend can take several seconds to generate *one* page.

>>> If your entry should be secure, the key (not its hash) should contain
>>> part of a security token, authentication, salt etc.

>> This is "security through obscurity",
>> and you said that this is a bad thing.

> Wrong!
> Because if these secure parts of the key are internal nginx
> values/variables (authenticated user name, salt etc.), the attacker
> can never use them!

The default value of proxy_cache_key is $scheme$proxy_host$request_uri;
a frequently used proxy_cache_key value is $scheme$host$request_uri;

> He can theoretically use a variable part of the key to generate or
> brute-force some expected hash, equal to the searched one, but the key
> comparison makes all his attempts void.

It also makes the nginx cache void. If the backend can't work without
the nginx cache - the site will return a denial of service to customers,
for example via 504 Gateway Timeout errors.

> 4) what do you want to do against it? If I understood it correctly,
> instead of intensive load from page generation, you want to shift the
> load into hash generation? Well, very clever. :)

Yes.

Doing sha1 for 3s on 1024 size blocks: 1955911 sha1's in 3.01s
Doing sha1 for 3s on 8192 size blocks: 289605 sha1's in 3.00s

The backend can generate only two pages in 3.00s.
Compare 2 with 289605 and you will see a ~100_000x difference.

>> Hash table implementations vulnerable to algorithmic complexity attacks

> That means the pure hashing, without keys comparison

With key comparison too, because thanks to insecure hash collisions, the
hash table can easily be "converted" into a linked list by an attacker.

--
Best regards,
 Gena

From amdeich at gmail.com Tue Sep 8 23:46:08 2015
From: amdeich at gmail.com (Andrey Kulikov)
Date: Wed, 9 Sep 2015 02:46:08 +0300
Subject: [PATCH] Add ssl_client_EKU nginx variable.
Message-ID: 

Hello,

Please find attached a patch that adds an ssl_client_EKU nginx variable.

The variable contains a comma-separated list of OIDs present in the
client's certificate (if any). If the EKU extension is absent, an empty
string is returned.

The dot-separated form of the OID was chosen rather than the
human-readable short name, as the EKU may contain values OpenSSL is not
aware of, and in that case we receive only "UNDEF".
The purpose is to use it in Lua scripts, or to let the backend server
know the list of EKUs, as it can contain a lot more than just
'TLS Client Authentication'. (For those who read Russian:
http://www.infotrust.ru/data/Docs/InfoTrustCP.pdf, page 37, as an example.)

For example, the directive

proxy_set_header X-ClientCert-EKU $ssl_client_EKU;

will result in the following proxied header:

X-ClientCert-EKU: 1.3.6.1.5.5.7.3.2,1.2.643.3.34.2.6,1.2.643.3.34.2.1

Tested on 1.8.0, 1.9.4.

Best wishes,
Andrey
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: add_ssl_client_EKU_var.patch
Type: text/x-patch
Size: 4039 bytes
Desc: not available
URL: 

From hack988 at 163.com Wed Sep 9 00:01:44 2015
From: hack988 at 163.com (hack988)
Date: Wed, 9 Sep 2015 08:01:44 +0800
Subject: How to get Original response before gzip or another module filter rewrite response I made a mistake
References: <2015090521201511145750@163.com>, <201509072207192016118@163.com>, <37424156-8AA1-4E5D-862F-04E5E814615D@opensource.com.vn>, <2015090823433328449580@163.com>
Message-ID: <20150909080143167835111@163.com>

Hello,

Sorry, I made a mistake in my last email.

===========================================================================================

What I want to do:

nginx calls the header and body filters like this:
header filter1 -> header filter2 -> body filter1 -> body filter2

My module would be in:
header filter1 -> header filter2 -> body filter1 -> my body filter -> body filter2

I want to save the response buffer as rewritten by body filter1 to a file.

hack988

From: hack988
Date: 2015-09-08 23:43
To: nginx-devel
Subject: Re: Re: How to get Original response before gzip or another module filter rewrite response

Hello,
I have already read ngx_http_proxy_module.c twice (source code version
1.9.4); sadly, I could not find the ngx_write_chain_to_file or
ngx_write_chain_to_temp_file function in this source file.
I found other code:

=================================================================================================
line 1551

    for ( ;; ) {
        ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                       "proxy output chunk: %d", ngx_buf_size(cl->buf));

        size += ngx_buf_size(cl->buf);

        if (cl->buf->flush
            || cl->buf->sync
            || ngx_buf_in_memory(cl->buf)
            || cl->buf->in_file)
        {
            tl = ngx_alloc_chain_link(r->pool);
            if (tl == NULL) {
                return NGX_ERROR;
            }

            tl->buf = cl->buf;
            *ll = tl;
            ll = &tl->next;
        }

        if (cl->next == NULL) {
            break;
        }

        cl = cl->next;
    }

line 1653

out:

    rc = ngx_chain_writer(&r->upstream->writer, out);

I guess that nginx gets the response from the buffer and rewrites the
output buffer directly.

===============================================================================================
line 459

    { ngx_string("proxy_cache_path"),
      NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE,
      ngx_http_file_cache_set_slot,
      NGX_HTTP_MAIN_CONF_OFFSET,
      offsetof(ngx_http_proxy_main_conf_t, caches),
      &ngx_http_proxy_module },

proxy_cache_path is stored in the ngx_http_proxy_main_conf_t->caches
variable, but it is not used in the proxy module's source code to write
the cache file.

=================================================================================================

What I want to do:

nginx calls the header and body handlers like this:
header handler1 -> header handler2 -> body handler1 -> body handler2

My module would be in:
header handler1 -> header handler2 -> body handler1 -> my body handler -> body handler2

I want to save the response buffer as rewritten by body handler1 to a file.

hack988

From: Hung Nguyen
Date: 2015-09-08 11:01
To: nginx-devel
Subject: Re: How to get Original response before gzip or another module filter rewrite response

Hello,

I don't know in which context you are trying to do this, but in order to
write a chain buffer (this is what contains nginx's response) to a file,
you can use the ngx_write_chain_to_file or ngx_write_chain_to_temp_file
function.
Many nginx modules already use these functions; take a look at
ngx_http_proxy_module for more detail.

--
Hưng

On Sep 7, 2015, at 9:07 PM, hack988 wrote:

Hello everybody:
Can anyone answer my question? Thanks very much.

hack988

From: hack988
Date: 2015-09-05 21:20
To: nginx-devel
Subject: How to get Original response before gzip or another module filter rewrite response

Dear All:
I'm a beginner in nginx development. Although I have been reading
Emiller's Guide and other articles about nginx development for several
days, I still don't know how to copy the whole original response buffer
chain into my own module's temporary buffer before another module (gzip,
gunzip, gzip_static) rewrites the buffer. How can I check whether another
module has written the output buffer with a different content type? I
want to copy the buffer to a file, neither compressed nor chunked.

I'm sorry for my poor English, thanks.

hack988
_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Wed Sep 9 17:17:41 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Sep 2015 20:17:41 +0300
Subject: How does Nginx look-up cached resource?
In-Reply-To: <55EF4E18.5020702@csdoc.com>
References: <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com>
Message-ID: <20150909171741.GN52312@mdounin.ru>

Hello!
On Wed, Sep 09, 2015 at 12:07:36AM +0300, Gena Makhomed wrote:

> On 08.09.2015 4:41, Maxim Dounin wrote:
>
> > On the other hand, it might be possible to simplify requirements
> > of the attack by forcing some authenticated user to load data
> > under a given key and then retrieve this key contents using a
> > chosen-prefix collision previously calculated.
>
> Yes, $request_uri - full original request URI (with arguments)
> Most backends ignore unknown request arguments without errors.

Yes, that's the basic idea. Though I still think that such an attack is
unlikely to be practical now due to various limiting factors (like the
crc32 check and $request_uri being ASCII), it certainly makes sense to
mitigate such potential attacks.

[...]

> Overhead for such additional full key value check should be minimal.
> But this protects nginx users from any future bugs in hash functions.

Yes, in my testing I wasn't able to detect any statistically significant
difference in performance. Patch:

# HG changeset patch
# User Maxim Dounin
# Date 1441818706 -10800
#      Wed Sep 09 20:11:46 2015 +0300
# Node ID 85034da89d12dfdf5e0d72f0a99251f98ec1764c
# Parent  412ccd679a4691725d5a5562494051a3cadd69ca
Cache: check the whole cache key in addition to hashes.

This prevents a potential attack that discloses cached data if an
attacker will be able to craft a hash collision between some cache key
the attacker is allowed to access and another cache key with protected
data.

See http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007288.html.

Thanks to Gena Makhomed and Sergey Brester.
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c
+++ b/src/http/ngx_http_file_cache.c
@@ -521,9 +521,12 @@ wakeup:
 static ngx_int_t
 ngx_http_file_cache_read(ngx_http_request_t *r, ngx_http_cache_t *c)
 {
+    u_char                        *p;
     time_t                         now;
     ssize_t                        n;
+    ngx_str_t                     *key;
     ngx_int_t                      rc;
+    ngx_uint_t                     i;
     ngx_http_file_cache_t         *cache;
     ngx_http_file_cache_header_t  *h;
 
@@ -547,12 +550,27 @@ ngx_http_file_cache_read(ngx_http_reques
         return NGX_DECLINED;
     }
 
-    if (h->crc32 != c->crc32) {
+    if (h->crc32 != c->crc32 || h->header_start != c->header_start) {
         ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
                       "cache file \"%s\" has md5 collision", c->file.name.data);
         return NGX_DECLINED;
     }
 
+    p = c->buf->pos + sizeof(ngx_http_file_cache_header_t)
+        + sizeof(ngx_http_file_cache_key);
+
+    key = c->keys.elts;
+    for (i = 0; i < c->keys.nelts; i++) {
+        if (ngx_memcmp(p, key[i].data, key[i].len) != 0) {
+            ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
+                          "cache file \"%s\" has md5 collision",
+                          c->file.name.data);
+            return NGX_DECLINED;
+        }
+
+        p += key[i].len;
+    }
+
     if ((size_t) h->body_start > c->body_start) {
         ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0,
                       "cache file \"%s\" has too long header",
@@ -583,7 +601,6 @@ ngx_http_file_cache_read(ngx_http_reques
     c->last_modified = h->last_modified;
     c->date = h->date;
     c->valid_msec = h->valid_msec;
-    c->header_start = h->header_start;
     c->body_start = h->body_start;
     c->etag.len = h->etag_len;
     c->etag.data = h->etag;

[...]

> Replacing MD5 with other hash function will invalidate all old caches,
> but this will be only one time performance degrade after nginx upgrade.
>
> Choice between always using weak "secure" hash function and one time
> cache invalidation IMHO should be resolved by replacing hash function.
>
> IMHO, MD5 is worst, SHA1 is better and SHAKE128 is the best candidate.

Yes, I think switching away from MD5 is the right way to go.
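All three candidates weighed in this thread are exposed by Python's hashlib, which makes it easy to sanity-check digest sizes and get a rough feel for relative speed (an illustration only; `openssl speed` is the proper benchmark, and none of this reflects nginx internals):

```python
import hashlib
import timeit

block = b"x" * 8192  # throughput on 8 KiB blocks, "openssl speed" style

candidates = {
    "md5":             lambda: hashlib.md5(block).digest(),          # 16 bytes
    "sha1 (trunc128)": lambda: hashlib.sha1(block).digest()[:16],    # 20 -> 16 bytes
    "shake128":        lambda: hashlib.shake_128(block).digest(16),  # native 16 bytes
}

for name, fn in candidates.items():
    digest = fn()
    assert len(digest) == 16  # all three yield a 128-bit cache key hash
    n = 2000
    secs = timeit.timeit(fn, number=n)
    print("%-16s %10.0f hashes/s" % (name, n / secs))
```

SHAKE128's extendable output makes the 128-bit digest native rather than a truncation, which is part of why it is attractive here.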
On the other hand, SHA1 is not that much better: while no collisions are
currently known, it's cryptographically broken. SHA256 is slower (though
I'm not sure we care, as it's unlikely to be statistically significant
in overall testing). SHAKE128 looks interesting, but it's not really
available anywhere now.

--
Maxim Dounin
http://nginx.org/

From amdeich at gmail.com Thu Sep 10 00:34:41 2015
From: amdeich at gmail.com (Andrey Kulikov)
Date: Thu, 10 Sep 2015 03:34:41 +0300
Subject: [PATCH] Add ssl_client_EKU nginx variable.
In-Reply-To: 
References: 
Message-ID: 

Small correction - replaced a magic value with sizeof().

On 9 September 2015 at 02:46, Andrey Kulikov wrote:
> Hello,
>
> Please find attached patch, that add ssl_client_EKU nginx variable.
>
> Variable contains coma-separated list of OIDs, presented in
> client's certificate (if any). If EKU extension is absent, empty line will
> be returned.
> Dot-separated form of OID choosen rather than human-readable
> short name, as EKU may contains values OpenSSL not aware of,
> and we receive "UNDEF" only in this case.
> Purpose is to use in LUA scripts, or let backend server know the list of
> EKU's, as it can contains lot more that just 'TLS Client Authentication'.
> (for those who read in Russain:
> http://www.infotrust.ru/data/Docs/InfoTrustCP.pdf page 37, as an example)
>
> For example directive
> proxy_set_header X-ClientCert-EKU $ssl_client_EKU;
> will result in following in proxied header:
> X-ClientCert-EKU: 1.3.6.1.5.5.7.3.2,1.2.643.3.34.2.6,1.2.643.3.34.2.1
>
> Tested on 1.8.0, 1.9.4
>
> Best wishes,
> Andrey
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: add_ssl_client_EKU_var.patch Type: text/x-patch Size: 4051 bytes Desc: not available URL: From serg.brester at sebres.de Thu Sep 10 09:57:23 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Thu, 10 Sep 2015 11:57:23 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150909171741.GN52312@mdounin.ru> References: <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> Message-ID: <2bb909b9da4516c63ea31043769c2559@sebres.de> The patch sounds not bad at all, but I would have also removed the calculation and verification of crc32... Makes no sense, if either way the keys would be compared. From serg.brester at sebres.de Thu Sep 10 12:53:52 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Thu, 10 Sep 2015 14:53:52 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <2bb909b9da4516c63ea31043769c2559@sebres.de> References: <20150904181015.GF72232@mdounin.ru> <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> <2bb909b9da4516c63ea31043769c2559@sebres.de> Message-ID: <5b0ee21aa7090c2e3393982971441735@sebres.de> Enclosed you will find an attached changeset, that contains suggested fix with keys comparison and completely removed additional protection via crc32. Tested also on known to me keys with md5 collisions (see below) - it works. 
If someone needs a git version of it: https://github.com/sebres/nginx/pull/2 [2] Below you can find a TCL-code to test strings (hex), that produce an md5 collision (with an example with one collision): https://github.com/sebres/misc/blob/tcl-test-hash-collision/tcl/hash-collision.tcl [3] Regards, sebres. On 10.09.2015 11:57, Sergey Brester wrote: > The patch sounds not bad at all, but I would have also removed the calculation and verification of crc32... Makes no sense, if either way the keys would be compared. > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel [1] Links: ------ [1] http://mailman.nginx.org/mailman/listinfo/nginx-devel [2] https://github.com/sebres/nginx/pull/2 [3] https://github.com/sebres/misc/blob/tcl-test-hash-collision/tcl/hash-collision.tcl -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: _sb-secure-cache-fix.patch Type: text/x-diff Size: 5089 bytes Desc: not available URL: From mdounin at mdounin.ru Thu Sep 10 14:47:04 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Sep 2015 17:47:04 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <2bb909b9da4516c63ea31043769c2559@sebres.de> References: <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> <2bb909b9da4516c63ea31043769c2559@sebres.de> Message-ID: <20150910144704.GQ52312@mdounin.ru> Hello! On Thu, Sep 10, 2015 at 11:57:23AM +0200, Sergey Brester wrote: > The patch sounds not bad at all, but I would have also removed the > calculation and verification of crc32... 
Makes no sense, if either way the > keys would be compared. This will break compatibility with old versions for no real reason. We may consider this change when we'll change cache header format next time. -- Maxim Dounin http://nginx.org/ From serg.brester at sebres.de Thu Sep 10 15:07:36 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Thu, 10 Sep 2015 17:07:36 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150910144704.GQ52312@mdounin.ru> References: <20150904194351.GJ72232@mdounin.ru> <55EA0B06.7040808@csdoc.com> <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> <2bb909b9da4516c63ea31043769c2559@sebres.de> <20150910144704.GQ52312@mdounin.ru> Message-ID: <13318fd5a9cb1cf0cb2b693f6d6ca6ca@sebres.de> Leave header format unchanged (I mean changes in header file 'src/http/ngx_http_cache.h'), but not calculate and not compare crc32 (unused / reserved up to "change cache header format next time")? On 10.09.2015 16:47, Maxim Dounin wrote: > This will break compatibility with old versions for no real > reason. We may consider this change when we'll change cache > header format next time. From mdounin at mdounin.ru Thu Sep 10 15:33:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Sep 2015 18:33:38 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <13318fd5a9cb1cf0cb2b693f6d6ca6ca@sebres.de> References: <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> <2bb909b9da4516c63ea31043769c2559@sebres.de> <20150910144704.GQ52312@mdounin.ru> <13318fd5a9cb1cf0cb2b693f6d6ca6ca@sebres.de> Message-ID: <20150910153338.GS52312@mdounin.ru> Hello! 
On Thu, Sep 10, 2015 at 05:07:36PM +0200, Sergey Brester wrote:

> Leave header format unchanged (I mean changes in header file
> 'src/http/ngx_http_cache.h'), but not calculate and not compare crc32
> (unused / reserved up to "change cache header format next time")?

Even that way the resulting cache files will not be compatible with
older versions which do the check, thus breaking upgrades (when cache
items can be used by different versions simultaneously for a short
time) and, more importantly, downgrades (which sometimes happen due to
various reasons).

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Thu Sep 10 15:36:52 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 10 Sep 2015 18:36:52 +0300
Subject: [PATCH] Add ssl_client_not_before and ssl_client_not_after request
In-Reply-To: 
References: <20150907180432.GF52312@mdounin.ru>
Message-ID: <20150910153652.GT52312@mdounin.ru>

Hello!

On Mon, Sep 07, 2015 at 09:23:08PM +0300, Andrey Kulikov wrote:

> As to example of usage: it's a real-world use-case - one of our customers
> do want to see these values on backend server for whatever purpose.
> But your example also have a right to be aplicable sometime.

So, no real-world use case, because mine is just a wild guess and
certainly doesn't explain $ssl_client_not_before. Could you please ask
your customer to describe how it's expected to be used?

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Thu Sep 10 15:48:05 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 10 Sep 2015 18:48:05 +0300
Subject: [PATCH] Add ssl_client_EKU nginx variable.
In-Reply-To: 
References: 
Message-ID: <20150910154805.GU52312@mdounin.ru>

Hello!

On Wed, Sep 09, 2015 at 02:46:08AM +0300, Andrey Kulikov wrote:

> Hello,
>
> Please find attached patch, that add ssl_client_EKU nginx variable.
>
> Variable contains coma-separated list of OIDs, presented in
> client's certificate (if any). If EKU extension is absent, empty line will
> be returned.
> Dot-separated form of OID choosen rather than human-readable
> short name, as EKU may contains values OpenSSL not aware of,
> and we receive "UNDEF" only in this case.
> Purpose is to use in LUA scripts, or let backend server know the list of
> EKU's, as it can contains lot more that just 'TLS Client Authentication'.
> (for those who read in Russain:
> http://www.infotrust.ru/data/Docs/InfoTrustCP.pdf page 37, as an example)
>
> For example directive
> proxy_set_header X-ClientCert-EKU $ssl_client_EKU;
> will result in following in proxied header:
> X-ClientCert-EKU: 1.3.6.1.5.5.7.3.2,1.2.643.3.34.2.6,1.2.643.3.34.2.1

I can't say I like this. It digs too deep into certificate internals,
and I don't really think this should be available as an nginx variable.
Instead, you may consider obtaining the certificate itself and parsing
the needed details from it.

--
Maxim Dounin
http://nginx.org/

From serg.brester at sebres.de Thu Sep 10 15:54:45 2015
From: serg.brester at sebres.de (Sergey Brester)
Date: Thu, 10 Sep 2015 17:54:45 +0200
Subject: How does Nginx look-up cached resource?
In-Reply-To: <20150910153338.GS52312@mdounin.ru>
References: <20150906015629.GB52312@mdounin.ru> <55EDA2E1.8050101@csdoc.com> <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> <2bb909b9da4516c63ea31043769c2559@sebres.de> <20150910144704.GQ52312@mdounin.ru> <13318fd5a9cb1cf0cb2b693f6d6ca6ca@sebres.de> <20150910153338.GS52312@mdounin.ru>
Message-ID: <334a540efd9cf111dddc917e9c7740f6@sebres.de>

On 10.09.2015 17:33, Maxim Dounin wrote:

> Hello!
>
> On Thu, Sep 10, 2015 at 05:07:36PM +0200, Sergey Brester wrote:
>
>> Leave header format unchanged (I mean changes in header file
>> 'src/http/ngx_http_cache.h'), but not calculate and not compare crc32
>> (unused / reserved up to "change cache header format next time")?
> Even that way resulting cache files will not be compatible with
> older versions which do the check, thus breaking upgrades (when
> cache items can be used by different versions simultaneously for a
> short time) and, more importantly, downgrades (which sometimes
> happen due to various reasons).

In this case (someone downgrading back to a previous nginx version) the
cache entries will be invalidated on first use (because the crc32 will
not match) - well, I think that's not a great problem.

Please find enclosed a changeset (2nd) that should restore backwards
compatibility of the cache header file (forwards).

Regards,
sebres.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: _sb-secure-cache-fix.patch
Type: text/x-diff
Size: 5089 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: _sb-secure-cache-fix2.patch
Type: text/x-diff
Size: 996 bytes
Desc: not available
URL: 

From amdeich at gmail.com Thu Sep 10 16:28:21 2015
From: amdeich at gmail.com (Andrey Kulikov)
Date: Thu, 10 Sep 2015 19:28:21 +0300
Subject: [PATCH] Add ssl_client_EKU nginx variable.
In-Reply-To: <20150910154805.GU52312@mdounin.ru>
References: <20150910154805.GU52312@mdounin.ru>
Message-ID: 

Hi,

On 10 September 2015 at 18:48, Maxim Dounin wrote:
> Instead, you may consider obtaining the
> certificate itself and parsing needed details from it.

Indeed, the certificate itself is available as a variable, but parsing
it properly is not a trivial task. And what would perform that parsing
on the frontend side? I do not know of any usable tools; even
lua-openssl, in its current state, is far from letting this be done.

On the other hand, EKU is not rocket science - it's information present
in any real-world certificate, and used for multiple purposes.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From amdeich at gmail.com Thu Sep 10 16:29:43 2015 From: amdeich at gmail.com (Andrey Kulikov) Date: Thu, 10 Sep 2015 19:29:43 +0300 Subject: [PATCH] Add ssl_client_not_before and ssl_client_not_after request In-Reply-To: <20150910153652.GT52312@mdounin.ru> References: <20150907180432.GF52312@mdounin.ru> <20150910153652.GT52312@mdounin.ru> Message-ID: Hi, On 10 September 2015 at 18:36, Maxim Dounin wrote: > Could you > please ask your customer to describe how it's expected to be used? > Will try. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 10 16:59:09 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Sep 2015 19:59:09 +0300 Subject: How does Nginx look-up cached resource? In-Reply-To: <334a540efd9cf111dddc917e9c7740f6@sebres.de> References: <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> <2bb909b9da4516c63ea31043769c2559@sebres.de> <20150910144704.GQ52312@mdounin.ru> <13318fd5a9cb1cf0cb2b693f6d6ca6ca@sebres.de> <20150910153338.GS52312@mdounin.ru> <334a540efd9cf111dddc917e9c7740f6@sebres.de> Message-ID: <20150910165909.GV52312@mdounin.ru> Hello! On Thu, Sep 10, 2015 at 05:54:45PM +0200, Sergey Brester wrote: > On 10.09.2015 17:33, Maxim Dounin wrote: > > >On Thu, Sep 10, 2015 at 05:07:36PM +0200, Sergey Brester wrote: > > > >>Leave header format unchanged (I mean changes in header file > >>'src/http/ngx_http_cache.h'), but not calculate and not compare crc32 > >>(unused / reserved up to "change cache header format next time")? > > > >Even that way resulting cache files will not be compatible with > >older versions which do the check, thus breaking upgrades (when > >cache items can be used by different versions simultaneously for a > >short time) and, more importantly, downgrades (which sometimes > >happen due to various reasons). 
> > In this case (someone will downgrade back to previous nginx version) the > cache entries will be invalidated by first use (because crc32 will not > equal) - well I think it's not a great problem. When cache entries are ignored due to incorrect crc32, this will result in an alert in logs. Even if you are ok with invalidating such cache items (which isn't a good thing, either), unexpected alerts are certainly a bad thing and shouldn't happen. -- Maxim Dounin http://nginx.org/ From serg.brester at sebres.de Thu Sep 10 20:53:49 2015 From: serg.brester at sebres.de (Sergey Brester) Date: Thu, 10 Sep 2015 22:53:49 +0200 Subject: How does Nginx look-up cached resource? In-Reply-To: <20150910165909.GV52312@mdounin.ru> References: <20150907165811.GD52312@mdounin.ru> <55EDE5A6.4060208@csdoc.com> <20150908014127.GG52312@mdounin.ru> <55EF4E18.5020702@csdoc.com> <20150909171741.GN52312@mdounin.ru> <2bb909b9da4516c63ea31043769c2559@sebres.de> <20150910144704.GQ52312@mdounin.ru> <13318fd5a9cb1cf0cb2b693f6d6ca6ca@sebres.de> <20150910153338.GS52312@mdounin.ru> <334a540efd9cf111dddc917e9c7740f6@sebres.de> <20150910165909.GV52312@mdounin.ru> Message-ID: <5cadc145bfdc5d2be303217df55f07bf@sebres.de> On 10.09.2015 18:59, Maxim Dounin wrote: > unexpected alerts are certainly a bad thing and shouldn't happen. But only in case of downgrade to prev version... From mdounin at mdounin.ru Fri Sep 11 14:12:18 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Sep 2015 14:12:18 +0000 Subject: [nginx] Cache: check the whole cache key in addition to hashes. Message-ID: details: http://hg.nginx.org/nginx/rev/4821fc788c12 branches: changeset: 6243:4821fc788c12 user: Maxim Dounin date: Fri Sep 11 17:03:56 2015 +0300 description: Cache: check the whole cache key in addition to hashes. 
This prevents a potential attack that discloses cached data if an attacker will be able to craft a hash collision between some cache key the attacker is allowed to access and another cache key with protected data. See http://mailman.nginx.org/pipermail/nginx-devel/2015-September/007288.html. Thanks to Gena Makhomed and Sergey Brester. diffstat: src/http/ngx_http_file_cache.c | 21 +++++++++++++++++++-- 1 files changed, 19 insertions(+), 2 deletions(-) diffs (53 lines): diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -521,9 +521,12 @@ wakeup: static ngx_int_t ngx_http_file_cache_read(ngx_http_request_t *r, ngx_http_cache_t *c) { + u_char *p; time_t now; ssize_t n; + ngx_str_t *key; ngx_int_t rc; + ngx_uint_t i; ngx_http_file_cache_t *cache; ngx_http_file_cache_header_t *h; @@ -547,12 +550,27 @@ ngx_http_file_cache_read(ngx_http_reques return NGX_DECLINED; } - if (h->crc32 != c->crc32) { + if (h->crc32 != c->crc32 || h->header_start != c->header_start) { ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, "cache file \"%s\" has md5 collision", c->file.name.data); return NGX_DECLINED; } + p = c->buf->pos + sizeof(ngx_http_file_cache_header_t) + + sizeof(ngx_http_file_cache_key); + + key = c->keys.elts; + for (i = 0; i < c->keys.nelts; i++) { + if (ngx_memcmp(p, key[i].data, key[i].len) != 0) { + ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, + "cache file \"%s\" has md5 collision", + c->file.name.data); + return NGX_DECLINED; + } + + p += key[i].len; + } + if ((size_t) h->body_start > c->body_start) { ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, "cache file \"%s\" has too long header", @@ -583,7 +601,6 @@ ngx_http_file_cache_read(ngx_http_reques c->last_modified = h->last_modified; c->date = h->date; c->valid_msec = h->valid_msec; - c->header_start = h->header_start; c->body_start = h->body_start; c->etag.len = h->etag_len; c->etag.data = h->etag; From 
mdounin at mdounin.ru Fri Sep 11 14:12:22 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Sep 2015 14:12:22 +0000 Subject: [nginx] Fixed segfault with incorrect location nesting. Message-ID: details: http://hg.nginx.org/nginx/rev/055d1f63960a branches: changeset: 6244:055d1f63960a user: Maxim Dounin date: Fri Sep 11 17:04:04 2015 +0300 description: Fixed segfault with incorrect location nesting. A configuration with a named location inside a zero-length prefix or regex location used to trigger a segmentation fault, as ngx_http_core_location() failed to properly detect if a nested location was created. Example configuration to reproduce the problem: location "" { location @foo {} } Fix is to not rely on a parent location name length, but rather check command type we are currently parsing. Identical fix is also applied to ngx_http_rewrite_if(), which used to incorrectly assume the "if" directive is on server{} level in such locations. Reported by Markus Linnala. Found with afl-fuzz. 
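To spell out the detection change described above, here is a toy model in Python; the constant names mirror the cf->cmd_type values in the patch, everything else is purely illustrative and not nginx code:

```python
SRV_CONF = "NGX_HTTP_SRV_CONF"  # directives parsed directly inside server{}
LOC_CONF = "NGX_HTTP_LOC_CONF"  # directives parsed inside a location{}

def is_nested_old(parent_location_name):
    """Old heuristic: a non-empty parent location name means 'nested'."""
    return len(parent_location_name) > 0

def is_nested_new(cmd_type):
    """New check: look at the type of the block currently being parsed."""
    return cmd_type == LOC_CONF
```

For location "" { location @foo {} } the parent name is empty, so the old heuristic wrongly treated @foo as if it were declared at server{} level, while the new check still reports it as nested.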
diffstat: src/http/modules/ngx_http_rewrite_module.c | 2 +- src/http/ngx_http_core_module.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diffs (24 lines): diff --git a/src/http/modules/ngx_http_rewrite_module.c b/src/http/modules/ngx_http_rewrite_module.c --- a/src/http/modules/ngx_http_rewrite_module.c +++ b/src/http/modules/ngx_http_rewrite_module.c @@ -612,7 +612,7 @@ ngx_http_rewrite_if(ngx_conf_t *cf, ngx_ save = *cf; cf->ctx = ctx; - if (pclcf->name.len == 0) { + if (cf->cmd_type == NGX_HTTP_SRV_CONF) { if_code->loc_conf = NULL; cf->cmd_type = NGX_HTTP_SIF_CONF; diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -3196,7 +3196,7 @@ ngx_http_core_location(ngx_conf_t *cf, n pclcf = pctx->loc_conf[ngx_http_core_module.ctx_index]; - if (pclcf->name.len) { + if (cf->cmd_type == NGX_HTTP_LOC_CONF) { /* nested location */ From mdounin at mdounin.ru Fri Sep 11 14:12:24 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Sep 2015 14:12:24 +0000 Subject: [nginx] Core: fixed segfault with null in wildcard hash names. Message-ID: details: http://hg.nginx.org/nginx/rev/3cf25d33886a branches: changeset: 6245:3cf25d33886a user: Maxim Dounin date: Fri Sep 11 17:04:40 2015 +0300 description: Core: fixed segfault with null in wildcard hash names. A configuration like server { server_name .foo^@; } server { server_name .foo; } resulted in a segmentation fault during construction of server names hash. Reported by Markus Linnala. Found with afl-fuzz. diffstat: src/core/ngx_hash.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff --git a/src/core/ngx_hash.c b/src/core/ngx_hash.c --- a/src/core/ngx_hash.c +++ b/src/core/ngx_hash.c @@ -743,6 +743,10 @@ ngx_hash_add_key(ngx_hash_keys_arrays_t if (key->data[i] == '.' 
&& key->data[i + 1] == '.') { return NGX_DECLINED; } + + if (key->data[i] == '\0') { + return NGX_DECLINED; + } } if (key->len > 1 && key->data[0] == '.') { From jonnybarnes at gmail.com Sat Sep 12 20:23:14 2015 From: jonnybarnes at gmail.com (Jonny Barnes) Date: Sat, 12 Sep 2015 20:23:14 +0000 Subject: HTTP/2 and GZIP Compression Message-ID: Can (and indeed should) we use gzip compression when responding over a HTTP/2 connection to improve performance for static files? The HTTP/2 spec suggests not[1], at least when using TLS. So I was wondering if there were more knowledgeable people here than me that could weigh in with their opinion. [1]: https://httpwg.github.io/specs/rfc7540.html#TLSUsage -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sat Sep 12 22:34:25 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 13 Sep 2015 01:34:25 +0300 Subject: HTTP/2 and GZIP Compression In-Reply-To: References: Message-ID: <1760732.fKL9MF3L9N@vbart-laptop> On Saturday 12 September 2015 20:23:14 Jonny Barnes wrote: > Can (and indeed should) we use gzip compression when responding over a > HTTP/2 connection to improve performance for static files? > > The HTTP/2 spec suggests not[1], at least when using TLS. So I was > wondering if there were more knowledgeable people here than me that could > weigh in with their opinion. > > [1]: https://httpwg.github.io/specs/rfc7540.html#TLSUsage You're confusing TLS compression with HTTP compression. It's completely different things, that work on different levels. In fact, TLS compression has been disabled in NGINX since 1.1.6. There's no reason to disable HTTP compression with HTTP/2. wbr, Valentin V. 
Bartenev From jonnybarnes at gmail.com Mon Sep 14 19:05:43 2015 From: jonnybarnes at gmail.com (Jonny Barnes) Date: Mon, 14 Sep 2015 19:05:43 +0000 Subject: HTTP/2 and GZIP Compression In-Reply-To: <1760732.fKL9MF3L9N@vbart-laptop> References: <1760732.fKL9MF3L9N@vbart-laptop> Message-ID: Excellent, thanks for clearing that up for me :) On Sat, 12 Sep 2015 at 23:34 Valentin V. Bartenev wrote: > On Saturday 12 September 2015 20:23:14 Jonny Barnes wrote: > > Can (and indeed should) we use gzip compression when responding over a > > HTTP/2 connection to improve performance for static files? > > > > The HTTP/2 spec suggests not[1], at least when using TLS. So I was > > wondering if there were more knowledgeable people here than me that could > > weigh in with their opinion. > > > > [1]: https://httpwg.github.io/specs/rfc7540.html#TLSUsage > > You're confusing TLS compression with HTTP compression. > > It's completely different things, that work on different levels. > In fact, TLS compression has been disabled in NGINX since 1.1.6. > > There's no reason to disable HTTP compression with HTTP/2. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Sep 17 13:54:25 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 17 Sep 2015 13:54:25 +0000 Subject: [nginx] The HTTP/2 implementation (RFC 7540, 7541). Message-ID: details: http://hg.nginx.org/nginx/rev/257b51c37c5a branches: changeset: 6246:257b51c37c5a user: Valentin Bartenev date: Fri Sep 11 20:13:06 2015 +0300 description: The HTTP/2 implementation (RFC 7540, 7541). The SPDY support is removed, as it's incompatible with the new module.
diffstat: auto/make | 2 +- auto/modules | 18 +- auto/options | 6 +- auto/sources | 18 +- src/core/ngx_connection.h | 2 +- src/http/modules/ngx_http_ssl_module.c | 24 +- src/http/ngx_http.c | 31 +- src/http/ngx_http.h | 8 +- src/http/ngx_http_core_module.c | 29 +- src/http/ngx_http_core_module.h | 8 +- src/http/ngx_http_request.c | 41 +- src/http/ngx_http_request.h | 5 +- src/http/ngx_http_request_body.c | 12 +- src/http/ngx_http_spdy.c | 3701 ---------------------------- src/http/ngx_http_spdy.h | 261 -- src/http/ngx_http_spdy_filter_module.c | 1222 --------- src/http/ngx_http_spdy_module.c | 408 --- src/http/ngx_http_spdy_module.h | 41 - src/http/ngx_http_upstream.c | 8 +- src/http/v2/ngx_http_v2.c | 3964 +++++++++++++++++++++++++++++++ src/http/v2/ngx_http_v2.h | 334 ++ src/http/v2/ngx_http_v2_filter_module.c | 1290 ++++++++++ src/http/v2/ngx_http_v2_huff_decode.c | 2714 +++++++++++++++++++++ src/http/v2/ngx_http_v2_huff_encode.c | 10 + src/http/v2/ngx_http_v2_module.c | 469 +++ src/http/v2/ngx_http_v2_module.h | 42 + src/http/v2/ngx_http_v2_table.c | 349 ++ 27 files changed, 9283 insertions(+), 5734 deletions(-) diffs (truncated from 15414 to 300 lines): diff -r 3cf25d33886a -r 257b51c37c5a auto/make --- a/auto/make Fri Sep 11 17:04:40 2015 +0300 +++ b/auto/make Fri Sep 11 20:13:06 2015 +0300 @@ -7,7 +7,7 @@ echo "creating $NGX_MAKEFILE" mkdir -p $NGX_OBJS/src/core $NGX_OBJS/src/event $NGX_OBJS/src/event/modules \ $NGX_OBJS/src/os/unix $NGX_OBJS/src/os/win32 \ - $NGX_OBJS/src/http $NGX_OBJS/src/http/modules \ + $NGX_OBJS/src/http $NGX_OBJS/src/http/v2 $NGX_OBJS/src/http/modules \ $NGX_OBJS/src/http/modules/perl \ $NGX_OBJS/src/mail \ $NGX_OBJS/src/stream \ diff -r 3cf25d33886a -r 257b51c37c5a auto/modules --- a/auto/modules Fri Sep 11 17:04:40 2015 +0300 +++ b/auto/modules Fri Sep 11 20:13:06 2015 +0300 @@ -94,7 +94,7 @@ fi # ngx_http_write_filter # ngx_http_header_filter # ngx_http_chunked_filter -# ngx_http_spdy_filter +# ngx_http_v2_filter # 
ngx_http_range_header_filter # ngx_http_gzip_filter # ngx_http_postpone_filter @@ -115,8 +115,8 @@ HTTP_FILTER_MODULES="$HTTP_WRITE_FILTER_ $HTTP_HEADER_FILTER_MODULE \ $HTTP_CHUNKED_FILTER_MODULE" -if [ $HTTP_SPDY = YES ]; then - HTTP_FILTER_MODULES="$HTTP_FILTER_MODULES $HTTP_SPDY_FILTER_MODULE" +if [ $HTTP_V2 = YES ]; then + HTTP_FILTER_MODULES="$HTTP_FILTER_MODULES $HTTP_V2_FILTER_MODULE" fi HTTP_FILTER_MODULES="$HTTP_FILTER_MODULES $HTTP_RANGE_HEADER_FILTER_MODULE" @@ -180,12 +180,12 @@ if [ $HTTP_USERID = YES ]; then fi -if [ $HTTP_SPDY = YES ]; then - have=NGX_HTTP_SPDY . auto/have - USE_ZLIB=YES - HTTP_MODULES="$HTTP_MODULES $HTTP_SPDY_MODULE" - HTTP_DEPS="$HTTP_DEPS $HTTP_SPDY_DEPS" - HTTP_SRCS="$HTTP_SRCS $HTTP_SPDY_SRCS" +if [ $HTTP_V2 = YES ]; then + have=NGX_HTTP_V2 . auto/have + HTTP_MODULES="$HTTP_MODULES $HTTP_V2_MODULE" + HTTP_INCS="$HTTP_INCS $HTTP_V2_INCS" + HTTP_DEPS="$HTTP_DEPS $HTTP_V2_DEPS" + HTTP_SRCS="$HTTP_SRCS $HTTP_V2_SRCS" fi HTTP_MODULES="$HTTP_MODULES $HTTP_STATIC_MODULE" diff -r 3cf25d33886a -r 257b51c37c5a auto/options --- a/auto/options Fri Sep 11 17:04:40 2015 +0300 +++ b/auto/options Fri Sep 11 20:13:06 2015 +0300 @@ -58,7 +58,7 @@ HTTP_CACHE=YES HTTP_CHARSET=YES HTTP_GZIP=YES HTTP_SSL=NO -HTTP_SPDY=NO +HTTP_V2=NO HTTP_SSI=YES HTTP_POSTPONE=NO HTTP_REALIP=NO @@ -210,7 +210,7 @@ do --http-scgi-temp-path=*) NGX_HTTP_SCGI_TEMP_PATH="$value" ;; --with-http_ssl_module) HTTP_SSL=YES ;; - --with-http_spdy_module) HTTP_SPDY=YES ;; + --with-http_v2_module) HTTP_V2=YES ;; --with-http_realip_module) HTTP_REALIP=YES ;; --with-http_addition_module) HTTP_ADDITION=YES ;; --with-http_xslt_module) HTTP_XSLT=YES ;; @@ -378,7 +378,7 @@ cat << END --with-ipv6 enable IPv6 support --with-http_ssl_module enable ngx_http_ssl_module - --with-http_spdy_module enable ngx_http_spdy_module + --with-http_v2_module enable ngx_http_v2_module --with-http_realip_module enable ngx_http_realip_module --with-http_addition_module enable ngx_http_addition_module 
--with-http_xslt_module enable ngx_http_xslt_module diff -r 3cf25d33886a -r 257b51c37c5a auto/sources --- a/auto/sources Fri Sep 11 17:04:40 2015 +0300 +++ b/auto/sources Fri Sep 11 20:13:06 2015 +0300 @@ -317,13 +317,17 @@ HTTP_POSTPONE_FILTER_SRCS=src/http/ngx_h HTTP_FILE_CACHE_SRCS=src/http/ngx_http_file_cache.c -HTTP_SPDY_MODULE=ngx_http_spdy_module -HTTP_SPDY_FILTER_MODULE=ngx_http_spdy_filter_module -HTTP_SPDY_DEPS="src/http/ngx_http_spdy.h \ - src/http/ngx_http_spdy_module.h" -HTTP_SPDY_SRCS="src/http/ngx_http_spdy.c \ - src/http/ngx_http_spdy_module.c \ - src/http/ngx_http_spdy_filter_module.c" +HTTP_V2_MODULE=ngx_http_v2_module +HTTP_V2_FILTER_MODULE=ngx_http_v2_filter_module +HTTP_V2_INCS="src/http/v2" +HTTP_V2_DEPS="src/http/v2/ngx_http_v2.h \ + src/http/v2/ngx_http_v2_module.h" +HTTP_V2_SRCS="src/http/v2/ngx_http_v2.c \ + src/http/v2/ngx_http_v2_table.c \ + src/http/v2/ngx_http_v2_huff_decode.c \ + src/http/v2/ngx_http_v2_huff_encode.c \ + src/http/v2/ngx_http_v2_module.c \ + src/http/v2/ngx_http_v2_filter_module.c" HTTP_CHARSET_FILTER_MODULE=ngx_http_charset_filter_module diff -r 3cf25d33886a -r 257b51c37c5a src/core/ngx_connection.h --- a/src/core/ngx_connection.h Fri Sep 11 17:04:40 2015 +0300 +++ b/src/core/ngx_connection.h Fri Sep 11 20:13:06 2015 +0300 @@ -118,7 +118,7 @@ typedef enum { #define NGX_LOWLEVEL_BUFFERED 0x0f #define NGX_SSL_BUFFERED 0x01 -#define NGX_SPDY_BUFFERED 0x02 +#define NGX_HTTP_V2_BUFFERED 0x02 struct ngx_connection_s { diff -r 3cf25d33886a -r 257b51c37c5a src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Fri Sep 11 17:04:40 2015 +0300 +++ b/src/http/modules/ngx_http_ssl_module.c Fri Sep 11 20:13:06 2015 +0300 @@ -326,10 +326,10 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t #if (NGX_DEBUG) unsigned int i; #endif -#if (NGX_HTTP_SPDY) +#if (NGX_HTTP_V2) ngx_http_connection_t *hc; #endif -#if (NGX_HTTP_SPDY || NGX_DEBUG) +#if (NGX_HTTP_V2 || NGX_DEBUG) ngx_connection_t *c; c = 
ngx_ssl_get_connection(ssl_conn); @@ -342,12 +342,13 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t } #endif -#if (NGX_HTTP_SPDY) +#if (NGX_HTTP_V2) hc = c->data; - if (hc->addr_conf->spdy) { - srv = (unsigned char *) NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; - srvlen = sizeof(NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; + if (hc->addr_conf->http2) { + srv = + (unsigned char *) NGX_HTTP_V2_ALPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; + srvlen = sizeof(NGX_HTTP_V2_ALPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; } else #endif @@ -378,22 +379,23 @@ static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn, const unsigned char **out, unsigned int *outlen, void *arg) { -#if (NGX_HTTP_SPDY || NGX_DEBUG) +#if (NGX_HTTP_V2 || NGX_DEBUG) ngx_connection_t *c; c = ngx_ssl_get_connection(ssl_conn); ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "SSL NPN advertised"); #endif -#if (NGX_HTTP_SPDY) +#if (NGX_HTTP_V2) { ngx_http_connection_t *hc; hc = c->data; - if (hc->addr_conf->spdy) { - *out = (unsigned char *) NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; - *outlen = sizeof(NGX_SPDY_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; + if (hc->addr_conf->http2) { + *out = + (unsigned char *) NGX_HTTP_V2_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE; + *outlen = sizeof(NGX_HTTP_V2_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1; return SSL_TLSEXT_ERR_OK; } diff -r 3cf25d33886a -r 257b51c37c5a src/http/ngx_http.c --- a/src/http/ngx_http.c Fri Sep 11 17:04:40 2015 +0300 +++ b/src/http/ngx_http.c Fri Sep 11 20:13:06 2015 +0300 @@ -1233,8 +1233,8 @@ ngx_http_add_addresses(ngx_conf_t *cf, n #if (NGX_HTTP_SSL) ngx_uint_t ssl; #endif -#if (NGX_HTTP_SPDY) - ngx_uint_t spdy; +#if (NGX_HTTP_V2) + ngx_uint_t http2; #endif /* @@ -1290,8 +1290,8 @@ ngx_http_add_addresses(ngx_conf_t *cf, n #if (NGX_HTTP_SSL) ssl = lsopt->ssl || addr[i].opt.ssl; #endif -#if (NGX_HTTP_SPDY) - spdy = lsopt->spdy || addr[i].opt.spdy; +#if (NGX_HTTP_V2) + http2 = lsopt->http2 || addr[i].opt.http2; #endif if (lsopt->set) { @@ -1324,8 
+1324,8 @@ ngx_http_add_addresses(ngx_conf_t *cf, n #if (NGX_HTTP_SSL) addr[i].opt.ssl = ssl; #endif -#if (NGX_HTTP_SPDY) - addr[i].opt.spdy = spdy; +#if (NGX_HTTP_V2) + addr[i].opt.http2 = http2; #endif return NGX_OK; @@ -1357,14 +1357,17 @@ ngx_http_add_address(ngx_conf_t *cf, ngx } } -#if (NGX_HTTP_SPDY && NGX_HTTP_SSL \ +#if (NGX_HTTP_V2 && NGX_HTTP_SSL \ && !defined TLSEXT_TYPE_application_layer_protocol_negotiation \ && !defined TLSEXT_TYPE_next_proto_neg) - if (lsopt->spdy && lsopt->ssl) { + + if (lsopt->http2 && lsopt->ssl) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, - "nginx was built without OpenSSL ALPN or NPN " - "support, SPDY is not enabled for %s", lsopt->addr); + "nginx was built with OpenSSL that lacks ALPN " + "and NPN support, HTTP/2 is not enabled for %s", + lsopt->addr); } + #endif addr = ngx_array_push(&port->addrs); @@ -1856,8 +1859,8 @@ ngx_http_add_addrs(ngx_conf_t *cf, ngx_h #if (NGX_HTTP_SSL) addrs[i].conf.ssl = addr[i].opt.ssl; #endif -#if (NGX_HTTP_SPDY) - addrs[i].conf.spdy = addr[i].opt.spdy; +#if (NGX_HTTP_V2) + addrs[i].conf.http2 = addr[i].opt.http2; #endif addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; @@ -1921,8 +1924,8 @@ ngx_http_add_addrs6(ngx_conf_t *cf, ngx_ #if (NGX_HTTP_SSL) addrs6[i].conf.ssl = addr[i].opt.ssl; #endif -#if (NGX_HTTP_SPDY) - addrs6[i].conf.spdy = addr[i].opt.spdy; +#if (NGX_HTTP_V2) + addrs6[i].conf.http2 = addr[i].opt.http2; #endif if (addr[i].hash.buckets == NULL diff -r 3cf25d33886a -r 257b51c37c5a src/http/ngx_http.h --- a/src/http/ngx_http.h Fri Sep 11 17:04:40 2015 +0300 +++ b/src/http/ngx_http.h Fri Sep 11 20:13:06 2015 +0300 @@ -20,8 +20,8 @@ typedef struct ngx_http_file_cache_s ng typedef struct ngx_http_log_ctx_s ngx_http_log_ctx_t; typedef struct ngx_http_chunked_s ngx_http_chunked_t; -#if (NGX_HTTP_SPDY) -typedef struct ngx_http_spdy_stream_s ngx_http_spdy_stream_t; +#if (NGX_HTTP_V2) +typedef struct ngx_http_v2_stream_s ngx_http_v2_stream_t; #endif typedef ngx_int_t 
(*ngx_http_header_handler_pt)(ngx_http_request_t *r, @@ -38,8 +38,8 @@ typedef u_char *(*ngx_http_log_handler_p #include #include -#if (NGX_HTTP_SPDY) -#include +#if (NGX_HTTP_V2) +#include #endif #if (NGX_HTTP_CACHE) #include diff -r 3cf25d33886a -r 257b51c37c5a src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Fri Sep 11 17:04:40 2015 +0300 +++ b/src/http/ngx_http_core_module.c Fri Sep 11 20:13:06 2015 +0300 @@ -2132,13 +2132,6 @@ ngx_http_gzip_ok(ngx_http_request_t *r) return NGX_DECLINED; } -#if (NGX_HTTP_SPDY) From flygoast at 126.com Sun Sep 20 09:51:26 2015 From: flygoast at 126.com (flygoast) Date: Sun, 20 Sep 2015 17:51:26 +0800 (CST) Subject: Unexpected result with limit_req module Message-ID: <2d51e6d1.36d.14fea2a5152.Coremail.flygoast@126.com> Hi, all, I use limit_req module to limit QPS to my upstream, conf like this: http { limit_req_zone $request_uri zone=req_one:10m rate=10000r/s; server { ... location / { limit_req zone=req_one; } } } I use benchmarking tool to stress on one url, but only about 1000 request. I checked source code, found that: ms = (ngx_msec_int_t) (now - lr->last); excess = lr->excess - ctx->rate * ngx_abs(ms) / 1000 + 1000; if (excess < 0) { excess = 0; } *ep = excess; if ((ngx_uint_t) excess > limit->burst) { return NGX_BUSY; } At here, ms can be '0', so in a millisecond, only the first request can pass the checking. After I set burst value to (rate / 1000) in configuration file, the QPS result is expected. I'm not sure whether this is a bug or It's my fault in configuration. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Sep 21 13:27:32 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Sep 2015 16:27:32 +0300 Subject: Unexpected result with limit_req module In-Reply-To: <2d51e6d1.36d.14fea2a5152.Coremail.flygoast@126.com> References: <2d51e6d1.36d.14fea2a5152.Coremail.flygoast@126.com> Message-ID: <20150921132732.GF62755@mdounin.ru> Hello! On Sun, Sep 20, 2015 at 05:51:26PM +0800, flygoast wrote: > Hi, all, > > > I use limit_req module to limit QPS to my upstream, conf like this: > http { > limit_req_zone $request_uri zone=req_one:10m rate=10000r/s; > server { > ... > location / { > limit_req zone=req_one; > } > } > } > > > I use benchmarking tool to stress on one url, but only about 1000 request. > > > I checked source code, found that: > ms = (ngx_msec_int_t) (now - lr->last); > > > excess = lr->excess - ctx->rate * ngx_abs(ms) / 1000 + 1000; > > > if (excess < 0) { > excess = 0; > } > > > *ep = excess; > > > if ((ngx_uint_t) excess > limit->burst) { > return NGX_BUSY; > } > > > At here, ms can be '0', so in a millisecond, only the first > request can pass the checking. After I set burst value to (rate > / 1000) in configuration file, the QPS result is expected. > > > I'm not sure whether this is a bug or It's my fault in configuration. That's expected behaviour. As nginx tracks time with millisecond resolution, two requests within the same millisecond essentially mean infinite rate - unless nginx can pass these requests and count them on a longer period of time. That is, if you use high rates, you have to configure burst as well. -- Maxim Dounin http://nginx.org/ From arut at nginx.com Mon Sep 21 20:10:49 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 21 Sep 2015 20:10:49 +0000 Subject: [nginx] Sub filter: fixed initialization in http{} level (ticket... 
Message-ID: details: http://hg.nginx.org/nginx/rev/fbbb1c1ce1eb branches: changeset: 6247:fbbb1c1ce1eb user: Roman Arutyunyan date: Mon Sep 21 23:08:34 2015 +0300 description: Sub filter: fixed initialization in http{} level (ticket #791). If sub_filter directive was only specified at http{} level, sub filter internal data remained uninitialized. That would lead to a crash in runtime. diffstat: src/http/modules/ngx_http_sub_filter_module.c | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diffs (14 lines): diff -r 257b51c37c5a -r fbbb1c1ce1eb src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c Fri Sep 11 20:13:06 2015 +0300 +++ b/src/http/modules/ngx_http_sub_filter_module.c Mon Sep 21 23:08:34 2015 +0300 @@ -853,8 +853,9 @@ ngx_http_sub_merge_conf(ngx_conf_t *cf, conf->pairs = prev->pairs; conf->matches = prev->matches; conf->tables = prev->tables; + } - } else if (conf->dynamic == 0){ + if (conf->pairs && conf->dynamic == 0 && conf->tables == NULL) { pairs = conf->pairs->elts; n = conf->pairs->nelts; From vbart at nginx.com Mon Sep 21 22:41:49 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 21 Sep 2015 22:41:49 +0000 Subject: [nginx] HTTP/2: fixed HPACK header field parsing. Message-ID: details: http://hg.nginx.org/nginx/rev/f5380c244cd7 branches: changeset: 6248:f5380c244cd7 user: Valentin Bartenev date: Tue Sep 22 01:40:04 2015 +0300 description: HTTP/2: fixed HPACK header field parsing. 
diffstat: src/http/v2/ngx_http_v2.c | 5 +++++ 1 files changed, 5 insertions(+), 0 deletions(-) diffs (15 lines): diff -r fbbb1c1ce1eb -r f5380c244cd7 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Mon Sep 21 23:08:34 2015 +0300 +++ b/src/http/v2/ngx_http_v2.c Tue Sep 22 01:40:04 2015 +0300 @@ -1451,6 +1451,11 @@ ngx_http_v2_state_field_skip(ngx_http_v2 h2c->state.field_rest -= size; + if (h2c->state.field_rest) { + return ngx_http_v2_state_save(h2c, end, end, + ngx_http_v2_state_field_skip); + } + return ngx_http_v2_state_process_header(h2c, pos + size, end); } From vbart at nginx.com Mon Sep 21 22:41:52 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 21 Sep 2015 22:41:52 +0000 Subject: [nginx] HTTP/2: fixed header block parsing with CONTINUATION fra... Message-ID: details: http://hg.nginx.org/nginx/rev/081a073e5164 branches: changeset: 6249:081a073e5164 user: Valentin Bartenev date: Tue Sep 22 01:40:04 2015 +0300 description: HTTP/2: fixed header block parsing with CONTINUATION frames (#792). It appears that the CONTINUATION frames don't need to be aligned to bounds of individual headers. 
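For context on the fix above: ngx_http_v2_handle_continuation() peeks at the 9-octet frame header that follows the current fragment of the header block. Per RFC 7540, section 4.1, that header is a 24-bit payload length, an 8-bit type, an 8-bit flags octet, and a 31-bit stream identifier — which is what the ngx_http_v2_parse_length()/parse_type()/parse_sid() helpers extract. A minimal decoding sketch (Python, purely illustrative; not nginx code):

```python
CONTINUATION_FRAME = 0x9   # frame type, RFC 7540, section 6.10
END_HEADERS_FLAG = 0x4     # flag marking the end of a header block

def parse_frame_header(data):
    """Decode the 9-octet HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(data) < 9:
        raise ValueError("need at least 9 octets")
    length = (data[0] << 16) | (data[1] << 8) | data[2]  # 24-bit payload length
    ftype = data[3]
    flags = data[4]
    # Stream identifier: 31 bits; the high bit is reserved and masked off.
    sid = ((data[5] << 24) | (data[6] << 16) | (data[7] << 8) | data[8]) & 0x7fffffff
    return length, ftype, flags, sid

# A CONTINUATION frame carrying a 10-octet payload for stream 1, END_HEADERS set:
hdr = bytes([0x00, 0x00, 0x0a, 0x09, 0x04, 0x00, 0x00, 0x00, 0x01])
```

The patched parser makes the analogous checks: the next frame must be a CONTINUATION, its stream identifier must match the stream being assembled, and its payload length is added to h2c->state.length so parsing of the header block can simply continue across the frame boundary.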
diffstat: src/http/v2/ngx_http_v2.c | 211 ++++++++++++++++++++++++++++++--------------- src/http/v2/ngx_http_v2.h | 1 - 2 files changed, 139 insertions(+), 73 deletions(-) diffs (truncated from 320 to 300 lines): diff -r f5380c244cd7 -r 081a073e5164 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Tue Sep 22 01:40:04 2015 +0300 +++ b/src/http/v2/ngx_http_v2.c Tue Sep 22 01:40:04 2015 +0300 @@ -86,6 +86,8 @@ static u_char *ngx_http_v2_state_process u_char *pos, u_char *end); static u_char *ngx_http_v2_state_header_complete(ngx_http_v2_connection_t *h2c, u_char *pos, u_char *end); +static u_char *ngx_http_v2_handle_continuation(ngx_http_v2_connection_t *h2c, + u_char *pos, u_char *end, ngx_http_v2_handler_pt handler); static u_char *ngx_http_v2_state_priority(ngx_http_v2_connection_t *h2c, u_char *pos, u_char *end); static u_char *ngx_http_v2_state_rst_stream(ngx_http_v2_connection_t *h2c, @@ -1198,6 +1200,13 @@ ngx_http_v2_state_header_block(ngx_http_ ngx_http_v2_state_header_block); } + if (!(h2c->state.flags & NGX_HTTP_V2_END_HEADERS_FLAG) + && h2c->state.length < NGX_HTTP_V2_INT_OCTETS) + { + return ngx_http_v2_handle_continuation(h2c, pos, end, + ngx_http_v2_state_header_block); + } + size_update = 0; indexed = 0; @@ -1295,6 +1304,13 @@ ngx_http_v2_state_field_len(ngx_http_v2_ ngx_int_t len; ngx_uint_t huff; + if (!(h2c->state.flags & NGX_HTTP_V2_END_HEADERS_FLAG) + && h2c->state.length < NGX_HTTP_V2_INT_OCTETS) + { + return ngx_http_v2_handle_continuation(h2c, pos, end, + ngx_http_v2_state_field_len); + } + if (h2c->state.length < 1) { ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, "client sent header block with incorrect length"); @@ -1333,15 +1349,6 @@ ngx_http_v2_state_field_len(ngx_http_v2_ "http2 hpack %s string length: %i", huff ? 
"encoded" : "raw", len); - if ((size_t) len > h2c->state.length) { - ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, - "client sent header field with incorrect length"); - - return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR); - } - - h2c->state.length -= len; - if ((size_t) len > h2c->state.field_limit) { ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, "client exceeded http2_max_field_size limit"); @@ -1385,6 +1392,11 @@ ngx_http_v2_state_field_huff(ngx_http_v2 size = h2c->state.field_rest; } + if (size > h2c->state.length) { + size = h2c->state.length; + } + + h2c->state.length -= size; h2c->state.field_rest -= size; if (ngx_http_v2_huff_decode(&h2c->state.field_state, pos, size, @@ -1399,14 +1411,27 @@ ngx_http_v2_state_field_huff(ngx_http_v2 return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_COMP_ERROR); } - if (h2c->state.field_rest != 0) { - return ngx_http_v2_state_save(h2c, end, end, + pos += size; + + if (h2c->state.field_rest == 0) { + *h2c->state.field_end = '\0'; + return ngx_http_v2_state_process_header(h2c, pos, end); + } + + if (h2c->state.length) { + return ngx_http_v2_state_save(h2c, pos, end, ngx_http_v2_state_field_huff); } - *h2c->state.field_end = '\0'; - - return ngx_http_v2_state_process_header(h2c, pos + size, end); + if (h2c->state.flags & NGX_HTTP_V2_END_HEADERS_FLAG) { + ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, + "client sent header field with incorrect length"); + + return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR); + } + + return ngx_http_v2_handle_continuation(h2c, pos, end, + ngx_http_v2_state_field_huff); } @@ -1422,18 +1447,36 @@ ngx_http_v2_state_field_raw(ngx_http_v2_ size = h2c->state.field_rest; } + if (size > h2c->state.length) { + size = h2c->state.length; + } + + h2c->state.length -= size; h2c->state.field_rest -= size; h2c->state.field_end = ngx_cpymem(h2c->state.field_end, pos, size); - if (h2c->state.field_rest) { - return ngx_http_v2_state_save(h2c, end, end, + pos += 
size; + + if (h2c->state.field_rest == 0) { + *h2c->state.field_end = '\0'; + return ngx_http_v2_state_process_header(h2c, pos, end); + } + + if (h2c->state.length) { + return ngx_http_v2_state_save(h2c, pos, end, ngx_http_v2_state_field_raw); } - *h2c->state.field_end = '\0'; - - return ngx_http_v2_state_process_header(h2c, pos + size, end); + if (h2c->state.flags & NGX_HTTP_V2_END_HEADERS_FLAG) { + ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, + "client sent header field with incorrect length"); + + return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR); + } + + return ngx_http_v2_handle_continuation(h2c, pos, end, + ngx_http_v2_state_field_raw); } @@ -1449,14 +1492,33 @@ ngx_http_v2_state_field_skip(ngx_http_v2 size = h2c->state.field_rest; } + if (size > h2c->state.length) { + size = h2c->state.length; + } + + h2c->state.length -= size; h2c->state.field_rest -= size; - if (h2c->state.field_rest) { - return ngx_http_v2_state_save(h2c, end, end, + pos += size; + + if (h2c->state.field_rest == 0) { + return ngx_http_v2_state_process_header(h2c, pos, end); + } + + if (h2c->state.length) { + return ngx_http_v2_state_save(h2c, pos, end, ngx_http_v2_state_field_skip); } - return ngx_http_v2_state_process_header(h2c, pos + size, end); + if (h2c->state.flags & NGX_HTTP_V2_END_HEADERS_FLAG) { + ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, + "client sent header field with incorrect length"); + + return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR); + } + + return ngx_http_v2_handle_continuation(h2c, pos, end, + ngx_http_v2_state_field_skip); } @@ -1631,16 +1693,15 @@ ngx_http_v2_state_header_complete(ngx_ht return pos; } + if (!(h2c->state.flags & NGX_HTTP_V2_END_HEADERS_FLAG)) { + return ngx_http_v2_handle_continuation(h2c, pos, end, + ngx_http_v2_state_header_complete); + } + stream = h2c->state.stream; if (stream) { - if (h2c->state.flags & NGX_HTTP_V2_END_HEADERS_FLAG) { - stream->end_headers = 1; - 
ngx_http_v2_run_request(stream->request); - - } else { - stream->header_limit = h2c->state.header_limit; - } + ngx_http_v2_run_request(stream->request); } else if (h2c->state.pool) { ngx_destroy_pool(h2c->state.pool); @@ -1657,6 +1718,51 @@ ngx_http_v2_state_header_complete(ngx_ht static u_char * +ngx_http_v2_handle_continuation(ngx_http_v2_connection_t *h2c, u_char *pos, + u_char *end, ngx_http_v2_handler_pt handler) +{ + u_char *p; + size_t len; + uint32_t head; + + len = h2c->state.length; + + if ((size_t) (end - pos) < len + NGX_HTTP_V2_FRAME_HEADER_SIZE) { + return ngx_http_v2_state_save(h2c, pos, end, handler); + } + + p = pos + len; + + head = ngx_http_v2_parse_uint32(p); + + if (ngx_http_v2_parse_type(head) != NGX_HTTP_V2_CONTINUATION_FRAME) { + ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, + "client sent inappropriate frame while CONTINUATION was expected"); + + return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); + } + + h2c->state.length += ngx_http_v2_parse_length(head); + h2c->state.flags |= p[4]; + + if (h2c->state.sid != ngx_http_v2_parse_sid(&p[5])) { + ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, + "client sent CONTINUATION frame with incorrect identifier"); + + return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); + } + + p = pos; + pos += NGX_HTTP_V2_FRAME_HEADER_SIZE; + + ngx_memcpy(pos, p, len); + + h2c->state.handler = handler; + return pos; +} + + +static u_char * ngx_http_v2_state_priority(ngx_http_v2_connection_t *h2c, u_char *pos, u_char *end) { @@ -2141,49 +2247,10 @@ static u_char * ngx_http_v2_state_continuation(ngx_http_v2_connection_t *h2c, u_char *pos, u_char *end) { - ngx_http_v2_node_t *node; - ngx_http_v2_stream_t *stream; - ngx_http_v2_srv_conf_t *h2scf; - - if (h2c->state.length == 0) { - ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, - "client sent CONTINUATION with empty header block"); - - return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR); - } - - if 
(h2c->state.sid == 0) { - ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, - "client sent CONTINUATION frame with incorrect identifier"); - - return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); - } - - node = ngx_http_v2_get_node_by_id(h2c, h2c->state.sid, 0); - - if (node == NULL || node->stream == NULL) { - h2scf = ngx_http_get_module_srv_conf(h2c->http_connection->conf_ctx, - ngx_http_v2_module); - - h2c->state.header_limit = h2scf->max_header_size; - - return ngx_http_v2_state_skip_headers(h2c, pos, end); - } - - stream = node->stream; - - if (stream->end_headers) { - ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0, - "client sent unexpected CONTINUATION frame"); - - return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); - } - - h2c->state.stream = stream; - h2c->state.header_limit = stream->header_limit; - h2c->state.pool = stream->request->pool; From vbart at nginx.com Mon Sep 21 23:55:10 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 21 Sep 2015 23:55:10 +0000 Subject: [nginx] Increased the default number of output buffers. Message-ID: details: http://hg.nginx.org/nginx/rev/0256738454dc branches: changeset: 6250:0256738454dc user: Valentin Bartenev date: Tue Sep 15 17:49:15 2015 +0300 description: Increased the default number of output buffers. Since an output buffer can only be used for either reading or sending, small amounts of data left from the previous operation (due to some limits) must be sent before nginx will be able to read further into the buffer. Using only one output buffer can result in suboptimal behavior that manifests itself in forming and sending too small chunks of data. This is particularly painful with SPDY (or HTTP/2) where each such chunk needs to be prefixed with some header. The default flow-control window in HTTP/2 is 64k minus one bytes. With one 32k output buffer this results is one byte left after exhausting the window. 
With two 32k buffers the data will be read into the second free buffer before sending, thus the minimum output is increased to 32k + 1 bytes, which is much better. diffstat: src/http/ngx_http_copy_filter_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 081a073e5164 -r 0256738454dc src/http/ngx_http_copy_filter_module.c --- a/src/http/ngx_http_copy_filter_module.c Tue Sep 22 01:40:04 2015 +0300 +++ b/src/http/ngx_http_copy_filter_module.c Tue Sep 15 17:49:15 2015 +0300 @@ -327,7 +327,7 @@ ngx_http_copy_filter_merge_conf(ngx_conf ngx_http_copy_filter_conf_t *prev = parent; ngx_http_copy_filter_conf_t *conf = child; - ngx_conf_merge_bufs_value(conf->bufs, prev->bufs, 1, 32768); + ngx_conf_merge_bufs_value(conf->bufs, prev->bufs, 2, 32768); return NULL; } From flygoast at 126.com Tue Sep 22 11:56:39 2015 From: flygoast at 126.com (Gu Feng) Date: Tue, 22 Sep 2015 19:56:39 +0800 Subject: [PATCH] Limit Req: make burst value could be explicitly set to 0 Message-ID: <055d05b5dc54a03b7640.1442922999@2015-424deMacBook-Pro.local> # HG changeset patch # User Gu Feng # Date 1442922745 -28800 # Tue Sep 22 19:52:25 2015 +0800 # Node ID 055d05b5dc54a03b764034a92f9bfb3f5a550993 # Parent 0256738454dc7527f9b0bbaf90b3053a70fd99ed Limit Req: make burst value could be explicitly set to 0. The default value of burst is 0 when no 'burst' parameter is given in the limit_req directive. Maybe it's more reasonable that it could be explicitly set to 0.
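For illustration, this is what the proposed change would permit; the zone name, size, and rate below are invented for the example and are not from the patch:

```nginx
# hypothetical zone definition
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location /search/ {
        # with the patch, burst=0 may be written out explicitly:
        # no queueing, requests above the rate are rejected at once
        limit_req zone=perip burst=0;
    }
}
```

Functionally this matches the existing implicit default; the patch only makes the value acceptable to the configuration parser.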
diff -r 0256738454dc -r 055d05b5dc54 src/http/modules/ngx_http_limit_req_module.c --- a/src/http/modules/ngx_http_limit_req_module.c Tue Sep 15 17:49:15 2015 +0300 +++ b/src/http/modules/ngx_http_limit_req_module.c Tue Sep 22 19:52:25 2015 +0800 @@ -893,7 +893,7 @@ ngx_http_limit_req(ngx_conf_t *cf, ngx_c if (ngx_strncmp(value[i].data, "burst=", 6) == 0) { burst = ngx_atoi(value[i].data + 6, value[i].len - 6); - if (burst <= 0) { + if (burst < 0) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid burst rate \"%V\"", &value[i]); return NGX_CONF_ERROR; From mdounin at mdounin.ru Tue Sep 22 14:11:23 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 14:11:23 +0000 Subject: [nginx] Style. Message-ID: details: http://hg.nginx.org/nginx/rev/cbb8c32f78b5 branches: changeset: 6251:cbb8c32f78b5 user: Maxim Dounin date: Tue Sep 22 17:09:50 2015 +0300 description: Style. diffstat: src/http/v2/ngx_http_v2_filter_module.c | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diffs (13 lines): diff --git a/src/http/v2/ngx_http_v2_filter_module.c b/src/http/v2/ngx_http_v2_filter_module.c --- a/src/http/v2/ngx_http_v2_filter_module.c +++ b/src/http/v2/ngx_http_v2_filter_module.c @@ -945,7 +945,8 @@ ngx_http_v2_filter_get_data_frame(ngx_ht buf = cl->buf; if (!buf->start) { - buf->start = ngx_palloc(stream->request->pool, NGX_HTTP_V2_FRAME_HEADER_SIZE); + buf->start = ngx_palloc(stream->request->pool, + NGX_HTTP_V2_FRAME_HEADER_SIZE); if (buf->start == NULL) { return NULL; } From mdounin at mdounin.ru Tue Sep 22 14:38:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 14:38:16 +0000 Subject: [nginx] nginx-1.9.5-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/942475e10cb4 branches: changeset: 6252:942475e10cb4 user: Maxim Dounin date: Tue Sep 22 17:36:21 2015 +0300 description: nginx-1.9.5-RELEASE diffstat: docs/xml/nginx/changes.xml | 97 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 97 insertions(+), 0 
deletions(-) diffs (107 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,103 @@ + + + + +?????? ngx_http_v2_module (???????? ?????? ngx_http_spdy_module).
+??????? Dropbox ? Automattic ?? ????????????? ??????????. +
+ +the ngx_http_v2_module (replaces ngx_http_spdy_module).
+Thanks to Dropbox and Automattic for sponsoring this work. +
+
+ + + +?????? ?? ????????? ????????? output_buffers ?????????? ??? ??????. + + +now the "output_buffers" directive uses two buffers by default. + + + + + +?????? nginx ???????????? ???????????? ??????????? ???????????, +? ?? ?????????? ????????????? ???????????. + + +now nginx limits subrequests recursion, +not simultaneous subrequests. + + + + + +?????? ??? ???????? ??????? ?? ???? nginx ????????? ???? ?????????.
+??????? ???????? ???????? ? ?????? ????????. +
+ +now nginx checks the whole cache key when returning a response from cache.
+Thanks to Gena Makhomed and Sergey Brester. +
+
+ + + +??? ????????????? ???? +? ????? ????? ?????????? ????????? "header already sent"; +?????? ????????? ? 1.7.5. + + +"header already sent" alerts might appear in logs +when using cache; +the bug had appeared in 1.7.5. + + + + + +??? ????????????? CephFS ? ????????? timer_resolution ?? Linux +? ????? ????? ?????????? ????????? +"writev() failed (4: Interrupted system call)". + + +"writev() failed (4: Interrupted system call)" +errors might appear in logs +when using CephFS and the "timer_resolution" directive on Linux. + + + + + +? ????????? ?????? ????????????.
+??????? Markus Linnala. +
+ +in invalid configurations handling.
+Thanks to Markus Linnala. +
+
+ + + +??? ????????????? ????????? sub_filter ?? ?????? http +? ??????? ???????? ?????????? segmentation fault; +?????? ????????? ? 1.9.4. + + +a segmentation fault occurred in a worker process +if the "sub_filter" directive was used at http level; +the bug had appeared in 1.9.4. + + + +
+ + From mdounin at mdounin.ru Tue Sep 22 14:38:18 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 14:38:18 +0000 Subject: [nginx] release-1.9.5 tag Message-ID: details: http://hg.nginx.org/nginx/rev/0f313cf0a1ee branches: changeset: 6253:0f313cf0a1ee user: Maxim Dounin date: Tue Sep 22 17:36:22 2015 +0300 description: release-1.9.5 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -387,3 +387,4 @@ 884a967c369f73ab16ea859670d690fb094d3850 3a32d6e7404a79a0973bcd8d0b83181c5bf66074 release-1.9.2 e27a215601292872f545a733859e06d01af1017d release-1.9.3 5cb7e2eed2031e32d2e5422caf9402758c38a6ad release-1.9.4 +942475e10cb47654205ede7ccbe7d568698e665b release-1.9.5 From carlos-eduardo-rodrigues at telecom.pt Tue Sep 22 17:14:56 2015 From: carlos-eduardo-rodrigues at telecom.pt (Carlos Eduardo Ferreira Rodrigues) Date: Tue, 22 Sep 2015 18:14:56 +0100 Subject: subrequest error with http2 Message-ID: Hi, I just upgraded nginx to 1.9.5 on our testing environment, and immediately started seeing this error on http2 requests: 2015/09/22 18:04:06 [alert] 27305#27305: *1 epoll_ctl(1, 17) failed (17: File exists), client: x.x.x.x, server: _, request: "GET ... HTTP/2.0", subrequest: "...", host: "..." The subrequest is being made from Lua code using "ngx.location.capture()", so I understand this may be an issue with ngx_lua and not nginx itself. However, this worked fine with nginx 1.8.0/1.9.4 and spdy and nothing else has changed. Best regards, -- Carlos Rodrigues From mdounin at mdounin.ru Tue Sep 22 17:34:40 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Sep 2015 20:34:40 +0300 Subject: subrequest error with http2 In-Reply-To: References: Message-ID: <20150922173440.GH13202@mdounin.ru> Hello!
On Tue, Sep 22, 2015 at 06:14:56PM +0100, Carlos Eduardo Ferreira Rodrigues wrote: > Hi, > > I just upgraded nginx to 1.9.5 on our testing environment, and immediately started seeing this error on http2 requests: > > 2015/09/22 18:04:06 [alert] 27305#27305: *1 epoll_ctl(1, 17) failed (17: File exists), client: x.x.x.x, server: _, request: "GET ... HTTP/2.0", subrequest: "...", host: "..." > > The subrequest is being made from Lua code using "ngx.location.capture()", so I understand this may be an issue with ngx_lua and not nginx itself. However, this worked fine with nginx 1.8.0/1.9.4 and spdy and nothing else has changed. The lua module deeply integrates into nginx internals (far beyond what we consider to be the nginx modules API), so it is no surprise that it's broken by the changes in nginx 1.9.5. -- Maxim Dounin http://nginx.org/ From michal at 3scale.net Wed Sep 23 17:58:19 2015 From: michal at 3scale.net (Michal Cichra) Date: Wed, 23 Sep 2015 10:58:19 -0700 Subject: Load SSL certificates from system's store Message-ID: Hi there, There is a very basic patch to nginx (which is the same with 1.9.5) to allow loading all SSL certificates from CApath. When proxying with SSL verification, nginx needs SSL certificates to be loaded through a file. That causes trouble for dynamic proxies that can proxy to any host. A workaround would be to pack all certificates from CApath and load them into nginx. However, that is not very cross-platform, as on OSX it could use the keychain. I understand there are some drawbacks (like memory usage), so I'd make it configurable, off by default.
See the gist https://gist.github.com/mikz/4dae10a0ef94de7c8139 and discussion on the openresty mailing list: https://groups.google.com/forum/#!searchin/openresty-en/ssl/openresty-en/SuqORBK9ys0/Yz0ypcRyV4UJ Thanks for the feedback Michal Cichra From mdounin at mdounin.ru Wed Sep 23 18:58:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Sep 2015 21:58:39 +0300 Subject: Load SSL certificates from system's store In-Reply-To: References: Message-ID: <20150923185839.GP13202@mdounin.ru> Hello! On Wed, Sep 23, 2015 at 10:58:19AM -0700, Michal Cichra wrote: > Hi there, > > There is a very basic patch to nginx (which is the same with 1.9.5) to allow loading all SSL certificates from CApath. > > When proxying with SSL verification, nginx needs SSL certificates to be loaded through a file. > That causes trouble for dynamic proxies that can proxy to any host. A workaround would be to pack all certificates from CApath and load them into nginx. > However, that is not very cross-platform, as on OSX it could use the keychain. > I understand there are some drawbacks (like memory usage), so I'd make it configurable, off by default. > > See the gist https://gist.github.com/mikz/4dae10a0ef94de7c8139 > and discussion on the openresty mailing list: https://groups.google.com/forum/#!searchin/openresty-en/ssl/openresty-en/SuqORBK9ys0/Yz0ypcRyV4UJ I don't see anything changed since my previous response to your proposal: http://mailman.nginx.org/pipermail/nginx/2014-September/045068.html If you want things to actually happen you may want to go ahead and start working on a real patch. (Just a side note: talking about OS X doesn't really make sense, as it's not a server platform.)
-- Maxim Dounin http://nginx.org/ From michal at 3scale.net Wed Sep 23 20:30:19 2015 From: michal at 3scale.net (Michal Cichra) Date: Wed, 23 Sep 2015 13:30:19 -0700 Subject: Load SSL certificates from system's store In-Reply-To: <20150923185839.GP13202@mdounin.ru> References: <20150923185839.GP13202@mdounin.ru> Message-ID: <24FC43B7-7D71-4F47-99EC-EEB0E79C57F4@3scale.net> Hi Maxim, sorry for double posting. I was talking to some developers here at nginx.conf and they suggested proposing it on the dev list. I could not find the previous post. Re OSX: it might not be a server platform, but a development one. Our use case is running a proxy in your production/dev that records all the traffic and can modify it (https://github.com/apitools/monitor). So the OSX use case is very strong, as it allows easy deployment to any platform that nginx works with. Cheers Michal Cichra > On 23 Sep 2015, at 11:58, Maxim Dounin wrote: > > Hello! > > On Wed, Sep 23, 2015 at 10:58:19AM -0700, Michal Cichra wrote: > >> Hi there, >> >> There is a very basic patch to nginx (which is the same with 1.9.5) to allow loading all SSL certificates from CApath. >> >> When proxying with SSL verification, nginx needs SSL certificates to be loaded through a file. >> That causes trouble for dynamic proxies that can proxy to any host. A workaround would be to pack all certificates from CApath and load them into nginx. >> However, that is not very cross-platform, as on OSX it could use the keychain. >> I understand there are some drawbacks (like memory usage), so I'd make it configurable, off by default.
>> >> See the gist https://gist.github.com/mikz/4dae10a0ef94de7c8139 >> and discussion on the openresty mailing list: https://groups.google.com/forum/#!searchin/openresty-en/ssl/openresty-en/SuqORBK9ys0/Yz0ypcRyV4UJ > > I don't see anything changed since my previous response to your > proposal: > > http://mailman.nginx.org/pipermail/nginx/2014-September/045068.html > > If you want things to actually happen you may want to go ahead and > start working on a real patch. > > (Just a side note: talking about OS X doesn't really make sense, > as it's not a server platform.) > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From hungnv at opensource.com.vn Thu Sep 24 09:18:42 2015 From: hungnv at opensource.com.vn (Hung Nguyen) Date: Thu, 24 Sep 2015 16:18:42 +0700 Subject: Some times header cannot be sent due too high traffic Message-ID: <2543FD54-46B6-4607-8900-E0755FB21FAC@opensource.com.vn> Hello, In my module I do something like this: nlog->action = "sending file to client"; r->headers_out.status = NGX_HTTP_OK; r->headers_out.content_length_n = bucket->content_length; r->headers_out.last_modified_time = of.mtime; r->headers_out.content_type.len = sizeof ("text/html") - 1; r->headers_out.content_type.data = (u_char *) "text/html"; rc = ngx_http_send_header(r); if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { ngx_log_error(NGX_LOG_ALERT, nlog, ngx_errno, ngx_close_file_n "ngx_http_send_header failed"); return rc; } it's ok when traffic is low, but when there are many user requests to nginx, the error log is full of these similar errors: 2015/09/17 08:57:16 [alert] 9915#0: *5205 close()ngx_http_send_header failed while sending file to client, client: 127.0.0.1, server: my.local, request: "GET /slides/128553.pdf?secl=LMz1w4xNwd9pt_88-ROxkw&sect=1442496276 HTTP/1.0", host: "my.server.com", referrer:
"http://my.server.com/uppod.swf" Is this normal behavior, or must there be something wrong in my module that causes nginx to fail to send the header to the client? Thanks, Hùng -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 24 17:44:43 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Sep 2015 17:44:43 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/74ec27cb67c1 branches: changeset: 6254:74ec27cb67c1 user: Maxim Dounin date: Thu Sep 24 17:18:42 2015 +0300 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1009005 -#define NGINX_VERSION "1.9.5" +#define nginx_version 1009006 +#define NGINX_VERSION "1.9.6" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From mdounin at mdounin.ru Thu Sep 24 17:44:46 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Sep 2015 17:44:46 +0000 Subject: [nginx] SSL: compatibility with OpenSSL master branch. Message-ID: details: http://hg.nginx.org/nginx/rev/b40af2fd1c16 branches: changeset: 6255:b40af2fd1c16 user: Maxim Dounin date: Thu Sep 24 17:19:08 2015 +0300 description: SSL: compatibility with OpenSSL master branch. RAND_pseudo_bytes() is deprecated in the OpenSSL master branch, so the only use was changed to RAND_bytes(). Access to internal structures is no longer possible, so now we don't try to set SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS even if it's defined.
diffstat: src/event/ngx_event_openssl.c | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diffs (28 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1158,6 +1158,7 @@ ngx_ssl_handshake(ngx_connection_t *c) c->recv_chain = ngx_ssl_recv_chain; c->send_chain = ngx_ssl_send_chain; +#if OPENSSL_VERSION_NUMBER < 0x10100000L #ifdef SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS /* initial handshake done, disable renegotiation (CVE-2009-3555) */ @@ -1166,6 +1167,7 @@ ngx_ssl_handshake(ngx_connection_t *c) } #endif +#endif return NGX_OK; } @@ -2861,7 +2863,7 @@ ngx_ssl_session_ticket_key_callback(ngx_ ngx_hex_dump(buf, key[0].name, 16) - buf, buf, SSL_session_reused(ssl_conn) ? "reused" : "new"); - RAND_pseudo_bytes(iv, 16); + RAND_bytes(iv, 16); EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, key[0].aes_key, iv); HMAC_Init_ex(hctx, key[0].hmac_key, 16, ngx_ssl_session_ticket_md(), NULL); From lordnynex at gmail.com Fri Sep 25 17:38:12 2015 From: lordnynex at gmail.com (Lord Nynex) Date: Fri, 25 Sep 2015 10:38:12 -0700 Subject: Some times header cannot be sent due too high traffic In-Reply-To: <2543FD54-46B6-4607-8900-E0755FB21FAC@opensource.com.vn> References: <2543FD54-46B6-4607-8900-E0755FB21FAC@opensource.com.vn> Message-ID: Hello, I'm just guessing here, but it sounds like the connection is closed before the headers are sent. This is a common problem unrelated to nginx (unless your module has some sort of serious performance issue elsewhere). This is especially common for requests generated by mobile clients whose networks are often unavailable. The thing that stands out to me is your conditional to catch errors is very similar to the conditional used in ngx_http_request.c, however, it's missing NGX_HTTP_CLIENT_CLOSED_REQUEST. If this is the case, your condition is satisfied because NGX_HTTP_CLIENT_CLOSED_REQUEST is > NGX_OK.
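The point can be made concrete. The constants below mirror nginx's ngx_core.h and ngx_http_request.h values (NGX_OK = 0, NGX_ERROR = -1, NGX_HTTP_CLIENT_CLOSED_REQUEST = 499); the helper names are invented for this sketch:

```c
#include <assert.h>

/* Values mirroring src/core/ngx_core.h and src/http/ngx_http_request.h. */
#define NGX_OK                            0
#define NGX_ERROR                        -1
#define NGX_HTTP_CLIENT_CLOSED_REQUEST  499

/* The failure condition as written in the quoted module code:
 * it also matches 499, since 499 > NGX_OK. */
static int header_send_failed(int rc)
{
    return rc == NGX_ERROR || rc > NGX_OK;
}

/* A possible refinement: treat a client abort as a separate case
 * instead of logging it at the [alert] level. */
static int header_send_hard_error(int rc)
{
    return header_send_failed(rc)
           && rc != NGX_HTTP_CLIENT_CLOSED_REQUEST;
}
```

That is, with flaky mobile clients the original condition fires on every abort, which would explain an error log filling up under load without anything being wrong on the server side.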
I recommend you amend your logging to include more information about why the request has failed. If you find that NGX_HTTP_CLIENT_CLOSED_REQUEST is returned, I would look for culprits elsewhere. Perhaps the load is greater than the server/kernel's link speed? On Thu, Sep 24, 2015 at 2:18 AM, Hung Nguyen wrote: > Hello, > > In my module I do something like this: > > nlog->action = "sending file to client"; > r->headers_out.status = NGX_HTTP_OK; > r->headers_out.content_length_n = bucket->content_length; > r->headers_out.last_modified_time = of.mtime; > r->headers_out.content_type.len = sizeof ("text/html") - 1; > r->headers_out.content_type.data = (u_char *) "text/html"; > rc = ngx_http_send_header(r); > if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { > ngx_log_error(NGX_LOG_ALERT, nlog, ngx_errno, ngx_close_file_n > "ngx_http_send_header failed"); > return rc; > } > > > > it's ok when traffic is low, but when there are many user requests to > nginx, the error log is full of these similar errors: > > 2015/09/17 08:57:16 [alert] 9915#0: *5205 close()ngx_http_send_header > failed while sending file to client, client: 127.0.0.1, server: my.local, > request: "GET > /slides/128553.pdf?secl=LMz1w4xNwd9pt_88-ROxkw&sect=1442496276 HTTP/1.0", > host: "my.server.com", > referrer: "http://my.server.com/uppod.swf" > > > > > Is this normal behavior, or must there be something wrong in my module > that causes nginx to fail to send the header to the client? > > > Thanks, > > Hùng > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From klnusbaum at gmail.com Sun Sep 27 04:16:37 2015 From: klnusbaum at gmail.com (Kurtis Nusbaum) Date: Sat, 26 Sep 2015 21:16:37 -0700 Subject: [PATCH] Extract out version info function Message-ID: <1d5846fa79e64a0920c1.1443327397@Bender.local> # HG changeset patch # User Kurtis Nusbaum # Date 1436715098 25200 # Sun Jul 12 08:31:38 2015 -0700 # Node ID 1d5846fa79e64a0920c189a266849bf23b6cab63 # Parent b40af2fd1c1665bc79bd6c50233dd6c834f60b6b Extract out version info function. The code for displaying version info and configuration info seemed to be cluttering up the main function. I was finding it hard to read main. This extracts out all of the logic for displaying version and configuration info into its own function, thus making main easier to read. diff -r b40af2fd1c16 -r 1d5846fa79e6 src/core/nginx.c --- a/src/core/nginx.c Thu Sep 24 17:19:08 2015 +0300 +++ b/src/core/nginx.c Sun Jul 12 08:31:38 2015 -0700 @@ -10,6 +10,7 @@ #include +static void ngx_show_version_info(); static ngx_int_t ngx_add_inherited_sockets(ngx_cycle_t *cycle); static ngx_int_t ngx_get_options(int argc, char *const *argv); static ngx_int_t ngx_process_options(ngx_cycle_t *cycle); @@ -194,65 +195,7 @@ } if (ngx_show_version) { - ngx_write_stderr("nginx version: " NGINX_VER_BUILD NGX_LINEFEED); - - if (ngx_show_help) { - ngx_write_stderr( - "Usage: nginx [-?hvVtTq] [-s signal] [-c filename] " - "[-p prefix] [-g directives]" NGX_LINEFEED - NGX_LINEFEED - "Options:" NGX_LINEFEED - " -?,-h : this help" NGX_LINEFEED - " -v : show version and exit" NGX_LINEFEED - " -V : show version and configure options then exit" - NGX_LINEFEED - " -t : test configuration and exit" NGX_LINEFEED - " -T : test configuration, dump it and exit" - NGX_LINEFEED - " -q : suppress non-error messages " - "during configuration testing" NGX_LINEFEED - " -s signal : send signal to a master process: " - "stop, quit, reopen, reload" NGX_LINEFEED -#ifdef NGX_PREFIX - " -p prefix : set prefix path (default: " - NGX_PREFIX 
")" NGX_LINEFEED -#else - " -p prefix : set prefix path (default: NONE)" NGX_LINEFEED -#endif - " -c filename : set configuration file (default: " - NGX_CONF_PATH ")" NGX_LINEFEED - " -g directives : set global directives out of configuration " - "file" NGX_LINEFEED NGX_LINEFEED - ); - } - - if (ngx_show_configure) { - -#ifdef NGX_COMPILER - ngx_write_stderr("built by " NGX_COMPILER NGX_LINEFEED); -#endif - -#if (NGX_SSL) - if (SSLeay() == SSLEAY_VERSION_NUMBER) { - ngx_write_stderr("built with " OPENSSL_VERSION_TEXT - NGX_LINEFEED); - } else { - ngx_write_stderr("built with " OPENSSL_VERSION_TEXT - " (running with "); - ngx_write_stderr((char *) (uintptr_t) - SSLeay_version(SSLEAY_VERSION)); - ngx_write_stderr(")" NGX_LINEFEED); - } -#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME - ngx_write_stderr("TLS SNI support enabled" NGX_LINEFEED); -#else - ngx_write_stderr("TLS SNI support disabled" NGX_LINEFEED); -#endif -#endif - - ngx_write_stderr("configure arguments:" NGX_CONFIGURE NGX_LINEFEED); - } - + ngx_show_version_info(); if (!ngx_test_config) { return 0; } @@ -419,6 +362,69 @@ } +static void +ngx_show_version_info() +{ + ngx_write_stderr("nginx version: " NGINX_VER_BUILD NGX_LINEFEED); + + if (ngx_show_help) { + ngx_write_stderr( + "Usage: nginx [-?hvVtTq] [-s signal] [-c filename] " + "[-p prefix] [-g directives]" NGX_LINEFEED + NGX_LINEFEED + "Options:" NGX_LINEFEED + " -?,-h : this help" NGX_LINEFEED + " -v : show version and exit" NGX_LINEFEED + " -V : show version and configure options then exit" + NGX_LINEFEED + " -t : test configuration and exit" NGX_LINEFEED + " -T : test configuration, dump it and exit" + NGX_LINEFEED + " -q : suppress non-error messages " + "during configuration testing" NGX_LINEFEED + " -s signal : send signal to a master process: " + "stop, quit, reopen, reload" NGX_LINEFEED +#ifdef NGX_PREFIX + " -p prefix : set prefix path (default: " NGX_PREFIX ")" + NGX_LINEFEED +#else + " -p prefix : set prefix path (default: NONE)" NGX_LINEFEED +#endif 
+ " -c filename : set configuration file (default: " NGX_CONF_PATH + ")" NGX_LINEFEED + " -g directives : set global directives out of configuration " + "file" NGX_LINEFEED NGX_LINEFEED + ); + } + + if (ngx_show_configure) { + +#ifdef NGX_COMPILER + ngx_write_stderr("built by " NGX_COMPILER NGX_LINEFEED); +#endif + +#if (NGX_SSL) + if (SSLeay() == SSLEAY_VERSION_NUMBER) { + ngx_write_stderr("built with " OPENSSL_VERSION_TEXT NGX_LINEFEED); + } else { + ngx_write_stderr("built with " OPENSSL_VERSION_TEXT + " (running with "); + ngx_write_stderr((char *) (uintptr_t) + SSLeay_version(SSLEAY_VERSION)); + ngx_write_stderr(")" NGX_LINEFEED); + } +#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME + ngx_write_stderr("TLS SNI support enabled" NGX_LINEFEED); +#else + ngx_write_stderr("TLS SNI support disabled" NGX_LINEFEED); +#endif +#endif + + ngx_write_stderr("configure arguments:" NGX_CONFIGURE NGX_LINEFEED); + } +} + + static ngx_int_t ngx_add_inherited_sockets(ngx_cycle_t *cycle) { From mat999 at gmail.com Sun Sep 27 21:49:39 2015 From: mat999 at gmail.com (SplitIce) Date: Mon, 28 Sep 2015 07:49:39 +1000 Subject: HTTP2 Firefox Compatibility Message-ID: Hi All, Yesterday we discovered a possible compatibility issue with a certain configuration, HTTP2 and Firefox. This configuration works successfully in Chrome and other HTTP2 enabled browsers, however Firefox users are unable to connect (connection reset). The pertinent part of the configuration is a port with SSLv3 enabled in the supported protocols (risk associated with POODLE attack has been accounted and mitigated for separately). Test configuration: server { listen 443 ssl http2; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; [...] } Connect with Firefox (fail), connect with Chrome (pass). Regards, Mathew -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From al-nginx at none.at Sun Sep 27 23:03:24 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 28 Sep 2015 01:03:24 +0200 Subject: HTTP2 Firefox Compatibility In-Reply-To: References: Message-ID: <7006b52ff4164ad15bed21c52e930f1a@none.at> Hi. On 27-09-2015 23:49, SplitIce wrote: > Hi All, > > Yesterday we discovered a possible compatibility issue with a certain > configuration, HTTP2 and Firefox. This configuration works successfully > in Chrome and other HTTP2 enabled browsers, however Firefox users are > unable to connect (connection reset). > > The pertinent part of the configuration is a port with SSLv3 enabled in > the supported protocols (risk associated with POODLE attack has been > accounted and mitigated for separately). Please can you post the output of 'nginx -V' and an anonymized config. Which version of Firefox is in use? Firefox has deactivated SSLv3 by default. https://blog.mozilla.org/security/2014/10/14/the-poodle-attack-and-the-end-of-ssl-3-0/ https://www.mozilla.org/en-US/firefox/34.0/releasenotes/ Disabled SSLv3 What does this "Protocol Features" output show for your client? https://www.ssllabs.com/ssltest/viewMyClient.html What value does 'about:config' => security.tls.version.min have? > Test configuration: > > server { > listen 443 ssl http2; > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > [...] > } > > Connect with Firefox (fail), connect with Chrome (pass). Is it possible to use http2 with sslv3?! http://nginx.org/en/docs/http/ngx_http_v2_module.html ##### cite from above link Note that accepting HTTP/2 connections over TLS requires the "Application-Layer Protocol Negotiation" (ALPN) TLS extension support, which is available only since OpenSSL version 1.0.2. Using the "Next Protocol Negotiation" (NPN) TLS extension for this purpose (available since OpenSSL version 1.0.1) is not guaranteed. ##### What does the Firefox network analyzer tool show?
https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor Is it possible to use the debug log? http://nginx.org/en/docs/debugging_log.html > Regards, > Mathew Cheers Aleks From henry.houfeng at gmail.com Mon Sep 28 06:03:32 2015 From: henry.houfeng at gmail.com (Henry H) Date: Mon, 28 Sep 2015 16:03:32 +1000 Subject: bug in ngx_palloc Message-ID: Hi everyone, I just happened to find a bug in ngx_palloc, m = ngx_align_ptr(p->d.last, NGX_ALIGNMENT); After 'm' is aligned, it might be bigger than p->d.end. So the following statement will be wrong: if ((size_t) (p->d.end - m) >= size) It should be changed to: if ( (m < p->d.end) && ((size_t) (p->d.end - m) >= size)) Regards, Henry From vbart at nginx.com Mon Sep 28 17:04:50 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 28 Sep 2015 17:04:50 +0000 Subject: [nginx] HTTP/2: fixed $server_protocol value (ticket #800). Message-ID: details: http://hg.nginx.org/nginx/rev/9dfc4ba140f9 branches: changeset: 6256:9dfc4ba140f9 user: Valentin Bartenev date: Mon Sep 28 20:02:05 2015 +0300 description: HTTP/2: fixed $server_protocol value (ticket #800).
diffstat: src/http/v2/ngx_http_v2.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (12 lines): diff -r b40af2fd1c16 -r 9dfc4ba140f9 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Thu Sep 24 17:19:08 2015 +0300 +++ b/src/http/v2/ngx_http_v2.c Mon Sep 28 20:02:05 2015 +0300 @@ -2762,6 +2762,8 @@ ngx_http_v2_create_stream(ngx_http_v2_co return NULL; } + ngx_str_set(&r->http_protocol, "HTTP/2.0"); + r->http_version = NGX_HTTP_VERSION_20; r->valid_location = 1; From tolga.ceylan at gmail.com Mon Sep 28 19:49:33 2015 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Mon, 28 Sep 2015 12:49:33 -0700 Subject: [nginx] upstream keepalive recv MSG_PEEK check Message-ID: Hi All, A quick code base question, regarding the close (read) handler in http_upstream_keepalive_module.c: ngx_http_upstream_keepalive_close_handler() has a recv() call with MSG_PEEK and if this returns NGX_EAGAIN, then it reschedules a read on epoll for my platform (Linux x86_64). My understanding is that this is a guard against spurious wake ups for whatever reason from epoll wait. If actual data is received, this is an error (since the connection is in the keepalive pool and should not receive data from upstreams) and the connection is closed in this case. But if NGX_EAGAIN is detected, then there's no data and we continue with adding a read event handler to epoll. If my assumption is correct, then under what cases/circumstances do these spurious wake ups occur? In other words, why would the read handler be called if there's no data and recv() will return EAGAIN? In other words, in ngx_http_upstream_keepalive_close_handler, can't we skip the recv() check and proceed to close? Or do these spurious wake ups occur in non-epoll platforms?
Regards, Tolga Ceylan From mdounin at mdounin.ru Mon Sep 28 20:08:54 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Sep 2015 23:08:54 +0300 Subject: bug in ngx_palloc In-Reply-To: References: Message-ID: <20150928200854.GA13202@mdounin.ru> Hello! On Mon, Sep 28, 2015 at 04:03:32PM +1000, Henry H wrote: > Hi everyone, > > I just happened to find a bug in ngx_palloc, > > m = ngx_align_ptr(p->d.last, NGX_ALIGNMENT); > > After 'm' is aligned, it might be bigger than p->d.end. So the following > statement will be wrong: > > if ((size_t) (p->d.end - m) >= size) > > It should be changed to: > if ( (m < p->d.end) && ((size_t) (p->d.end - m) >= size)) The problem here can only happen if p->d.end is not properly aligned. This is not something expected to happen with correct use of the pool allocation interface. See here for further details: https://trac.nginx.org/nginx/ticket/686 -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Sep 28 20:13:32 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Sep 2015 23:13:32 +0300 Subject: [nginx] upstream keepalive recv MSG_PEEK check In-Reply-To: References: Message-ID: <20150928201331.GB13202@mdounin.ru> Hello! On Mon, Sep 28, 2015 at 12:49:33PM -0700, Tolga Ceylan wrote: > Hi All, > > A quick code base question, regarding the close (read) handler > in http_upstream_keepalive_module.c: > > ngx_http_upstream_keepalive_close_handler() > > has a recv() call with MSG_PEEK and if this returns NGX_EAGAIN, > then it reschedules a read on epoll for my platform (Linux x86_64). > > My understanding is that this is a guard against spurious wake ups > for whatever reason from epoll wait. If actual data is received > this is an error (since the connection is in the keepalive pool and should not > receive data from upstreams) and the connection is closed in this case. > But if NGX_EAGAIN is detected, then there's no data and we continue > with adding a read event handler to epoll.
>
> If my assumption is correct, then under what cases/circumstances do
> these spurious wake ups occur? In other words, why would the read handler
> be called if there's no data and recv() will return EAGAIN? In other words,
> in ngx_http_upstream_keepalive_close_handler, can't we skip the recv()
> check and proceed to close?
>
> Or do these spurious wake ups occur in non-epoll platforms?

Detailed explanation and a test case can be found here:

http://mdounin.ru/hg/ngx_http_upstream_keepalive/rev/9a4ee6fe1c6d

-- 
Maxim Dounin
http://nginx.org/

From tolga.ceylan at gmail.com  Mon Sep 28 20:28:02 2015
From: tolga.ceylan at gmail.com (Tolga Ceylan)
Date: Mon, 28 Sep 2015 13:28:02 -0700
Subject: [nginx] upstream keepalive recv MSG_PEEK check
In-Reply-To: <20150928201331.GB13202@mdounin.ru>
References: <20150928201331.GB13202@mdounin.ru>
Message-ID: 

On Mon, Sep 28, 2015 at 1:13 PM, Maxim Dounin wrote:
>
> Detailed explanation and a test case can be found here:
>
> http://mdounin.ru/hg/ngx_http_upstream_keepalive/rev/9a4ee6fe1c6d
>

Thank you for the quick response.

Regards,
Tolga Ceylan

From arut at nginx.com  Tue Sep 29 16:00:25 2015
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 29 Sep 2015 19:00:25 +0300
Subject: Slice module
Message-ID: <20150929160025.GC21123@Romans-MacBook-Air.local>

Hello,

I'm happy to publish the experimental Slice module. The module makes it
possible to split a big upstream response into smaller parts and cache
them independently. The module supports range requests. When a part of
a file is requested, only the required slice upstream requests are made.
If caching is enabled, future requests will only go to upstream for
missing slices.

The module adds:

- the "slice" directive setting the slice size;
- the "$slice_range" variable, which must be added to the cache key
  expression and passed to upstream as the Range header value. The
  variable holds the current slice range in the HTTP Range field format.
Build
-----

Use the --with-http_slice_module configure script option.

Example
-------

    location / {
        slice             1m;
        proxy_cache       cache;
        proxy_cache_key   $uri$is_args$args$slice_range;
        proxy_set_header  Range $slice_range;
        proxy_cache_valid 200 206 1h;
        proxy_pass        http://127.0.0.1:9000;
    }

Known issues
------------

The module can lead to excessive memory and file handle usage.

Thanks for testing.

-- 
Best wishes,
Roman Arutyunyan

-------------- next part --------------
diff -r 0f313cf0a1ee auto/modules
--- a/auto/modules	Tue Sep 22 17:36:22 2015 +0300
+++ b/auto/modules	Mon Sep 28 17:07:15 2015 +0300
@@ -73,6 +73,11 @@ if [ $HTTP_SSI = YES ]; then
 fi
 
 
+if [ $HTTP_SLICE = YES ]; then
+    HTTP_POSTPONE=YES
+fi
+
+
 if [ $HTTP_ADDITION = YES ]; then
     HTTP_POSTPONE=YES
 fi
@@ -140,6 +145,11 @@ if [ $HTTP_SSI = YES ]; then
     HTTP_SRCS="$HTTP_SRCS $HTTP_SSI_SRCS"
 fi
 
+if [ $HTTP_SLICE = YES ]; then
+    HTTP_FILTER_MODULES="$HTTP_FILTER_MODULES $HTTP_SLICE_FILTER_MODULE"
+    HTTP_SRCS="$HTTP_SRCS $HTTP_SLICE_SRCS"
+fi
+
 if [ $HTTP_CHARSET = YES ]; then
     HTTP_FILTER_MODULES="$HTTP_FILTER_MODULES $HTTP_CHARSET_FILTER_MODULE"
     HTTP_SRCS="$HTTP_SRCS $HTTP_CHARSET_SRCS"
diff -r 0f313cf0a1ee auto/options
--- a/auto/options	Tue Sep 22 17:36:22 2015 +0300
+++ b/auto/options	Mon Sep 28 17:07:15 2015 +0300
@@ -60,6 +60,7 @@ HTTP_GZIP=YES
 HTTP_SSL=NO
 HTTP_V2=NO
 HTTP_SSI=YES
+HTTP_SLICE=NO
 HTTP_POSTPONE=NO
 HTTP_REALIP=NO
 HTTP_XSLT=NO
@@ -226,6 +227,7 @@ do
     --with-http_random_index_module) HTTP_RANDOM_INDEX=YES  ;;
     --with-http_secure_link_module)  HTTP_SECURE_LINK=YES   ;;
     --with-http_degradation_module)  HTTP_DEGRADATION=YES   ;;
+    --with-http_slice_module)        HTTP_SLICE=YES         ;;
 
     --without-http_charset_module)   HTTP_CHARSET=NO        ;;
     --without-http_gzip_module)      HTTP_GZIP=NO           ;;
@@ -395,6 +397,7 @@ cat << END
   --with-http_secure_link_module     enable ngx_http_secure_link_module
   --with-http_degradation_module     enable ngx_http_degradation_module
   --with-http_stub_status_module     enable ngx_http_stub_status_module
+  --with-http_slice_module
enable ngx_http_slice_module
 
   --without-http_charset_module      disable ngx_http_charset_module
   --without-http_gzip_module         disable ngx_http_gzip_module
diff -r 0f313cf0a1ee auto/sources
--- a/auto/sources	Tue Sep 22 17:36:22 2015 +0300
+++ b/auto/sources	Mon Sep 28 17:07:15 2015 +0300
@@ -347,6 +347,10 @@ HTTP_SSI_DEPS=src/http/modules/ngx_http_
 HTTP_SSI_SRCS=src/http/modules/ngx_http_ssi_filter_module.c
 
 
+HTTP_SLICE_FILTER_MODULE=ngx_http_slice_filter_module
+HTTP_SLICE_SRCS=src/http/modules/ngx_http_slice_filter_module.c
+
+
 HTTP_XSLT_FILTER_MODULE=ngx_http_xslt_filter_module
 HTTP_XSLT_SRCS=src/http/modules/ngx_http_xslt_filter_module.c
 
diff -r 0f313cf0a1ee src/http/modules/ngx_http_slice_filter_module.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/http/modules/ngx_http_slice_filter_module.c	Mon Sep 28 17:07:15 2015 +0300
@@ -0,0 +1,777 @@
+
+/*
+ * Copyright (C) Roman Arutyunyan
+ * Copyright (C) Nginx, Inc.
+ */
+
+
+#include <ngx_config.h>
+#include <ngx_core.h>
+#include <ngx_http.h>
+
+
+typedef struct {
+    size_t               size;
+} ngx_http_slice_loc_conf_t;
+
+
+typedef struct {
+    off_t                start;
+    off_t                end;
+    off_t                content_length;
+    off_t                offset;
+    ngx_str_t            range;
+    ngx_str_t            etag;
+    unsigned             last:1;
+    unsigned             bad_range:1;
+} ngx_http_slice_ctx_t;
+
+
+static ngx_int_t ngx_http_slice_handler(ngx_http_request_t *r);
+static ngx_int_t ngx_http_slice_parse_request_range(ngx_http_request_t *r,
+    ngx_http_slice_ctx_t *ctx);
+static ngx_int_t ngx_http_slice_header_filter(ngx_http_request_t *r);
+static ngx_int_t ngx_http_slice_parse_response_range(ngx_http_request_t *r,
+    ngx_http_slice_ctx_t *ctx);
+static ngx_int_t ngx_http_slice_body_filter(ngx_http_request_t *r,
+    ngx_chain_t *in);
+static ngx_int_t ngx_http_slice_range_variable(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data);
+static void *ngx_http_slice_create_loc_conf(ngx_conf_t *cf);
+static char *ngx_http_slice_merge_loc_conf(ngx_conf_t *cf, void *parent,
+    void *child);
+static ngx_int_t
ngx_http_slice_add_variables(ngx_conf_t *cf); +static ngx_int_t ngx_http_slice_init(ngx_conf_t *cf); + + +static ngx_command_t ngx_http_slice_filter_commands[] = { + + { ngx_string("slice"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_size_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_slice_loc_conf_t, size), + NULL }, + + ngx_null_command +}; + + +static ngx_http_module_t ngx_http_slice_filter_module_ctx = { + ngx_http_slice_add_variables, /* preconfiguration */ + ngx_http_slice_init, /* postconfiguration */ + + NULL, /* create main configuration */ + NULL, /* init main configuration */ + + NULL, /* create server configuration */ + NULL, /* merge server configuration */ + + ngx_http_slice_create_loc_conf, /* create location configuration */ + ngx_http_slice_merge_loc_conf /* merge location configuration */ +}; + + +ngx_module_t ngx_http_slice_filter_module = { + NGX_MODULE_V1, + &ngx_http_slice_filter_module_ctx, /* module context */ + ngx_http_slice_filter_commands, /* module directives */ + NGX_HTTP_MODULE, /* module type */ + NULL, /* init master */ + NULL, /* init module */ + NULL, /* init process */ + NULL, /* init thread */ + NULL, /* exit thread */ + NULL, /* exit process */ + NULL, /* exit master */ + NGX_MODULE_V1_PADDING +}; + + +static ngx_str_t ngx_http_slice_range_name = ngx_string("slice_range"); + +static ngx_http_output_header_filter_pt ngx_http_next_header_filter; +static ngx_http_output_body_filter_pt ngx_http_next_body_filter; + + +static ngx_int_t +ngx_http_slice_handler(ngx_http_request_t *r) +{ + off_t start, end; + u_char *p; + ngx_http_slice_ctx_t *ctx; + ngx_http_slice_loc_conf_t *slcf; + + slcf = ngx_http_get_module_loc_conf(r, ngx_http_slice_filter_module); + if (slcf->size == 0) { + return NGX_DECLINED; + } + + ctx = ngx_http_get_module_ctx(r, ngx_http_slice_filter_module); + if (ctx) { + return NGX_DECLINED; + } + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http 
slice handler"); + + ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_slice_ctx_t)); + if (ctx == NULL) { + return NGX_ERROR; + } + + ngx_http_set_ctx(r, ctx, ngx_http_slice_filter_module); + + if (ngx_http_slice_parse_request_range(r, ctx) != NGX_OK) { + ctx->bad_range = 1; + } + + if (ctx->start > 0) { + start = slcf->size * (ctx->start / slcf->size); + + } else { + start = 0; + } + + end = start + slcf->size - 1; + + p = ngx_pnalloc(r->pool, sizeof("bytes=-") - 1 + 2 * NGX_OFF_T_LEN); + if (p == NULL) { + return NGX_ERROR; + } + + ctx->range.data = p; + ctx->range.len = ngx_sprintf(p, "bytes=%O-%O", start, end) - p; + + return NGX_DECLINED; +} + + +static ngx_int_t +ngx_http_slice_parse_request_range(ngx_http_request_t *r, + ngx_http_slice_ctx_t *ctx) +{ + off_t start, end, cutoff, cutlim; + u_char *p; + ngx_uint_t suffix; + + if (r->method == NGX_HTTP_HEAD) { + return NGX_OK; + } + + if (r->headers_in.range == NULL + || r->headers_in.range->value.len < 7 + || ngx_strncasecmp(r->headers_in.range->value.data, + (u_char *) "bytes=", 6) + != 0) + { + ctx->end = -1; + return NGX_OK; + } + + p = r->headers_in.range->value.data + 6; + + cutoff = NGX_MAX_OFF_T_VALUE / 10; + cutlim = NGX_MAX_OFF_T_VALUE % 10; + + start = 0; + end = 0; + suffix = 0; + + while (*p == ' ') { p++; } + + if (*p != '-') { + if (*p < '0' || *p > '9') { + return NGX_ERROR; + } + + while (*p >= '0' && *p <= '9') { + if (start >= cutoff && (start > cutoff || *p - '0' > cutlim)) { + return NGX_ERROR; + } + + start = start * 10 + *p++ - '0'; + } + + while (*p == ' ') { p++; } + + if (*p++ != '-') { + return NGX_ERROR; + } + + while (*p == ' ') { p++; } + + if (*p == ',') { + ctx->end = -1; + return NGX_OK; + } + + if (*p == '\0') { + ctx->start = start; + ctx->end = -1; + return NGX_OK; + } + + } else { + suffix = 1; + p++; + } + + if (*p < '0' || *p > '9') { + return NGX_ERROR; + } + + while (*p >= '0' && *p <= '9') { + if (end >= cutoff && (end > cutoff || *p - '0' > cutlim)) { + return NGX_ERROR; + } 
+ + end = end * 10 + *p++ - '0'; + } + + while (*p == ' ') { p++; } + + if (*p != ',' && *p != '\0') { + return NGX_ERROR; + } + + if (suffix) { + ctx->start = -end; + ctx->end = -1; + return NGX_OK; + } + + if (*p == ',') { + ctx->end = -1; + return NGX_OK; + } + + ctx->start = start; + ctx->end = end + 1; + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_slice_header_filter(ngx_http_request_t *r) +{ + ngx_table_elt_t *h; + ngx_http_slice_ctx_t *ctx; + + ctx = ngx_http_get_module_ctx(r, ngx_http_slice_filter_module); + if (ctx == NULL) { + return ngx_http_next_header_filter(r); + } + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http slice header filter"); + + if (r->headers_out.status != NGX_HTTP_OK + && r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) + { + if (r != r->main) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "bad status code %ui in slice response", + r->headers_out.status); + return NGX_ERROR; + } + + return ngx_http_next_header_filter(r); + } + + if (ngx_http_slice_parse_response_range(r, ctx) != NGX_OK) { + if (r != r->main) { + return NGX_ERROR; + } + + return ngx_http_filter_finalize_request(r, NULL, NGX_HTTP_BAD_GATEWAY); + } + + if (r != r->main) { + if (ctx->etag.len) { + h = r->headers_out.etag; + + if (h == NULL + || h->value.len != ctx->etag.len + || ngx_strncmp(h->value.data, ctx->etag.data, ctx->etag.len) + != 0) + { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "bad etag in slice response"); + return NGX_ERROR; + } + } + + return ngx_http_next_header_filter(r); + } + + if (ctx->start < 0) { + ctx->start += ctx->content_length; + if (ctx->start < 0) { + ctx->start = 0; + } + } + + if (ctx->end == -1 || ctx->end > ctx->content_length) { + ctx->end = ctx->content_length; + } + + if (ctx->start >= ctx->end) { + ctx->bad_range = 1; + } + + if (ctx->bad_range) { + h = ngx_list_push(&r->headers_out.headers); + if (h == NULL) { + return NGX_ERROR; + } + + h->hash = 1; + ngx_str_set(&h->key, 
"Content-Range"); + + h->value.data = ngx_pnalloc(r->pool, + sizeof("bytes */") - 1 + NGX_OFF_T_LEN); + if (h->value.data == NULL) { + return NGX_ERROR; + } + + h->value.len = ngx_sprintf(h->value.data, "bytes */%O", + ctx->content_length) + - h->value.data; + + ngx_http_clear_content_length(r); + + r->headers_out.content_range = h; + r->headers_out.status = NGX_HTTP_RANGE_NOT_SATISFIABLE; + + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + + if (r->headers_in.range == NULL) { + ngx_http_clear_content_length(r); + + r->headers_out.content_length_n = ctx->content_length; + r->headers_out.status = NGX_HTTP_OK; + r->headers_out.status_line.len = 0; + + } else { + h = ngx_list_push(&r->headers_out.headers); + if (h == NULL) { + return NGX_ERROR; + } + + h->hash = 1; + ngx_str_set(&h->key, "Content-Range"); + + h->value.data = ngx_pnalloc(r->pool, + sizeof("bytes -/") - 1 + 3 * NGX_OFF_T_LEN); + if (h->value.data == NULL) { + return NGX_ERROR; + } + + h->value.len = ngx_sprintf(h->value.data, "bytes %O-%O/%O", + ctx->start, ctx->end - 1, + ctx->content_length) + - h->value.data; + + ngx_http_clear_content_length(r); + + r->headers_out.content_range = h; + r->headers_out.content_length_n = ctx->end - ctx->start; + r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; + r->headers_out.status_line.len = 0; + } + + return ngx_http_next_header_filter(r); +} + + +static ngx_int_t +ngx_http_slice_parse_response_range(ngx_http_request_t *r, + ngx_http_slice_ctx_t *ctx) +{ + off_t start, end, content_length, cutoff, cutlim; + u_char *p; + ngx_table_elt_t *h; + + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { + h = r->headers_out.content_range; + if (h == NULL) { + return NGX_ERROR; + } + + h->hash = 0; + r->headers_out.content_range = NULL; + + cutoff = NGX_MAX_OFF_T_VALUE / 10; + cutlim = NGX_MAX_OFF_T_VALUE % 10; + + start = 0; + end = 0; + content_length = 0; + + if (h->value.len < 7 + || ngx_strncmp(h->value.data, "bytes ", 6) != 0) + { + return NGX_ERROR; + } + + p = 
h->value.data + 6; + + while (*p == ' ') { p++; } + + if (*p < '0' || *p > '9') { + return NGX_ERROR; + } + + while (*p >= '0' && *p <= '9') { + if (start >= cutoff && (start > cutoff || *p - '0' > cutlim)) { + return NGX_ERROR; + } + + start = start * 10 + *p++ - '0'; + } + + while (*p == ' ') { p++; } + + if (*p++ != '-') { + return NGX_ERROR; + } + + while (*p == ' ') { p++; } + + if (*p < '0' || *p > '9') { + return NGX_ERROR; + } + + while (*p >= '0' && *p <= '9') { + if (end >= cutoff && (end > cutoff || *p - '0' > cutlim)) { + return NGX_ERROR; + } + + end = end * 10 + *p++ - '0'; + } + + end++; + + while (*p == ' ') { p++; } + + if (*p++ != '/') { + return NGX_ERROR; + } + + while (*p == ' ') { p++; } + + if (*p == '*') { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "no complete length in slice response"); + return NGX_ERROR; + } + + if (*p < '0' || *p > '9') { + return NGX_ERROR; + } + + while (*p >= '0' && *p <= '9') { + if (content_length >= cutoff + && (content_length > cutoff || *p - '0' > cutlim)) + { + return NGX_ERROR; + } + + content_length = content_length * 10 + *p++ - '0'; + } + + while (*p == ' ') { p++; } + + if (*p != '\0') { + return NGX_ERROR; + } + + } else { /* r->headers_out.status == NGX_HTTP_OK */ + + content_length = r->headers_out.content_length_n; + + if (content_length == -1) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "no content length in slice response"); + return NGX_ERROR; + } + + start = 0; + end = content_length; + } + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http slice range: %O-%O/%O", + start, end, content_length); + + /* make sure we received at least one byte from the range */ + + if (ctx->start >= 0 + && !(ctx->start >= start && ctx->start < end)) + { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "bad range in slice response: %O-%O", start, end); + return NGX_ERROR; + } + + ctx->offset = start; + ctx->content_length = content_length; + + return NGX_OK; +} + + +static 
ngx_int_t +ngx_http_slice_body_filter(ngx_http_request_t *r, ngx_chain_t *in) +{ + off_t start, end; + u_char *p; + ngx_buf_t *b; + ngx_int_t rc; + ngx_chain_t *out, *cl, **ll; + ngx_http_request_t *sr; + ngx_http_slice_ctx_t *ctx, *sctx, *pctx; + ngx_http_slice_loc_conf_t *slcf; + + ctx = ngx_http_get_module_ctx(r, ngx_http_slice_filter_module); + if (ctx == NULL) { + return ngx_http_next_body_filter(r, in); + } + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http slice body filter"); + + if (r->headers_out.status != NGX_HTTP_OK + && r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) + { + return ngx_http_next_body_filter(r, in); + } + + out = NULL; + ll = &out; + + for (cl = in; cl; cl = cl->next) { + + b = cl->buf; + + start = ctx->offset; + end = ctx->offset + ngx_buf_size(b); + + ctx->offset = end; + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http slice body buf: %O-%O, l:%d", + start, end, (int) b->last_buf); + + if (b->last_buf) { + b->last_buf = 0; + b->sync = 1; + ctx->last = 1; + } + + if (ngx_buf_special(b)) { + *ll = cl; + ll = &cl->next; + continue; + } + + if (ctx->end <= start || ctx->start >= end) { + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http slice body skip"); + + if (b->in_file) { + b->file_pos = b->file_last; + } + + b->pos = b->last; + b->sync = 1; + + continue; + } + + if (ctx->start > start) { + + if (b->in_file) { + b->file_pos += ctx->start - start; + } + + if (ngx_buf_in_memory(b)) { + b->pos += (size_t) (ctx->start - start); + } + } + + if (ctx->end <= end) { + + if (b->in_file) { + b->file_last -= end - ctx->end; + } + + if (ngx_buf_in_memory(b)) { + b->last -= (size_t) (end - ctx->end); + } + + b->last_buf = 1; + *ll = cl; + cl->next = NULL; + + break; + } + + *ll = cl; + ll = &cl->next; + } + + rc = ngx_http_next_body_filter(r, out); + if (rc == NGX_ERROR) { + return NGX_ERROR; + } + + if (r != r->main && out) { + pctx = ngx_http_get_module_ctx(r->main, 
ngx_http_slice_filter_module); + if (pctx) { + pctx->offset = ctx->offset; + } + } + + if (r != r->main + || !ctx->last + || ctx->offset >= ctx->end) + { + return rc; + } + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http slice next offset:%O, start:%O, end:%O", + ctx->offset, ctx->start, ctx->end); + + if (ngx_http_subrequest(r, &r->uri, &r->args, &sr, NULL, 0) != NGX_OK) { + return NGX_ERROR; + } + + sctx = ngx_pcalloc(r->pool, sizeof(ngx_http_slice_ctx_t)); + if (sctx == NULL) { + return NGX_ERROR; + } + + ngx_http_set_ctx(sr, sctx, ngx_http_slice_filter_module); + + sctx->start = ngx_max(ctx->start, ctx->offset); + sctx->end = ctx->end; + + if (r->headers_out.etag) { + sctx->etag = r->headers_out.etag->value; + } + + slcf = ngx_http_get_module_loc_conf(r, ngx_http_slice_filter_module); + + start = slcf->size * (sctx->start / slcf->size); + end = start + slcf->size - 1; + + p = ngx_pnalloc(r->pool, sizeof("bytes=-") - 1 + 2 * NGX_OFF_T_LEN); + if (p == NULL) { + return NGX_ERROR; + } + + sctx->range.data = p; + sctx->range.len = ngx_sprintf(p, "bytes=%O-%O", start, end) - p; + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http slice subrequest range: \"%V\"", &sctx->range); + + return rc; +} + + +static ngx_int_t +ngx_http_slice_range_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_http_slice_ctx_t *ctx; + + ctx = ngx_http_get_module_ctx(r, ngx_http_slice_filter_module); + if (ctx == NULL) { + v->not_found = 1; + return NGX_OK; + } + + v->data= ctx->range.data; + v->valid = 1; + v->not_found = 0; + v->no_cacheable = 1; + v->len = ctx->range.len; + + return NGX_OK; +} + + +static void * +ngx_http_slice_create_loc_conf(ngx_conf_t *cf) +{ + ngx_http_slice_loc_conf_t *slcf; + + slcf = ngx_pcalloc(cf->pool, sizeof(ngx_http_slice_loc_conf_t)); + if (slcf == NULL) { + return NULL; + } + + slcf->size = NGX_CONF_UNSET_SIZE; + + return slcf; +} + + +static char * 
+ngx_http_slice_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
+{
+    ngx_http_slice_loc_conf_t *prev = parent;
+    ngx_http_slice_loc_conf_t *conf = child;
+
+    ngx_conf_merge_size_value(conf->size, prev->size, 0);
+
+    return NGX_CONF_OK;
+}
+
+
+static ngx_int_t
+ngx_http_slice_add_variables(ngx_conf_t *cf)
+{
+    ngx_http_variable_t  *var;
+
+    var = ngx_http_add_variable(cf, &ngx_http_slice_range_name, 0);
+    if (var == NULL) {
+        return NGX_ERROR;
+    }
+
+    var->get_handler = ngx_http_slice_range_variable;
+
+    return NGX_OK;
+}
+
+
+static ngx_int_t
+ngx_http_slice_init(ngx_conf_t *cf)
+{
+    ngx_http_handler_pt        *h;
+    ngx_http_core_main_conf_t  *cmcf;
+
+    cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);
+
+    h = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);
+    if (h == NULL) {
+        return NGX_ERROR;
+    }
+
+    *h = ngx_http_slice_handler;
+
+    ngx_http_next_header_filter = ngx_http_top_header_filter;
+    ngx_http_top_header_filter = ngx_http_slice_header_filter;
+
+    ngx_http_next_body_filter = ngx_http_top_body_filter;
+    ngx_http_top_body_filter = ngx_http_slice_body_filter;
+
+    return NGX_OK;
+}
diff -r 0f313cf0a1ee src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Tue Sep 22 17:36:22 2015 +0300
+++ b/src/http/ngx_http_upstream.c	Mon Sep 28 17:07:15 2015 +0300
@@ -291,6 +291,11 @@ ngx_http_upstream_header_t  ngx_http_ups
                  ngx_http_upstream_process_transfer_encoding, 0,
                  ngx_http_upstream_ignore_header_line, 0, 0 },
 
+    { ngx_string("Content-Range"),
+                 ngx_http_upstream_ignore_header_line, 0,
+                 ngx_http_upstream_copy_header_line,
+                 offsetof(ngx_http_headers_out_t, content_range), 0 },
+
 #if (NGX_HTTP_GZIP)
     { ngx_string("Content-Encoding"),
                  ngx_http_upstream_process_header_line,

From mdounin at mdounin.ru  Wed Sep 30 16:58:38 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 30 Sep 2015 16:58:38 +0000
Subject: [nginx] Extract out version info function.
Message-ID: details: http://hg.nginx.org/nginx/rev/5eb4d7541107 branches: changeset: 6257:5eb4d7541107 user: Kurtis Nusbaum date: Sun Jul 12 08:31:38 2015 -0700 description: Extract out version info function. The code for displaying version info and configuration info seemed to be cluttering up the main function. I was finding it hard to read main. This extracts out all of the logic for displaying version and configuration info into its own function, thus making main easier to read. diffstat: src/core/nginx.c | 123 +++++++++++++++++++++++++++++------------------------- 1 files changed, 65 insertions(+), 58 deletions(-) diffs (147 lines): diff --git a/src/core/nginx.c b/src/core/nginx.c --- a/src/core/nginx.c +++ b/src/core/nginx.c @@ -10,6 +10,7 @@ #include +static void ngx_show_version_info(); static ngx_int_t ngx_add_inherited_sockets(ngx_cycle_t *cycle); static ngx_int_t ngx_get_options(int argc, char *const *argv); static ngx_int_t ngx_process_options(ngx_cycle_t *cycle); @@ -194,64 +195,7 @@ main(int argc, char *const *argv) } if (ngx_show_version) { - ngx_write_stderr("nginx version: " NGINX_VER_BUILD NGX_LINEFEED); - - if (ngx_show_help) { - ngx_write_stderr( - "Usage: nginx [-?hvVtTq] [-s signal] [-c filename] " - "[-p prefix] [-g directives]" NGX_LINEFEED - NGX_LINEFEED - "Options:" NGX_LINEFEED - " -?,-h : this help" NGX_LINEFEED - " -v : show version and exit" NGX_LINEFEED - " -V : show version and configure options then exit" - NGX_LINEFEED - " -t : test configuration and exit" NGX_LINEFEED - " -T : test configuration, dump it and exit" - NGX_LINEFEED - " -q : suppress non-error messages " - "during configuration testing" NGX_LINEFEED - " -s signal : send signal to a master process: " - "stop, quit, reopen, reload" NGX_LINEFEED -#ifdef NGX_PREFIX - " -p prefix : set prefix path (default: " - NGX_PREFIX ")" NGX_LINEFEED -#else - " -p prefix : set prefix path (default: NONE)" NGX_LINEFEED -#endif - " -c filename : set configuration file (default: " - 
NGX_CONF_PATH ")" NGX_LINEFEED - " -g directives : set global directives out of configuration " - "file" NGX_LINEFEED NGX_LINEFEED - ); - } - - if (ngx_show_configure) { - -#ifdef NGX_COMPILER - ngx_write_stderr("built by " NGX_COMPILER NGX_LINEFEED); -#endif - -#if (NGX_SSL) - if (SSLeay() == SSLEAY_VERSION_NUMBER) { - ngx_write_stderr("built with " OPENSSL_VERSION_TEXT - NGX_LINEFEED); - } else { - ngx_write_stderr("built with " OPENSSL_VERSION_TEXT - " (running with "); - ngx_write_stderr((char *) (uintptr_t) - SSLeay_version(SSLEAY_VERSION)); - ngx_write_stderr(")" NGX_LINEFEED); - } -#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME - ngx_write_stderr("TLS SNI support enabled" NGX_LINEFEED); -#else - ngx_write_stderr("TLS SNI support disabled" NGX_LINEFEED); -#endif -#endif - - ngx_write_stderr("configure arguments:" NGX_CONFIGURE NGX_LINEFEED); - } + ngx_show_version_info(); if (!ngx_test_config) { return 0; @@ -419,6 +363,69 @@ main(int argc, char *const *argv) } +static void +ngx_show_version_info() +{ + ngx_write_stderr("nginx version: " NGINX_VER_BUILD NGX_LINEFEED); + + if (ngx_show_help) { + ngx_write_stderr( + "Usage: nginx [-?hvVtTq] [-s signal] [-c filename] " + "[-p prefix] [-g directives]" NGX_LINEFEED + NGX_LINEFEED + "Options:" NGX_LINEFEED + " -?,-h : this help" NGX_LINEFEED + " -v : show version and exit" NGX_LINEFEED + " -V : show version and configure options then exit" + NGX_LINEFEED + " -t : test configuration and exit" NGX_LINEFEED + " -T : test configuration, dump it and exit" + NGX_LINEFEED + " -q : suppress non-error messages " + "during configuration testing" NGX_LINEFEED + " -s signal : send signal to a master process: " + "stop, quit, reopen, reload" NGX_LINEFEED +#ifdef NGX_PREFIX + " -p prefix : set prefix path (default: " NGX_PREFIX ")" + NGX_LINEFEED +#else + " -p prefix : set prefix path (default: NONE)" NGX_LINEFEED +#endif + " -c filename : set configuration file (default: " NGX_CONF_PATH + ")" NGX_LINEFEED + " -g directives : set global 
directives out of configuration " + "file" NGX_LINEFEED NGX_LINEFEED + ); + } + + if (ngx_show_configure) { + +#ifdef NGX_COMPILER + ngx_write_stderr("built by " NGX_COMPILER NGX_LINEFEED); +#endif + +#if (NGX_SSL) + if (SSLeay() == SSLEAY_VERSION_NUMBER) { + ngx_write_stderr("built with " OPENSSL_VERSION_TEXT NGX_LINEFEED); + } else { + ngx_write_stderr("built with " OPENSSL_VERSION_TEXT + " (running with "); + ngx_write_stderr((char *) (uintptr_t) + SSLeay_version(SSLEAY_VERSION)); + ngx_write_stderr(")" NGX_LINEFEED); + } +#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME + ngx_write_stderr("TLS SNI support enabled" NGX_LINEFEED); +#else + ngx_write_stderr("TLS SNI support disabled" NGX_LINEFEED); +#endif +#endif + + ngx_write_stderr("configure arguments:" NGX_CONFIGURE NGX_LINEFEED); + } +} + + static ngx_int_t ngx_add_inherited_sockets(ngx_cycle_t *cycle) { From mdounin at mdounin.ru Wed Sep 30 17:02:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Sep 2015 20:02:03 +0300 Subject: [PATCH] Extract out version info function In-Reply-To: <1d5846fa79e64a0920c1.1443327397@Bender.local> References: <1d5846fa79e64a0920c1.1443327397@Bender.local> Message-ID: <20150930170203.GC30105@mdounin.ru> Hello! On Sat, Sep 26, 2015 at 09:16:37PM -0700, Kurtis Nusbaum wrote: > # HG changeset patch > # User Kurtis Nusbaum > # Date 1436715098 25200 > # Sun Jul 12 08:31:38 2015 -0700 > # Node ID 1d5846fa79e64a0920c189a266849bf23b6cab63 > # Parent b40af2fd1c1665bc79bd6c50233dd6c834f60b6b > Extract out version info function. > > The code for displaying version info and configuration info seemed to be > cluttering up the main function. I was finding it hard to read main. This > extracts out all of the logic for displaying version and configuration info > into its own function, thus making main easier to read. [...] Committed with a minor style change, thanks. -- Maxim Dounin http://nginx.org/