From mdounin at mdounin.ru Mon Jul 1 11:36:29 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Jul 2013 15:36:29 +0400
Subject: Help with shared memory usage
In-Reply-To:
References:
Message-ID: <20130701113629.GO20717@mdounin.ru>

Hello!

On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote:

> Hi,
>
> I'm trying to understand how the shared memory pool works inside nginx.
> To do that, I made a very small module which creates a shared memory zone
> of 2097152 bytes, then allocates and frees blocks of memory, starting from 0
> and increasing by 1kb until the allocation fails.
>
> The strange parts to me were:
> - the maximum block I could allocate was 128000 bytes
> - each time the allocation failed, I started again from 0, but the maximum
> allocated block changed with the following profile
> 128000
> 87040
> 70656
> 62464
> 58368
> 54272
> 50176
> 46080
> 41984
> 37888
> 33792
> 29696
>
> Is this the expected behavior?
> Can anyone help me by explaining how shared memory works?
> I have another module which does intensive shared memory usage, and
> understanding this can help me improve it, solving some "no memory" messages.
>
> I put the code in the attachment.

I've looked into this, and the behaviour is expected as per the nginx slab
allocator code and the way you do allocations in your test.

Increasing allocations of large blocks, each immediately followed by freeing
it, result in free memory blocks being split into smaller blocks, eventually
leaving at most page-size allocations possible. Take a look at
ngx_slab_alloc_pages() and ngx_slab_free_pages() for details.

Note that the slab allocator nginx uses for allocations in shared memory is
designed mostly for small allocations. It works well for allocations smaller
than a page, but support for large allocations is very simple. It should
probably be improved, but as of now nothing in nginx uses large allocations
in shared memory.
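The probing loop described in the question can be sketched against nginx's slab API. This is a non-runnable sketch, not the poster's attached module: it assumes "shpool" already points at an initialized ngx_slab_pool_t, e.g. ((ngx_slab_pool_t *) zone->shm.addr) from the module's shared zone.

```c
/* Sketch: probe the largest block the slab pool will currently hand out,
 * growing in 1kb steps and freeing each block immediately, as in the test. */
static size_t
probe_max_alloc(ngx_slab_pool_t *shpool)
{
    size_t  size;
    void   *p;

    for (size = 1024; ; size += 1024) {
        p = ngx_slab_alloc(shpool, size);

        if (p == NULL) {
            /* last size that succeeded */
            return size - 1024;
        }

        ngx_slab_free(shpool, p);
    }
}
```

Each multi-page allocation and free in such a loop can leave the pool's page runs split into smaller runs, which is why repeated runs of the probe report the progressively smaller maxima listed above.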
-- Maxim Dounin http://nginx.org/en/donation.html From mat999 at gmail.com Mon Jul 1 12:43:11 2013 From: mat999 at gmail.com (SplitIce) Date: Mon, 1 Jul 2013 22:43:11 +1000 Subject: ngx_http_limit_conn_module feature request Message-ID: Would it be possible to get a feature added to this module? What I would like is a variable containing the name of the zone of the rule that is responsible for the 503 error. Would be great for where there are limits on many factors. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cubicdaiya at gmail.com Mon Jul 1 17:00:40 2013 From: cubicdaiya at gmail.com (cubicdaiya) Date: Tue, 2 Jul 2013 02:00:40 +0900 Subject: [PATCH]Valgind: a complaint about uninitialized bytes in epoll_data_t Message-ID: Hello! In Debian squeeze 32bit, Valgrind outputs the following message to nginx. ==17124== Syscall param epoll_ctl(event) points to uninitialised byte(s) ==17124== at 0x418F9CE: epoll_ctl (syscall-template.S:82) ==17124== by 0x805FB35: ngx_event_process_init (ngx_event.c:853) ==17124== by 0x8065A5B: ngx_worker_process_init (ngx_process_cycle.c:973) ==17124== by 0x8065EB8: ngx_worker_process_cycle (ngx_process_cycle.c:740) ==17124== by 0x8064815: ngx_spawn_process (ngx_process.c:198) ==17124== by 0x8065442: ngx_start_worker_processes (ngx_process_cycle.c:364) ==17124== by 0x80664A6: ngx_master_process_cycle (ngx_process_cycle.c:136) ==17124== by 0x804BA45: main (nginx.c:412) ==17124== Address 0xbe995f6c is on thread 1's stack ==17124== ==17124== Syscall param epoll_ctl(event) points to uninitialised byte(s) ==17124== at 0x418F9CE: epoll_ctl (syscall-template.S:82) ==17124== by 0x8063810: ngx_add_channel_event (ngx_channel.c:240) ==17124== by 0x8065B5D: ngx_worker_process_init (ngx_process_cycle.c:1009) ==17124== by 0x8065EB8: ngx_worker_process_cycle (ngx_process_cycle.c:740) ==17124== by 0x8064815: ngx_spawn_process (ngx_process.c:198) ==17124== by 0x8065442: ngx_start_worker_processes 
(ngx_process_cycle.c:364) ==17124== by 0x80664A6: ngx_master_process_cycle (ngx_process_cycle.c:136) ==17124== by 0x804BA45: main (nginx.c:412) ==17124== Address 0xbe99601c is on thread 1's stack The following patch eliminates this warning. Could you take a look at it? # HG changeset patch # User Tatsuhiko Kubo # Date 1372689447 -32400 # Node ID cd8fd5cd74294554bb3777821e8703cf0fdf61d7 # Parent b66ec10e901a6fa0fc19937ceeb52b5ea1fbb706 Valgrind: the complaint about uninitialized bytes in epoll_data_t. diff -r b66ec10e901a -r cd8fd5cd7429 src/event/modules/ngx_epoll_module.c --- a/src/event/modules/ngx_epoll_module.c Fri Jun 28 13:55:05 2013 +0400 +++ b/src/event/modules/ngx_epoll_module.c Mon Jul 01 23:37:27 2013 +0900 @@ -417,6 +417,9 @@ } ee.events = events | (uint32_t) flags; + + ngx_memzero(&ee.data, sizeof(epoll_data_t)); + ee.data.ptr = (void *) ((uintptr_t) c | ev->instance); ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, # environment ## uname -a Linux squeeze32 2.6.32-5-686 #1 SMP Fri May 10 08:33:48 UTC 2013 i686 GNU/Linux ## nginx -V nginx version: nginx/1.5.2 built by gcc 4.4.5 (Debian 4.4.5-8) configure arguments: --with-pcre -- Tatsuhiko Kubo E-Mail : cubicdaiya at gmail.com HP : http://cccis.jp/index_en.html Twitter : http://twitter.com/cubicdaiya -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: uninitialized-bytes-fix.patch Type: application/octet-stream Size: 725 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Jul 1 18:07:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jul 2013 22:07:51 +0400 Subject: [PATCH]Valgind: a complaint about uninitialized bytes in epoll_data_t In-Reply-To: References: Message-ID: <20130701180751.GW20717@mdounin.ru> Hello! On Tue, Jul 02, 2013 at 02:00:40AM +0900, cubicdaiya wrote: > Hello! > > In Debian squeeze 32bit, > Valgrind outputs the following message to nginx. 
>
> ==17124== Syscall param epoll_ctl(event) points to uninitialised byte(s)
> ==17124== at 0x418F9CE: epoll_ctl (syscall-template.S:82)
> ==17124== by 0x805FB35: ngx_event_process_init (ngx_event.c:853)

[...]

> The following patch eliminates this warning. Could you take a look at it?
>
> # HG changeset patch
> # User Tatsuhiko Kubo
> # Date 1372689447 -32400
> # Node ID cd8fd5cd74294554bb3777821e8703cf0fdf61d7
> # Parent b66ec10e901a6fa0fc19937ceeb52b5ea1fbb706
> Valgrind: the complaint about uninitialized bytes in epoll_data_t.
>
> diff -r b66ec10e901a -r cd8fd5cd7429 src/event/modules/ngx_epoll_module.c
> --- a/src/event/modules/ngx_epoll_module.c Fri Jun 28 13:55:05 2013 +0400
> +++ b/src/event/modules/ngx_epoll_module.c Mon Jul 01 23:37:27 2013 +0900
> @@ -417,6 +417,9 @@
> }
>
> ee.events = events | (uint32_t) flags;
> +
> + ngx_memzero(&ee.data, sizeof(epoll_data_t));
> +
> ee.data.ptr = (void *) ((uintptr_t) c | ev->instance);
>
> ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0,

I can't say I like the patch. Calls to epoll_ctl() are on a hot path, and
doing an unneeded memzero to please Valgrind on some archs doesn't look
like a good idea. Maybe put it under #if (NGX_VALGRIND) (and separately
from the normal assignments to the ee structure; also not sure if we need
memzero here, probably ee.data.u64 = 0 would be better).

Note well that the same coding pattern is used in many places across the
epoll module, and changing only one place doesn't make sense.
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Jul 2 13:29:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 02 Jul 2013 13:29:16 +0000 Subject: [nginx] nginx-1.5.2-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/5bdca4812974 branches: changeset: 5257:5bdca4812974 user: Maxim Dounin date: Tue Jul 02 16:28:50 2013 +0400 description: nginx-1.5.2-RELEASE diffstat: docs/xml/nginx/changes.xml | 49 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 49 insertions(+), 0 deletions(-) diffs (59 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,55 @@ + + + + +?????? ????? ???????????? ????????? ???????? error_log. + + +now several "error_log" directives can be used. + + + + + +????? $r->header_in() ??????????? ????? ?? ????????? ???????? ????? +"Cookie" ? "X-Forwarded-For" ?? ????????? ???????; +?????? ????????? ? 1.3.14. + + +the $r->header_in() embedded perl method did not return value of the +"Cookie" and "X-Forwarded-For" request header lines; +the bug had appeared in 1.3.14. + + + + + +? ?????? ngx_http_spdy_module.
+??????? Jim Radford. +
+ +in the ngx_http_spdy_module.
+Thanks to Jim Radford. +
+
+ + + +nginx ?? ????????? ?? Linux ??? ????????????? x32 ABI.
+??????? ?????? ????????. +
+ +nginx could not be built on Linux with x32 ABI.
+Thanks to Serguei Ivantsov. +
+
+ +
+ + From mdounin at mdounin.ru Tue Jul 2 13:29:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 02 Jul 2013 13:29:17 +0000 Subject: [nginx] release-1.5.2 tag Message-ID: details: http://hg.nginx.org/nginx/rev/cdad9e47864f branches: changeset: 5258:cdad9e47864f user: Maxim Dounin date: Tue Jul 02 16:28:51 2013 +0400 description: release-1.5.2 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -357,3 +357,4 @@ 23159600bdea695db8f9d2890aaf73424303e49c 7809529022b83157067e7d1e2fb65d57db5f4d99 release-1.4.0 48a84bc3ff074a65a63e353b9796ff2b14239699 release-1.5.0 99eed1a88fc33f32d66e2ec913874dfef3e12fcc release-1.5.1 +5bdca4812974011731e5719a6c398b54f14a6d61 release-1.5.2 From jefftk at google.com Tue Jul 2 14:35:11 2013 From: jefftk at google.com (Jeff Kaufman) Date: Tue, 2 Jul 2013 10:35:11 -0400 Subject: Removing response headers Message-ID: In a header filter, to remove a filter that's already been set, I see two options: 1. set the header's hash to 0 2. actually delete the header from r->headers_out The second is much more complex and requires allocating memory (see ngx_http_headers_more_rm_header_helper in https://github.com/agentzh/headers-more-nginx-module/blob/master/src/ngx_http_headers_more_util.c#L294) so I'd rather use the first, but is there a reason to prefer the second? Jeff Kaufman ngx_pagespeed From mdounin at mdounin.ru Tue Jul 2 15:32:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jul 2013 19:32:25 +0400 Subject: Removing response headers In-Reply-To: References: Message-ID: <20130702153225.GJ20717@mdounin.ru> Hello! On Tue, Jul 02, 2013 at 10:35:11AM -0400, Jeff Kaufman wrote: > In a header filter, to remove a filter that's already been set, I see > two options: > > 1. set the header's hash to 0 > 2. 
actually delete the header from r->headers_out > > The second is much more complex and requires allocating memory (see > ngx_http_headers_more_rm_header_helper in > https://github.com/agentzh/headers-more-nginx-module/blob/master/src/ngx_http_headers_more_util.c#L294) > so I'd rather use the first, but is there a reason to prefer the > second? The headers more module uses this rather complicated code as it tries to modify input headers, which are not expected to be modified. As long as you are changing r->headers_out, it's ok to just set hash to 0 (note though that special headers like Location or Content-Length will also require various special handling). -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Jul 2 22:03:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 02 Jul 2013 22:03:44 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/0c699e1d1071 branches: changeset: 5259:0c699e1d1071 user: Maxim Dounin date: Tue Jul 02 20:05:49 2013 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1005002 -#define NGINX_VERSION "1.5.2" +#define nginx_version 1005003 +#define NGINX_VERSION "1.5.3" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From dakota at brokenpipe.ru Wed Jul 3 00:34:26 2013 From: dakota at brokenpipe.ru (Marat Dakota) Date: Wed, 3 Jul 2013 04:34:26 +0400 Subject: How to abort subrequest properly? In-Reply-To: References: Message-ID: Nobody knows? On Sun, Jun 30, 2013 at 3:04 PM, Marat Dakota wrote: > Hi, > > I am parsing a subrequest's body as it arrives. At some point I could > decide that the subrequest's body is not well-formed. 
I want to stop > receiving the rest of the subrequest's body and close its connection. My > main request and all other subrequests should continue working. > > I've tried something like ngx_http_finalize_request(sr, NGX_ABORT). It > looks like it's not the thing. > > What steps should be applied to abort a subrequest? > > Thanks. > > -- > Marat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From huanh at me.com Wed Jul 3 02:03:00 2013 From: huanh at me.com (Huan Nguyen) Date: Wed, 03 Jul 2013 10:03:00 +0800 Subject: Progressive flushing fastcgi cache early References: <2DD76FCE-9619-475C-9409-FE780CC18317@me.com> Message-ID: <4F95803E-BC4E-42A4-97F7-AB2DD26FD2A6@me.com> Hi all, I'm try to improve our webpage start render time by flushing the cache buffer early. Our stack is Nginx, PHP-FPM. But nginx + fastcgi module doesn't allow me to flush as I wanted. It seems that nginx will keep all caches and flush all at once. I saw a few solutions whereas to set max tmp file size to 0 and smaller buffer size (1K for example) which I'm afraid it is not suitable for our case. It seems not reliable for our high scalable site. This and this nginx post articulate the same problem. I really appreciate if you can help explain a little bit on how fastcgi, nginx cache and php's cache work together. and if possible a hint on how to overcome this. I'm looking forward to hearing from you Thank you Huan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From quanglens at gmail.com Wed Jul 3 05:20:25 2013 From: quanglens at gmail.com (quang nguyen) Date: Wed, 3 Jul 2013 12:20:25 +0700 Subject: Progressive flushing fastcgi cache early In-Reply-To: <4F95803E-BC4E-42A4-97F7-AB2DD26FD2A6@me.com> References: <2DD76FCE-9619-475C-9409-FE780CC18317@me.com> <4F95803E-BC4E-42A4-97F7-AB2DD26FD2A6@me.com> Message-ID: in php.ini you set output_buffering = Off zlib.output_compression = Off in nginx.conf gzip off;proxy_buffering off; On Wed, Jul 3, 2013 at 9:03 AM, Huan Nguyen wrote: > Hi all, > > I'm try to improve our webpage start render time by flushing the cache > buffer early. Our stack is Nginx, PHP-FPM. But nginx + fastcgi module > doesn't allow me to flush as I wanted. It seems that nginx will keep all > caches and flush all at once. I saw a few solutions whereas to set max tmp > file size to 0 and smaller buffer size (1K for example) which I'm afraid it > is not suitable for our case. It seems not reliable for our high scalable > site. > > This > and this nginx post articulate > the same problem. > > I really appreciate if you can help explain a little bit on how fastcgi, > nginx cache and php's cache work together. and if possible a hint on how to > overcome this. > > I'm looking forward to hearing from you > > Thank you > > Huan > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Wed Jul 3 08:15:24 2013 From: vl at nginx.com (Homutov Vladimir) Date: Wed, 03 Jul 2013 08:15:24 +0000 Subject: [nginx] Core: consolidated log-related code. Message-ID: details: http://hg.nginx.org/nginx/rev/e088695737c3 branches: changeset: 5260:e088695737c3 user: Vladimir Homutov date: Fri Jun 28 17:24:54 2013 +0400 description: Core: consolidated log-related code. 
The stderr redirection code is moved to ngx_log_redirect_stderr(). The opening of the default log code is moved to ngx_log_open_default(). diffstat: src/core/nginx.c | 9 ++------- src/core/ngx_cycle.c | 28 +++++----------------------- src/core/ngx_log.c | 42 ++++++++++++++++++++++++++++++++++++++++++ src/core/ngx_log.h | 2 ++ 4 files changed, 51 insertions(+), 30 deletions(-) diffs (142 lines): diff -r 0c699e1d1071 -r e088695737c3 src/core/nginx.c --- a/src/core/nginx.c Tue Jul 02 20:05:49 2013 +0400 +++ b/src/core/nginx.c Fri Jun 28 17:24:54 2013 +0400 @@ -387,13 +387,8 @@ main(int argc, char *const *argv) return 1; } - if (!cycle->log_use_stderr && cycle->log->file->fd != ngx_stderr) { - - if (ngx_set_stderr(cycle->log->file->fd) == NGX_FILE_ERROR) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, - ngx_set_stderr_n " failed"); - return 1; - } + if (ngx_log_redirect_stderr(cycle) != NGX_OK) { + return 1; } if (log->file->fd != ngx_stderr) { diff -r 0c699e1d1071 -r e088695737c3 src/core/ngx_cycle.c --- a/src/core/ngx_cycle.c Tue Jul 02 20:05:49 2013 +0400 +++ b/src/core/ngx_cycle.c Fri Jun 28 17:24:54 2013 +0400 @@ -36,8 +36,6 @@ ngx_tls_key_t ngx_core_tls_key; static ngx_connection_t dumb; /* STUB */ -static ngx_str_t error_log = ngx_string(NGX_ERROR_LOG_PATH); - ngx_cycle_t * ngx_init_cycle(ngx_cycle_t *old_cycle) @@ -338,13 +336,8 @@ ngx_init_cycle(ngx_cycle_t *old_cycle) } - if (cycle->new_log.file == NULL) { - cycle->new_log.file = ngx_conf_open_file(cycle, &error_log); - if (cycle->new_log.file == NULL) { - goto failed; - } - - cycle->new_log.log_level = NGX_LOG_ERR; + if (ngx_log_open_default(cycle) != NGX_OK) { + goto failed; } /* open the new files */ @@ -583,13 +576,8 @@ ngx_init_cycle(ngx_cycle_t *old_cycle) /* commit the new cycle configuration */ - if (!ngx_use_stderr && !cycle->log_use_stderr - && cycle->log->file->fd != ngx_stderr) - { - if (ngx_set_stderr(cycle->log->file->fd) == NGX_FILE_ERROR) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, 
ngx_errno, - ngx_set_stderr_n " failed"); - } + if (!ngx_use_stderr) { + (void) ngx_log_redirect_stderr(cycle); } pool->log = cycle->log; @@ -1230,13 +1218,7 @@ ngx_reopen_files(ngx_cycle_t *cycle, ngx file[i].fd = fd; } - if (!cycle->log_use_stderr && cycle->log->file->fd != ngx_stderr) { - - if (ngx_set_stderr(cycle->log->file->fd) == NGX_FILE_ERROR) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - ngx_set_stderr_n " failed"); - } - } + (void) ngx_log_redirect_stderr(cycle); } diff -r 0c699e1d1071 -r e088695737c3 src/core/ngx_log.c --- a/src/core/ngx_log.c Tue Jul 02 20:05:49 2013 +0400 +++ b/src/core/ngx_log.c Fri Jun 28 17:24:54 2013 +0400 @@ -363,6 +363,48 @@ ngx_log_init(u_char *prefix) } +ngx_int_t +ngx_log_open_default(ngx_cycle_t *cycle) +{ + static ngx_str_t error_log = ngx_string(NGX_ERROR_LOG_PATH); + + if (cycle->new_log.file == NULL) { + cycle->new_log.file = ngx_conf_open_file(cycle, &error_log); + if (cycle->new_log.file == NULL) { + return NGX_ERROR; + } + + cycle->new_log.log_level = NGX_LOG_ERR; + } + + return NGX_OK; +} + + +ngx_int_t +ngx_log_redirect_stderr(ngx_cycle_t *cycle) +{ + ngx_fd_t fd; + + if (cycle->log_use_stderr) { + return NGX_OK; + } + + fd = cycle->log->file->fd; + + if (fd != ngx_stderr) { + if (ngx_set_stderr(fd) == NGX_FILE_ERROR) { + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, + ngx_set_stderr_n " failed"); + + return NGX_ERROR; + } + } + + return NGX_OK; +} + + static char * ngx_log_set_levels(ngx_conf_t *cf, ngx_log_t *log) { diff -r 0c699e1d1071 -r e088695737c3 src/core/ngx_log.h --- a/src/core/ngx_log.h Tue Jul 02 20:05:49 2013 +0400 +++ b/src/core/ngx_log.h Fri Jun 28 17:24:54 2013 +0400 @@ -225,6 +225,8 @@ ngx_log_t *ngx_log_init(u_char *prefix); void ngx_cdecl ngx_log_abort(ngx_err_t err, const char *fmt, ...); void ngx_cdecl ngx_log_stderr(ngx_err_t err, const char *fmt, ...); u_char *ngx_log_errno(u_char *buf, u_char *last, ngx_err_t err); +ngx_int_t ngx_log_open_default(ngx_cycle_t *cycle); 
+ngx_int_t ngx_log_redirect_stderr(ngx_cycle_t *cycle); char *ngx_log_set_log(ngx_conf_t *cf, ngx_log_t **head); From mdounin at mdounin.ru Wed Jul 3 11:54:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jul 2013 15:54:38 +0400 Subject: How to abort subrequest properly? In-Reply-To: References: Message-ID: <20130703115438.GR20717@mdounin.ru> Hello! On Wed, Jul 03, 2013 at 04:34:26AM +0400, Marat Dakota wrote: > Nobody knows? > > > On Sun, Jun 30, 2013 at 3:04 PM, Marat Dakota wrote: > > > Hi, > > > > I am parsing a subrequest's body as it arrives. At some point I could > > decide that the subrequest's body is not well-formed. I want to stop > > receiving the rest of the subrequest's body and close its connection. My > > main request and all other subrequests should continue working. > > > > I've tried something like ngx_http_finalize_request(sr, NGX_ABORT). It > > looks like it's not the thing. > > > > What steps should be applied to abort a subrequest? As of now, it's not really possible to abort a subrequest without aborting main request. -- Maxim Dounin http://nginx.org/en/donation.html From dakota at brokenpipe.ru Wed Jul 3 13:17:59 2013 From: dakota at brokenpipe.ru (Marat Dakota) Date: Wed, 3 Jul 2013 17:17:59 +0400 Subject: How to abort subrequest properly? In-Reply-To: <20130703115438.GR20717@mdounin.ru> References: <20130703115438.GR20717@mdounin.ru> Message-ID: Hi Maxim, Are there any adequately hardcore methods to close subrequest's connection? I mean, what steps should be done internally? For now, I'm just setting a flag in subrequest's context and just ignoring the data in subrequest's body filter depending on this flag. It is ok, but if there is a relatively simple way to close the connection to avoid meaningless data transfers and meaningless waits for subrequest to be finished ? it would be nice. Thanks. -- Marat On Wed, Jul 3, 2013 at 3:54 PM, Maxim Dounin wrote: > Hello! 
> > On Wed, Jul 03, 2013 at 04:34:26AM +0400, Marat Dakota wrote: > > > Nobody knows? > > > > > > On Sun, Jun 30, 2013 at 3:04 PM, Marat Dakota > wrote: > > > > > Hi, > > > > > > I am parsing a subrequest's body as it arrives. At some point I could > > > decide that the subrequest's body is not well-formed. I want to stop > > > receiving the rest of the subrequest's body and close its connection. > My > > > main request and all other subrequests should continue working. > > > > > > I've tried something like ngx_http_finalize_request(sr, NGX_ABORT). It > > > looks like it's not the thing. > > > > > > What steps should be applied to abort a subrequest? > > As of now, it's not really possible to abort a subrequest without > aborting main request. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From f_los_ch at yahoo.com Wed Jul 3 14:48:29 2013 From: f_los_ch at yahoo.com (Florian S.) Date: Wed, 03 Jul 2013 16:48:29 +0200 Subject: Stop handling SIGTERM and zombie processes after reconfigure Message-ID: <51D439BD.60004@yahoo.com> Hi together! I'm having occasionally trouble with worker processes left and nginx stopping handling signals (HUP and even TERM) in general. 
Upon reconfigure signal, the log shows four new processes being spawned, while the old four processes are shutting down: > [notice] 5159#0: using the "epoll" event method > [notice] 5159#0: nginx/1.4.1 > [notice] 5159#0: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) > [notice] 5159#0: OS: Linux 3.9.7-147-x86 > [notice] 5159#0: getrlimit(RLIMIT_NOFILE): 100000:100000 > [notice] 5159#0: start worker processes > [notice] 5159#0: start worker process 5330 > [notice] 5159#0: start worker process 5331 > [notice] 5159#0: start worker process 5332 > [notice] 5159#0: start worker process 5333 > [notice] 5159#0: signal 1 (SIGHUP) received, reconfiguring > [notice] 5159#0: reconfiguring > [notice] 5159#0: using the "epoll" event method > [notice] 5159#0: start worker processes > [notice] 5159#0: start worker process 12457 > [notice] 5159#0: start worker process 12458 > [notice] 5159#0: start worker process 12459 > [notice] 5159#0: start worker process 12460 > [notice] 5159#0: start cache manager process 12461 > [notice] 5159#0: start cache loader process 12462 > [notice] 5331#0: gracefully shutting down > [notice] 5330#0: gracefully shutting down > [notice] 5331#0: exiting > [notice] 5330#0: exiting > [notice] 5331#0: exit > [notice] 5330#0: exit > [notice] 5332#0: gracefully shutting down > [notice] 5159#0: signal 17 (SIGCHLD) received > [notice] 5159#0: worker process 5331 exited with code 0 > [notice] 5332#0: exiting > [notice] 5332#0: exit > [notice] 5333#0: gracefully shutting down > [notice] 5333#0: exiting > [notice] 5333#0: exit After that, nginx is fully operational and serving requests -- however, ps yields: > root 5159 0.0 0.0 6248 1696 ? Ss 10:43 0:00 nginx: master process /chroots/nginx/nginx -c /chroots/nginx/conf/nginx.conf > nobody 5330 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] > nobody 5332 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] > nobody 5333 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] > nobody 12457 0.0 0.0 8332 2940 ? 
S 10:44 0:00 nginx: worker process > nobody 12458 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process > nobody 12459 0.0 0.0 8332 3544 ? S 10:44 0:00 nginx: worker process > nobody 12460 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process > nobody 12461 0.0 0.0 6296 1068 ? S 10:44 0:00 nginx: cache manager process > nobody 12462 0.0 0.0 0 0 ? Z 10:44 0:00 [nginx] In the log one can see that SIGCHLD is only received once for 5331, which does not show up as zombie -- in contrast to the workers 5330, 5332, 5333, and the cache loader 12462. Much more serious is that neither > /chroots/nginx/nginx -c /chroots/nginx/conf/nginx.conf -s(stop|reload) nor > kill 5159 seem to get handled by nginx anymore (nothing in the log and no effect). Maybe the master process is stuck waiting for some mutex?: > strace -p 5159 > Process 5159 attached - interrupt to quit > futex(0xb7658e6c, FUTEX_WAIT_PRIVATE, 2, NULL Unfortunately, I missed to get a core dump of the master process while it was running. Additionally, there is no debug log available, sorry. As I was not able to reliably reproduce this issue, I'll most probably have to wait... Many thanks in advance and kind regards, Florian -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jul 3 15:38:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jul 2013 19:38:06 +0400 Subject: Stop handling SIGTERM and zombie processes after reconfigure In-Reply-To: <51D439BD.60004@yahoo.com> References: <51D439BD.60004@yahoo.com> Message-ID: <20130703153806.GU20717@mdounin.ru> Hello! On Wed, Jul 03, 2013 at 04:48:29PM +0200, Florian S. wrote: > Hi together! > > I'm having occasionally trouble with worker processes left > and nginx stopping handling signals (HUP and even TERM) in general. 
> > Upon reconfigure signal, the log shows four new processes being > spawned, while the old four processes are shutting down: > > > [notice] 5159#0: using the "epoll" event method > > [notice] 5159#0: nginx/1.4.1 > > [notice] 5159#0: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) > > [notice] 5159#0: OS: Linux 3.9.7-147-x86 > > [notice] 5159#0: getrlimit(RLIMIT_NOFILE): 100000:100000 > > [notice] 5159#0: start worker processes > > [notice] 5159#0: start worker process 5330 > > [notice] 5159#0: start worker process 5331 > > [notice] 5159#0: start worker process 5332 > > [notice] 5159#0: start worker process 5333 > > [notice] 5159#0: signal 1 (SIGHUP) received, reconfiguring > > [notice] 5159#0: reconfiguring > > [notice] 5159#0: using the "epoll" event method > > [notice] 5159#0: start worker processes > > [notice] 5159#0: start worker process 12457 > > [notice] 5159#0: start worker process 12458 > > [notice] 5159#0: start worker process 12459 > > [notice] 5159#0: start worker process 12460 > > [notice] 5159#0: start cache manager process 12461 > > [notice] 5159#0: start cache loader process 12462 > > [notice] 5331#0: gracefully shutting down > > [notice] 5330#0: gracefully shutting down > > [notice] 5331#0: exiting > > [notice] 5330#0: exiting > > [notice] 5331#0: exit > > [notice] 5330#0: exit > > [notice] 5332#0: gracefully shutting down > > [notice] 5159#0: signal 17 (SIGCHLD) received > > [notice] 5159#0: worker process 5331 exited with code 0 > > [notice] 5332#0: exiting > > [notice] 5332#0: exit > > [notice] 5333#0: gracefully shutting down > > [notice] 5333#0: exiting > > [notice] 5333#0: exit > > After that, nginx is fully operational and serving requests -- > however, ps yields: > > > root 5159 0.0 0.0 6248 1696 ? Ss 10:43 0:00 nginx: master > process /chroots/nginx/nginx -c /chroots/nginx/conf/nginx.conf > > nobody 5330 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] > > nobody 5332 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] > > nobody 5333 0.0 0.0 0 0 ? 
Z 10:43 0:00 [nginx] > > nobody 12457 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process > > nobody 12458 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process > > nobody 12459 0.0 0.0 8332 3544 ? S 10:44 0:00 nginx: worker process > > nobody 12460 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process > > nobody 12461 0.0 0.0 6296 1068 ? S 10:44 0:00 nginx: cache > manager process > > nobody 12462 0.0 0.0 0 0 ? Z 10:44 0:00 [nginx] > > In the log one can see that SIGCHLD is only received once for 5331, > which does not show up as zombie -- in contrast to the workers 5330, > 5332, 5333, and the cache loader 12462. > Much more serious is that neither > > > /chroots/nginx/nginx -c /chroots/nginx/conf/nginx.conf -s(stop|reload) > > nor > > > kill 5159 > > seem to get handled by nginx anymore (nothing in the log and no > effect). Maybe the master process is stuck waiting for some mutex?: > > >strace -p 5159 > > Process 5159 attached - interrupt to quit > > futex(0xb7658e6c, FUTEX_WAIT_PRIVATE, 2, NULL > > Unfortunately, I missed to get a core dump of the master process > while it was running. Additionally, there is no debug log available, > sorry. As I was not able to reliably reproduce this issue, I'll most > probably have to wait... It indeed looks like the master process is blocked somewhere. It would be interesting to see stack trace of a master process when this happens. (It's also good idea to make sure there are no 3rd party modules/patches, just in case.) -- Maxim Dounin http://nginx.org/en/donation.html From agentzh at gmail.com Wed Jul 3 19:34:02 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 3 Jul 2013 12:34:02 -0700 Subject: Removing response headers In-Reply-To: References: Message-ID: Hello! On Tue, Jul 2, 2013 at 7:35 AM, Jeff Kaufman wrote: > In a header filter, to remove a filter that's already been set, I see > two options: > > 1. set the header's hash to 0 For *response* headers, this approach is recommended. > 2. 
actually delete the header from r->headers_out > > The second is much more complex and requires allocating memory (see > ngx_http_headers_more_rm_header_helper in > https://github.com/agentzh/headers-more-nginx-module/blob/master/src/ngx_http_headers_more_util.c#L294) > so I'd rather use the first, but is there a reason to prefer the > second? > The ngx_http_headers_more_rm_header_helper function is only used for removing *request* headers because setting ->hash to 0 does not work for request headers. Best regards, -agentzh From ru at nginx.com Wed Jul 3 19:52:19 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 03 Jul 2013 19:52:19 +0000 Subject: [nginx] Upstream: updated list of ngx_event_connect_peer() retur... Message-ID: details: http://hg.nginx.org/nginx/rev/af60a210cb78 branches: changeset: 5261:af60a210cb78 user: Ruslan Ermilov date: Wed Jul 03 12:04:13 2013 +0400 description: Upstream: updated list of ngx_event_connect_peer() return values. ngx_http_upstream_get_keepalive_peer() may return NGX_DONE to indicate that the cached keepalive connection is reused. diffstat: src/http/ngx_http_upstream.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r e088695737c3 -r af60a210cb78 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Jun 28 17:24:54 2013 +0400 +++ b/src/http/ngx_http_upstream.c Wed Jul 03 12:04:13 2013 +0400 @@ -1181,7 +1181,7 @@ ngx_http_upstream_connect(ngx_http_reque return; } - /* rc == NGX_OK || rc == NGX_AGAIN */ + /* rc == NGX_OK || rc == NGX_AGAIN || rc == NGX_DONE */ c = u->peer.connection; From agentzh at gmail.com Wed Jul 3 20:00:29 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 3 Jul 2013 13:00:29 -0700 Subject: How to abort subrequest properly? In-Reply-To: References: <20130703115438.GR20717@mdounin.ru> Message-ID: Hello! On Wed, Jul 3, 2013 at 6:17 AM, Marat Dakota wrote: > Are there any adequately hardcore methods to close subrequest's connection? 
> I mean, what steps should be done internally? > I was also thinking hard about this problem when I was implementing the "light thread" model in our ngx_lua module. The subrequest mechanism does not know the details of the target location's handlers. The target location's handler may introduce multiple upstream connections. It's not safe at all to assume anything here. A possible work-around here is to actively call the cleanup handlers registered by the subrequest. But unfortunately, the cleanup handlers are registered on the main request, so it's not easy to distinguish a specific subrequest's cleanup handlers from others. Another challenge here is that the subrequest is sharing the same memory pool with its ancestors, so freeing up all the memory associated with the subrequest is not possible. To conclude, it's not easy to abort a pending subrequest without aborting the main request. And I also decided to throw an error in my ngx_lua module when the user Lua code is trying to abort a "light thread" with a pending subrequest. > For now, I'm just setting a flag in subrequest's context and just ignoring > the data in subrequest's body filter depending on this flag. It is ok, but > if there is a relatively simple way to close the connection to avoid > meaningless data transfers and meaningless waits for subrequest to be > finished, it would be nice. > See above :) Best regards, -agentzh From dakota at brokenpipe.ru Wed Jul 3 20:40:31 2013 From: dakota at brokenpipe.ru (Marat Dakota) Date: Thu, 4 Jul 2013 00:40:31 +0400 Subject: How to abort subrequest properly? In-Reply-To: References: <20130703115438.GR20717@mdounin.ru> Message-ID: Yichun, thanks for your notes. Even though they are not as optimistic as I would like them to be :). -- Marat On Thu, Jul 4, 2013 at 12:00 AM, Yichun Zhang (agentzh) wrote: > Hello! > > On Wed, Jul 3, 2013 at 6:17 AM, Marat Dakota wrote: > > Are there any adequately hardcore methods to close subrequest's > connection?
> > I mean, what steps should be done internally? > > > > I was also thinking hard about this problem when I was implementing > the "light thread" model in our ngx_lua module. > > The subrequest mechanism does not know the details of the target > location's handlers. The target location's handler may introduce > multiple upstream connections. It's not safe at all to assume any > thing here. > > A possible work-around here is to actively call the cleanup handlers > registered by the subrequest. But unfortunately, the cleanup handlers > are registered into the main request, it's not easy to distinguish a > specific subrequest's cleanup handlers from others. Another challenge > here is that the subrequest is sharing the same memory pool with its > ancestors, so freeing up all the memory associated with the subrequest > is not possible. > > To conclude, it's not easy to abort a pending subrequest without > aborting the main request. And I also decide to throw out an error in > my ngx_lua module when the user Lua code is trying to abort a "light > thread" with a pending subrequest. > > > For now, I'm just setting a flag in subrequest's context and just > ignoring > > the data in subrequest's body filter depending on this flag. It is ok, > but > > if there is a relatively simple way to close the connection to avoid > > meaningless data transfers and meaningless waits for subrequest to be > > finished ? it would be nice. > > > > See above :) > > Best regards, > -agentzh > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngx.eugaia at gmail.com Wed Jul 3 23:59:02 2013 From: ngx.eugaia at gmail.com (Marcus Clyne) Date: Wed, 03 Jul 2013 20:59:02 -0300 Subject: How to abort subrequest properly? 
In-Reply-To: <20130703115438.GR20717@mdounin.ru> References: <20130703115438.GR20717@mdounin.ru> Message-ID: <51D4BAC6.1090608@gmail.com> El 03/07/13 08:54, Maxim Dounin escribi?: > Hello! > > On Wed, Jul 03, 2013 at 04:34:26AM +0400, Marat Dakota wrote: > >> Nobody knows? >> >> >> On Sun, Jun 30, 2013 at 3:04 PM, Marat Dakota wrote: >> >>> Hi, >>> >>> I am parsing a subrequest's body as it arrives. At some point I could >>> decide that the subrequest's body is not well-formed. I want to stop >>> receiving the rest of the subrequest's body and close its connection. My >>> main request and all other subrequests should continue working. >>> >>> I've tried something like ngx_http_finalize_request(sr, NGX_ABORT). It >>> looks like it's not the thing. >>> >>> What steps should be applied to abort a subrequest? > As of now, it's not really possible to abort a subrequest without > aborting main request. > If the subrequest uses the upstream mechanism, wouldn't it be safe to just close the socket of upstream's connection? I'm assuming there would be an entry in the error log, but is there anything harmful that could come from it? Marcus. From ngx.eugaia at gmail.com Thu Jul 4 00:37:58 2013 From: ngx.eugaia at gmail.com (Marcus Clyne) Date: Wed, 03 Jul 2013 21:37:58 -0300 Subject: How to abort subrequest properly? In-Reply-To: <51D4BAC6.1090608@gmail.com> References: <20130703115438.GR20717@mdounin.ru> <51D4BAC6.1090608@gmail.com> Message-ID: <51D4C3E6.1080706@gmail.com> El 03/07/13 20:59, Marcus Clyne escribi?: > El 03/07/13 08:54, Maxim Dounin escribi?: >> Hello! >> >> On Wed, Jul 03, 2013 at 04:34:26AM +0400, Marat Dakota wrote: >> >>> Nobody knows? >>> >>> >>> On Sun, Jun 30, 2013 at 3:04 PM, Marat Dakota >>> wrote: >>> >>>> Hi, >>>> >>>> I am parsing a subrequest's body as it arrives. At some point I could >>>> decide that the subrequest's body is not well-formed. I want to stop >>>> receiving the rest of the subrequest's body and close its >>>> connection. 
My >>>> main request and all other subrequests should continue working. >>>> >>>> I've tried something like ngx_http_finalize_request(sr, NGX_ABORT). It >>>> looks like it's not the thing. >>>> >>>> What steps should be applied to abort a subrequest? >> As of now, it's not really possible to abort a subrequest without >> aborting main request. >> > If the subrequest uses the upstream mechanism, wouldn't it be safe to > just close the socket of upstream's connection? I'm assuming there > would be an entry in the error log, but is there anything harmful that > could come from it? > And you'd obviously want to finalize the subrequest, probably do something like ngx_http_finalize_request(sr, NGX_OK). I don't think you could just issue this on its own, since I think the subrequest's sockets wouldn't automatically be closed until the main request was finalized (but it's been a while since I've done core hacking, and I can't remember for sure). > Marcus. From agentzh at gmail.com Thu Jul 4 05:43:42 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 3 Jul 2013 22:43:42 -0700 Subject: How to abort subrequest properly? In-Reply-To: <51D4C3E6.1080706@gmail.com> References: <20130703115438.GR20717@mdounin.ru> <51D4BAC6.1090608@gmail.com> <51D4C3E6.1080706@gmail.com> Message-ID: Hello! On Wed, Jul 3, 2013 at 5:37 PM, Marcus Clyne wrote: >> If the subrequest uses the upstream mechanism, wouldn't it be safe to just >> close the socket of upstream's connection? I'm assuming there would be an >> entry in the error log, but is there anything harmful that could come from >> it? >> > And you'd obviously want to finalize the subrequest, probably do something > like ngx_http_finalize_request(sr, NGX_OK). I don't think you could just > issue this on its own, since I think the subrequest's sockets wouldn't > automatically be closed until the main request was finalized (but it's been > a while since I've done core hacking, and I can't remember for sure). > No. 
For modules based on ngx_http_upstream, the right way to shut it down is to call ngx_http_upstream_finalize_request. This is exactly what the cleanup handler registered by ngx_http_upstream does (i.e., the ngx_http_upstream_cleanup function). And that's why I propose the solution of calling the cleanup handler on the subrequest. Calling ngx_http_finalize_request(sr, NGX_OK) does not really help here because the subrequest's cleanup handler that actually shuts down the upstream socket (and other resources like the resolver) will not be triggered until the main request is cleaned up. On the other hand, calling ngx_http_upstream_finalize_request directly on the subrequest should work for strict upstream modules but this is just too specific and hacky to be really interesting for any decent 3rd-party modules :) Best regards, -agentzh From ngx.eugaia at gmail.com Thu Jul 4 10:41:53 2013 From: ngx.eugaia at gmail.com (Marcus Clyne) Date: Thu, 04 Jul 2013 07:41:53 -0300 Subject: How to abort subrequest properly? In-Reply-To: References: <20130703115438.GR20717@mdounin.ru> <51D4BAC6.1090608@gmail.com> <51D4C3E6.1080706@gmail.com> Message-ID: <51D55171.6000709@gmail.com> On 04/07/13 02:43, Yichun Zhang (agentzh) wrote: > Hello! > > On Wed, Jul 3, 2013 at 5:37 PM, Marcus Clyne wrote: >>> If the subrequest uses the upstream mechanism, wouldn't it be safe to just >>> close the socket of upstream's connection? I'm assuming there would be an >>> entry in the error log, but is there anything harmful that could come from >>> it? >>> >> And you'd obviously want to finalize the subrequest, probably do something >> like ngx_http_finalize_request(sr, NGX_OK). I don't think you could just >> issue this on its own, since I think the subrequest's sockets wouldn't >> automatically be closed until the main request was finalized (but it's been >> a while since I've done core hacking, and I can't remember for sure). >> > No.
For modules based on ngx_http_upstream, the right way to shut it > down is to call ngx_http_upstream_finalize_request. This is exactly > what the cleanup handler registered by ngx_http_upstream does (i.e., > the gx_http_upstream_cleanup function). And that's why I propose the > solution of calling the cleanup handler on the subrequest. > > Calling ngx_http_finalize_request(sr, NGX_OK) does not really help > here because the subrequest's cleanup handler that actually shuts down > the upstream socket (and other resources like the resolver) will not > be triggered until the main request is cleaned up. Hence why I was suggesting closing the socket as well as calling ngx_http_finalize_request. I think the only thing you really need in the time before the main request finishing is that you don't get any data on the socket, and your request count is correct. The upstream request obviously needs to be closed properly, but if the upstream cleanup function is going to be called anyway by the main request, then I'm not sure it really makes any difference, and you'd be running through the cleanup function twice. > > On the other hand, calling ngx_http_upstream_finalize_request directly > on the subrequest should work for strict upstream modules but this is > just too specific and hacky to be really interesting for any descent > 3rd-party modules :) Hacky, for sure but it sounds like this is a custom setup anyway. This is obviously only going to be useful if the body is one that uses upstream requests. Marcus. From f_los_ch at yahoo.com Thu Jul 4 11:00:27 2013 From: f_los_ch at yahoo.com (Florian S.) Date: Thu, 04 Jul 2013 13:00:27 +0200 Subject: Stop handling SIGTERM and zombie processes after reconfigure In-Reply-To: <20130703153806.GU20717@mdounin.ru> References: <51D439BD.60004@yahoo.com> <20130703153806.GU20717@mdounin.ru> Message-ID: <51D555CB.5090801@yahoo.com> Hi again, On 03.07.2013 17:38, Maxim Dounin wrote: > Hello! 
> > On Wed, Jul 03, 2013 at 04:48:29PM +0200, Florian S. wrote: > >> Hi together! >> >> I'm having occasionally trouble with worker processes left >> and nginx stopping handling signals (HUP and even TERM) in general. >> >> Upon reconfigure signal, the log shows four new processes being >> spawned, while the old four processes are shutting down: >> >>> [notice] 5159#0: using the "epoll" event method >>> [notice] 5159#0: nginx/1.4.1 >>> [notice] 5159#0: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) >>> [notice] 5159#0: OS: Linux 3.9.7-147-x86 >>> [notice] 5159#0: getrlimit(RLIMIT_NOFILE): 100000:100000 >>> [notice] 5159#0: start worker processes >>> [notice] 5159#0: start worker process 5330 >>> [notice] 5159#0: start worker process 5331 >>> [notice] 5159#0: start worker process 5332 >>> [notice] 5159#0: start worker process 5333 >>> [notice] 5159#0: signal 1 (SIGHUP) received, reconfiguring >>> [notice] 5159#0: reconfiguring >>> [notice] 5159#0: using the "epoll" event method >>> [notice] 5159#0: start worker processes >>> [notice] 5159#0: start worker process 12457 >>> [notice] 5159#0: start worker process 12458 >>> [notice] 5159#0: start worker process 12459 >>> [notice] 5159#0: start worker process 12460 >>> [notice] 5159#0: start cache manager process 12461 >>> [notice] 5159#0: start cache loader process 12462 >>> [notice] 5331#0: gracefully shutting down >>> [notice] 5330#0: gracefully shutting down >>> [notice] 5331#0: exiting >>> [notice] 5330#0: exiting >>> [notice] 5331#0: exit >>> [notice] 5330#0: exit >>> [notice] 5332#0: gracefully shutting down >>> [notice] 5159#0: signal 17 (SIGCHLD) received >>> [notice] 5159#0: worker process 5331 exited with code 0 >>> [notice] 5332#0: exiting >>> [notice] 5332#0: exit >>> [notice] 5333#0: gracefully shutting down >>> [notice] 5333#0: exiting >>> [notice] 5333#0: exit >> >> After that, nginx is fully operational and serving requests -- >> however, ps yields: >> >>> root 5159 0.0 0.0 6248 1696 ? 
Ss 10:43 0:00 nginx: master >> process /chroots/nginx/nginx -c /chroots/nginx/conf/nginx.conf >>> nobody 5330 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] >>> nobody 5332 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] >>> nobody 5333 0.0 0.0 0 0 ? Z 10:43 0:00 [nginx] >>> nobody 12457 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process >>> nobody 12458 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process >>> nobody 12459 0.0 0.0 8332 3544 ? S 10:44 0:00 nginx: worker process >>> nobody 12460 0.0 0.0 8332 2940 ? S 10:44 0:00 nginx: worker process >>> nobody 12461 0.0 0.0 6296 1068 ? S 10:44 0:00 nginx: cache >> manager process >>> nobody 12462 0.0 0.0 0 0 ? Z 10:44 0:00 [nginx] >> >> In the log one can see that SIGCHLD is only received once for 5331, >> which does not show up as zombie -- in contrast to the workers 5330, >> 5332, 5333, and the cache loader 12462. >> Much more serious is that neither >> >>> /chroots/nginx/nginx -c /chroots/nginx/conf/nginx.conf -s(stop|reload) >> >> nor >> >>> kill 5159 >> >> seem to get handled by nginx anymore (nothing in the log and no >> effect). Maybe the master process is stuck waiting for some mutex?: >> >>> strace -p 5159 >>> Process 5159 attached - interrupt to quit >>> futex(0xb7658e6c, FUTEX_WAIT_PRIVATE, 2, NULL >> >> Unfortunately, I missed to get a core dump of the master process >> while it was running. Additionally, there is no debug log available, >> sorry. As I was not able to reliably reproduce this issue, I'll most >> probably have to wait... > > It indeed looks like the master process is blocked somewhere. It > would be interesting to see stack trace of a master process when > this happens. > > (It's also good idea to make sure there are no 3rd party > modules/patches, just in case.) > Thanks for your quick reply. I finally managed to get a core dump (I killed the master process using signal 11 in order to enforce the dump, thats why gdb claims the segfault): > Program terminated with signal 11, Segmentation fault. 
> #0 0xb772c430 in dl_main (phdr=0x5, phnum=1, user_entry=0x80a97f9, auxv=0xbfd0956c) at rtld.c:1751 > 1751 rtld.c: Datei oder Verzeichnis nicht gefunden. > (gdb) bt > #0 0xb772c430 in dl_main (phdr=0x5, phnum=1, user_entry=0x80a97f9, auxv=0xbfd0956c) at rtld.c:1751 > #1 0xb7523bc6 in ?? () > #2 0x00000005 in ?? () > #3 0x00000001 in ?? () > #4 0x080a97f9 in ?? () > #5 0x0804c370 in syslog (__fmt=0x80a97f9 "%.*s", __pri=) at /usr/include/bits/syslog.h:32 > #6 ngx_log_error_core (level=6, log=0x967f084, fn=0x80adba2 "ngx_signal_handler", file=0x80ad731 "src/os/unix/ngx_process.c", line=430, err=0, fmt=0x80ad74b "signal %d (%s) received%s") at src/core/ngx_log.c:249 > #7 0x0806b890 in ngx_signal_handler (signo=17) at src/os/unix/ngx_process.c:429 > #8 0xb772c400 in dl_main (phdr=0x5, phnum=1, user_entry=0x80a97f9, auxv=0xbfd0a1ec) at rtld.c:1735 > #9 0xb7523bc6 in ?? () > #10 0x00000005 in ?? () > #11 0x00000001 in ?? () > #12 0x080a97f9 in ?? () > #13 0x0804c370 in syslog (__fmt=0x80a97f9 "%.*s", __pri=) at /usr/include/bits/syslog.h:32 > #14 ngx_log_error_core (level=6, log=0x967f084, fn=0x80adba2 "ngx_signal_handler", file=0x80ad731 "src/os/unix/ngx_process.c", line=430, err=0, fmt=0x80ad74b "signal %d (%s) received%s") at src/core/ngx_log.c:249 > #15 0x0806b890 in ngx_signal_handler (signo=29) at src/os/unix/ngx_process.c:429 > #16 0xb772c400 in dl_main (phdr=0xbfd0b0f0, phnum=3218125184, user_entry=0x10, auxv=0x967f084) at rtld.c:1735 > #17 0x0806f0da in ngx_master_process_cycle (cycle=0x967f078) at src/os/unix/ngx_process_cycle.c:169 > #18 0x0804b95c in main (argc=3, argv=0xbfd0b394) at src/core/nginx.c:417 > (gdb) Maybe the the concurrently running handlers for SIGCHLD and SIGIO lead to some blocking in dl_main? However, I am not aware of the side-effects and exact purpose of the dynamic linking at this point. And as you can see, I did not mention that I have the (semi-official?) 
syslog patch applied, which might indeed cause the problem when called from the signal handler. As you already pointed out, it seems to be a good idea to remove this patch and try to check whether the error persists. Kind regards, Florian From maxim at nginx.com Thu Jul 4 11:25:21 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 04 Jul 2013 15:25:21 +0400 Subject: Stop handling SIGTERM and zombie processes after reconfigure In-Reply-To: <51D555CB.5090801@yahoo.com> References: <51D439BD.60004@yahoo.com> <20130703153806.GU20717@mdounin.ru> <51D555CB.5090801@yahoo.com> Message-ID: <51D55BA1.7090600@nginx.com> On 7/4/13 3:00 PM, Florian S. wrote: [...] > And as you can see, I did not mention that I have the > (semi-official?) syslog patch applied, which might indeed cause the > problem when called from the signal handler. As you already pointed > out, it seems to be a good idea to remove this patch and try to > check whether the error persists. > Just curious, what's the patch? -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/services.html From f_los_ch at yahoo.com Thu Jul 4 12:07:14 2013 From: f_los_ch at yahoo.com (Florian S.) Date: Thu, 04 Jul 2013 14:07:14 +0200 Subject: Stop handling SIGTERM and zombie processes after reconfigure In-Reply-To: <51D55BA1.7090600@nginx.com> References: <51D439BD.60004@yahoo.com> <20130703153806.GU20717@mdounin.ru> <51D555CB.5090801@yahoo.com> <51D55BA1.7090600@nginx.com> Message-ID: <51D56572.1020405@yahoo.com> Hi, On 04.07.2013 13:25, Maxim Konovalov wrote: > On 7/4/13 3:00 PM, Florian S. wrote: > [...] >> And as you can see, I did not mention that I have the >> (semi-official?) syslog patch applied, which might indeed cause the >> problem when called from the signal handler. As you already pointed >> out, it seems to be a good idea to remove this patch and try to >> check whether the error persists. >> > Just curious, what's the patch? 
I have taken it from https://github.com/yaoweibin/nginx_syslog_patch (I just found another and apparently not related source: http://wiki.nginx.org/File:Syslog.patch) But as far as I am now I can tell that my actual problem is not reproducible when I exchange > error_log syslog:notice notice; with > error_log /tmp/nginx_err.log notice; So imho this patch might be problematic regarding the issue from my previous posts. My solution for now is to augment the syslog patch to strip all the log messages out of the functions called/calleable by ngx_signal_handler(), even though suppressing this relevant information is clearly not optimal. Kind regards, Florian From glebius at nginx.com Fri Jul 5 09:45:27 2013 From: glebius at nginx.com (Gleb Smirnoff) Date: Fri, 05 Jul 2013 09:45:27 +0000 Subject: [nginx] Make macros safe. Message-ID: details: http://hg.nginx.org/nginx/rev/626f288fa5ed branches: changeset: 5262:626f288fa5ed user: Gleb Smirnoff date: Fri Jul 05 11:42:25 2013 +0400 description: Make macros safe. 
diffstat: src/core/ngx_config.h | 4 ++-- src/os/unix/ngx_files.h | 10 +++++----- src/os/win32/ngx_win32_config.h | 6 +++--- 3 files changed, 10 insertions(+), 10 deletions(-) diffs (63 lines): diff -r af60a210cb78 -r 626f288fa5ed src/core/ngx_config.h --- a/src/core/ngx_config.h Wed Jul 03 12:04:13 2013 +0400 +++ b/src/core/ngx_config.h Fri Jul 05 11:42:25 2013 +0400 @@ -80,8 +80,8 @@ typedef uintptr_t ngx_uint_t; typedef intptr_t ngx_flag_t; -#define NGX_INT32_LEN sizeof("-2147483648") - 1 -#define NGX_INT64_LEN sizeof("-9223372036854775808") - 1 +#define NGX_INT32_LEN (sizeof("-2147483648") - 1) +#define NGX_INT64_LEN (sizeof("-9223372036854775808") - 1) #if (NGX_PTR_SIZE == 4) #define NGX_INT_T_LEN NGX_INT32_LEN diff -r af60a210cb78 -r 626f288fa5ed src/os/unix/ngx_files.h --- a/src/os/unix/ngx_files.h Wed Jul 03 12:04:13 2013 +0400 +++ b/src/os/unix/ngx_files.h Fri Jul 05 11:42:25 2013 +0400 @@ -72,8 +72,8 @@ typedef struct { #define NGX_FILE_RDWR O_RDWR #define NGX_FILE_CREATE_OR_OPEN O_CREAT #define NGX_FILE_OPEN 0 -#define NGX_FILE_TRUNCATE O_CREAT|O_TRUNC -#define NGX_FILE_APPEND O_WRONLY|O_APPEND +#define NGX_FILE_TRUNCATE (O_CREAT|O_TRUNC) +#define NGX_FILE_APPEND (O_WRONLY|O_APPEND) #define NGX_FILE_NONBLOCK O_NONBLOCK #if (NGX_HAVE_OPENAT) @@ -86,13 +86,13 @@ typedef struct { #endif #if defined(O_SEARCH) -#define NGX_FILE_SEARCH O_SEARCH|NGX_FILE_DIRECTORY +#define NGX_FILE_SEARCH (O_SEARCH|NGX_FILE_DIRECTORY) #elif defined(O_EXEC) -#define NGX_FILE_SEARCH O_EXEC|NGX_FILE_DIRECTORY +#define NGX_FILE_SEARCH (O_EXEC|NGX_FILE_DIRECTORY) #else -#define NGX_FILE_SEARCH O_RDONLY|NGX_FILE_DIRECTORY +#define NGX_FILE_SEARCH (O_RDONLY|NGX_FILE_DIRECTORY) #endif #endif /* NGX_HAVE_OPENAT */ diff -r af60a210cb78 -r 626f288fa5ed src/os/win32/ngx_win32_config.h --- a/src/os/win32/ngx_win32_config.h Wed Jul 03 12:04:13 2013 +0400 +++ b/src/os/win32/ngx_win32_config.h Fri Jul 05 11:42:25 2013 +0400 @@ -142,11 +142,11 @@ typedef int sig_atomic_t #define NGX_PTR_SIZE 4 
-#define NGX_SIZE_T_LEN sizeof("-2147483648") - 1 +#define NGX_SIZE_T_LEN (sizeof("-2147483648") - 1) #define NGX_MAX_SIZE_T_VALUE 2147483647 -#define NGX_TIME_T_LEN sizeof("-2147483648") - 1 +#define NGX_TIME_T_LEN (sizeof("-2147483648") - 1) #define NGX_TIME_T_SIZE 4 -#define NGX_OFF_T_LEN sizeof("-9223372036854775807") - 1 +#define NGX_OFF_T_LEN (sizeof("-9223372036854775807") - 1) #define NGX_MAX_OFF_T_VALUE 9223372036854775807 #define NGX_SIG_ATOMIC_T_SIZE 4 From vbart at nginx.com Fri Jul 5 14:59:17 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 5 Jul 2013 18:59:17 +0400 Subject: SPDY: what is the purpose of blocked frame In-Reply-To: References: <201306252031.45298.vbart@nginx.com> Message-ID: <201307051859.17606.vbart@nginx.com> On Wednesday 26 June 2013 12:34:17 Yury Kirpichev wrote: > Hello, > > Thanks for analysis and explanation. > Then how about the following workaround - > - queue blocked frames at the begining of queue in FIFO order. > (just remove from ngx_http_spdy_queue_blocked_frame the code: > if (frame->priority >= (*out)->priority) { > break; > } > ) > > - queue non-blocked frames after blocked in priority order: > static ngx_inline void > ngx_http_spdy_queue_frame(ngx_http_spdy_connection_t *sc, > ngx_http_spdy_out_frame_t *frame) > { > ngx_http_spdy_out_frame_t **out; > > for (out = &sc->last_out; *out *&& !(*out)->blocked*; out = > &(*out)->next) > { > if (frame->priority >= (*out)->priority) { > break; > } > } > > frame->next = *out; > *out = frame; > } > > Do you foresee any obvious drawback of such approach? > [..] At first glance I don't. Indeed it can be a better strategy, particularly since the SYN_STREAM frames are usually small. Have you tested it already? wbr, Valentin V. 
Bartenev From ykirpichev at gmail.com Sat Jul 6 18:31:25 2013 From: ykirpichev at gmail.com (Yury Kirpichev) Date: Sat, 6 Jul 2013 22:31:25 +0400 Subject: SPDY: what is the purpose of blocked frame In-Reply-To: <201307051859.17606.vbart@nginx.com> References: <201306252031.45298.vbart@nginx.com> <201307051859.17606.vbart@nginx.com> Message-ID: Yes, we did such modification in our test environment and it is working well so far. Moreover, it showed good results in case of intermixed requests with low and high priority are handled. BR/ Yury 2013/7/5 Valentin V. Bartenev > On Wednesday 26 June 2013 12:34:17 Yury Kirpichev wrote: > > Hello, > > > > Thanks for analysis and explanation. > > Then how about the following workaround - > > - queue blocked frames at the begining of queue in FIFO order. > > (just remove from ngx_http_spdy_queue_blocked_frame the code: > > if (frame->priority >= (*out)->priority) { > > break; > > } > > ) > > > > - queue non-blocked frames after blocked in priority order: > > static ngx_inline void > > ngx_http_spdy_queue_frame(ngx_http_spdy_connection_t *sc, > > ngx_http_spdy_out_frame_t *frame) > > { > > ngx_http_spdy_out_frame_t **out; > > > > for (out = &sc->last_out; *out *&& !(*out)->blocked*; out = > > &(*out)->next) > > { > > if (frame->priority >= (*out)->priority) { > > break; > > } > > } > > > > frame->next = *out; > > *out = frame; > > } > > > > Do you foresee any obvious drawback of such approach? > > > [..] > > At first glance I don't. Indeed it can be a better strategy, particularly > since the SYN_STREAM frames are usually small. > > Have you tested it already? > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jzefip at gmail.com Sun Jul 7 09:22:56 2013 From: jzefip at gmail.com (Julien Zefi) Date: Sun, 7 Jul 2013 03:22:56 -0600 Subject: API question: large data processing handler In-Reply-To: <20130625174910.GM20717@mdounin.ru> References: <20130604160831.GY72282@mdounin.ru> <201306042029.57511.vbart@nginx.com> <20130620091107.GK49779@mdounin.ru> <20130625174910.GM20717@mdounin.ru> Message-ID: Hi, > > > > > Haven't looked any further. > > > > > > > thanks for your comments. Taking in count tha changes provided i still > face > > this problem: > > > > #0 0x00000000004065d6 in ngx_palloc (pool=0x0, size=16) at > > src/core/ngx_palloc.c:122 > > #1 0x0000000000406a73 in ngx_pcalloc (pool=0x0, size=16) at > > src/core/ngx_palloc.c:305 > > #2 0x000000000046b76d in ngx_http_chunked_header_filter (r=0x6eebb0) > > at src/http/modules/ngx_http_chunked_filter_module.c:82 > > #3 0x000000000046bdc4 in ngx_http_range_header_filter (r=0x6eebb0) > > at src/http/modules/ngx_http_range_filter_module.c:160 > > > > why my pool is always NULL ? do i am missing some initialization > somewhere ? > > Part of the backtrace shown suggests you trigger request activity after > the request was freed. Most likely you've forgot r->main->count++. > > Thanks, that helped for the next calls. When I receive a client in HTTP/1.1, my binary data is not being sent with the chunked transfer-encoding framing; I mean the header and footer of each chunk of content. The HTTP header is set properly, so I would like to know if I need to do something to enable the chunked header/footer, or is this automatic? Also, what's the most likely thing that could break that feature when developing a plugin? Thanks. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Sun Jul 7 21:54:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Jul 2013 01:54:38 +0400 Subject: API question: large data processing handler In-Reply-To: References: <20130604160831.GY72282@mdounin.ru> <201306042029.57511.vbart@nginx.com> <20130620091107.GK49779@mdounin.ru> <20130625174910.GM20717@mdounin.ru> Message-ID: <20130707215437.GB30405@mdounin.ru> Hello! On Sun, Jul 07, 2013 at 03:22:56AM -0600, Julien Zefi wrote: > Hi, > > > > > > > > > Haven't looked any further. > > > > > > > > > > thanks for your comments. Taking in count tha changes provided i still > > face > > > this problem: > > > > > > #0 0x00000000004065d6 in ngx_palloc (pool=0x0, size=16) at > > > src/core/ngx_palloc.c:122 > > > #1 0x0000000000406a73 in ngx_pcalloc (pool=0x0, size=16) at > > > src/core/ngx_palloc.c:305 > > > #2 0x000000000046b76d in ngx_http_chunked_header_filter (r=0x6eebb0) > > > at src/http/modules/ngx_http_chunked_filter_module.c:82 > > > #3 0x000000000046bdc4 in ngx_http_range_header_filter (r=0x6eebb0) > > > at src/http/modules/ngx_http_range_filter_module.c:160 > > > > > > why my pool is always NULL ? do i am missing some initialization > > somewhere ? > > > > Part of the backtrace shown suggests you trigger request activity after > > the request was freed. Most likely you've forgot r->main->count++. > > > > > thanks, that helped for the next calls. > > When i receive a client in HTTP/1.1, my binary data is not being send with > the transfer chunked encoding headers, i refer to the header and footer to > each chunk of content, the HTTP header is set properly, so i would like to > know if i need to do something to enable the chunked header/footer or is > this is automatic ?, also.. whats the most probable thing that could mess > up that feature when developing a plugin ? Chunked transfer encoding is enabled automatically as long as it's supported by a client and there is no content length. 
Try looking into src/http/modules/ngx_http_chunked_filter_module.c for more details. -- Maxim Dounin http://nginx.org/en/donation.html From agentzh at gmail.com Tue Jul 9 03:38:51 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 8 Jul 2013 20:38:51 -0700 Subject: How to abort subrequest properly? In-Reply-To: <51D55171.6000709@gmail.com> References: <20130703115438.GR20717@mdounin.ru> <51D4BAC6.1090608@gmail.com> <51D4C3E6.1080706@gmail.com> <51D55171.6000709@gmail.com> Message-ID: Hello! On Thu, Jul 4, 2013 at 3:41 AM, Marcus Clyne wrote: > The upstream request obviously > needs to be closed properly, but if the upstream cleanup function is going > to be called anyway by the main request, then I'm not sure it really makes > any difference, and you'd be running through the cleanup function twice. > The cleanup function usually protects against being called multiple times by means of something like the following code snippet: if (u->cleanup) { *u->cleanup = NULL; u->cleanup = NULL; } So it won't be an issue here at all. Regards, -agentzh From jzefip at gmail.com Wed Jul 10 00:12:58 2013 From: jzefip at gmail.com (Julien Zefi) Date: Tue, 9 Jul 2013 18:12:58 -0600 Subject: handle NGX_AGAIN properly Message-ID: hi, i understand that NGX_AGAIN is returned when a chain could not be send because more data cannot be buffered on that socket. I need to understand the following: in my case, when i receive a request, i start a timer every 10ms and send out some data, then i create a new timer every10ms until i decide to finish sending out data (video frames). But if in some triggered callback by the timer the ngx_http_output_filter(..) returns NGX_AGAIN *i assume* NginX will send that chain as soon as the socket becomes available again. But after that happens, how can i restore my timer cycle ? thnks. J.Z. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From agentzh at gmail.com Wed Jul 10 01:02:35 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 9 Jul 2013 18:02:35 -0700 Subject: handle NGX_AGAIN properly In-Reply-To: References: Message-ID: Hello! On Tue, Jul 9, 2013 at 5:12 PM, Julien Zefi wrote: > But if in some triggered callback by the timer the > ngx_http_output_filter(..) returns NGX_AGAIN *i assume* NginX will send that > chain as soon as the socket becomes available again. This assumption is not correct. Nginx will only flush out the pending data for you when r->write_event_handler is set to ngx_http_writer. This only (automatically) happens in ngx_http_finalize_request (by calling the ngx_http_set_write_handler function to do the assignment to r->write_event_handler). > But after that happens, > how can i restore my timer cycle ? > My suggestion is to register your own r->write_event_handler handler to propagate the pending outputs by calling ngx_http_output_filter with a NULL chain link pointer yourself. And in that handler, you can also restore your timer cycle and etc when all the pending outputs have been flushed out (into the system socket send buffers). I've been doing something like this in our ngx_lua module. You can check out the ngx.flush() API function's implementation in particular: http://wiki.nginx.org/HttpLuaModule#ngx.flush Best regards, -agentzh From agentzh at gmail.com Wed Jul 10 01:29:30 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 9 Jul 2013 18:29:30 -0700 Subject: [PATCH] Fixing buffer over-read when accepting unix domain sockets Message-ID: Hello! I've found a heap buffer over-read issue in the Nginx core via clang's AddressSanitizer tool when Nginx is accepting a unix domain socket in ngx_event_accept. 
At least on Linux, the accept and accept4 syscalls always return a socket length of 2 for unix domain sockets, which makes later accesses to saun->sun_path in the function ngx_sock_ntop invalid (because sizeof(sa->sa_family) == sizeof(short) == 2).

The patch attached fixes this issue.

Thanks!
-agentzh

--- nginx-1.4.1/src/event/ngx_event_accept.c 2013-05-06 03:26:50.000000000 -0700
+++ nginx-1.4.1-patched/src/event/ngx_event_accept.c 2013-07-09 17:41:42.688468839 -0700
@@ -268,7 +268,7 @@ ngx_event_accept(ngx_event_t *ev)
         wev->own_lock = &c->lock;
 #endif

-        if (ls->addr_ntop) {
+        if (ls->addr_ntop && socklen > sizeof(c->sockaddr->sa_family)) {
             c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len);
             if (c->addr_text.data == NULL) {
                 ngx_close_accepted_connection(c);

-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx-1.4.1-unix_socket_accept_over_read.patch
Type: application/octet-stream
Size: 548 bytes
Desc: not available
URL:

From mdounin at mdounin.ru Wed Jul 10 13:18:40 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 10 Jul 2013 17:18:40 +0400
Subject: [PATCH] Fixing buffer over-read when accepting unix domain sockets
In-Reply-To: References: Message-ID: <20130710131839.GD66479@mdounin.ru>

Hello!

On Tue, Jul 09, 2013 at 06:29:30PM -0700, Yichun Zhang (agentzh) wrote:

> Hello!
>
> I've found a heap buffer over-read issue in the Nginx core via clang's
> AddressSanitizer tool when Nginx is accepting a unix domain socket in
> ngx_event_accept.
>
> At least on Linux, the accept and accept4 syscalls always return a socket
> length of 2 for unix domain sockets, which makes later accesses to
> saun->sun_path in the function ngx_sock_ntop invalid (because
> sizeof(sa->sa_family) == sizeof(short) == 2).

Yep, there seems to be a problem with the Linux accept() syscall - it returns an invalid sockaddr.

> The patch attached fixes this issue.
>
> Thanks!
> -agentzh
>
> --- nginx-1.4.1/src/event/ngx_event_accept.c 2013-05-06 03:26:50.000000000 -0700
> +++ nginx-1.4.1-patched/src/event/ngx_event_accept.c 2013-07-09 17:41:42.688468839 -0700
> @@ -268,7 +268,7 @@ ngx_event_accept(ngx_event_t *ev)
>          wev->own_lock = &c->lock;
>  #endif
>
> -        if (ls->addr_ntop) {
> +        if (ls->addr_ntop && socklen > sizeof(c->sockaddr->sa_family)) {
>              c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len);
>              if (c->addr_text.data == NULL) {
>                  ngx_close_accepted_connection(c);

The patch looks wrong - it doesn't initialize c->addr_text at all, while it's requested by a caller.

--
Maxim Dounin
http://nginx.org/en/donation.html

From hnakamur at gmail.com Wed Jul 10 13:47:35 2013
From: hnakamur at gmail.com (Hiroaki Nakamura)
Date: Wed, 10 Jul 2013 22:47:35 +0900
Subject: Request methods with hyphens
Message-ID:

Hi all,

I found that nginx rejects request methods with hyphens, like VERSION-CONTROL, with the status code 400. I got the following debug log:

2013/07/10 13:55:29 [info] 79048#0: *4 client sent invalid method while reading client request line, client: 127.0.0.1, server: localhost, request: "VERSION-CONTROL / HTTP/1.1"
2013/07/10 13:55:29 [debug] 79048#0: *4 http finalize request: 400, "?" a:1, c:1

I looked at the source code and found that nginx will accept only 'A'-'Z' and '_' in request methods.
http://trac.nginx.org/nginx/browser/nginx/src/http/ngx_http_parse.c?rev=626f288fa5ede7ee3cbeffe950cb9dd611e10c52#L270

RFC2616 says the method is case-sensitive and methods can be extension tokens:

http://tools.ietf.org/html/rfc2616#section-5.1.1

5.1.1 Method

The Method token indicates the method to be performed on the resource identified by the Request-URI. The method is case-sensitive.
    Method         = "OPTIONS"                ; Section 9.2
                   | "GET"                    ; Section 9.3
                   | "HEAD"                   ; Section 9.4
                   | "POST"                   ; Section 9.5
                   | "PUT"                    ; Section 9.6
                   | "DELETE"                 ; Section 9.7
                   | "TRACE"                  ; Section 9.8
                   | "CONNECT"                ; Section 9.9
                   | extension-method
    extension-method = token

http://tools.ietf.org/html/rfc2616#section-2.2

    token          = 1*<any CHAR except CTLs or separators>
    separators     = "(" | ")" | "<" | ">" | "@"
                   | "," | ";" | ":" | "\" | <">
                   | "/" | "[" | "]" | "?" | "="
                   | "{" | "}" | SP | HT

Also, when a server rejects a method, the status code should be 405 or 501.

http://tools.ietf.org/html/rfc2616#section-5.1.1

An origin server SHOULD return the status code 405 (Method Not Allowed) if the method is known by the origin server but not allowed for the requested resource, and 501 (Not Implemented) if the method is unrecognized or not implemented by the origin server.

I wonder how to improve nginx on accepting or rejecting request methods. Comments are welcome.

Hiroaki

From ykirpichev at gmail.com Wed Jul 10 13:57:57 2013
From: ykirpichev at gmail.com (Yury Kirpichev)
Date: Wed, 10 Jul 2013 17:57:57 +0400
Subject: SPDY: what is the purpose of blocked frame
In-Reply-To: References: <201306252031.45298.vbart@nginx.com> <201307051859.17606.vbart@nginx.com>
Message-ID:

Hi,

We have found one issue with the proposed solution. The issue is that the SETTINGS frame will now be sent after the SYN_REPLY frame. Though it does not violate the SPDYv2 protocol, it can be easily fixed by changing:

@@ -1662,7 +1662,7 @@ ngx_http_spdy_send_settings(ngx_http_spdy_connection_t *sc)

     buf->last = p;

-    ngx_http_spdy_queue_frame(sc, frame);
+    ngx_http_spdy_queue_blocked_frame(sc, frame);

     return NGX_OK;
 }

BR/ Yury

2013/7/6 Yury Kirpichev
> Yes, we did such a modification in our test environment and it is working
> well so far. Moreover, it showed good results in the case where intermixed
> requests with low and high priority are handled.
>
> BR/ Yury
>
> 2013/7/5 Valentin V.
Bartenev > > On Wednesday 26 June 2013 12:34:17 Yury Kirpichev wrote: >> > Hello, >> > >> > Thanks for analysis and explanation. >> > Then how about the following workaround - >> > - queue blocked frames at the begining of queue in FIFO order. >> > (just remove from ngx_http_spdy_queue_blocked_frame the code: >> > if (frame->priority >= (*out)->priority) { >> > break; >> > } >> > ) >> > >> > - queue non-blocked frames after blocked in priority order: >> > static ngx_inline void >> > ngx_http_spdy_queue_frame(ngx_http_spdy_connection_t *sc, >> > ngx_http_spdy_out_frame_t *frame) >> > { >> > ngx_http_spdy_out_frame_t **out; >> > >> > for (out = &sc->last_out; *out *&& !(*out)->blocked*; out = >> > &(*out)->next) >> > { >> > if (frame->priority >= (*out)->priority) { >> > break; >> > } >> > } >> > >> > frame->next = *out; >> > *out = frame; >> > } >> > >> > Do you foresee any obvious drawback of such approach? >> > >> [..] >> >> At first glance I don't. Indeed it can be a better strategy, particularly >> since the SYN_STREAM frames are usually small. >> >> Have you tested it already? >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jul 10 14:08:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Jul 2013 18:08:48 +0400 Subject: Request methods with hyphens In-Reply-To: References: Message-ID: <20130710140848.GF66479@mdounin.ru> Hello! On Wed, Jul 10, 2013 at 10:47:35PM +0900, Hiroaki Nakamura wrote: > Hi all, > > I found nginx rejects request methods with hyphens like > VERSION-CONTROL with the status code 400. 
> I got the following debug log: > > 2013/07/10 13:55:29 [info] 79048#0: *4 client sent invalid method > while reading client request line, client: 127.0.0.1, server: > localhost, request: "VERSION-CONTROL / HTTP/1.1" > 2013/07/10 13:55:29 [debug] 79048#0: *4 http finalize request: 400, "?" a:1, c:1 Is it a method used by some real-world software? > I looked at the source code and found nginx will accept only 'A'-'Z' > and '_' as request methods. > http://trac.nginx.org/nginx/browser/nginx/src/http/ngx_http_parse.c?rev=626f288fa5ede7ee3cbeffe950cb9dd611e10c52#L270 > > RFC2616 says the method is case-sensitive and > methods can have > > http://tools.ietf.org/html/rfc2616#section-5.1.1 > > 5.1.1 Method > The Method token indicates the method to be performed on the > resource identified by the Request-URI. The method is case-sensitive. > > Method = "OPTIONS" ; Section 9.2 > | "GET" ; Section 9.3 > | "HEAD" ; Section 9.4 > | "POST" ; Section 9.5 > | "PUT" ; Section 9.6 > | "DELETE" ; Section 9.7 > | "TRACE" ; Section 9.8 > | "CONNECT" ; Section 9.9 > | extension-method > extension-method = token > > > http://tools.ietf.org/html/rfc2616#section-2.2 > > token = 1* > separators = "(" | ")" | "<" | ">" | "@" > | "," | ";" | ":" | "\" | <"> > | "/" | "[" | "]" | "?" | "=" > | "{" | "}" | SP | HT > > > Also, when a server rejects a method, the status code should be 405 or 501. > > http://tools.ietf.org/html/rfc2616#section-5.1.1 > > An origin server SHOULD return the status code 405 (Method Not Allowed) > if the method is known by the origin server but not allowed for the > requested resource, and 501 (Not Implemented) if the method is > unrecognized or not implemented by the origin server. > > I wonder how to improve nginx on accepting or rejecting request methods. > Comments are welcome. As of now nginx rejects anything which isn't uppercase latin letters (or underscore) as syntactically invalid (and hence 400). 
I don't think that the current behaviour should be changed unless there are good reasons to. If there are good reasons, we probably should do something similar to underscores_in_headers, see http://nginx.org/r/underscores_in_headers.

--
Maxim Dounin
http://nginx.org/en/donation.html

From c.kworr at gmail.com Wed Jul 10 15:35:59 2013
From: c.kworr at gmail.com (Volodymyr Kostyrko)
Date: Wed, 10 Jul 2013 18:35:59 +0300
Subject: autoindex xml support proposition
Message-ID: <51DD7F5F.6090804@gmail.com>

Hi all.

I recently played a bit with generating autoindex output as XML and rendering it via XSL. The rough patch is here:
http://limbo.xim.bz/ngx_http_autoindex_module.diff

You can visit http://limbo.xim.bz/ to see how it works and to get the sample XSL and CSS.

TODO:
* correct content size calculation (I think I would need help with this one...);
* draw a link to the parent directory.

I know I'm generally bad at C, but I tried to finalize the DTD in the first place.

Comments welcome.

--
Sphinx of black quartz, judge my vow.

From mdounin at mdounin.ru Wed Jul 10 16:31:23 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 10 Jul 2013 20:31:23 +0400
Subject: autoindex xml support proposition
In-Reply-To: <51DD7F5F.6090804@gmail.com>
References: <51DD7F5F.6090804@gmail.com>
Message-ID: <20130710163123.GI66479@mdounin.ru>

Hello!

On Wed, Jul 10, 2013 at 06:35:59PM +0300, Volodymyr Kostyrko wrote:

> Hi all.
>
> I recently played a bit with generating autoindex output as XML and
> rendering it via XSL. The rough patch is here:
> http://limbo.xim.bz/ngx_http_autoindex_module.diff
>
> You can visit http://limbo.xim.bz/ to see how it works and to get the
> sample XSL and CSS.
>
> TODO:
> * correct content size calculation (I think I would need help with
> this one...);
> * draw a link to the parent directory.
>
> I know I'm generally bad at C, but I tried to finalize the DTD in the
> first place.
>
> Comments welcome.
I fully support moving this toward XML, as it will allow various customizations, but I can't say I like the patch. One simple thing which caught my eye - please use a distinct directive to activate XML output, don't abuse the "autoindex" switch.

Some more comments, in no particular order:

- It would be cool if we were able to provide an autoindex XML output which is (X)HTML at the same time and works fine without xslt processing.

- It looks like the code assumes client-side xslt processing as the only option. We have the xslt module though, and it would be fine to do server-side processing.

--
Maxim Dounin
http://nginx.org/en/donation.html

From toshic.toshic at gmail.com Wed Jul 10 17:17:08 2013
From: toshic.toshic at gmail.com (ToSHiC)
Date: Wed, 10 Jul 2013 21:17:08 +0400
Subject: IPv6 support in resolver
In-Reply-To: <20130617153021.GH72282@mdounin.ru>
References: <20130617153021.GH72282@mdounin.ru>
Message-ID:

Hello,

I've split this big patch into several small patches, taking into account your comments. I'll send each part in a separate email. Here is the first one.

commit 597d09e7ae9247c5466b18aa2ef3f5892e61b708
Author: Anton Kortunov
Date: Wed Jul 10 13:14:52 2013 +0400

    Added new structure ngx_ipaddr_t

    This structure contains a family field and the union of the ipv4/ipv6
    structures in_addr_t and in6_addr.

diff --git a/src/core/ngx_inet.h b/src/core/ngx_inet.h
index 6a5a368..077ed34 100644
--- a/src/core/ngx_inet.h
+++ b/src/core/ngx_inet.h
@@ -68,6 +68,16 @@ typedef struct {


 typedef struct {
+    ngx_uint_t family;
+    union {
+        in_addr_t v4;
+#if (NGX_HAVE_INET6)
+        struct in6_addr v6;
+#endif
+    } u;
+} ngx_ipaddr_t;
+
+typedef struct {
     struct sockaddr *sockaddr;
     socklen_t socklen;
     ngx_str_t name;

On Mon, Jun 17, 2013 at 7:30 PM, Maxim Dounin wrote:
> Hello!
>
> On Fri, Jun 14, 2013 at 09:44:46PM +0400, ToSHiC wrote:
>
> > Hello,
> >
> > We needed this feature in our company, I found that it is in the milestones of
> > version 1.5 but doesn't exist yet.
So I've implemented it based in 1.3 > code > > and merged in current 1.5 code. When I wrote this code I mostly cared > about > > minimum intrusion into other parts of nginx. > > > > IPv6 fallback logic is not a straightforward implementation of suggested > by > > RFC. RFC states that IPv6 resolving have priority over IPv4, and it's not > > very good for Internet we have currently. With this patch you can specify > > priority, and in upstream and mail modules I've set IPv4 as preferred > > address family. > > > > Patch is pretty big and I hope it'll not break mailing list or mail > clients. > > You may want to try to split the patch into smaller patches to > simplify review. See also some hints here: > > http://nginx.org/en/docs/contributing_changes.html > > Some quick comments below. > > [...] > > > - addr = ntohl(ctx->addr); > > +failed: > > + > > + //addr = ntohl(ctx->addr); > > + inet_ntop(ctx->addr.family, &ctx->addr.u, text, > > NGX_SOCKADDR_STRLEN); > > > > ngx_log_error(NGX_LOG_ALERT, r->log, 0, > > - "could not cancel %ud.%ud.%ud.%ud resolving", > > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, > > - (addr >> 8) & 0xff, addr & 0xff); > > + "could not cancel %s resolving", text); > > 1. Don't use inet_ntop(), there is ngx_sock_ntop() instead. > > 2. Don't use C++ style ("//") comments. > > 3. If some data is only needed for debug logging, keep relevant > calculations under #if (NGX_DEBUG). > > [...] > > > @@ -334,6 +362,7 @@ > > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, > > peers->peer[i].current_weight = 0; > > peers->peer[i].max_fails = 1; > > peers->peer[i].fail_timeout = 10; > > + > > } > > } > > > > Please avoid unrelated changes. > > [...] > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From toshic.toshic at gmail.com Wed Jul 10 17:17:43 2013 From: toshic.toshic at gmail.com (ToSHiC) Date: Wed, 10 Jul 2013 21:17:43 +0400 Subject: IPv6 support in resolver In-Reply-To: References: <20130617153021.GH72282@mdounin.ru> Message-ID: commit 482bd2a0b6240a2b26409b9c7924ad01c814f293 Author: Anton Kortunov Date: Wed Jul 10 13:21:27 2013 +0400 Added NGX_RESOLVE_* constants Module developers can decide how to resolve hosts relating to IPv6: NGX_RESOLVE_AAAA - try to resolve only to IPv6 address NGX_RESOLVE_AAAA_A - IPv6 is preferred (recommended by standards) NGX_RESOLVE_A_AAAA - IPv4 is preferred (better strategy nowadays) diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h index ae34ca5..6fd81fe 100644 --- a/src/core/ngx_resolver.h +++ b/src/core/ngx_resolver.h @@ -20,6 +20,15 @@ #define NGX_RESOLVE_TXT 16 #define NGX_RESOLVE_DNAME 39 +#if (NGX_HAVE_INET6) + +#define NGX_RESOLVE_AAAA 28 +#define NGX_RESOLVE_A_AAAA 1000 +#define NGX_RESOLVE_AAAA_A 1001 +#define NGX_RESOLVE_RETRY 1002 + +#endif + #define NGX_RESOLVE_FORMERR 1 #define NGX_RESOLVE_SERVFAIL 2 #define NGX_RESOLVE_NXDOMAIN 3 On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: > Hello, > > I've split this big patch into several small patches, taking into account > your comments. I'll send each part in separate email. Here is the first one. > > commit 597d09e7ae9247c5466b18aa2ef3f5892e61b708 > Author: Anton Kortunov > Date: Wed Jul 10 13:14:52 2013 +0400 > > Added new structure ngx_ipaddr_t > > This structure contains family field > and the union of ipv4/ipv6 structures in_addr_t and in6_addr. 
> > diff --git a/src/core/ngx_inet.h b/src/core/ngx_inet.h > index 6a5a368..077ed34 100644 > --- a/src/core/ngx_inet.h > +++ b/src/core/ngx_inet.h > @@ -68,6 +68,16 @@ typedef struct { > > > typedef struct { > + ngx_uint_t family; > + union { > + in_addr_t v4; > +#if (NGX_HAVE_INET6) > + struct in6_addr v6; > +#endif > + } u; > +} ngx_ipaddr_t; > + > +typedef struct { > struct sockaddr *sockaddr; > socklen_t socklen; > ngx_str_t name; > > > > On Mon, Jun 17, 2013 at 7:30 PM, Maxim Dounin wrote: > >> Hello! >> >> On Fri, Jun 14, 2013 at 09:44:46PM +0400, ToSHiC wrote: >> >> > Hello, >> > >> > We needed this feature in our company, I found that it is in milestones >> of >> > version 1.5 but doesn't exist yet. So I've implemented it based in 1.3 >> code >> > and merged in current 1.5 code. When I wrote this code I mostly cared >> about >> > minimum intrusion into other parts of nginx. >> > >> > IPv6 fallback logic is not a straightforward implementation of >> suggested by >> > RFC. RFC states that IPv6 resolving have priority over IPv4, and it's >> not >> > very good for Internet we have currently. With this patch you can >> specify >> > priority, and in upstream and mail modules I've set IPv4 as preferred >> > address family. >> > >> > Patch is pretty big and I hope it'll not break mailing list or mail >> clients. >> >> You may want to try to split the patch into smaller patches to >> simplify review. See also some hints here: >> >> http://nginx.org/en/docs/contributing_changes.html >> >> Some quick comments below. >> >> [...] >> >> > - addr = ntohl(ctx->addr); >> > +failed: >> > + >> > + //addr = ntohl(ctx->addr); >> > + inet_ntop(ctx->addr.family, &ctx->addr.u, text, >> > NGX_SOCKADDR_STRLEN); >> > >> > ngx_log_error(NGX_LOG_ALERT, r->log, 0, >> > - "could not cancel %ud.%ud.%ud.%ud resolving", >> > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, >> > - (addr >> 8) & 0xff, addr & 0xff); >> > + "could not cancel %s resolving", text); >> >> 1. 
Don't use inet_ntop(), there is ngx_sock_ntop() instead. >> >> 2. Don't use C++ style ("//") comments. >> >> 3. If some data is only needed for debug logging, keep relevant >> calculations under #if (NGX_DEBUG). >> >> [...] >> >> > @@ -334,6 +362,7 @@ >> > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, >> > peers->peer[i].current_weight = 0; >> > peers->peer[i].max_fails = 1; >> > peers->peer[i].fail_timeout = 10; >> > + >> > } >> > } >> > >> >> Please avoid unrelated changes. >> >> [...] >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toshic.toshic at gmail.com Wed Jul 10 17:24:03 2013 From: toshic.toshic at gmail.com (ToSHiC) Date: Wed, 10 Jul 2013 21:24:03 +0400 Subject: IPv6 support in resolver In-Reply-To: References: <20130617153021.GH72282@mdounin.ru> Message-ID: commit 8670b164784032b2911b3c34ac31ef52ddba5b60 Author: Anton Kortunov Date: Wed Jul 10 19:53:06 2013 +0400 IPv6 support in resolver for forward requests To resolve name into IPv6 address use NGX_RESOLVE_AAAA, NGX_RESOLVE_A_AAAA or NGX_RESOLVE_AAAA_A record type instead of NGX_RESOLVE_A diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c index d59d0c4..567368b 100644 --- a/src/core/ngx_resolver.c +++ b/src/core/ngx_resolver.c @@ -76,7 +76,7 @@ static void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, ngx_str_t *name, uint32_t hash); static ngx_resolver_node_t *ngx_resolver_lookup_addr(ngx_resolver_t *r, - in_addr_t addr); + ngx_ipaddr_t addr, uint32_t hash); static void ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t *temp, ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); static ngx_int_t 
ngx_resolver_copy(ngx_resolver_t *r, ngx_str_t *name, @@ -88,7 +88,7 @@ static void *ngx_resolver_calloc(ngx_resolver_t *r, size_t size); static void ngx_resolver_free(ngx_resolver_t *r, void *p); static void ngx_resolver_free_locked(ngx_resolver_t *r, void *p); static void *ngx_resolver_dup(ngx_resolver_t *r, void *src, size_t size); -static in_addr_t *ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, +static ngx_ipaddr_t *ngx_resolver_rotate(ngx_resolver_t *r, ngx_ipaddr_t *src, ngx_uint_t n); static u_char *ngx_resolver_log_error(ngx_log_t *log, u_char *buf, size_t len); @@ -270,13 +270,27 @@ ngx_resolver_cleanup_tree(ngx_resolver_t *r, ngx_rbtree_t *tree) ngx_resolver_ctx_t * ngx_resolve_start(ngx_resolver_t *r, ngx_resolver_ctx_t *temp) { - in_addr_t addr; + ngx_ipaddr_t addr; ngx_resolver_ctx_t *ctx; if (temp) { - addr = ngx_inet_addr(temp->name.data, temp->name.len); + addr.family = 0; - if (addr != INADDR_NONE) { + + addr.u.v4 = ngx_inet_addr(temp->name.data, temp->name.len); + + if (addr.u.v4 != INADDR_NONE) { + + addr.family = AF_INET; + +#if (NGX_HAVE_INET6) + } else if (ngx_inet6_addr(temp->name.data, temp->name.len, addr.u.v6.s6_addr) == NGX_OK) { + + addr.family = AF_INET6; +#endif + } + + if (addr.family) { temp->resolver = r; temp->state = NGX_OK; temp->naddrs = 1; @@ -417,7 +431,7 @@ static ngx_int_t ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) { uint32_t hash; - in_addr_t addr, *addrs; + ngx_ipaddr_t addr, *addrs; ngx_int_t rc; ngx_uint_t naddrs; ngx_resolver_ctx_t *next; @@ -429,7 +443,11 @@ ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) if (rn) { - if (rn->valid >= ngx_time()) { + if (rn->valid >= ngx_time() +#if (NGX_HAVE_INET6) + && rn->qtype != NGX_RESOLVE_RETRY +#endif + ) { ngx_log_debug0(NGX_LOG_DEBUG_CORE, r->log, 0, "resolve cached"); @@ -446,7 +464,6 @@ ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) /* NGX_RESOLVE_A answer */ if (naddrs != 1) { - addr = 0; addrs = 
ngx_resolver_rotate(r, rn->u.addrs, naddrs); if (addrs == NULL) { return NGX_ERROR; @@ -506,6 +523,8 @@ ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) } while (ctx); return NGX_OK; + } else { + rn->qtype = ctx->type; } if (rn->waiting) { @@ -552,6 +571,7 @@ ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) rn->node.key = hash; rn->nlen = (u_short) ctx->name.len; rn->query = NULL; + rn->qtype = ctx->type; ngx_rbtree_insert(&r->name_rbtree, &rn->node); } @@ -1130,6 +1150,9 @@ found: switch (qtype) { case NGX_RESOLVE_A: +#if (NGX_HAVE_INET6) + case NGX_RESOLVE_AAAA: +#endif ngx_resolver_process_a(r, buf, n, ident, code, nan, i + sizeof(ngx_resolver_qs_t)); @@ -1178,7 +1201,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t last, size_t len; int32_t ttl; uint32_t hash; - in_addr_t addr, *addrs; + ngx_ipaddr_t addr, *addrs; ngx_str_t name; ngx_uint_t qtype, qident, naddrs, a, i, n, start; ngx_resolver_an_t *an; @@ -1212,12 +1235,57 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t last, goto failed; } - ngx_resolver_free(r, name.data); - if (code == 0 && nan == 0) { + +#if (NGX_HAVE_INET6) + /* + * If it was required dual type v4|v6 resolv create one more request + */ + if (rn->qtype == NGX_RESOLVE_A_AAAA + || rn->qtype == NGX_RESOLVE_AAAA_A) { + + ngx_queue_remove(&rn->queue); + + rn->valid = ngx_time() + (r->valid ? 
r->valid : ttl); + rn->expire = ngx_time() + r->expire; + + ngx_queue_insert_head(&r->name_expire_queue, &rn->queue); + + ctx = rn->waiting; + rn->waiting = NULL; + + if (ctx) { + ctx->name = name; + + switch (rn->qtype) { + + case NGX_RESOLVE_A_AAAA: + ctx->type = NGX_RESOLVE_AAAA; + break; + + case NGX_RESOLVE_AAAA_A: + ctx->type = NGX_RESOLVE_A; + break; + } + + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, + "restarting request for name %V, with type %ud", + &name, ctx->type); + + rn->qtype = NGX_RESOLVE_RETRY; + + (void) ngx_resolve_name_locked(r, ctx); + } + + return; + } +#endif + code = 3; /* NXDOMAIN */ } + ngx_resolver_free(r, name.data); + if (code) { next = rn->waiting; rn->waiting = NULL; @@ -1243,7 +1311,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t last, i = ans; naddrs = 0; - addr = 0; + addr.family = 0; addrs = NULL; cname = NULL; qtype = 0; @@ -1302,13 +1370,30 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t last, goto short_response; } - addr = htonl((buf[i] << 24) + (buf[i + 1] << 16) + addr.family = AF_INET; + addr.u.v4 = htonl((buf[i] << 24) + (buf[i + 1] << 16) + (buf[i + 2] << 8) + (buf[i + 3])); naddrs++; i += len; +#if (NGX_HAVE_INET6) + } else if (qtype == NGX_RESOLVE_AAAA) { + + i += sizeof(ngx_resolver_an_t); + + if (i + len > last) { + goto short_response; + } + + addr.family = AF_INET6; + ngx_memcpy(&addr.u.v6.s6_addr, &buf[i], 16); + + naddrs++; + + i += len; +#endif } else if (qtype == NGX_RESOLVE_CNAME) { cname = &buf[i] + sizeof(ngx_resolver_an_t); i += sizeof(ngx_resolver_an_t) + len; @@ -1333,7 +1418,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t last, } else { - addrs = ngx_resolver_alloc(r, naddrs * sizeof(in_addr_t)); + addrs = ngx_resolver_alloc(r, naddrs * sizeof(ngx_ipaddr_t)); if (addrs == NULL) { return; } @@ -1369,12 +1454,23 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t last, if (qtype == NGX_RESOLVE_A) { - addrs[n++] = htonl((buf[i] << 24) + 
(buf[i + 1] << 16) + addrs[n].family = AF_INET; + addrs[n++].u.v4 = htonl((buf[i] << 24) + (buf[i + 1] << 16) + (buf[i + 2] << 8) + (buf[i + 3])); if (n == naddrs) { break; } +#if (NGX_HAVE_INET6) + } else if (qtype == NGX_RESOLVE_AAAA) { + + addrs[n].family = AF_INET6; + ngx_memcpy(&addrs[n++].u.v6.s6_addr, &buf[i], 16); + + if (n == naddrs) { + break; + } +#endif } i += len; @@ -1383,7 +1479,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t last, rn->u.addrs = addrs; addrs = ngx_resolver_dup(r, rn->u.addrs, - naddrs * sizeof(in_addr_t)); + naddrs * sizeof(ngx_ipaddr_t)); if (addrs == NULL) { return; } @@ -1838,7 +1934,20 @@ ngx_resolver_create_name_query(ngx_resolver_node_t *rn, ngx_resolver_ctx_t *ctx) qs = (ngx_resolver_qs_t *) p; /* query type */ - qs->type_hi = 0; qs->type_lo = (u_char) ctx->type; + qs->type_hi = 0; qs->type_lo = (u_char) rn->qtype; + +#if (NGX_HAVE_INET6) + switch (rn->qtype) { + + case NGX_RESOLVE_A_AAAA: + qs->type_lo = NGX_RESOLVE_A; + break; + + case NGX_RESOLVE_AAAA_A: + qs->type_lo = NGX_RESOLVE_AAAA; + break; + } +#endif /* IP query class */ qs->class_hi = 0; qs->class_lo = 1; @@ -2136,13 +2245,13 @@ ngx_resolver_dup(ngx_resolver_t *r, void *src, size_t size) } -static in_addr_t * -ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, ngx_uint_t n) +static ngx_ipaddr_t * +ngx_resolver_rotate(ngx_resolver_t *r, ngx_ipaddr_t *src, ngx_uint_t n) { void *dst, *p; ngx_uint_t j; - dst = ngx_resolver_alloc(r, n * sizeof(in_addr_t)); + dst = ngx_resolver_alloc(r, n * sizeof(ngx_ipaddr_t)); if (dst == NULL) { return dst; @@ -2151,12 +2260,12 @@ ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, ngx_uint_t n) j = ngx_random() % n; if (j == 0) { - ngx_memcpy(dst, src, n * sizeof(in_addr_t)); + ngx_memcpy(dst, src, n * sizeof(ngx_ipaddr_t)); return dst; } - p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(in_addr_t)); - ngx_memcpy(p, src, j * sizeof(in_addr_t)); + p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(ngx_ipaddr_t)); + 
ngx_memcpy(p, src, j * sizeof(ngx_ipaddr_t)); return dst; } diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h index 6fd81fe..d2a4606 100644 --- a/src/core/ngx_resolver.h +++ b/src/core/ngx_resolver.h @@ -67,10 +67,11 @@ typedef struct { u_short qlen; u_char *query; + ngx_int_t qtype; union { - in_addr_t addr; - in_addr_t *addrs; + ngx_ipaddr_t addr; + ngx_ipaddr_t *addrs; u_char *cname; } u; @@ -130,8 +131,8 @@ struct ngx_resolver_ctx_s { ngx_str_t name; ngx_uint_t naddrs; - in_addr_t *addrs; - in_addr_t addr; + ngx_ipaddr_t *addrs; + ngx_ipaddr_t addr; ngx_resolver_handler_pt handler; void *data; On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: > commit 482bd2a0b6240a2b26409b9c7924ad01c814f293 > Author: Anton Kortunov > Date: Wed Jul 10 13:21:27 2013 +0400 > > Added NGX_RESOLVE_* constants > > Module developers can decide how to resolve hosts relating to IPv6: > > NGX_RESOLVE_AAAA - try to resolve only to IPv6 address > NGX_RESOLVE_AAAA_A - IPv6 is preferred (recommended by standards) > NGX_RESOLVE_A_AAAA - IPv4 is preferred (better strategy nowadays) > > diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h > index ae34ca5..6fd81fe 100644 > --- a/src/core/ngx_resolver.h > +++ b/src/core/ngx_resolver.h > @@ -20,6 +20,15 @@ > #define NGX_RESOLVE_TXT 16 > #define NGX_RESOLVE_DNAME 39 > > +#if (NGX_HAVE_INET6) > + > +#define NGX_RESOLVE_AAAA 28 > +#define NGX_RESOLVE_A_AAAA 1000 > +#define NGX_RESOLVE_AAAA_A 1001 > +#define NGX_RESOLVE_RETRY 1002 > + > +#endif > + > #define NGX_RESOLVE_FORMERR 1 > #define NGX_RESOLVE_SERVFAIL 2 > #define NGX_RESOLVE_NXDOMAIN 3 > > > > On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: > >> Hello, >> >> I've split this big patch into several small patches, taking into account >> your comments. I'll send each part in separate email. Here is the first one. 
>> >> commit 597d09e7ae9247c5466b18aa2ef3f5892e61b708 >> Author: Anton Kortunov >> Date: Wed Jul 10 13:14:52 2013 +0400 >> >> Added new structure ngx_ipaddr_t >> >> This structure contains family field >> and the union of ipv4/ipv6 structures in_addr_t and in6_addr. >> >> diff --git a/src/core/ngx_inet.h b/src/core/ngx_inet.h >> index 6a5a368..077ed34 100644 >> --- a/src/core/ngx_inet.h >> +++ b/src/core/ngx_inet.h >> @@ -68,6 +68,16 @@ typedef struct { >> >> >> typedef struct { >> + ngx_uint_t family; >> + union { >> + in_addr_t v4; >> +#if (NGX_HAVE_INET6) >> + struct in6_addr v6; >> +#endif >> + } u; >> +} ngx_ipaddr_t; >> + >> +typedef struct { >> struct sockaddr *sockaddr; >> socklen_t socklen; >> ngx_str_t name; >> >> >> >> On Mon, Jun 17, 2013 at 7:30 PM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Fri, Jun 14, 2013 at 09:44:46PM +0400, ToSHiC wrote: >>> >>> > Hello, >>> > >>> > We needed this feature in our company, I found that it is in >>> milestones of >>> > version 1.5 but doesn't exist yet. So I've implemented it based in 1.3 >>> code >>> > and merged in current 1.5 code. When I wrote this code I mostly cared >>> about >>> > minimum intrusion into other parts of nginx. >>> > >>> > IPv6 fallback logic is not a straightforward implementation of >>> suggested by >>> > RFC. RFC states that IPv6 resolving have priority over IPv4, and it's >>> not >>> > very good for Internet we have currently. With this patch you can >>> specify >>> > priority, and in upstream and mail modules I've set IPv4 as preferred >>> > address family. >>> > >>> > Patch is pretty big and I hope it'll not break mailing list or mail >>> clients. >>> >>> You may want to try to split the patch into smaller patches to >>> simplify review. See also some hints here: >>> >>> http://nginx.org/en/docs/contributing_changes.html >>> >>> Some quick comments below. >>> >>> [...] 
>>> >>> > - addr = ntohl(ctx->addr); >>> > +failed: >>> > + >>> > + //addr = ntohl(ctx->addr); >>> > + inet_ntop(ctx->addr.family, &ctx->addr.u, text, >>> > NGX_SOCKADDR_STRLEN); >>> > >>> > ngx_log_error(NGX_LOG_ALERT, r->log, 0, >>> > - "could not cancel %ud.%ud.%ud.%ud resolving", >>> > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, >>> > - (addr >> 8) & 0xff, addr & 0xff); >>> > + "could not cancel %s resolving", text); >>> >>> 1. Don't use inet_ntop(), there is ngx_sock_ntop() instead. >>> >>> 2. Don't use C++ style ("//") comments. >>> >>> 3. If some data is only needed for debug logging, keep relevant >>> calculations under #if (NGX_DEBUG). >>> >>> [...] >>> >>> > @@ -334,6 +362,7 @@ >>> > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, >>> > peers->peer[i].current_weight = 0; >>> > peers->peer[i].max_fails = 1; >>> > peers->peer[i].fail_timeout = 10; >>> > + >>> > } >>> > } >>> > >>> >>> Please avoid unrelated changes. >>> >>> [...] >>> >>> -- >>> Maxim Dounin >>> http://nginx.org/en/donation.html >>> >>> _______________________________________________ >>> nginx-devel mailing list >>> nginx-devel at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From toshic.toshic at gmail.com Wed Jul 10 17:29:04 2013 From: toshic.toshic at gmail.com (ToSHiC) Date: Wed, 10 Jul 2013 21:29:04 +0400 Subject: IPv6 support in resolver In-Reply-To: References: <20130617153021.GH72282@mdounin.ru> Message-ID: commit 524dd02549575cb9ad5e95444093f6b494dc59bc Author: Anton Kortunov Date: Wed Jul 10 20:43:59 2013 +0400 IPv6 reverse resolve support diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c index 567368b..06d46c1 100644 --- a/src/core/ngx_resolver.c +++ b/src/core/ngx_resolver.c @@ -71,7 +71,7 @@ static void ngx_resolver_process_response(ngx_resolver_t *r, u_char *buf, size_t n); static void ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t n, ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan, ngx_uint_t ans); -static void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, +void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan); static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, ngx_str_t *name, uint32_t hash); @@ -126,7 +126,7 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_str_t *names, ngx_uint_t n) ngx_resolver_rbtree_insert_value); ngx_rbtree_init(&r->addr_rbtree, &r->addr_sentinel, - ngx_rbtree_insert_value); + ngx_resolver_rbtree_insert_value); ngx_queue_init(&r->name_resend_queue); ngx_queue_init(&r->addr_resend_queue); @@ -649,17 +649,40 @@ failed: ngx_int_t ngx_resolve_addr(ngx_resolver_ctx_t *ctx) { + uint32_t hash; u_char *name; ngx_resolver_t *r; ngx_resolver_node_t *rn; r = ctx->resolver; + rn = NULL; + + hash = ctx->addr.family; + + switch(ctx->addr.family) { + + case AF_INET: + ctx->addr.u.v4 = ntohl(ctx->addr.u.v4); + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v4, sizeof(in_addr_t)); +ngx_log_debug3(NGX_LOG_DEBUG_CORE, r->log, 0, + "resolve addr hash: %xd, addr:%xd, family: %d", hash, ctx->addr.u.v4, ctx->addr.family); + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + 
ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v6, sizeof(struct in6_addr)); + break; +#endif - ctx->addr = ntohl(ctx->addr); + default: + goto failed; + } /* lock addr mutex */ - rn = ngx_resolver_lookup_addr(r, ctx->addr); + rn = ngx_resolver_lookup_addr(r, ctx->addr, hash); + ngx_log_error(r->log_level, r->log, 0, + "resolve: in resolve_addr searching, hash = %xd, rn = %p", hash, rn); if (rn) { @@ -714,8 +737,10 @@ ngx_resolve_addr(ngx_resolver_ctx_t *ctx) goto failed; } - rn->node.key = ctx->addr; + rn->node.key = hash; rn->query = NULL; + rn->qtype = ctx->type; + rn->u.addr = ctx->addr; ngx_rbtree_insert(&r->addr_rbtree, &rn->node); } @@ -788,10 +813,11 @@ failed: void ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) { - in_addr_t addr; + uint32_t hash; ngx_resolver_t *r; ngx_resolver_ctx_t *w, **p; ngx_resolver_node_t *rn; + u_char text[NGX_SOCKADDR_STRLEN]; r = ctx->resolver; @@ -806,7 +832,25 @@ ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) if (ctx->state == NGX_AGAIN || ctx->state == NGX_RESOLVE_TIMEDOUT) { - rn = ngx_resolver_lookup_addr(r, ctx->addr); + hash = ctx->addr.family; + + switch(ctx->addr.family) { + + case AF_INET: + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v4, sizeof(in_addr_t)); + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v6, sizeof(struct in6_addr)); + break; +#endif + + default: + goto failed; + } + + rn = ngx_resolver_lookup_addr(r, ctx->addr, hash); if (rn) { p = &rn->waiting; @@ -824,12 +868,12 @@ ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) } } - addr = ntohl(ctx->addr); +failed: + + ngx_inet_ntop(ctx->addr.family, &ctx->addr.u, text, NGX_SOCKADDR_STRLEN); ngx_log_error(NGX_LOG_ALERT, r->log, 0, - "could not cancel %ud.%ud.%ud.%ud resolving", - (addr >> 24) & 0xff, (addr >> 16) & 0xff, - (addr >> 8) & 0xff, addr & 0xff); + "could not cancel %s resolving", text); } done: @@ -1582,13 +1626,14 @@ failed: } -static void +void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char 
*buf, size_t n, ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan) { - char *err; + char *err = NULL; + uint32_t hash = 0; size_t len; - in_addr_t addr; + ngx_ipaddr_t addr; int32_t ttl; ngx_int_t digit; ngx_str_t name; @@ -1596,12 +1641,16 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, ngx_resolver_an_t *an; ngx_resolver_ctx_t *ctx, *next; ngx_resolver_node_t *rn; + u_char text[NGX_SOCKADDR_STRLEN]; if (ngx_resolver_copy(r, NULL, buf, &buf[12], &buf[n]) != NGX_OK) { goto invalid_in_addr_arpa; } - addr = 0; + ngx_memzero(&addr, sizeof(ngx_ipaddr_t)); + + /* Try to parse request as in-addr.arpa */ + addr.family = AF_INET; i = 12; for (mask = 0; mask < 32; mask += 8) { @@ -1612,7 +1661,7 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, goto invalid_in_addr_arpa; } - addr += digit << mask; + addr.u.v4 += digit << mask; i += len; } @@ -1620,15 +1669,79 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, goto invalid_in_addr_arpa; } + i += sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t); + + goto found; + +invalid_in_addr_arpa: + +#if (NGX_HAVE_INET6) + /* Try to parse request as ip6.arpa */ + addr.family = AF_INET6; + i = 12; + + for (len = 15; len < 16; len--) { + if (buf[i++] != 1) + goto invalid_arpa; + + digit = ngx_hextoi(&buf[i++], 1); + if (digit == NGX_ERROR || digit > 16) { + goto invalid_arpa; + } + + addr.u.v6.s6_addr[len] = digit; + + if (buf[i++] != 1) + goto invalid_arpa; + + + digit = ngx_hextoi(&buf[i++], 1); + if (digit == NGX_ERROR || digit > 16) { + goto invalid_arpa; + } + + addr.u.v6.s6_addr[len] += digit << 4; + } + + if (ngx_strcmp(&buf[i], "\3ip6\4arpa") != 0) { + goto invalid_arpa; + } + + i += sizeof("\3ip6\4arpa") + sizeof(ngx_resolver_qs_t); + +#else /* NGX_HAVE_INET6 */ + goto invalid_arpa; +#endif + +found: + /* lock addr mutex */ - rn = ngx_resolver_lookup_addr(r, addr); + hash = addr.family; + + switch(addr.family) { + + case AF_INET: + ngx_crc32_update(&hash, (u_char 
*)&addr.u.v4, sizeof(in_addr_t)); + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + ngx_crc32_update(&hash, (u_char *)&addr.u.v6, sizeof(struct in6_addr)); + break; +#endif + + default: + goto invalid; + } + + rn = ngx_resolver_lookup_addr(r, addr, hash); + + ngx_inet_ntop(addr.family, &addr.u, text, NGX_SOCKADDR_STRLEN); if (rn == NULL || rn->query == NULL) { ngx_log_error(r->log_level, r->log, 0, - "unexpected response for %ud.%ud.%ud.%ud", - (addr >> 24) & 0xff, (addr >> 16) & 0xff, - (addr >> 8) & 0xff, addr & 0xff); + "unexpected response for %s", text); goto failed; } @@ -1636,12 +1749,15 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, if (ident != qident) { ngx_log_error(r->log_level, r->log, 0, - "wrong ident %ui response for %ud.%ud.%ud.%ud, expect %ui", - ident, (addr >> 24) & 0xff, (addr >> 16) & 0xff, - (addr >> 8) & 0xff, addr & 0xff, qident); + "wrong ident %ui response for %s, expect %ui", + ident, text, qident); goto failed; } + ngx_log_error(r->log_level, r->log, 0, + "code: %d, nan: %d", + code, nan); + if (code == 0 && nan == 0) { code = 3; /* NXDOMAIN */ } @@ -1669,8 +1785,6 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, return; } - i += sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t); - if (i + 2 + sizeof(ngx_resolver_an_t) > (ngx_uint_t) n) { goto short_response; } @@ -1750,10 +1864,10 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, return; -invalid_in_addr_arpa: +invalid_arpa: ngx_log_error(r->log_level, r->log, 0, - "invalid in-addr.arpa name in DNS response"); + "invalid in-addr.arpa or ip6.arpa name in DNS response"); return; short_response: @@ -1818,28 +1932,54 @@ ngx_resolver_lookup_name(ngx_resolver_t *r, ngx_str_t *name, uint32_t hash) static ngx_resolver_node_t * -ngx_resolver_lookup_addr(ngx_resolver_t *r, in_addr_t addr) +ngx_resolver_lookup_addr(ngx_resolver_t *r, ngx_ipaddr_t addr, uint32_t hash) { + ngx_int_t rc; ngx_rbtree_node_t *node, *sentinel; + 
ngx_resolver_node_t *rn; node = r->addr_rbtree.root; sentinel = r->addr_rbtree.sentinel; while (node != sentinel) { - if (addr < node->key) { + if (hash < node->key) { node = node->left; continue; } - if (addr > node->key) { + if (hash > node->key) { node = node->right; continue; } - /* addr == node->key */ + /* hash == node->key */ + + rn = (ngx_resolver_node_t *) node; + + rc = addr.family - rn->u.addr.family; + + if (rc == 0) { + + switch (addr.family) { + case AF_INET: + rc = ngx_memn2cmp((u_char *)&addr.u.v4, (u_char *)&rn->u.addr.u.v4, sizeof(in_addr_t), sizeof(in_addr_t)); + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + rc = ngx_memn2cmp((u_char *)&addr.u.v6, (u_char *)&rn->u.addr.u.v6, sizeof(struct in6_addr), sizeof(struct in6_addr)); + break; +#endif + } + + if (rc == 0) { + return rn; + } - return (ngx_resolver_node_t *) node; + } + + node = (rc < 0) ? node->left : node->right; } /* not found */ @@ -1854,6 +1994,7 @@ ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t *temp, { ngx_rbtree_node_t **p; ngx_resolver_node_t *rn, *rn_temp; + ngx_int_t rc; for ( ;; ) { @@ -1870,8 +2011,29 @@ ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t *temp, rn = (ngx_resolver_node_t *) node; rn_temp = (ngx_resolver_node_t *) temp; - p = (ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, rn_temp->nlen) - < 0) ? &temp->left : &temp->right; + if (rn->qtype == NGX_RESOLVE_PTR) { + rc = rn->u.addr.family - rn_temp->u.addr.family; + + if (rc == 0) { + + switch (rn->u.addr.family) { + case AF_INET: + rc = ngx_memn2cmp((u_char *)&rn->u.addr.u.v4, (u_char *)&rn_temp->u.addr.u.v4, sizeof(in_addr_t), sizeof(in_addr_t)); + break; + + #if (NGX_HAVE_INET6) + case AF_INET6: + rc = ngx_memn2cmp((u_char *)&rn->u.addr.u.v6, (u_char *)&rn_temp->u.addr.u.v6, sizeof(struct in6_addr), sizeof(struct in6_addr)); + break; + #endif + } + } + + } else { + rc = ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, rn_temp->nlen); + } + + p = (rc < 0) ? 
&temp->left : &temp->right; } if (*p == sentinel) { @@ -1989,8 +2151,6 @@ ngx_resolver_create_name_query(ngx_resolver_node_t *rn, ngx_resolver_ctx_t *ctx) } -/* AF_INET only */ - static ngx_int_t ngx_resolver_create_addr_query(ngx_resolver_node_t *rn, ngx_resolver_ctx_t *ctx) { @@ -2001,7 +2161,7 @@ ngx_resolver_create_addr_query(ngx_resolver_node_t *rn, ngx_resolver_ctx_t *ctx) ngx_resolver_query_t *query; len = sizeof(ngx_resolver_query_t) - + sizeof(".255.255.255.255.in-addr.arpa.") - 1 + + NGX_PTR_QUERY_LEN + sizeof(ngx_resolver_qs_t); p = ngx_resolver_alloc(ctx->resolver, len); @@ -2028,18 +2188,50 @@ ngx_resolver_create_addr_query(ngx_resolver_node_t *rn, ngx_resolver_ctx_t *ctx) p += sizeof(ngx_resolver_query_t); - for (n = 0; n < 32; n += 8) { - d = ngx_sprintf(&p[1], "%ud", (ctx->addr >> n) & 0xff); - *p = (u_char) (d - &p[1]); - p = d; + switch (ctx->addr.family) { + + case AF_INET: + for (n = 0; n < 32; n += 8) { + d = ngx_sprintf(&p[1], "%ud", (ctx->addr.u.v4 >> n) & 0xff); + *p = (u_char) (d - &p[1]); + p = d; + } + + /* query type "PTR", IP query class */ + ngx_memcpy(p, "\7in-addr\4arpa\0\0\14\0\1", 18); + + rn->qlen = (u_short) + (p + sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t) + - rn->query); + + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + for (n = 15; n >= 0; n--) { + p = ngx_sprintf(p, "\1%xd\1%xd", + (ctx->addr.u.v6.s6_addr[n]) & 0xf, + (ctx->addr.u.v6.s6_addr[n] >> 4) & 0xf); + + } + + /* query type "PTR", IP query class */ + ngx_memcpy(p, "\3ip6\4arpa\0\0\14\0\1", 18); + + rn->qlen = (u_short) + (p + sizeof("\3ip6\4arpa") + sizeof(ngx_resolver_qs_t) + - rn->query); + + break; +#endif + + default: + return NGX_ERROR; } - /* query type "PTR", IP query class */ - ngx_memcpy(p, "\7in-addr\4arpa\0\0\14\0\1", 18); +ngx_log_debug2(NGX_LOG_DEBUG_CORE, ctx->resolver->log, 0, + "resolve: query %s, ident %i", (rn->query+12), ident & 0xffff); - rn->qlen = (u_short) - (p + sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t) - - 
rn->query); return NGX_OK; } diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h index d2a4606..a45b244 100644 --- a/src/core/ngx_resolver.h +++ b/src/core/ngx_resolver.h @@ -41,6 +41,11 @@ #define NGX_RESOLVER_MAX_RECURSION 50 +#if (NGX_HAVE_INET6) +#define NGX_PTR_QUERY_LEN (sizeof(".f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.ip6.arpa.") - 1) +#else +#define NGX_PTR_QUERY_LEN (sizeof(".255.255.255.255.in-addr.arpa.") - 1) +#endif typedef struct { ngx_connection_t *connection; On Wed, Jul 10, 2013 at 9:24 PM, ToSHiC wrote: > commit 8670b164784032b2911b3c34ac31ef52ddba5b60 > Author: Anton Kortunov > Date: Wed Jul 10 19:53:06 2013 +0400 > > IPv6 support in resolver for forward requests > > To resolve name into IPv6 address use NGX_RESOLVE_AAAA, > NGX_RESOLVE_A_AAAA or NGX_RESOLVE_AAAA_A record type instead of > NGX_RESOLVE_A > > diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c > index d59d0c4..567368b 100644 > --- a/src/core/ngx_resolver.c > +++ b/src/core/ngx_resolver.c > @@ -76,7 +76,7 @@ static void ngx_resolver_process_ptr(ngx_resolver_t *r, > u_char *buf, size_t n, > static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, > ngx_str_t *name, uint32_t hash); > static ngx_resolver_node_t *ngx_resolver_lookup_addr(ngx_resolver_t *r, > - in_addr_t addr); > + ngx_ipaddr_t addr, uint32_t hash); > static void ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t *temp, > ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); > static ngx_int_t ngx_resolver_copy(ngx_resolver_t *r, ngx_str_t *name, > @@ -88,7 +88,7 @@ static void *ngx_resolver_calloc(ngx_resolver_t *r, > size_t size); > static void ngx_resolver_free(ngx_resolver_t *r, void *p); > static void ngx_resolver_free_locked(ngx_resolver_t *r, void *p); > static void *ngx_resolver_dup(ngx_resolver_t *r, void *src, size_t size); > -static in_addr_t *ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, > +static ngx_ipaddr_t 
*ngx_resolver_rotate(ngx_resolver_t *r, ngx_ipaddr_t > *src, > ngx_uint_t n); > static u_char *ngx_resolver_log_error(ngx_log_t *log, u_char *buf, size_t > len); > > @@ -270,13 +270,27 @@ ngx_resolver_cleanup_tree(ngx_resolver_t *r, > ngx_rbtree_t *tree) > ngx_resolver_ctx_t * > ngx_resolve_start(ngx_resolver_t *r, ngx_resolver_ctx_t *temp) > { > - in_addr_t addr; > + ngx_ipaddr_t addr; > ngx_resolver_ctx_t *ctx; > > if (temp) { > - addr = ngx_inet_addr(temp->name.data, temp->name.len); > + addr.family = 0; > > - if (addr != INADDR_NONE) { > + > + addr.u.v4 = ngx_inet_addr(temp->name.data, temp->name.len); > + > + if (addr.u.v4 != INADDR_NONE) { > + > + addr.family = AF_INET; > + > +#if (NGX_HAVE_INET6) > + } else if (ngx_inet6_addr(temp->name.data, temp->name.len, > addr.u.v6.s6_addr) == NGX_OK) { > + > + addr.family = AF_INET6; > +#endif > + } > + > + if (addr.family) { > temp->resolver = r; > temp->state = NGX_OK; > temp->naddrs = 1; > @@ -417,7 +431,7 @@ static ngx_int_t > ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) > { > uint32_t hash; > - in_addr_t addr, *addrs; > + ngx_ipaddr_t addr, *addrs; > ngx_int_t rc; > ngx_uint_t naddrs; > ngx_resolver_ctx_t *next; > @@ -429,7 +443,11 @@ ngx_resolve_name_locked(ngx_resolver_t *r, > ngx_resolver_ctx_t *ctx) > > if (rn) { > > - if (rn->valid >= ngx_time()) { > + if (rn->valid >= ngx_time() > +#if (NGX_HAVE_INET6) > + && rn->qtype != NGX_RESOLVE_RETRY > +#endif > + ) { > > ngx_log_debug0(NGX_LOG_DEBUG_CORE, r->log, 0, "resolve > cached"); > > @@ -446,7 +464,6 @@ ngx_resolve_name_locked(ngx_resolver_t *r, > ngx_resolver_ctx_t *ctx) > /* NGX_RESOLVE_A answer */ > > if (naddrs != 1) { > - addr = 0; > addrs = ngx_resolver_rotate(r, rn->u.addrs, naddrs); > if (addrs == NULL) { > return NGX_ERROR; > @@ -506,6 +523,8 @@ ngx_resolve_name_locked(ngx_resolver_t *r, > ngx_resolver_ctx_t *ctx) > } while (ctx); > > return NGX_OK; > + } else { > + rn->qtype = ctx->type; > } > > if (rn->waiting) { > @@ -552,6 
+571,7 @@ ngx_resolve_name_locked(ngx_resolver_t *r, > ngx_resolver_ctx_t *ctx) > rn->node.key = hash; > rn->nlen = (u_short) ctx->name.len; > rn->query = NULL; > + rn->qtype = ctx->type; > > ngx_rbtree_insert(&r->name_rbtree, &rn->node); > } > @@ -1130,6 +1150,9 @@ found: > switch (qtype) { > > case NGX_RESOLVE_A: > +#if (NGX_HAVE_INET6) > + case NGX_RESOLVE_AAAA: > +#endif > > ngx_resolver_process_a(r, buf, n, ident, code, nan, > i + sizeof(ngx_resolver_qs_t)); > @@ -1178,7 +1201,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char > *buf, size_t last, > size_t len; > int32_t ttl; > uint32_t hash; > - in_addr_t addr, *addrs; > + ngx_ipaddr_t addr, *addrs; > ngx_str_t name; > ngx_uint_t qtype, qident, naddrs, a, i, n, start; > ngx_resolver_an_t *an; > @@ -1212,12 +1235,57 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char > *buf, size_t last, > goto failed; > } > > - ngx_resolver_free(r, name.data); > - > if (code == 0 && nan == 0) { > + > +#if (NGX_HAVE_INET6) > + /* > + * If it was required dual type v4|v6 resolv create one more request > + */ > + if (rn->qtype == NGX_RESOLVE_A_AAAA > + || rn->qtype == NGX_RESOLVE_AAAA_A) { > + > + ngx_queue_remove(&rn->queue); > + > + rn->valid = ngx_time() + (r->valid ? 
r->valid : ttl); > + rn->expire = ngx_time() + r->expire; > + > + ngx_queue_insert_head(&r->name_expire_queue, &rn->queue); > + > + ctx = rn->waiting; > + rn->waiting = NULL; > + > + if (ctx) { > + ctx->name = name; > + > + switch (rn->qtype) { > + > + case NGX_RESOLVE_A_AAAA: > + ctx->type = NGX_RESOLVE_AAAA; > + break; > + > + case NGX_RESOLVE_AAAA_A: > + ctx->type = NGX_RESOLVE_A; > + break; > + } > + > + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, > + "restarting request for name %V, with type > %ud", > + &name, ctx->type); > + > + rn->qtype = NGX_RESOLVE_RETRY; > + > + (void) ngx_resolve_name_locked(r, ctx); > + } > + > + return; > + } > +#endif > + > code = 3; /* NXDOMAIN */ > } > > + ngx_resolver_free(r, name.data); > + > if (code) { > next = rn->waiting; > rn->waiting = NULL; > @@ -1243,7 +1311,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char > *buf, size_t last, > > i = ans; > naddrs = 0; > - addr = 0; > + addr.family = 0; > addrs = NULL; > cname = NULL; > qtype = 0; > @@ -1302,13 +1370,30 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char > *buf, size_t last, > goto short_response; > } > > - addr = htonl((buf[i] << 24) + (buf[i + 1] << 16) > + addr.family = AF_INET; > + addr.u.v4 = htonl((buf[i] << 24) + (buf[i + 1] << 16) > + (buf[i + 2] << 8) + (buf[i + 3])); > > naddrs++; > > i += len; > > +#if (NGX_HAVE_INET6) > + } else if (qtype == NGX_RESOLVE_AAAA) { > + > + i += sizeof(ngx_resolver_an_t); > + > + if (i + len > last) { > + goto short_response; > + } > + > + addr.family = AF_INET6; > + ngx_memcpy(&addr.u.v6.s6_addr, &buf[i], 16); > + > + naddrs++; > + > + i += len; > +#endif > } else if (qtype == NGX_RESOLVE_CNAME) { > cname = &buf[i] + sizeof(ngx_resolver_an_t); > i += sizeof(ngx_resolver_an_t) + len; > @@ -1333,7 +1418,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char > *buf, size_t last, > > } else { > > - addrs = ngx_resolver_alloc(r, naddrs * sizeof(in_addr_t)); > + addrs = ngx_resolver_alloc(r, naddrs * sizeof(ngx_ipaddr_t)); > if 
(addrs == NULL) { > return; > } > @@ -1369,12 +1454,23 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char > *buf, size_t last, > > if (qtype == NGX_RESOLVE_A) { > > - addrs[n++] = htonl((buf[i] << 24) + (buf[i + 1] << 16) > + addrs[n].family = AF_INET; > + addrs[n++].u.v4 = htonl((buf[i] << 24) + (buf[i + 1] > << 16) > + (buf[i + 2] << 8) + (buf[i + > 3])); > > if (n == naddrs) { > break; > } > +#if (NGX_HAVE_INET6) > + } else if (qtype == NGX_RESOLVE_AAAA) { > + > + addrs[n].family = AF_INET6; > + ngx_memcpy(&addrs[n++].u.v6.s6_addr, &buf[i], 16); > + > + if (n == naddrs) { > + break; > + } > +#endif > } > > i += len; > @@ -1383,7 +1479,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char > *buf, size_t last, > rn->u.addrs = addrs; > > addrs = ngx_resolver_dup(r, rn->u.addrs, > - naddrs * sizeof(in_addr_t)); > + naddrs * sizeof(ngx_ipaddr_t)); > if (addrs == NULL) { > return; > } > @@ -1838,7 +1934,20 @@ ngx_resolver_create_name_query(ngx_resolver_node_t > *rn, ngx_resolver_ctx_t *ctx) > qs = (ngx_resolver_qs_t *) p; > > /* query type */ > - qs->type_hi = 0; qs->type_lo = (u_char) ctx->type; > + qs->type_hi = 0; qs->type_lo = (u_char) rn->qtype; > + > +#if (NGX_HAVE_INET6) > + switch (rn->qtype) { > + > + case NGX_RESOLVE_A_AAAA: > + qs->type_lo = NGX_RESOLVE_A; > + break; > + > + case NGX_RESOLVE_AAAA_A: > + qs->type_lo = NGX_RESOLVE_AAAA; > + break; > + } > +#endif > > /* IP query class */ > qs->class_hi = 0; qs->class_lo = 1; > @@ -2136,13 +2245,13 @@ ngx_resolver_dup(ngx_resolver_t *r, void *src, > size_t size) > } > > > -static in_addr_t * > -ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, ngx_uint_t n) > +static ngx_ipaddr_t * > +ngx_resolver_rotate(ngx_resolver_t *r, ngx_ipaddr_t *src, ngx_uint_t n) > { > void *dst, *p; > ngx_uint_t j; > > - dst = ngx_resolver_alloc(r, n * sizeof(in_addr_t)); > + dst = ngx_resolver_alloc(r, n * sizeof(ngx_ipaddr_t)); > > if (dst == NULL) { > return dst; > @@ -2151,12 +2260,12 @@ ngx_resolver_rotate(ngx_resolver_t 
*r, in_addr_t > *src, ngx_uint_t n) > j = ngx_random() % n; > > if (j == 0) { > - ngx_memcpy(dst, src, n * sizeof(in_addr_t)); > + ngx_memcpy(dst, src, n * sizeof(ngx_ipaddr_t)); > return dst; > } > > - p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(in_addr_t)); > - ngx_memcpy(p, src, j * sizeof(in_addr_t)); > + p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(ngx_ipaddr_t)); > + ngx_memcpy(p, src, j * sizeof(ngx_ipaddr_t)); > > return dst; > } > diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h > index 6fd81fe..d2a4606 100644 > --- a/src/core/ngx_resolver.h > +++ b/src/core/ngx_resolver.h > @@ -67,10 +67,11 @@ typedef struct { > u_short qlen; > > u_char *query; > + ngx_int_t qtype; > > union { > - in_addr_t addr; > - in_addr_t *addrs; > + ngx_ipaddr_t addr; > + ngx_ipaddr_t *addrs; > u_char *cname; > } u; > > @@ -130,8 +131,8 @@ struct ngx_resolver_ctx_s { > ngx_str_t name; > > ngx_uint_t naddrs; > - in_addr_t *addrs; > - in_addr_t addr; > + ngx_ipaddr_t *addrs; > + ngx_ipaddr_t addr; > > ngx_resolver_handler_pt handler; > void *data; > > > > On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: > >> commit 482bd2a0b6240a2b26409b9c7924ad01c814f293 >> Author: Anton Kortunov >> Date: Wed Jul 10 13:21:27 2013 +0400 >> >> Added NGX_RESOLVE_* constants >> >> Module developers can decide how to resolve hosts relating to IPv6: >> >> NGX_RESOLVE_AAAA - try to resolve only to IPv6 address >> NGX_RESOLVE_AAAA_A - IPv6 is preferred (recommended by standards) >> NGX_RESOLVE_A_AAAA - IPv4 is preferred (better strategy nowadays) >> >> diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h >> index ae34ca5..6fd81fe 100644 >> --- a/src/core/ngx_resolver.h >> +++ b/src/core/ngx_resolver.h >> @@ -20,6 +20,15 @@ >> #define NGX_RESOLVE_TXT 16 >> #define NGX_RESOLVE_DNAME 39 >> >> +#if (NGX_HAVE_INET6) >> + >> +#define NGX_RESOLVE_AAAA 28 >> +#define NGX_RESOLVE_A_AAAA 1000 >> +#define NGX_RESOLVE_AAAA_A 1001 >> +#define NGX_RESOLVE_RETRY 1002 >> + >> +#endif >> + >> #define 
NGX_RESOLVE_FORMERR 1 >> #define NGX_RESOLVE_SERVFAIL 2 >> #define NGX_RESOLVE_NXDOMAIN 3 >> >> >> >> On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: >> >>> Hello, >>> >>> I've split this big patch into several small patches, taking into >>> account your comments. I'll send each part in separate email. Here is the >>> first one. >>> >>> commit 597d09e7ae9247c5466b18aa2ef3f5892e61b708 >>> Author: Anton Kortunov >>> Date: Wed Jul 10 13:14:52 2013 +0400 >>> >>> Added new structure ngx_ipaddr_t >>> >>> This structure contains family field >>> and the union of ipv4/ipv6 structures in_addr_t and in6_addr. >>> >>> diff --git a/src/core/ngx_inet.h b/src/core/ngx_inet.h >>> index 6a5a368..077ed34 100644 >>> --- a/src/core/ngx_inet.h >>> +++ b/src/core/ngx_inet.h >>> @@ -68,6 +68,16 @@ typedef struct { >>> >>> >>> typedef struct { >>> + ngx_uint_t family; >>> + union { >>> + in_addr_t v4; >>> +#if (NGX_HAVE_INET6) >>> + struct in6_addr v6; >>> +#endif >>> + } u; >>> +} ngx_ipaddr_t; >>> + >>> +typedef struct { >>> struct sockaddr *sockaddr; >>> socklen_t socklen; >>> ngx_str_t name; >>> >>> >>> >>> On Mon, Jun 17, 2013 at 7:30 PM, Maxim Dounin wrote: >>> >>>> Hello! >>>> >>>> On Fri, Jun 14, 2013 at 09:44:46PM +0400, ToSHiC wrote: >>>> >>>> > Hello, >>>> > >>>> > We needed this feature in our company, I found that it is in >>>> milestones of >>>> > version 1.5 but doesn't exist yet. So I've implemented it based in >>>> 1.3 code >>>> > and merged in current 1.5 code. When I wrote this code I mostly cared >>>> about >>>> > minimum intrusion into other parts of nginx. >>>> > >>>> > IPv6 fallback logic is not a straightforward implementation of >>>> suggested by >>>> > RFC. RFC states that IPv6 resolving have priority over IPv4, and it's >>>> not >>>> > very good for Internet we have currently. With this patch you can >>>> specify >>>> > priority, and in upstream and mail modules I've set IPv4 as preferred >>>> > address family. 
>>>> > >>>> > Patch is pretty big and I hope it'll not break mailing list or mail >>>> clients. >>>> >>>> You may want to try to split the patch into smaller patches to >>>> simplify review. See also some hints here: >>>> >>>> http://nginx.org/en/docs/contributing_changes.html >>>> >>>> Some quick comments below. >>>> >>>> [...] >>>> >>>> > - addr = ntohl(ctx->addr); >>>> > +failed: >>>> > + >>>> > + //addr = ntohl(ctx->addr); >>>> > + inet_ntop(ctx->addr.family, &ctx->addr.u, text, >>>> > NGX_SOCKADDR_STRLEN); >>>> > >>>> > ngx_log_error(NGX_LOG_ALERT, r->log, 0, >>>> > - "could not cancel %ud.%ud.%ud.%ud resolving", >>>> > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, >>>> > - (addr >> 8) & 0xff, addr & 0xff); >>>> > + "could not cancel %s resolving", text); >>>> >>>> 1. Don't use inet_ntop(), there is ngx_sock_ntop() instead. >>>> >>>> 2. Don't use C++ style ("//") comments. >>>> >>>> 3. If some data is only needed for debug logging, keep relevant >>>> calculations under #if (NGX_DEBUG). >>>> >>>> [...] >>>> >>>> > @@ -334,6 +362,7 @@ >>>> > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, >>>> > peers->peer[i].current_weight = 0; >>>> > peers->peer[i].max_fails = 1; >>>> > peers->peer[i].fail_timeout = 10; >>>> > + >>>> > } >>>> > } >>>> > >>>> >>>> Please avoid unrelated changes. >>>> >>>> [...] >>>> >>>> -- >>>> Maxim Dounin >>>> http://nginx.org/en/donation.html >>>> >>>> _______________________________________________ >>>> nginx-devel mailing list >>>> nginx-devel at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From toshic.toshic at gmail.com Wed Jul 10 17:30:58 2013 From: toshic.toshic at gmail.com (ToSHiC) Date: Wed, 10 Jul 2013 21:30:58 +0400 Subject: IPv6 support in resolver In-Reply-To: References: <20130617153021.GH72282@mdounin.ru> Message-ID: commit 2bf37859004e3ff2b5dd9a11e1725153ca43ff32 Author: Anton Kortunov Date: Wed Jul 10 20:49:28 2013 +0400 IPv6 support in http server upstreams Try to resolve upstream server name to IPv4 address first, then to IPv6. diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c index 16e6602..df522f7 100644 --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -638,7 +638,11 @@ ngx_http_upstream_init_request(ngx_http_request_t *r) } ctx->name = *host; +#if (NGX_HAVE_INET6) + ctx->type = NGX_RESOLVE_A_AAAA; +#else ctx->type = NGX_RESOLVE_A; +#endif ctx->handler = ngx_http_upstream_resolve_handler; ctx->data = r; ctx->timeout = clcf->resolver_timeout; @@ -912,16 +916,14 @@ ngx_http_upstream_resolve_handler(ngx_resolver_ctx_t *ctx) #if (NGX_DEBUG) { - in_addr_t addr; + u_char text[NGX_SOCKADDR_STRLEN]; ngx_uint_t i; - for (i = 0; i < ctx->naddrs; i++) { - addr = ntohl(ur->addrs[i]); + for (i = 0; i < ur->naddrs; i++) { + ngx_inet_ntop(ur->addrs[i].family, &ur->addrs[i].u, text, NGX_SOCKADDR_STRLEN); - ngx_log_debug4(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "name was resolved to %ud.%ud.%ud.%ud", - (addr >> 24) & 0xff, (addr >> 16) & 0xff, - (addr >> 8) & 0xff, addr & 0xff); + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "name was resolved to %s", text); } } #endif diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h index fd4e36b..9e88a9a 100644 --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -254,7 +254,7 @@ typedef struct { ngx_uint_t no_port; /* unsigned no_port:1 */ ngx_uint_t naddrs; - in_addr_t *addrs; + ngx_ipaddr_t *addrs; struct sockaddr *sockaddr; socklen_t socklen; diff --git a/src/http/ngx_http_upstream_round_robin.c 
b/src/http/ngx_http_upstream_round_robin.c index e0c6c58..cf9d6a0 100644 --- a/src/http/ngx_http_upstream_round_robin.c +++ b/src/http/ngx_http_upstream_round_robin.c @@ -268,6 +268,9 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, size_t len; ngx_uint_t i, n; struct sockaddr_in *sin; +#if (NGX_HAVE_INET6) + struct sockaddr_in6 *sin6; +#endif ngx_http_upstream_rr_peers_t *peers; ngx_http_upstream_rr_peer_data_t *rrp; @@ -306,27 +309,52 @@ ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, for (i = 0; i < ur->naddrs; i++) { - len = NGX_INET_ADDRSTRLEN + sizeof(":65536") - 1; + len = NGX_SOCKADDR_STRLEN; p = ngx_pnalloc(r->pool, len); if (p == NULL) { return NGX_ERROR; } - len = ngx_inet_ntop(AF_INET, &ur->addrs[i], p, NGX_INET_ADDRSTRLEN); + len = ngx_inet_ntop(ur->addrs[i].family, &ur->addrs[i].u, p, NGX_SOCKADDR_STRLEN - sizeof(":65535") + 1); len = ngx_sprintf(&p[len], ":%d", ur->port) - p; - sin = ngx_pcalloc(r->pool, sizeof(struct sockaddr_in)); - if (sin == NULL) { + switch (ur->addrs[i].family) { + + case AF_INET: + sin = ngx_pcalloc(r->pool, sizeof(struct sockaddr_in)); + if (sin == NULL) { + return NGX_ERROR; + } + + sin->sin_family = AF_INET; + sin->sin_port = htons(ur->port); + sin->sin_addr.s_addr = ur->addrs[i].u.v4; + + peers->peer[i].sockaddr = (struct sockaddr *) sin; + peers->peer[i].socklen = sizeof(struct sockaddr_in); + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + sin6 = ngx_pcalloc(r->pool, sizeof(struct sockaddr_in6)); + if (sin6 == NULL) { + return NGX_ERROR; + } + + sin6->sin6_family = AF_INET6; + sin6->sin6_port = htons(ur->port); + sin6->sin6_addr = ur->addrs[i].u.v6; + + peers->peer[i].sockaddr = (struct sockaddr *) sin6; + peers->peer[i].socklen = sizeof(struct sockaddr_in6); + break; +#endif + + default: return NGX_ERROR; } - sin->sin_family = AF_INET; - sin->sin_port = htons(ur->port); - sin->sin_addr.s_addr = ur->addrs[i]; - - peers->peer[i].sockaddr = (struct sockaddr *) sin; - 
peers->peer[i].socklen = sizeof(struct sockaddr_in); peers->peer[i].name.len = len; peers->peer[i].name.data = p; peers->peer[i].weight = 1; On Wed, Jul 10, 2013 at 9:29 PM, ToSHiC wrote: > commit 524dd02549575cb9ad5e95444093f6b494dc59bc > Author: Anton Kortunov > Date: Wed Jul 10 20:43:59 2013 +0400 > > IPv6 reverse resolve support > > diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c > index 567368b..06d46c1 100644 > --- a/src/core/ngx_resolver.c > +++ b/src/core/ngx_resolver.c > @@ -71,7 +71,7 @@ static void ngx_resolver_process_response(ngx_resolver_t > *r, u_char *buf, > size_t n); > static void ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t > n, > ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan, ngx_uint_t ans); > -static void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, > size_t n, > +void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, > ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan); > static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, > ngx_str_t *name, uint32_t hash); > @@ -126,7 +126,7 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_str_t *names, > ngx_uint_t n) > ngx_resolver_rbtree_insert_value); > > ngx_rbtree_init(&r->addr_rbtree, &r->addr_sentinel, > - ngx_rbtree_insert_value); > + ngx_resolver_rbtree_insert_value); > > ngx_queue_init(&r->name_resend_queue); > ngx_queue_init(&r->addr_resend_queue); > @@ -649,17 +649,40 @@ failed: > ngx_int_t > ngx_resolve_addr(ngx_resolver_ctx_t *ctx) > { > + uint32_t hash; > u_char *name; > ngx_resolver_t *r; > ngx_resolver_node_t *rn; > > r = ctx->resolver; > + rn = NULL; > + > + hash = ctx->addr.family; > + > + switch(ctx->addr.family) { > + > + case AF_INET: > + ctx->addr.u.v4 = ntohl(ctx->addr.u.v4); > + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v4, > sizeof(in_addr_t)); > +ngx_log_debug3(NGX_LOG_DEBUG_CORE, r->log, 0, > + "resolve addr hash: %xd, addr:%xd, family: %d", hash, > ctx->addr.u.v4, ctx->addr.family); > + break; > 
+ > +#if (NGX_HAVE_INET6) > + case AF_INET6: > + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v6, sizeof(struct > in6_addr)); > + break; > +#endif > > - ctx->addr = ntohl(ctx->addr); > + default: > + goto failed; > + } > > /* lock addr mutex */ > > - rn = ngx_resolver_lookup_addr(r, ctx->addr); > + rn = ngx_resolver_lookup_addr(r, ctx->addr, hash); > + ngx_log_error(r->log_level, r->log, 0, > + "resolve: in resolve_addr searching, hash = %xd, rn = > %p", hash, rn); > > if (rn) { > > @@ -714,8 +737,10 @@ ngx_resolve_addr(ngx_resolver_ctx_t *ctx) > goto failed; > } > > - rn->node.key = ctx->addr; > + rn->node.key = hash; > rn->query = NULL; > + rn->qtype = ctx->type; > + rn->u.addr = ctx->addr; > > ngx_rbtree_insert(&r->addr_rbtree, &rn->node); > } > @@ -788,10 +813,11 @@ failed: > void > ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) > { > - in_addr_t addr; > + uint32_t hash; > ngx_resolver_t *r; > ngx_resolver_ctx_t *w, **p; > ngx_resolver_node_t *rn; > + u_char text[NGX_SOCKADDR_STRLEN]; > > r = ctx->resolver; > > @@ -806,7 +832,25 @@ ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) > > if (ctx->state == NGX_AGAIN || ctx->state == NGX_RESOLVE_TIMEDOUT) { > > - rn = ngx_resolver_lookup_addr(r, ctx->addr); > + hash = ctx->addr.family; > + > + switch(ctx->addr.family) { > + > + case AF_INET: > + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v4, > sizeof(in_addr_t)); > + break; > + > +#if (NGX_HAVE_INET6) > + case AF_INET6: > + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v6, > sizeof(struct in6_addr)); > + break; > +#endif > + > + default: > + goto failed; > + } > + > + rn = ngx_resolver_lookup_addr(r, ctx->addr, hash); > > if (rn) { > p = &rn->waiting; > @@ -824,12 +868,12 @@ ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) > } > } > > - addr = ntohl(ctx->addr); > +failed: > + > + ngx_inet_ntop(ctx->addr.family, &ctx->addr.u, text, > NGX_SOCKADDR_STRLEN); > > ngx_log_error(NGX_LOG_ALERT, r->log, 0, > - "could not cancel %ud.%ud.%ud.%ud resolving", > - (addr >> 24) 
& 0xff, (addr >> 16) & 0xff, > - (addr >> 8) & 0xff, addr & 0xff); > + "could not cancel %s resolving", text); > } > > done: > @@ -1582,13 +1626,14 @@ failed: > } > > > -static void > +void > ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, > ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan) > { > - char *err; > + char *err = NULL; > + uint32_t hash = 0; > size_t len; > - in_addr_t addr; > + ngx_ipaddr_t addr; > int32_t ttl; > ngx_int_t digit; > ngx_str_t name; > @@ -1596,12 +1641,16 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char > *buf, size_t n, > ngx_resolver_an_t *an; > ngx_resolver_ctx_t *ctx, *next; > ngx_resolver_node_t *rn; > + u_char text[NGX_SOCKADDR_STRLEN]; > > if (ngx_resolver_copy(r, NULL, buf, &buf[12], &buf[n]) != NGX_OK) { > goto invalid_in_addr_arpa; > } > > - addr = 0; > + ngx_memzero(&addr, sizeof(ngx_ipaddr_t)); > + > + /* Try to parse request as in-addr.arpa */ > + addr.family = AF_INET; > i = 12; > > for (mask = 0; mask < 32; mask += 8) { > @@ -1612,7 +1661,7 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char > *buf, size_t n, > goto invalid_in_addr_arpa; > } > > - addr += digit << mask; > + addr.u.v4 += digit << mask; > i += len; > } > > @@ -1620,15 +1669,79 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char > *buf, size_t n, > goto invalid_in_addr_arpa; > } > > + i += sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t); > + > + goto found; > + > +invalid_in_addr_arpa: > + > +#if (NGX_HAVE_INET6) > + /* Try to parse request as ip6.arpa */ > + addr.family = AF_INET6; > + i = 12; > + > + for (len = 15; len < 16; len--) { > + if (buf[i++] != 1) > + goto invalid_arpa; > + > + digit = ngx_hextoi(&buf[i++], 1); > + if (digit == NGX_ERROR || digit > 16) { > + goto invalid_arpa; > + } > + > + addr.u.v6.s6_addr[len] = digit; > + > + if (buf[i++] != 1) > + goto invalid_arpa; > + > + > + digit = ngx_hextoi(&buf[i++], 1); > + if (digit == NGX_ERROR || digit > 16) { > + goto invalid_arpa; > + } > + > + 
addr.u.v6.s6_addr[len] += digit << 4; > + } > + > + if (ngx_strcmp(&buf[i], "\3ip6\4arpa") != 0) { > + goto invalid_arpa; > + } > + > + i += sizeof("\3ip6\4arpa") + sizeof(ngx_resolver_qs_t); > + > +#else /* NGX_HAVE_INET6 */ > + goto invalid_arpa; > +#endif > + > +found: > + > /* lock addr mutex */ > > - rn = ngx_resolver_lookup_addr(r, addr); > + hash = addr.family; > + > + switch(addr.family) { > + > + case AF_INET: > + ngx_crc32_update(&hash, (u_char *)&addr.u.v4, sizeof(in_addr_t)); > + break; > + > +#if (NGX_HAVE_INET6) > + case AF_INET6: > + ngx_crc32_update(&hash, (u_char *)&addr.u.v6, sizeof(struct > in6_addr)); > + break; > +#endif > + > + default: > + goto invalid; > + } > + > + rn = ngx_resolver_lookup_addr(r, addr, hash); > + > + ngx_inet_ntop(addr.family, &addr.u, text, NGX_SOCKADDR_STRLEN); > > if (rn == NULL || rn->query == NULL) { > ngx_log_error(r->log_level, r->log, 0, > - "unexpected response for %ud.%ud.%ud.%ud", > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, > - (addr >> 8) & 0xff, addr & 0xff); > + "unexpected response for %s", text); > goto failed; > } > > @@ -1636,12 +1749,15 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char > *buf, size_t n, > > if (ident != qident) { > ngx_log_error(r->log_level, r->log, 0, > - "wrong ident %ui response for %ud.%ud.%ud.%ud, expect > %ui", > - ident, (addr >> 24) & 0xff, (addr >> 16) & 0xff, > - (addr >> 8) & 0xff, addr & 0xff, qident); > + "wrong ident %ui response for %s, expect %ui", > + ident, text, qident); > goto failed; > } > > + ngx_log_error(r->log_level, r->log, 0, > + "code: %d, nan: %d", > + code, nan); > + > if (code == 0 && nan == 0) { > code = 3; /* NXDOMAIN */ > } > @@ -1669,8 +1785,6 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char > *buf, size_t n, > return; > } > > - i += sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t); > - > if (i + 2 + sizeof(ngx_resolver_an_t) > (ngx_uint_t) n) { > goto short_response; > } > @@ -1750,10 +1864,10 @@ ngx_resolver_process_ptr(ngx_resolver_t 
*r, u_char > *buf, size_t n, > > return; > -invalid_in_addr_arpa: > +invalid_arpa: > > ngx_log_error(r->log_level, r->log, 0, > - "invalid in-addr.arpa name in DNS response"); > + "invalid in-addr.arpa or ip6.arpa name in DNS > response"); > return; > > short_response: > @@ -1818,28 +1932,54 @@ ngx_resolver_lookup_name(ngx_resolver_t *r, > ngx_str_t *name, uint32_t hash) > > > static ngx_resolver_node_t * > -ngx_resolver_lookup_addr(ngx_resolver_t *r, in_addr_t addr) > +ngx_resolver_lookup_addr(ngx_resolver_t *r, ngx_ipaddr_t addr, uint32_t > hash) > { > + ngx_int_t rc; > ngx_rbtree_node_t *node, *sentinel; > + ngx_resolver_node_t *rn; > > node = r->addr_rbtree.root; > sentinel = r->addr_rbtree.sentinel; > > while (node != sentinel) { > > - if (addr < node->key) { > + if (hash < node->key) { > node = node->left; > continue; > } > > - if (addr > node->key) { > + if (hash > node->key) { > node = node->right; > continue; > } > > - /* addr == node->key */ > + /* hash == node->key */ > + > + rn = (ngx_resolver_node_t *) node; > + > + rc = addr.family - rn->u.addr.family; > + > + if (rc == 0) { > + > + switch (addr.family) { > + case AF_INET: > + rc = ngx_memn2cmp((u_char *)&addr.u.v4, (u_char > *)&rn->u.addr.u.v4, sizeof(in_addr_t), sizeof(in_addr_t)); > + break; > + > +#if (NGX_HAVE_INET6) > + case AF_INET6: > + rc = ngx_memn2cmp((u_char *)&addr.u.v6, (u_char > *)&rn->u.addr.u.v6, sizeof(struct in6_addr), sizeof(struct in6_addr)); > + break; > +#endif > + } > + > + if (rc == 0) { > + return rn; > + } > > - return (ngx_resolver_node_t *) node; > + } > + > + node = (rc < 0) ? 
node->left : node->right; > } > > /* not found */ > @@ -1854,6 +1994,7 @@ ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t > *temp, > { > ngx_rbtree_node_t **p; > ngx_resolver_node_t *rn, *rn_temp; > + ngx_int_t rc; > > for ( ;; ) { > > @@ -1870,8 +2011,29 @@ ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t > *temp, > rn = (ngx_resolver_node_t *) node; > rn_temp = (ngx_resolver_node_t *) temp; > > - p = (ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, > rn_temp->nlen) > - < 0) ? &temp->left : &temp->right; > + if (rn->qtype == NGX_RESOLVE_PTR) { > + rc = rn->u.addr.family - rn_temp->u.addr.family; > + > + if (rc == 0) { > + > + switch (rn->u.addr.family) { > + case AF_INET: > + rc = ngx_memn2cmp((u_char *)&rn->u.addr.u.v4, > (u_char *)&rn_temp->u.addr.u.v4, sizeof(in_addr_t), sizeof(in_addr_t)); > + break; > + > + #if (NGX_HAVE_INET6) > + case AF_INET6: > + rc = ngx_memn2cmp((u_char *)&rn->u.addr.u.v6, > (u_char *)&rn_temp->u.addr.u.v6, sizeof(struct in6_addr), sizeof(struct > in6_addr)); > + break; > + #endif > + } > + } > + > + } else { > + rc = ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, > rn_temp->nlen); > + } > + > + p = (rc < 0) ? 
&temp->left : &temp->right; > } > > if (*p == sentinel) { > @@ -1989,8 +2151,6 @@ ngx_resolver_create_name_query(ngx_resolver_node_t > *rn, ngx_resolver_ctx_t *ctx) > } > > > -/* AF_INET only */ > - > static ngx_int_t > ngx_resolver_create_addr_query(ngx_resolver_node_t *rn, > ngx_resolver_ctx_t *ctx) > { > @@ -2001,7 +2161,7 @@ ngx_resolver_create_addr_query(ngx_resolver_node_t > *rn, ngx_resolver_ctx_t *ctx) > ngx_resolver_query_t *query; > > len = sizeof(ngx_resolver_query_t) > - + sizeof(".255.255.255.255.in-addr.arpa.") - 1 > + + NGX_PTR_QUERY_LEN > + sizeof(ngx_resolver_qs_t); > > p = ngx_resolver_alloc(ctx->resolver, len); > @@ -2028,18 +2188,50 @@ ngx_resolver_create_addr_query(ngx_resolver_node_t > *rn, ngx_resolver_ctx_t *ctx) > p += sizeof(ngx_resolver_query_t); > > - for (n = 0; n < 32; n += 8) { > - d = ngx_sprintf(&p[1], "%ud", (ctx->addr >> n) & 0xff); > - *p = (u_char) (d - &p[1]); > - p = d; > + switch (ctx->addr.family) { > + > + case AF_INET: > + for (n = 0; n < 32; n += 8) { > + d = ngx_sprintf(&p[1], "%ud", (ctx->addr.u.v4 >> n) & 0xff); > + *p = (u_char) (d - &p[1]); > + p = d; > + } > + > + /* query type "PTR", IP query class */ > + ngx_memcpy(p, "\7in-addr\4arpa\0\0\14\0\1", 18); > + > + rn->qlen = (u_short) > + (p + sizeof("\7in-addr\4arpa") + > sizeof(ngx_resolver_qs_t) > + - rn->query); > + > + break; > + > +#if (NGX_HAVE_INET6) > + case AF_INET6: > + for (n = 15; n >= 0; n--) { > + p = ngx_sprintf(p, "\1%xd\1%xd", > + (ctx->addr.u.v6.s6_addr[n]) & 0xf, > + (ctx->addr.u.v6.s6_addr[n] >> 4) & 0xf); > + > + } > + > + /* query type "PTR", IP query class */ > + ngx_memcpy(p, "\3ip6\4arpa\0\0\14\0\1", 18); > + > + rn->qlen = (u_short) > + (p + sizeof("\3ip6\4arpa") + > sizeof(ngx_resolver_qs_t) > + - rn->query); > + > + break; > +#endif > + > + default: > + return NGX_ERROR; > } > > - /* query type "PTR", IP query class */ > - ngx_memcpy(p, "\7in-addr\4arpa\0\0\14\0\1", 18); > +ngx_log_debug2(NGX_LOG_DEBUG_CORE, ctx->resolver->log, 0, > + 
"resolve: query %s, ident %i", (rn->query+12), ident & > 0xffff); > > - rn->qlen = (u_short) > - (p + sizeof("\7in-addr\4arpa") + > sizeof(ngx_resolver_qs_t) > - - rn->query); > > return NGX_OK; > } > diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h > index d2a4606..a45b244 100644 > --- a/src/core/ngx_resolver.h > +++ b/src/core/ngx_resolver.h > @@ -41,6 +41,11 @@ > > #define NGX_RESOLVER_MAX_RECURSION 50 > > +#if (NGX_HAVE_INET6) > +#define NGX_PTR_QUERY_LEN > (sizeof(".f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.ip6.arpa.") > - 1) > +#else > +#define NGX_PTR_QUERY_LEN (sizeof(".255.255.255.255.in-addr.arpa.") - 1) > +#endif > > typedef struct { > ngx_connection_t *connection; > > > > On Wed, Jul 10, 2013 at 9:24 PM, ToSHiC wrote: > >> commit 8670b164784032b2911b3c34ac31ef52ddba5b60 >> Author: Anton Kortunov >> Date: Wed Jul 10 19:53:06 2013 +0400 >> >> IPv6 support in resolver for forward requests >> >> To resolve name into IPv6 address use NGX_RESOLVE_AAAA, >> NGX_RESOLVE_A_AAAA or NGX_RESOLVE_AAAA_A record type instead of >> NGX_RESOLVE_A >> >> diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c >> index d59d0c4..567368b 100644 >> --- a/src/core/ngx_resolver.c >> +++ b/src/core/ngx_resolver.c >> @@ -76,7 +76,7 @@ static void ngx_resolver_process_ptr(ngx_resolver_t *r, >> u_char *buf, size_t n, >> static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, >> ngx_str_t *name, uint32_t hash); >> static ngx_resolver_node_t *ngx_resolver_lookup_addr(ngx_resolver_t *r, >> - in_addr_t addr); >> + ngx_ipaddr_t addr, uint32_t hash); >> static void ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t *temp, >> ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); >> static ngx_int_t ngx_resolver_copy(ngx_resolver_t *r, ngx_str_t *name, >> @@ -88,7 +88,7 @@ static void *ngx_resolver_calloc(ngx_resolver_t *r, >> size_t size); >> static void ngx_resolver_free(ngx_resolver_t *r, void *p); >> static void 
ngx_resolver_free_locked(ngx_resolver_t *r, void *p); >> static void *ngx_resolver_dup(ngx_resolver_t *r, void *src, size_t size); >> -static in_addr_t *ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, >> +static ngx_ipaddr_t *ngx_resolver_rotate(ngx_resolver_t *r, ngx_ipaddr_t >> *src, >> ngx_uint_t n); >> static u_char *ngx_resolver_log_error(ngx_log_t *log, u_char *buf, >> size_t len); >> >> @@ -270,13 +270,27 @@ ngx_resolver_cleanup_tree(ngx_resolver_t *r, >> ngx_rbtree_t *tree) >> ngx_resolver_ctx_t * >> ngx_resolve_start(ngx_resolver_t *r, ngx_resolver_ctx_t *temp) >> { >> - in_addr_t addr; >> + ngx_ipaddr_t addr; >> ngx_resolver_ctx_t *ctx; >> >> if (temp) { >> - addr = ngx_inet_addr(temp->name.data, temp->name.len); >> + addr.family = 0; >> >> - if (addr != INADDR_NONE) { >> + >> + addr.u.v4 = ngx_inet_addr(temp->name.data, temp->name.len); >> + >> + if (addr.u.v4 != INADDR_NONE) { >> + >> + addr.family = AF_INET; >> + >> +#if (NGX_HAVE_INET6) >> + } else if (ngx_inet6_addr(temp->name.data, temp->name.len, >> addr.u.v6.s6_addr) == NGX_OK) { >> + >> + addr.family = AF_INET6; >> +#endif >> + } >> + >> + if (addr.family) { >> temp->resolver = r; >> temp->state = NGX_OK; >> temp->naddrs = 1; >> @@ -417,7 +431,7 @@ static ngx_int_t >> ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) >> { >> uint32_t hash; >> - in_addr_t addr, *addrs; >> + ngx_ipaddr_t addr, *addrs; >> ngx_int_t rc; >> ngx_uint_t naddrs; >> ngx_resolver_ctx_t *next; >> @@ -429,7 +443,11 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >> ngx_resolver_ctx_t *ctx) >> >> if (rn) { >> >> - if (rn->valid >= ngx_time()) { >> + if (rn->valid >= ngx_time() >> +#if (NGX_HAVE_INET6) >> + && rn->qtype != NGX_RESOLVE_RETRY >> +#endif >> + ) { >> >> ngx_log_debug0(NGX_LOG_DEBUG_CORE, r->log, 0, "resolve >> cached"); >> >> @@ -446,7 +464,6 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >> ngx_resolver_ctx_t *ctx) >> /* NGX_RESOLVE_A answer */ >> >> if (naddrs != 1) { >> - addr = 0; >> 
addrs = ngx_resolver_rotate(r, rn->u.addrs, naddrs); >> if (addrs == NULL) { >> return NGX_ERROR; >> @@ -506,6 +523,8 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >> ngx_resolver_ctx_t *ctx) >> } while (ctx); >> >> return NGX_OK; >> + } else { >> + rn->qtype = ctx->type; >> } >> >> if (rn->waiting) { >> @@ -552,6 +571,7 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >> ngx_resolver_ctx_t *ctx) >> rn->node.key = hash; >> rn->nlen = (u_short) ctx->name.len; >> rn->query = NULL; >> + rn->qtype = ctx->type; >> >> ngx_rbtree_insert(&r->name_rbtree, &rn->node); >> } >> @@ -1130,6 +1150,9 @@ found: >> switch (qtype) { >> >> case NGX_RESOLVE_A: >> +#if (NGX_HAVE_INET6) >> + case NGX_RESOLVE_AAAA: >> +#endif >> >> ngx_resolver_process_a(r, buf, n, ident, code, nan, >> i + sizeof(ngx_resolver_qs_t)); >> @@ -1178,7 +1201,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >> *buf, size_t last, >> size_t len; >> int32_t ttl; >> uint32_t hash; >> - in_addr_t addr, *addrs; >> + ngx_ipaddr_t addr, *addrs; >> ngx_str_t name; >> ngx_uint_t qtype, qident, naddrs, a, i, n, start; >> ngx_resolver_an_t *an; >> @@ -1212,12 +1235,57 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >> *buf, size_t last, >> goto failed; >> } >> >> - ngx_resolver_free(r, name.data); >> - >> if (code == 0 && nan == 0) { >> + >> +#if (NGX_HAVE_INET6) >> + /* >> + * If it was required dual type v4|v6 resolv create one more request >> + */ >> + if (rn->qtype == NGX_RESOLVE_A_AAAA >> + || rn->qtype == NGX_RESOLVE_AAAA_A) { >> + >> + ngx_queue_remove(&rn->queue); >> + >> + rn->valid = ngx_time() + (r->valid ? 
r->valid : ttl); >> + rn->expire = ngx_time() + r->expire; >> + >> + ngx_queue_insert_head(&r->name_expire_queue, &rn->queue); >> + >> + ctx = rn->waiting; >> + rn->waiting = NULL; >> + >> + if (ctx) { >> + ctx->name = name; >> + >> + switch (rn->qtype) { >> + >> + case NGX_RESOLVE_A_AAAA: >> + ctx->type = NGX_RESOLVE_AAAA; >> + break; >> + >> + case NGX_RESOLVE_AAAA_A: >> + ctx->type = NGX_RESOLVE_A; >> + break; >> + } >> + >> + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, >> + "restarting request for name %V, with type >> %ud", >> + &name, ctx->type); >> + >> + rn->qtype = NGX_RESOLVE_RETRY; >> + >> + (void) ngx_resolve_name_locked(r, ctx); >> + } >> + >> + return; >> + } >> +#endif >> + >> code = 3; /* NXDOMAIN */ >> } >> >> + ngx_resolver_free(r, name.data); >> + >> if (code) { >> next = rn->waiting; >> rn->waiting = NULL; >> @@ -1243,7 +1311,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >> *buf, size_t last, >> >> i = ans; >> naddrs = 0; >> - addr = 0; >> + addr.family = 0; >> addrs = NULL; >> cname = NULL; >> qtype = 0; >> @@ -1302,13 +1370,30 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >> *buf, size_t last, >> goto short_response; >> } >> >> - addr = htonl((buf[i] << 24) + (buf[i + 1] << 16) >> + addr.family = AF_INET; >> + addr.u.v4 = htonl((buf[i] << 24) + (buf[i + 1] << 16) >> + (buf[i + 2] << 8) + (buf[i + 3])); >> >> naddrs++; >> >> i += len; >> >> +#if (NGX_HAVE_INET6) >> + } else if (qtype == NGX_RESOLVE_AAAA) { >> + >> + i += sizeof(ngx_resolver_an_t); >> + >> + if (i + len > last) { >> + goto short_response; >> + } >> + >> + addr.family = AF_INET6; >> + ngx_memcpy(&addr.u.v6.s6_addr, &buf[i], 16); >> + >> + naddrs++; >> + >> + i += len; >> +#endif >> } else if (qtype == NGX_RESOLVE_CNAME) { >> cname = &buf[i] + sizeof(ngx_resolver_an_t); >> i += sizeof(ngx_resolver_an_t) + len; >> @@ -1333,7 +1418,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >> *buf, size_t last, >> >> } else { >> >> - addrs = ngx_resolver_alloc(r, naddrs * 
sizeof(in_addr_t)); >> + addrs = ngx_resolver_alloc(r, naddrs * sizeof(ngx_ipaddr_t)); >> if (addrs == NULL) { >> return; >> } >> @@ -1369,12 +1454,23 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >> *buf, size_t last, >> >> if (qtype == NGX_RESOLVE_A) { >> >> - addrs[n++] = htonl((buf[i] << 24) + (buf[i + 1] << >> 16) >> + addrs[n].family = AF_INET; >> + addrs[n++].u.v4 = htonl((buf[i] << 24) + (buf[i + 1] >> << 16) >> + (buf[i + 2] << 8) + (buf[i + >> 3])); >> >> if (n == naddrs) { >> break; >> } >> +#if (NGX_HAVE_INET6) >> + } else if (qtype == NGX_RESOLVE_AAAA) { >> + >> + addrs[n].family = AF_INET6; >> + ngx_memcpy(&addrs[n++].u.v6.s6_addr, &buf[i], 16); >> + >> + if (n == naddrs) { >> + break; >> + } >> +#endif >> } >> >> i += len; >> @@ -1383,7 +1479,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >> *buf, size_t last, >> rn->u.addrs = addrs; >> >> addrs = ngx_resolver_dup(r, rn->u.addrs, >> - naddrs * sizeof(in_addr_t)); >> + naddrs * sizeof(ngx_ipaddr_t)); >> if (addrs == NULL) { >> return; >> } >> @@ -1838,7 +1934,20 @@ ngx_resolver_create_name_query(ngx_resolver_node_t >> *rn, ngx_resolver_ctx_t *ctx) >> qs = (ngx_resolver_qs_t *) p; >> >> /* query type */ >> - qs->type_hi = 0; qs->type_lo = (u_char) ctx->type; >> + qs->type_hi = 0; qs->type_lo = (u_char) rn->qtype; >> + >> +#if (NGX_HAVE_INET6) >> + switch (rn->qtype) { >> + >> + case NGX_RESOLVE_A_AAAA: >> + qs->type_lo = NGX_RESOLVE_A; >> + break; >> + >> + case NGX_RESOLVE_AAAA_A: >> + qs->type_lo = NGX_RESOLVE_AAAA; >> + break; >> + } >> +#endif >> >> /* IP query class */ >> qs->class_hi = 0; qs->class_lo = 1; >> @@ -2136,13 +2245,13 @@ ngx_resolver_dup(ngx_resolver_t *r, void *src, >> size_t size) >> } >> >> >> -static in_addr_t * >> -ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, ngx_uint_t n) >> +static ngx_ipaddr_t * >> +ngx_resolver_rotate(ngx_resolver_t *r, ngx_ipaddr_t *src, ngx_uint_t n) >> { >> void *dst, *p; >> ngx_uint_t j; >> >> - dst = ngx_resolver_alloc(r, n * 
sizeof(in_addr_t)); >> + dst = ngx_resolver_alloc(r, n * sizeof(ngx_ipaddr_t)); >> >> if (dst == NULL) { >> return dst; >> @@ -2151,12 +2260,12 @@ ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t >> *src, ngx_uint_t n) >> j = ngx_random() % n; >> >> if (j == 0) { >> - ngx_memcpy(dst, src, n * sizeof(in_addr_t)); >> + ngx_memcpy(dst, src, n * sizeof(ngx_ipaddr_t)); >> return dst; >> } >> >> - p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(in_addr_t)); >> - ngx_memcpy(p, src, j * sizeof(in_addr_t)); >> + p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(ngx_ipaddr_t)); >> + ngx_memcpy(p, src, j * sizeof(ngx_ipaddr_t)); >> >> return dst; >> } >> diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h >> index 6fd81fe..d2a4606 100644 >> --- a/src/core/ngx_resolver.h >> +++ b/src/core/ngx_resolver.h >> @@ -67,10 +67,11 @@ typedef struct { >> u_short qlen; >> >> u_char *query; >> + ngx_int_t qtype; >> >> union { >> - in_addr_t addr; >> - in_addr_t *addrs; >> + ngx_ipaddr_t addr; >> + ngx_ipaddr_t *addrs; >> u_char *cname; >> } u; >> >> @@ -130,8 +131,8 @@ struct ngx_resolver_ctx_s { >> ngx_str_t name; >> >> ngx_uint_t naddrs; >> - in_addr_t *addrs; >> - in_addr_t addr; >> + ngx_ipaddr_t *addrs; >> + ngx_ipaddr_t addr; >> >> ngx_resolver_handler_pt handler; >> void *data; >> >> >> >> On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: >> >>> commit 482bd2a0b6240a2b26409b9c7924ad01c814f293 >>> Author: Anton Kortunov >>> Date: Wed Jul 10 13:21:27 2013 +0400 >>> >>> Added NGX_RESOLVE_* constants >>> >>> Module developers can decide how to resolve hosts relating to IPv6: >>> >>> NGX_RESOLVE_AAAA - try to resolve only to IPv6 address >>> NGX_RESOLVE_AAAA_A - IPv6 is preferred (recommended by standards) >>> NGX_RESOLVE_A_AAAA - IPv4 is preferred (better strategy nowadays) >>> >>> diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h >>> index ae34ca5..6fd81fe 100644 >>> --- a/src/core/ngx_resolver.h >>> +++ b/src/core/ngx_resolver.h >>> @@ -20,6 +20,15 @@ >>> #define 
NGX_RESOLVE_TXT 16 >>> #define NGX_RESOLVE_DNAME 39 >>> >>> +#if (NGX_HAVE_INET6) >>> + >>> +#define NGX_RESOLVE_AAAA 28 >>> +#define NGX_RESOLVE_A_AAAA 1000 >>> +#define NGX_RESOLVE_AAAA_A 1001 >>> +#define NGX_RESOLVE_RETRY 1002 >>> + >>> +#endif >>> + >>> #define NGX_RESOLVE_FORMERR 1 >>> #define NGX_RESOLVE_SERVFAIL 2 >>> #define NGX_RESOLVE_NXDOMAIN 3 >>> >>> >>> >>> On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: >>> >>>> Hello, >>>> >>>> I've split this big patch into several small patches, taking into >>>> account your comments. I'll send each part in separate email. Here is the >>>> first one. >>>> >>>> commit 597d09e7ae9247c5466b18aa2ef3f5892e61b708 >>>> Author: Anton Kortunov >>>> Date: Wed Jul 10 13:14:52 2013 +0400 >>>> >>>> Added new structure ngx_ipaddr_t >>>> >>>> This structure contains family field >>>> and the union of ipv4/ipv6 structures in_addr_t and in6_addr. >>>> >>>> diff --git a/src/core/ngx_inet.h b/src/core/ngx_inet.h >>>> index 6a5a368..077ed34 100644 >>>> --- a/src/core/ngx_inet.h >>>> +++ b/src/core/ngx_inet.h >>>> @@ -68,6 +68,16 @@ typedef struct { >>>> >>>> >>>> typedef struct { >>>> + ngx_uint_t family; >>>> + union { >>>> + in_addr_t v4; >>>> +#if (NGX_HAVE_INET6) >>>> + struct in6_addr v6; >>>> +#endif >>>> + } u; >>>> +} ngx_ipaddr_t; >>>> + >>>> +typedef struct { >>>> struct sockaddr *sockaddr; >>>> socklen_t socklen; >>>> ngx_str_t name; >>>> >>>> >>>> >>>> On Mon, Jun 17, 2013 at 7:30 PM, Maxim Dounin wrote: >>>> >>>>> Hello! >>>>> >>>>> On Fri, Jun 14, 2013 at 09:44:46PM +0400, ToSHiC wrote: >>>>> >>>>> > Hello, >>>>> > >>>>> > We needed this feature in our company, I found that it is in >>>>> milestones of >>>>> > version 1.5 but doesn't exist yet. So I've implemented it based in >>>>> 1.3 code >>>>> > and merged in current 1.5 code. When I wrote this code I mostly >>>>> cared about >>>>> > minimum intrusion into other parts of nginx. 
>>>>> > >>>>> > IPv6 fallback logic is not a straightforward implementation of >>>>> suggested by >>>>> > RFC. RFC states that IPv6 resolving have priority over IPv4, and >>>>> it's not >>>>> > very good for Internet we have currently. With this patch you can >>>>> specify >>>>> > priority, and in upstream and mail modules I've set IPv4 as preferred >>>>> > address family. >>>>> > >>>>> > Patch is pretty big and I hope it'll not break mailing list or mail >>>>> clients. >>>>> >>>>> You may want to try to split the patch into smaller patches to >>>>> simplify review. See also some hints here: >>>>> >>>>> http://nginx.org/en/docs/contributing_changes.html >>>>> >>>>> Some quick comments below. >>>>> >>>>> [...] >>>>> >>>>> > - addr = ntohl(ctx->addr); >>>>> > +failed: >>>>> > + >>>>> > + //addr = ntohl(ctx->addr); >>>>> > + inet_ntop(ctx->addr.family, &ctx->addr.u, text, >>>>> > NGX_SOCKADDR_STRLEN); >>>>> > >>>>> > ngx_log_error(NGX_LOG_ALERT, r->log, 0, >>>>> > - "could not cancel %ud.%ud.%ud.%ud resolving", >>>>> > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, >>>>> > - (addr >> 8) & 0xff, addr & 0xff); >>>>> > + "could not cancel %s resolving", text); >>>>> >>>>> 1. Don't use inet_ntop(), there is ngx_sock_ntop() instead. >>>>> >>>>> 2. Don't use C++ style ("//") comments. >>>>> >>>>> 3. If some data is only needed for debug logging, keep relevant >>>>> calculations under #if (NGX_DEBUG). >>>>> >>>>> [...] >>>>> >>>>> > @@ -334,6 +362,7 @@ >>>>> > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, >>>>> > peers->peer[i].current_weight = 0; >>>>> > peers->peer[i].max_fails = 1; >>>>> > peers->peer[i].fail_timeout = 10; >>>>> > + >>>>> > } >>>>> > } >>>>> > >>>>> >>>>> Please avoid unrelated changes. >>>>> >>>>> [...] 
>>>>> >>>>> -- >>>>> Maxim Dounin >>>>> http://nginx.org/en/donation.html >>>>> >>>>> _______________________________________________ >>>>> nginx-devel mailing list >>>>> nginx-devel at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >>>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toshic.toshic at gmail.com Wed Jul 10 17:32:55 2013 From: toshic.toshic at gmail.com (ToSHiC) Date: Wed, 10 Jul 2013 21:32:55 +0400 Subject: IPv6 support in resolver In-Reply-To: References: <20130617153021.GH72282@mdounin.ru> Message-ID: commit f194edc7e351d3a487e9305935647f0587b65fca Author: Anton Kortunov Date: Wed Jul 10 20:53:21 2013 +0400 IPv6 support in mail server Client ip address is resolving to hostname even if it's IPv6 address. Forward resolve of this hostname is processed according to socket family. diff --git a/src/mail/ngx_mail_smtp_handler.c b/src/mail/ngx_mail_smtp_handler.c index 2171423..481e4a4 100644 --- a/src/mail/ngx_mail_smtp_handler.c +++ b/src/mail/ngx_mail_smtp_handler.c @@ -56,6 +56,9 @@ void ngx_mail_smtp_init_session(ngx_mail_session_t *s, ngx_connection_t *c) { struct sockaddr_in *sin; +#if (NGX_HAVE_INET6) + struct sockaddr_in6 *sin6; +#endif ngx_resolver_ctx_t *ctx; ngx_mail_core_srv_conf_t *cscf; @@ -67,7 +70,11 @@ ngx_mail_smtp_init_session(ngx_mail_session_t *s, ngx_connection_t *c) return; } - if (c->sockaddr->sa_family != AF_INET) { + if (c->sockaddr->sa_family != AF_INET +#if (NGX_HAVE_INET6) + && c->sockaddr->sa_family != AF_INET6 +#endif + ) { s->host = smtp_tempunavail; ngx_mail_smtp_greeting(s, c); return; @@ -81,11 +88,23 @@ ngx_mail_smtp_init_session(ngx_mail_session_t *s, ngx_connection_t *c) return; } - /* AF_INET only */ + ctx->addr.family = c->sockaddr->sa_family; - sin = (struct sockaddr_in *) c->sockaddr; + switch (c->sockaddr->sa_family) { + + case AF_INET: + sin = (struct sockaddr_in *) c->sockaddr; + ctx->addr.u.v4 = sin->sin_addr.s_addr; + break; + 
+#if (NGX_HAVE_INET6) + case AF_INET6: + sin6 = (struct sockaddr_in6 *) c->sockaddr; + ctx->addr.u.v6 = sin6->sin6_addr; + break; +#endif + } - ctx->addr = sin->sin_addr.s_addr; ctx->handler = ngx_mail_smtp_resolve_addr_handler; ctx->data = s; ctx->timeout = cscf->resolver_timeout; @@ -167,11 +186,23 @@ ngx_mail_smtp_resolve_name(ngx_event_t *rev) } ctx->name = s->host; - ctx->type = NGX_RESOLVE_A; ctx->handler = ngx_mail_smtp_resolve_name_handler; ctx->data = s; ctx->timeout = cscf->resolver_timeout; + switch (c->sockaddr->sa_family) { + + case AF_INET: + ctx->type = NGX_RESOLVE_A; + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + ctx->type = NGX_RESOLVE_AAAA_A; + break; +#endif + } + if (ngx_resolve_name(ctx) != NGX_OK) { ngx_mail_close_connection(c); } @@ -181,10 +212,13 @@ ngx_mail_smtp_resolve_name(ngx_event_t *rev) static void ngx_mail_smtp_resolve_name_handler(ngx_resolver_ctx_t *ctx) { - in_addr_t addr; + ngx_ipaddr_t addr; ngx_uint_t i; ngx_connection_t *c; struct sockaddr_in *sin; +#if (NGX_HAVE_INET6) + struct sockaddr_in6 *sin6; +#endif ngx_mail_session_t *s; s = ctx->data; @@ -205,23 +239,55 @@ ngx_mail_smtp_resolve_name_handler(ngx_resolver_ctx_t *ctx) } else { - /* AF_INET only */ + addr.family = c->sockaddr->sa_family; - sin = (struct sockaddr_in *) c->sockaddr; + switch (c->sockaddr->sa_family) { + + case AF_INET: + sin = (struct sockaddr_in *) c->sockaddr; + addr.u.v4 = sin->sin_addr.s_addr; + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + sin6 = (struct sockaddr_in6 *) c->sockaddr; + addr.u.v6 = sin6->sin6_addr; + break; +#endif + } for (i = 0; i < ctx->naddrs; i++) { - addr = ctx->addrs[i]; +#if (NGX_DEBUG) + { + u_char text[NGX_SOCKADDR_STRLEN]; + + ngx_inet_ntop(ctx->addrs[i].family, &ctx->addrs[i].u, text, NGX_SOCKADDR_STRLEN); + + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, + "name was resolved to %s", text); + } +#endif + + if (addr.family != ctx->addrs[i].family) { + continue; + } - ngx_log_debug4(NGX_LOG_DEBUG_MAIL, c->log, 0, - 
"name was resolved to %ud.%ud.%ud.%ud", - (ntohl(addr) >> 24) & 0xff, - (ntohl(addr) >> 16) & 0xff, - (ntohl(addr) >> 8) & 0xff, - ntohl(addr) & 0xff); + switch (addr.family) { - if (addr == sin->sin_addr.s_addr) { - goto found; + case AF_INET: + if (addr.u.v4 == ctx->addrs[i].u.v4) { + goto found; + } + break; + +#if (NGX_HAVE_INET6) + case AF_INET6: + if (!ngx_memcmp(&addr.u.v6, &ctx->addrs[i].u.v6, sizeof(addr.u.v6))) { + goto found; + } + break; +#endif } } On Wed, Jul 10, 2013 at 9:30 PM, ToSHiC wrote: > commit 2bf37859004e3ff2b5dd9a11e1725153ca43ff32 > Author: Anton Kortunov > Date: Wed Jul 10 20:49:28 2013 +0400 > > IPv6 support in http server upstreams > > Try to resolve upstream server name to IPv4 address first, then to > IPv6. > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > index 16e6602..df522f7 100644 > --- a/src/http/ngx_http_upstream.c > +++ b/src/http/ngx_http_upstream.c > @@ -638,7 +638,11 @@ ngx_http_upstream_init_request(ngx_http_request_t *r) > } > > ctx->name = *host; > +#if (NGX_HAVE_INET6) > + ctx->type = NGX_RESOLVE_A_AAAA; > +#else > ctx->type = NGX_RESOLVE_A; > +#endif > ctx->handler = ngx_http_upstream_resolve_handler; > ctx->data = r; > ctx->timeout = clcf->resolver_timeout; > @@ -912,16 +916,14 @@ ngx_http_upstream_resolve_handler(ngx_resolver_ctx_t > *ctx) > > #if (NGX_DEBUG) > { > - in_addr_t addr; > + u_char text[NGX_SOCKADDR_STRLEN]; > ngx_uint_t i; > > - for (i = 0; i < ctx->naddrs; i++) { > - addr = ntohl(ur->addrs[i]); > + for (i = 0; i < ur->naddrs; i++) { > + ngx_inet_ntop(ur->addrs[i].family, &ur->addrs[i].u, text, > NGX_SOCKADDR_STRLEN); > > - ngx_log_debug4(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > - "name was resolved to %ud.%ud.%ud.%ud", > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, > - (addr >> 8) & 0xff, addr & 0xff); > + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > + "name was resolved to %s", text); > } > } > #endif > diff --git a/src/http/ngx_http_upstream.h 
b/src/http/ngx_http_upstream.h > index fd4e36b..9e88a9a 100644 > --- a/src/http/ngx_http_upstream.h > +++ b/src/http/ngx_http_upstream.h > @@ -254,7 +254,7 @@ typedef struct { > ngx_uint_t no_port; /* unsigned no_port:1 */ > > ngx_uint_t naddrs; > - in_addr_t *addrs; > + ngx_ipaddr_t *addrs; > > struct sockaddr *sockaddr; > socklen_t socklen; > diff --git a/src/http/ngx_http_upstream_round_robin.c > b/src/http/ngx_http_upstream_round_robin.c > index e0c6c58..cf9d6a0 100644 > --- a/src/http/ngx_http_upstream_round_robin.c > +++ b/src/http/ngx_http_upstream_round_robin.c > @@ -268,6 +268,9 @@ > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, > size_t len; > ngx_uint_t i, n; > struct sockaddr_in *sin; > +#if (NGX_HAVE_INET6) > + struct sockaddr_in6 *sin6; > +#endif > ngx_http_upstream_rr_peers_t *peers; > ngx_http_upstream_rr_peer_data_t *rrp; > > @@ -306,27 +309,52 @@ > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, > > for (i = 0; i < ur->naddrs; i++) { > > - len = NGX_INET_ADDRSTRLEN + sizeof(":65536") - 1; > + len = NGX_SOCKADDR_STRLEN; > > p = ngx_pnalloc(r->pool, len); > if (p == NULL) { > return NGX_ERROR; > } > > - len = ngx_inet_ntop(AF_INET, &ur->addrs[i], p, > NGX_INET_ADDRSTRLEN); > + len = ngx_inet_ntop(ur->addrs[i].family, &ur->addrs[i].u, p, > NGX_SOCKADDR_STRLEN - sizeof(":65535") + 1); > len = ngx_sprintf(&p[len], ":%d", ur->port) - p; > > - sin = ngx_pcalloc(r->pool, sizeof(struct sockaddr_in)); > - if (sin == NULL) { > + switch (ur->addrs[i].family) { > + > + case AF_INET: > + sin = ngx_pcalloc(r->pool, sizeof(struct sockaddr_in)); > + if (sin == NULL) { > + return NGX_ERROR; > + } > + > + sin->sin_family = AF_INET; > + sin->sin_port = htons(ur->port); > + sin->sin_addr.s_addr = ur->addrs[i].u.v4; > + > + peers->peer[i].sockaddr = (struct sockaddr *) sin; > + peers->peer[i].socklen = sizeof(struct sockaddr_in); > + break; > + > +#if (NGX_HAVE_INET6) > + case AF_INET6: > + sin6 = ngx_pcalloc(r->pool, sizeof(struct 
sockaddr_in6)); > + if (sin6 == NULL) { > + return NGX_ERROR; > + } > + > + sin6->sin6_family = AF_INET6; > + sin6->sin6_port = htons(ur->port); > + sin6->sin6_addr = ur->addrs[i].u.v6; > + > + peers->peer[i].sockaddr = (struct sockaddr *) sin6; > + peers->peer[i].socklen = sizeof(struct sockaddr_in6); > + break; > +#endif > + > + default: > return NGX_ERROR; > } > > - sin->sin_family = AF_INET; > - sin->sin_port = htons(ur->port); > - sin->sin_addr.s_addr = ur->addrs[i]; > - > - peers->peer[i].sockaddr = (struct sockaddr *) sin; > - peers->peer[i].socklen = sizeof(struct sockaddr_in); > peers->peer[i].name.len = len; > peers->peer[i].name.data = p; > peers->peer[i].weight = 1; > > > > On Wed, Jul 10, 2013 at 9:29 PM, ToSHiC wrote: > >> commit 524dd02549575cb9ad5e95444093f6b494dc59bc >> Author: Anton Kortunov >> Date: Wed Jul 10 20:43:59 2013 +0400 >> >> IPv6 reverse resolve support >> >> diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c >> index 567368b..06d46c1 100644 >> --- a/src/core/ngx_resolver.c >> +++ b/src/core/ngx_resolver.c >> @@ -71,7 +71,7 @@ static void >> ngx_resolver_process_response(ngx_resolver_t *r, u_char *buf, >> size_t n); >> static void ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, >> size_t n, >> ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan, ngx_uint_t ans); >> -static void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, >> size_t n, >> +void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, >> ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan); >> static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, >> ngx_str_t *name, uint32_t hash); >> @@ -126,7 +126,7 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_str_t *names, >> ngx_uint_t n) >> ngx_resolver_rbtree_insert_value); >> >> ngx_rbtree_init(&r->addr_rbtree, &r->addr_sentinel, >> - ngx_rbtree_insert_value); >> + ngx_resolver_rbtree_insert_value); >> >> ngx_queue_init(&r->name_resend_queue); >> 
ngx_queue_init(&r->addr_resend_queue); >> @@ -649,17 +649,40 @@ failed: >> ngx_int_t >> ngx_resolve_addr(ngx_resolver_ctx_t *ctx) >> { >> + uint32_t hash; >> u_char *name; >> ngx_resolver_t *r; >> ngx_resolver_node_t *rn; >> >> r = ctx->resolver; >> + rn = NULL; >> + >> + hash = ctx->addr.family; >> + >> + switch(ctx->addr.family) { >> + >> + case AF_INET: >> + ctx->addr.u.v4 = ntohl(ctx->addr.u.v4); >> + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v4, >> sizeof(in_addr_t)); >> +ngx_log_debug3(NGX_LOG_DEBUG_CORE, r->log, 0, >> + "resolve addr hash: %xd, addr:%xd, family: %d", hash, >> ctx->addr.u.v4, ctx->addr.family); >> + break; >> + >> +#if (NGX_HAVE_INET6) >> + case AF_INET6: >> + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v6, sizeof(struct >> in6_addr)); >> + break; >> +#endif >> >> - ctx->addr = ntohl(ctx->addr); >> + default: >> + goto failed; >> + } >> >> /* lock addr mutex */ >> >> - rn = ngx_resolver_lookup_addr(r, ctx->addr); >> + rn = ngx_resolver_lookup_addr(r, ctx->addr, hash); >> + ngx_log_error(r->log_level, r->log, 0, >> + "resolve: in resolve_addr searching, hash = %xd, rn = >> %p", hash, rn); >> >> if (rn) { >> >> @@ -714,8 +737,10 @@ ngx_resolve_addr(ngx_resolver_ctx_t *ctx) >> goto failed; >> } >> >> - rn->node.key = ctx->addr; >> + rn->node.key = hash; >> rn->query = NULL; >> + rn->qtype = ctx->type; >> + rn->u.addr = ctx->addr; >> >> ngx_rbtree_insert(&r->addr_rbtree, &rn->node); >> } >> @@ -788,10 +813,11 @@ failed: >> void >> ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) >> { >> - in_addr_t addr; >> + uint32_t hash; >> ngx_resolver_t *r; >> ngx_resolver_ctx_t *w, **p; >> ngx_resolver_node_t *rn; >> + u_char text[NGX_SOCKADDR_STRLEN]; >> >> r = ctx->resolver; >> >> @@ -806,7 +832,25 @@ ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) >> >> if (ctx->state == NGX_AGAIN || ctx->state == NGX_RESOLVE_TIMEDOUT) { >> >> - rn = ngx_resolver_lookup_addr(r, ctx->addr); >> + hash = ctx->addr.family; >> + >> + switch(ctx->addr.family) { >> + >> + 
case AF_INET: >> + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v4, >> sizeof(in_addr_t)); >> + break; >> + >> +#if (NGX_HAVE_INET6) >> + case AF_INET6: >> + ngx_crc32_update(&hash, (u_char *)&ctx->addr.u.v6, >> sizeof(struct in6_addr)); >> + break; >> +#endif >> + >> + default: >> + goto failed; >> + } >> + >> + rn = ngx_resolver_lookup_addr(r, ctx->addr, hash); >> >> if (rn) { >> p = &rn->waiting; >> @@ -824,12 +868,12 @@ ngx_resolve_addr_done(ngx_resolver_ctx_t *ctx) >> } >> } >> >> - addr = ntohl(ctx->addr); >> +failed: >> + >> + ngx_inet_ntop(ctx->addr.family, &ctx->addr.u, text, >> NGX_SOCKADDR_STRLEN); >> >> ngx_log_error(NGX_LOG_ALERT, r->log, 0, >> - "could not cancel %ud.%ud.%ud.%ud resolving", >> - (addr >> 24) & 0xff, (addr >> 16) & 0xff, >> - (addr >> 8) & 0xff, addr & 0xff); >> + "could not cancel %s resolving", text); >> } >> >> done: >> @@ -1582,13 +1626,14 @@ failed: >> } >> >> >> -static void >> +void >> ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, >> ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan) >> { >> - char *err; >> + char *err = NULL; >> + uint32_t hash = 0; >> size_t len; >> - in_addr_t addr; >> + ngx_ipaddr_t addr; >> int32_t ttl; >> ngx_int_t digit; >> ngx_str_t name; >> @@ -1596,12 +1641,16 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, >> u_char *buf, size_t n, >> ngx_resolver_an_t *an; >> ngx_resolver_ctx_t *ctx, *next; >> ngx_resolver_node_t *rn; >> + u_char text[NGX_SOCKADDR_STRLEN]; >> >> if (ngx_resolver_copy(r, NULL, buf, &buf[12], &buf[n]) != NGX_OK) { >> goto invalid_in_addr_arpa; >> } >> >> - addr = 0; >> + ngx_memzero(&addr, sizeof(ngx_ipaddr_t)); >> + >> + /* Try to parse request as in-addr.arpa */ >> + addr.family = AF_INET; >> i = 12; >> >> for (mask = 0; mask < 32; mask += 8) { >> @@ -1612,7 +1661,7 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char >> *buf, size_t n, >> goto invalid_in_addr_arpa; >> } >> >> - addr += digit << mask; >> + addr.u.v4 += digit << mask; >> i += len; >> } >> >> @@ 
-1620,15 +1669,79 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, >> u_char *buf, size_t n, >> goto invalid_in_addr_arpa; >> } >> >> + i += sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t); >> + >> + goto found; >> + >> +invalid_in_addr_arpa: >> + >> +#if (NGX_HAVE_INET6) >> + /* Try to parse request as ip6.arpa */ >> + addr.family = AF_INET6; >> + i = 12; >> + >> + for (len = 15; len < 16; len--) { >> + if (buf[i++] != 1) >> + goto invalid_arpa; >> + >> + digit = ngx_hextoi(&buf[i++], 1); >> + if (digit == NGX_ERROR || digit > 16) { >> + goto invalid_arpa; >> + } >> + >> + addr.u.v6.s6_addr[len] = digit; >> + >> + if (buf[i++] != 1) >> + goto invalid_arpa; >> + >> + >> + digit = ngx_hextoi(&buf[i++], 1); >> + if (digit == NGX_ERROR || digit > 16) { >> + goto invalid_arpa; >> + } >> + >> + addr.u.v6.s6_addr[len] += digit << 4; >> + } >> + >> + if (ngx_strcmp(&buf[i], "\3ip6\4arpa") != 0) { >> + goto invalid_arpa; >> + } >> + >> + i += sizeof("\3ip6\4arpa") + sizeof(ngx_resolver_qs_t); >> + >> +#else /* NGX_HAVE_INET6 */ >> + goto invalid_arpa; >> +#endif >> + >> +found: >> + >> /* lock addr mutex */ >> >> - rn = ngx_resolver_lookup_addr(r, addr); >> + hash = addr.family; >> + >> + switch(addr.family) { >> + >> + case AF_INET: >> + ngx_crc32_update(&hash, (u_char *)&addr.u.v4, sizeof(in_addr_t)); >> + break; >> + >> +#if (NGX_HAVE_INET6) >> + case AF_INET6: >> + ngx_crc32_update(&hash, (u_char *)&addr.u.v6, sizeof(struct >> in6_addr)); >> + break; >> +#endif >> + >> + default: >> + goto invalid; >> + } >> + >> + rn = ngx_resolver_lookup_addr(r, addr, hash); >> + >> + ngx_inet_ntop(addr.family, &addr.u, text, NGX_SOCKADDR_STRLEN); >> >> if (rn == NULL || rn->query == NULL) { >> ngx_log_error(r->log_level, r->log, 0, >> - "unexpected response for %ud.%ud.%ud.%ud", >> - (addr >> 24) & 0xff, (addr >> 16) & 0xff, >> - (addr >> 8) & 0xff, addr & 0xff); >> + "unexpected response for %s", text); >> goto failed; >> } >> >> @@ -1636,12 +1749,15 @@ 
ngx_resolver_process_ptr(ngx_resolver_t *r, >> u_char *buf, size_t n, >> >> if (ident != qident) { >> ngx_log_error(r->log_level, r->log, 0, >> - "wrong ident %ui response for %ud.%ud.%ud.%ud, >> expect %ui", >> - ident, (addr >> 24) & 0xff, (addr >> 16) & 0xff, >> - (addr >> 8) & 0xff, addr & 0xff, qident); >> + "wrong ident %ui response for %s, expect %ui", >> + ident, text, qident); >> goto failed; >> } >> >> + ngx_log_error(r->log_level, r->log, 0, >> + "code: %d, nan: %d", >> + code, nan); >> + >> if (code == 0 && nan == 0) { >> code = 3; /* NXDOMAIN */ >> } >> @@ -1669,8 +1785,6 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, u_char >> *buf, size_t n, >> return; >> } >> >> - i += sizeof("\7in-addr\4arpa") + sizeof(ngx_resolver_qs_t); >> - >> if (i + 2 + sizeof(ngx_resolver_an_t) > (ngx_uint_t) n) { >> goto short_response; >> } >> @@ -1750,10 +1864,10 @@ ngx_resolver_process_ptr(ngx_resolver_t *r, >> u_char *buf, size_t n, >> >> return; >> -invalid_in_addr_arpa: >> +invalid_arpa: >> >> ngx_log_error(r->log_level, r->log, 0, >> - "invalid in-addr.arpa name in DNS response"); >> + "invalid in-addr.arpa or ip6.arpa name in DNS >> response"); >> return; >> >> short_response: >> @@ -1818,28 +1932,54 @@ ngx_resolver_lookup_name(ngx_resolver_t *r, >> ngx_str_t *name, uint32_t hash) >> >> >> static ngx_resolver_node_t * >> -ngx_resolver_lookup_addr(ngx_resolver_t *r, in_addr_t addr) >> +ngx_resolver_lookup_addr(ngx_resolver_t *r, ngx_ipaddr_t addr, uint32_t >> hash) >> { >> + ngx_int_t rc; >> ngx_rbtree_node_t *node, *sentinel; >> + ngx_resolver_node_t *rn; >> >> node = r->addr_rbtree.root; >> sentinel = r->addr_rbtree.sentinel; >> >> while (node != sentinel) { >> >> - if (addr < node->key) { >> + if (hash < node->key) { >> node = node->left; >> continue; >> } >> >> - if (addr > node->key) { >> + if (hash > node->key) { >> node = node->right; >> continue; >> } >> >> - /* addr == node->key */ >> + /* hash == node->key */ >> + >> + rn = (ngx_resolver_node_t *) node; >> + 
>> + rc = addr.family - rn->u.addr.family; >> + >> + if (rc == 0) { >> + >> + switch (addr.family) { >> + case AF_INET: >> + rc = ngx_memn2cmp((u_char *)&addr.u.v4, (u_char >> *)&rn->u.addr.u.v4, sizeof(in_addr_t), sizeof(in_addr_t)); >> + break; >> + >> +#if (NGX_HAVE_INET6) >> + case AF_INET6: >> + rc = ngx_memn2cmp((u_char *)&addr.u.v6, (u_char >> *)&rn->u.addr.u.v6, sizeof(struct in6_addr), sizeof(struct in6_addr)); >> + break; >> +#endif >> + } >> + >> + if (rc == 0) { >> + return rn; >> + } >> >> - return (ngx_resolver_node_t *) node; >> + } >> + >> + node = (rc < 0) ? node->left : node->right; >> } >> >> /* not found */ >> @@ -1854,6 +1994,7 @@ ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t >> *temp, >> { >> ngx_rbtree_node_t **p; >> ngx_resolver_node_t *rn, *rn_temp; >> + ngx_int_t rc; >> >> for ( ;; ) { >> >> @@ -1870,8 +2011,29 @@ ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t >> *temp, >> rn = (ngx_resolver_node_t *) node; >> rn_temp = (ngx_resolver_node_t *) temp; >> >> - p = (ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, >> rn_temp->nlen) >> - < 0) ? &temp->left : &temp->right; >> + if (rn->qtype == NGX_RESOLVE_PTR) { >> + rc = rn->u.addr.family - rn_temp->u.addr.family; >> + >> + if (rc == 0) { >> + >> + switch (rn->u.addr.family) { >> + case AF_INET: >> + rc = ngx_memn2cmp((u_char *)&rn->u.addr.u.v4, >> (u_char *)&rn_temp->u.addr.u.v4, sizeof(in_addr_t), sizeof(in_addr_t)); >> + break; >> + >> + #if (NGX_HAVE_INET6) >> + case AF_INET6: >> + rc = ngx_memn2cmp((u_char *)&rn->u.addr.u.v6, >> (u_char *)&rn_temp->u.addr.u.v6, sizeof(struct in6_addr), sizeof(struct >> in6_addr)); >> + break; >> + #endif >> + } >> + } >> + >> + } else { >> + rc = ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, >> rn_temp->nlen); >> + } >> + >> + p = (rc < 0) ? 
&temp->left : &temp->right; >> } >> >> if (*p == sentinel) { >> @@ -1989,8 +2151,6 @@ ngx_resolver_create_name_query(ngx_resolver_node_t >> *rn, ngx_resolver_ctx_t *ctx) >> } >> >> >> -/* AF_INET only */ >> - >> static ngx_int_t >> ngx_resolver_create_addr_query(ngx_resolver_node_t *rn, >> ngx_resolver_ctx_t *ctx) >> { >> @@ -2001,7 +2161,7 @@ ngx_resolver_create_addr_query(ngx_resolver_node_t >> *rn, ngx_resolver_ctx_t *ctx) >> ngx_resolver_query_t *query; >> >> len = sizeof(ngx_resolver_query_t) >> - + sizeof(".255.255.255.255.in-addr.arpa.") - 1 >> + + NGX_PTR_QUERY_LEN >> + sizeof(ngx_resolver_qs_t); >> >> p = ngx_resolver_alloc(ctx->resolver, len); >> @@ -2028,18 +2188,50 @@ >> ngx_resolver_create_addr_query(ngx_resolver_node_t *rn, ngx_resolver_ctx_t >> *ctx) >> p += sizeof(ngx_resolver_query_t); >> >> - for (n = 0; n < 32; n += 8) { >> - d = ngx_sprintf(&p[1], "%ud", (ctx->addr >> n) & 0xff); >> - *p = (u_char) (d - &p[1]); >> - p = d; >> + switch (ctx->addr.family) { >> + >> + case AF_INET: >> + for (n = 0; n < 32; n += 8) { >> + d = ngx_sprintf(&p[1], "%ud", (ctx->addr.u.v4 >> n) & 0xff); >> + *p = (u_char) (d - &p[1]); >> + p = d; >> + } >> + >> + /* query type "PTR", IP query class */ >> + ngx_memcpy(p, "\7in-addr\4arpa\0\0\14\0\1", 18); >> + >> + rn->qlen = (u_short) >> + (p + sizeof("\7in-addr\4arpa") + >> sizeof(ngx_resolver_qs_t) >> + - rn->query); >> + >> + break; >> + >> +#if (NGX_HAVE_INET6) >> + case AF_INET6: >> + for (n = 15; n >= 0; n--) { >> + p = ngx_sprintf(p, "\1%xd\1%xd", >> + (ctx->addr.u.v6.s6_addr[n]) & 0xf, >> + (ctx->addr.u.v6.s6_addr[n] >> 4) & 0xf); >> + >> + } >> + >> + /* query type "PTR", IP query class */ >> + ngx_memcpy(p, "\3ip6\4arpa\0\0\14\0\1", 18); >> + >> + rn->qlen = (u_short) >> + (p + sizeof("\3ip6\4arpa") + >> sizeof(ngx_resolver_qs_t) >> + - rn->query); >> + >> + break; >> +#endif >> + >> + default: >> + return NGX_ERROR; >> } >> >> - /* query type "PTR", IP query class */ >> - ngx_memcpy(p, 
"\7in-addr\4arpa\0\0\14\0\1", 18); >> +ngx_log_debug2(NGX_LOG_DEBUG_CORE, ctx->resolver->log, 0, >> + "resolve: query %s, ident %i", (rn->query+12), ident & >> 0xffff); >> >> - rn->qlen = (u_short) >> - (p + sizeof("\7in-addr\4arpa") + >> sizeof(ngx_resolver_qs_t) >> - - rn->query); >> >> return NGX_OK; >> } >> diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h >> index d2a4606..a45b244 100644 >> --- a/src/core/ngx_resolver.h >> +++ b/src/core/ngx_resolver.h >> @@ -41,6 +41,11 @@ >> >> #define NGX_RESOLVER_MAX_RECURSION 50 >> >> +#if (NGX_HAVE_INET6) >> +#define NGX_PTR_QUERY_LEN >> (sizeof(".f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.ip6.arpa.") >> - 1) >> +#else >> +#define NGX_PTR_QUERY_LEN (sizeof(".255.255.255.255.in-addr.arpa.") - >> 1) >> +#endif >> >> typedef struct { >> ngx_connection_t *connection; >> >> >> >> On Wed, Jul 10, 2013 at 9:24 PM, ToSHiC wrote: >> >>> commit 8670b164784032b2911b3c34ac31ef52ddba5b60 >>> Author: Anton Kortunov >>> Date: Wed Jul 10 19:53:06 2013 +0400 >>> >>> IPv6 support in resolver for forward requests >>> >>> To resolve name into IPv6 address use NGX_RESOLVE_AAAA, >>> NGX_RESOLVE_A_AAAA or NGX_RESOLVE_AAAA_A record type instead of >>> NGX_RESOLVE_A >>> >>> diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c >>> index d59d0c4..567368b 100644 >>> --- a/src/core/ngx_resolver.c >>> +++ b/src/core/ngx_resolver.c >>> @@ -76,7 +76,7 @@ static void ngx_resolver_process_ptr(ngx_resolver_t >>> *r, u_char *buf, size_t n, >>> static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, >>> ngx_str_t *name, uint32_t hash); >>> static ngx_resolver_node_t *ngx_resolver_lookup_addr(ngx_resolver_t *r, >>> - in_addr_t addr); >>> + ngx_ipaddr_t addr, uint32_t hash); >>> static void ngx_resolver_rbtree_insert_value(ngx_rbtree_node_t *temp, >>> ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); >>> static ngx_int_t ngx_resolver_copy(ngx_resolver_t *r, ngx_str_t *name, >>> @@ -88,7 +88,7 @@ 
static void *ngx_resolver_calloc(ngx_resolver_t *r, >>> size_t size); >>> static void ngx_resolver_free(ngx_resolver_t *r, void *p); >>> static void ngx_resolver_free_locked(ngx_resolver_t *r, void *p); >>> static void *ngx_resolver_dup(ngx_resolver_t *r, void *src, size_t >>> size); >>> -static in_addr_t *ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, >>> +static ngx_ipaddr_t *ngx_resolver_rotate(ngx_resolver_t *r, >>> ngx_ipaddr_t *src, >>> ngx_uint_t n); >>> static u_char *ngx_resolver_log_error(ngx_log_t *log, u_char *buf, >>> size_t len); >>> >>> @@ -270,13 +270,27 @@ ngx_resolver_cleanup_tree(ngx_resolver_t *r, >>> ngx_rbtree_t *tree) >>> ngx_resolver_ctx_t * >>> ngx_resolve_start(ngx_resolver_t *r, ngx_resolver_ctx_t *temp) >>> { >>> - in_addr_t addr; >>> + ngx_ipaddr_t addr; >>> ngx_resolver_ctx_t *ctx; >>> >>> if (temp) { >>> - addr = ngx_inet_addr(temp->name.data, temp->name.len); >>> + addr.family = 0; >>> >>> - if (addr != INADDR_NONE) { >>> + >>> + addr.u.v4 = ngx_inet_addr(temp->name.data, temp->name.len); >>> + >>> + if (addr.u.v4 != INADDR_NONE) { >>> + >>> + addr.family = AF_INET; >>> + >>> +#if (NGX_HAVE_INET6) >>> + } else if (ngx_inet6_addr(temp->name.data, temp->name.len, >>> addr.u.v6.s6_addr) == NGX_OK) { >>> + >>> + addr.family = AF_INET6; >>> +#endif >>> + } >>> + >>> + if (addr.family) { >>> temp->resolver = r; >>> temp->state = NGX_OK; >>> temp->naddrs = 1; >>> @@ -417,7 +431,7 @@ static ngx_int_t >>> ngx_resolve_name_locked(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx) >>> { >>> uint32_t hash; >>> - in_addr_t addr, *addrs; >>> + ngx_ipaddr_t addr, *addrs; >>> ngx_int_t rc; >>> ngx_uint_t naddrs; >>> ngx_resolver_ctx_t *next; >>> @@ -429,7 +443,11 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >>> ngx_resolver_ctx_t *ctx) >>> >>> if (rn) { >>> >>> - if (rn->valid >= ngx_time()) { >>> + if (rn->valid >= ngx_time() >>> +#if (NGX_HAVE_INET6) >>> + && rn->qtype != NGX_RESOLVE_RETRY >>> +#endif >>> + ) { >>> >>> 
ngx_log_debug0(NGX_LOG_DEBUG_CORE, r->log, 0, "resolve >>> cached"); >>> >>> @@ -446,7 +464,6 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >>> ngx_resolver_ctx_t *ctx) >>> /* NGX_RESOLVE_A answer */ >>> >>> if (naddrs != 1) { >>> - addr = 0; >>> addrs = ngx_resolver_rotate(r, rn->u.addrs, naddrs); >>> if (addrs == NULL) { >>> return NGX_ERROR; >>> @@ -506,6 +523,8 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >>> ngx_resolver_ctx_t *ctx) >>> } while (ctx); >>> >>> return NGX_OK; >>> + } else { >>> + rn->qtype = ctx->type; >>> } >>> >>> if (rn->waiting) { >>> @@ -552,6 +571,7 @@ ngx_resolve_name_locked(ngx_resolver_t *r, >>> ngx_resolver_ctx_t *ctx) >>> rn->node.key = hash; >>> rn->nlen = (u_short) ctx->name.len; >>> rn->query = NULL; >>> + rn->qtype = ctx->type; >>> >>> ngx_rbtree_insert(&r->name_rbtree, &rn->node); >>> } >>> @@ -1130,6 +1150,9 @@ found: >>> switch (qtype) { >>> >>> case NGX_RESOLVE_A: >>> +#if (NGX_HAVE_INET6) >>> + case NGX_RESOLVE_AAAA: >>> +#endif >>> >>> ngx_resolver_process_a(r, buf, n, ident, code, nan, >>> i + sizeof(ngx_resolver_qs_t)); >>> @@ -1178,7 +1201,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >>> *buf, size_t last, >>> size_t len; >>> int32_t ttl; >>> uint32_t hash; >>> - in_addr_t addr, *addrs; >>> + ngx_ipaddr_t addr, *addrs; >>> ngx_str_t name; >>> ngx_uint_t qtype, qident, naddrs, a, i, n, start; >>> ngx_resolver_an_t *an; >>> @@ -1212,12 +1235,57 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >>> *buf, size_t last, >>> goto failed; >>> } >>> >>> - ngx_resolver_free(r, name.data); >>> - >>> if (code == 0 && nan == 0) { >>> + >>> +#if (NGX_HAVE_INET6) >>> + /* >>> + * If it was required dual type v4|v6 resolv create one more request >>> + */ >>> + if (rn->qtype == NGX_RESOLVE_A_AAAA >>> + || rn->qtype == NGX_RESOLVE_AAAA_A) { >>> + >>> + ngx_queue_remove(&rn->queue); >>> + >>> + rn->valid = ngx_time() + (r->valid ? 
r->valid : ttl); >>> + rn->expire = ngx_time() + r->expire; >>> + >>> + ngx_queue_insert_head(&r->name_expire_queue, &rn->queue); >>> + >>> + ctx = rn->waiting; >>> + rn->waiting = NULL; >>> + >>> + if (ctx) { >>> + ctx->name = name; >>> + >>> + switch (rn->qtype) { >>> + >>> + case NGX_RESOLVE_A_AAAA: >>> + ctx->type = NGX_RESOLVE_AAAA; >>> + break; >>> + >>> + case NGX_RESOLVE_AAAA_A: >>> + ctx->type = NGX_RESOLVE_A; >>> + break; >>> + } >>> + >>> + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, >>> + "restarting request for name %V, with >>> type %ud", >>> + &name, ctx->type); >>> + >>> + rn->qtype = NGX_RESOLVE_RETRY; >>> + >>> + (void) ngx_resolve_name_locked(r, ctx); >>> + } >>> + >>> + return; >>> + } >>> +#endif >>> + >>> code = 3; /* NXDOMAIN */ >>> } >>> >>> + ngx_resolver_free(r, name.data); >>> + >>> if (code) { >>> next = rn->waiting; >>> rn->waiting = NULL; >>> @@ -1243,7 +1311,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >>> *buf, size_t last, >>> >>> i = ans; >>> naddrs = 0; >>> - addr = 0; >>> + addr.family = 0; >>> addrs = NULL; >>> cname = NULL; >>> qtype = 0; >>> @@ -1302,13 +1370,30 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >>> *buf, size_t last, >>> goto short_response; >>> } >>> >>> - addr = htonl((buf[i] << 24) + (buf[i + 1] << 16) >>> + addr.family = AF_INET; >>> + addr.u.v4 = htonl((buf[i] << 24) + (buf[i + 1] << 16) >>> + (buf[i + 2] << 8) + (buf[i + 3])); >>> >>> naddrs++; >>> >>> i += len; >>> >>> +#if (NGX_HAVE_INET6) >>> + } else if (qtype == NGX_RESOLVE_AAAA) { >>> + >>> + i += sizeof(ngx_resolver_an_t); >>> + >>> + if (i + len > last) { >>> + goto short_response; >>> + } >>> + >>> + addr.family = AF_INET6; >>> + ngx_memcpy(&addr.u.v6.s6_addr, &buf[i], 16); >>> + >>> + naddrs++; >>> + >>> + i += len; >>> +#endif >>> } else if (qtype == NGX_RESOLVE_CNAME) { >>> cname = &buf[i] + sizeof(ngx_resolver_an_t); >>> i += sizeof(ngx_resolver_an_t) + len; >>> @@ -1333,7 +1418,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, 
u_char >>> *buf, size_t last, >>> >>> } else { >>> >>> - addrs = ngx_resolver_alloc(r, naddrs * sizeof(in_addr_t)); >>> + addrs = ngx_resolver_alloc(r, naddrs * >>> sizeof(ngx_ipaddr_t)); >>> if (addrs == NULL) { >>> return; >>> } >>> @@ -1369,12 +1454,23 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >>> *buf, size_t last, >>> >>> if (qtype == NGX_RESOLVE_A) { >>> >>> - addrs[n++] = htonl((buf[i] << 24) + (buf[i + 1] << >>> 16) >>> + addrs[n].family = AF_INET; >>> + addrs[n++].u.v4 = htonl((buf[i] << 24) + (buf[i + >>> 1] << 16) >>> + (buf[i + 2] << 8) + (buf[i + >>> 3])); >>> >>> if (n == naddrs) { >>> break; >>> } >>> +#if (NGX_HAVE_INET6) >>> + } else if (qtype == NGX_RESOLVE_AAAA) { >>> + >>> + addrs[n].family = AF_INET6; >>> + ngx_memcpy(&addrs[n++].u.v6.s6_addr, &buf[i], 16); >>> + >>> + if (n == naddrs) { >>> + break; >>> + } >>> +#endif >>> } >>> >>> i += len; >>> @@ -1383,7 +1479,7 @@ ngx_resolver_process_a(ngx_resolver_t *r, u_char >>> *buf, size_t last, >>> rn->u.addrs = addrs; >>> >>> addrs = ngx_resolver_dup(r, rn->u.addrs, >>> - naddrs * sizeof(in_addr_t)); >>> + naddrs * sizeof(ngx_ipaddr_t)); >>> if (addrs == NULL) { >>> return; >>> } >>> @@ -1838,7 +1934,20 @@ >>> ngx_resolver_create_name_query(ngx_resolver_node_t *rn, ngx_resolver_ctx_t >>> *ctx) >>> qs = (ngx_resolver_qs_t *) p; >>> >>> /* query type */ >>> - qs->type_hi = 0; qs->type_lo = (u_char) ctx->type; >>> + qs->type_hi = 0; qs->type_lo = (u_char) rn->qtype; >>> + >>> +#if (NGX_HAVE_INET6) >>> + switch (rn->qtype) { >>> + >>> + case NGX_RESOLVE_A_AAAA: >>> + qs->type_lo = NGX_RESOLVE_A; >>> + break; >>> + >>> + case NGX_RESOLVE_AAAA_A: >>> + qs->type_lo = NGX_RESOLVE_AAAA; >>> + break; >>> + } >>> +#endif >>> >>> /* IP query class */ >>> qs->class_hi = 0; qs->class_lo = 1; >>> @@ -2136,13 +2245,13 @@ ngx_resolver_dup(ngx_resolver_t *r, void *src, >>> size_t size) >>> } >>> >>> >>> -static in_addr_t * >>> -ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src, ngx_uint_t n) >>> 
+static ngx_ipaddr_t * >>> +ngx_resolver_rotate(ngx_resolver_t *r, ngx_ipaddr_t *src, ngx_uint_t n) >>> { >>> void *dst, *p; >>> ngx_uint_t j; >>> >>> - dst = ngx_resolver_alloc(r, n * sizeof(in_addr_t)); >>> + dst = ngx_resolver_alloc(r, n * sizeof(ngx_ipaddr_t)); >>> >>> if (dst == NULL) { >>> return dst; >>> @@ -2151,12 +2260,12 @@ ngx_resolver_rotate(ngx_resolver_t *r, >>> in_addr_t *src, ngx_uint_t n) >>> j = ngx_random() % n; >>> >>> if (j == 0) { >>> - ngx_memcpy(dst, src, n * sizeof(in_addr_t)); >>> + ngx_memcpy(dst, src, n * sizeof(ngx_ipaddr_t)); >>> return dst; >>> } >>> >>> - p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(in_addr_t)); >>> - ngx_memcpy(p, src, j * sizeof(in_addr_t)); >>> + p = ngx_cpymem(dst, &src[j], (n - j) * sizeof(ngx_ipaddr_t)); >>> + ngx_memcpy(p, src, j * sizeof(ngx_ipaddr_t)); >>> >>> return dst; >>> } >>> diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h >>> index 6fd81fe..d2a4606 100644 >>> --- a/src/core/ngx_resolver.h >>> +++ b/src/core/ngx_resolver.h >>> @@ -67,10 +67,11 @@ typedef struct { >>> u_short qlen; >>> >>> u_char *query; >>> + ngx_int_t qtype; >>> >>> union { >>> - in_addr_t addr; >>> - in_addr_t *addrs; >>> + ngx_ipaddr_t addr; >>> + ngx_ipaddr_t *addrs; >>> u_char *cname; >>> } u; >>> >>> @@ -130,8 +131,8 @@ struct ngx_resolver_ctx_s { >>> ngx_str_t name; >>> >>> ngx_uint_t naddrs; >>> - in_addr_t *addrs; >>> - in_addr_t addr; >>> + ngx_ipaddr_t *addrs; >>> + ngx_ipaddr_t addr; >>> >>> ngx_resolver_handler_pt handler; >>> void *data; >>> >>> >>> >>> On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: >>> >>>> commit 482bd2a0b6240a2b26409b9c7924ad01c814f293 >>>> Author: Anton Kortunov >>>> Date: Wed Jul 10 13:21:27 2013 +0400 >>>> >>>> Added NGX_RESOLVE_* constants >>>> >>>> Module developers can decide how to resolve hosts relating to IPv6: >>>> >>>> NGX_RESOLVE_AAAA - try to resolve only to IPv6 address >>>> NGX_RESOLVE_AAAA_A - IPv6 is preferred (recommended by standards) >>>> NGX_RESOLVE_A_AAAA - IPv4 
is preferred (better strategy nowadays) >>>> >>>> diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h >>>> index ae34ca5..6fd81fe 100644 >>>> --- a/src/core/ngx_resolver.h >>>> +++ b/src/core/ngx_resolver.h >>>> @@ -20,6 +20,15 @@ >>>> #define NGX_RESOLVE_TXT 16 >>>> #define NGX_RESOLVE_DNAME 39 >>>> >>>> +#if (NGX_HAVE_INET6) >>>> + >>>> +#define NGX_RESOLVE_AAAA 28 >>>> +#define NGX_RESOLVE_A_AAAA 1000 >>>> +#define NGX_RESOLVE_AAAA_A 1001 >>>> +#define NGX_RESOLVE_RETRY 1002 >>>> + >>>> +#endif >>>> + >>>> #define NGX_RESOLVE_FORMERR 1 >>>> #define NGX_RESOLVE_SERVFAIL 2 >>>> #define NGX_RESOLVE_NXDOMAIN 3 >>>> >>>> >>>> >>>> On Wed, Jul 10, 2013 at 9:17 PM, ToSHiC wrote: >>>> >>>>> Hello, >>>>> >>>>> I've split this big patch into several small patches, taking into >>>>> account your comments. I'll send each part in separate email. Here is the >>>>> first one. >>>>> >>>>> commit 597d09e7ae9247c5466b18aa2ef3f5892e61b708 >>>>> Author: Anton Kortunov >>>>> Date: Wed Jul 10 13:14:52 2013 +0400 >>>>> >>>>> Added new structure ngx_ipaddr_t >>>>> >>>>> This structure contains family field >>>>> and the union of ipv4/ipv6 structures in_addr_t and in6_addr. >>>>> >>>>> diff --git a/src/core/ngx_inet.h b/src/core/ngx_inet.h >>>>> index 6a5a368..077ed34 100644 >>>>> --- a/src/core/ngx_inet.h >>>>> +++ b/src/core/ngx_inet.h >>>>> @@ -68,6 +68,16 @@ typedef struct { >>>>> >>>>> >>>>> typedef struct { >>>>> + ngx_uint_t family; >>>>> + union { >>>>> + in_addr_t v4; >>>>> +#if (NGX_HAVE_INET6) >>>>> + struct in6_addr v6; >>>>> +#endif >>>>> + } u; >>>>> +} ngx_ipaddr_t; >>>>> + >>>>> +typedef struct { >>>>> struct sockaddr *sockaddr; >>>>> socklen_t socklen; >>>>> ngx_str_t name; >>>>> >>>>> >>>>> >>>>> On Mon, Jun 17, 2013 at 7:30 PM, Maxim Dounin wrote: >>>>> >>>>>> Hello! 
>>>>>> >>>>>> On Fri, Jun 14, 2013 at 09:44:46PM +0400, ToSHiC wrote: >>>>>> >>>>>> > Hello, >>>>>> > >>>>>> > We needed this feature in our company, I found that it is in >>>>>> milestones of >>>>>> > version 1.5 but doesn't exist yet. So I've implemented it based in >>>>>> 1.3 code >>>>>> > and merged in current 1.5 code. When I wrote this code I mostly >>>>>> cared about >>>>>> > minimum intrusion into other parts of nginx. >>>>>> > >>>>>> > IPv6 fallback logic is not a straightforward implementation of >>>>>> suggested by >>>>>> > RFC. RFC states that IPv6 resolving have priority over IPv4, and >>>>>> it's not >>>>>> > very good for Internet we have currently. With this patch you can >>>>>> specify >>>>>> > priority, and in upstream and mail modules I've set IPv4 as >>>>>> preferred >>>>>> > address family. >>>>>> > >>>>>> > Patch is pretty big and I hope it'll not break mailing list or mail >>>>>> clients. >>>>>> >>>>>> You may want to try to split the patch into smaller patches to >>>>>> simplify review. See also some hints here: >>>>>> >>>>>> http://nginx.org/en/docs/contributing_changes.html >>>>>> >>>>>> Some quick comments below. >>>>>> >>>>>> [...] >>>>>> >>>>>> > - addr = ntohl(ctx->addr); >>>>>> > +failed: >>>>>> > + >>>>>> > + //addr = ntohl(ctx->addr); >>>>>> > + inet_ntop(ctx->addr.family, &ctx->addr.u, text, >>>>>> > NGX_SOCKADDR_STRLEN); >>>>>> > >>>>>> > ngx_log_error(NGX_LOG_ALERT, r->log, 0, >>>>>> > - "could not cancel %ud.%ud.%ud.%ud resolving", >>>>>> > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, >>>>>> > - (addr >> 8) & 0xff, addr & 0xff); >>>>>> > + "could not cancel %s resolving", text); >>>>>> >>>>>> 1. Don't use inet_ntop(), there is ngx_sock_ntop() instead. >>>>>> >>>>>> 2. Don't use C++ style ("//") comments. >>>>>> >>>>>> 3. If some data is only needed for debug logging, keep relevant >>>>>> calculations under #if (NGX_DEBUG). >>>>>> >>>>>> [...] 
>>>>>> >>>>>> > @@ -334,6 +362,7 @@ >>>>>> > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, >>>>>> > peers->peer[i].current_weight = 0; >>>>>> > peers->peer[i].max_fails = 1; >>>>>> > peers->peer[i].fail_timeout = 10; >>>>>> > + >>>>>> > } >>>>>> > } >>>>>> > >>>>>> >>>>>> Please avoid unrelated changes. >>>>>> >>>>>> [...] >>>>>> >>>>>> -- >>>>>> Maxim Dounin >>>>>> http://nginx.org/en/donation.html >>>>>> >>>>>> _______________________________________________ >>>>>> nginx-devel mailing list >>>>>> nginx-devel at nginx.org >>>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >>>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Jul 10 20:15:48 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 10 Jul 2013 13:15:48 -0700 Subject: [PATCH] Fixing buffer over-read when accepting unix domain sockets In-Reply-To: <20130710131839.GD66479@mdounin.ru> References: <20130710131839.GD66479@mdounin.ru> Message-ID: Hello! On Wed, Jul 10, 2013 at 6:18 AM, Maxim Dounin wrote: >> - if (ls->addr_ntop) { >> + if (ls->addr_ntop && socklen > sizeof(c->sockaddr->sa_family)) { >> c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len); >> if (c->addr_text.data == NULL) { >> ngx_close_accepted_connection(c); > > The patch looks wrong - it doesn't initialize c->addr_text at all, > while it's requested by a caller. > Thank you for the review! How about this? 
--- nginx-1.4.1/src/event/ngx_event_accept.c 2013-05-06 03:26:50.000000000 -0700 +++ nginx-1.4.1-patched/src/event/ngx_event_accept.c 2013-07-10 13:05:02.001249099 -0700 @@ -269,17 +269,28 @@ ngx_event_accept(ngx_event_t *ev) #endif if (ls->addr_ntop) { - c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len); - if (c->addr_text.data == NULL) { - ngx_close_accepted_connection(c); - return; - } + if (socklen > sizeof(c->sockaddr->sa_family)) { + c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len); + if (c->addr_text.data == NULL) { + ngx_close_accepted_connection(c); + return; + } + + c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->addr_text.data, + ls->addr_text_max_len, 0); + if (c->addr_text.len == 0) { + ngx_close_accepted_connection(c); + return; + } + + } else { + /* + * Linux accept/accept4 syscalls, for example, do not return + * address data upon unix domain sockets + */ - c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->addr_text.data, - ls->addr_text_max_len, 0); - if (c->addr_text.len == 0) { - ngx_close_accepted_connection(c); - return; + c->addr_text.data = NULL; + c->addr_text.len = 0; } } -agentzh -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.4.1-unix_socket_accept_over_read.patch Type: application/octet-stream Size: 1669 bytes Desc: not available URL: From hnakamur at gmail.com Wed Jul 10 23:38:47 2013 From: hnakamur at gmail.com (Hiroaki Nakamura) Date: Thu, 11 Jul 2013 08:38:47 +0900 Subject: Request methods with hyphens In-Reply-To: <20130710140848.GF66479@mdounin.ru> References: <20130710140848.GF66479@mdounin.ru> Message-ID: Hi, 2013/7/10 Maxim Dounin : > Hello! > > On Wed, Jul 10, 2013 at 10:47:35PM +0900, Hiroaki Nakamura wrote: > >> Hi all, >> >> I found nginx rejects request methods with hyphens like >> VERSION-CONTROL with the status code 400. 
>> I got the following debug log: >> >> 2013/07/10 13:55:29 [info] 79048#0: *4 client sent invalid method >> while reading client request line, client: 127.0.0.1, server: >> localhost, request: "VERSION-CONTROL / HTTP/1.1" >> 2013/07/10 13:55:29 [debug] 79048#0: *4 http finalize request: 400, "?" a:1, c:1 > > Is it a method used by some real-world software? VERSION-CONTROL is defined in the Versioning Extensions to WebDAV spec. http://www.webdav.org/specs/rfc3253.html > >> I looked at the source code and found nginx will accept only 'A'-'Z' >> and '_' as request methods. >> http://trac.nginx.org/nginx/browser/nginx/src/http/ngx_http_parse.c?rev=626f288fa5ede7ee3cbeffe950cb9dd611e10c52#L270 >> >> RFC2616 says the method is case-sensitive and >> methods can have any CHAR except CTLs or separators: >> >> http://tools.ietf.org/html/rfc2616#section-5.1.1 >> >> 5.1.1 Method >> The Method token indicates the method to be performed on the >> resource identified by the Request-URI. The method is case-sensitive. >> >> Method = "OPTIONS" ; Section 9.2 >> | "GET" ; Section 9.3 >> | "HEAD" ; Section 9.4 >> | "POST" ; Section 9.5 >> | "PUT" ; Section 9.6 >> | "DELETE" ; Section 9.7 >> | "TRACE" ; Section 9.8 >> | "CONNECT" ; Section 9.9 >> | extension-method >> extension-method = token >> >> >> http://tools.ietf.org/html/rfc2616#section-2.2 >> >> token = 1*<any CHAR except CTLs or separators> >> separators = "(" | ")" | "<" | ">" | "@" >> | "," | ";" | ":" | "\" | <"> >> | "/" | "[" | "]" | "?" | "=" >> | "{" | "}" | SP | HT >> >> >> Also, when a server rejects a method, the status code should be 405 or 501. >> >> http://tools.ietf.org/html/rfc2616#section-5.1.1 >> >> An origin server SHOULD return the status code 405 (Method Not Allowed) >> if the method is known by the origin server but not allowed for the >> requested resource, and 501 (Not Implemented) if the method is >> unrecognized or not implemented by the origin server. >> >> I wonder how to improve nginx on accepting or rejecting request methods. >> Comments are welcome.
> > As of now nginx rejects anything which isn't uppercase latin > letters (or underscore) as syntactically invalid (and hence 400). According to RFC2616, any CHAR except CTLs or separators is syntactically valid. > > I don't think that current behaviour should be changed unless > there are good reasons to. If there are good reasons, we probably > should do something similar to underscores_in_headers, see > http://nginx.org/r/underscores_in_headers. I would like to use limit_except to accept only HEAD, GET and POST methods, and return 405 (Method Not Allowed) or 501 (Not Implemented) for the other methods. Is this a good reason? > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Hiroaki From mdounin at mdounin.ru Thu Jul 11 11:14:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Jul 2013 15:14:39 +0400 Subject: [PATCH] Fixing buffer over-read when accepting unix domain sockets In-Reply-To: References: <20130710131839.GD66479@mdounin.ru> Message-ID: <20130711111439.GM66479@mdounin.ru> Hello! On Wed, Jul 10, 2013 at 01:15:48PM -0700, Yichun Zhang (agentzh) wrote: > Hello! > > On Wed, Jul 10, 2013 at 6:18 AM, Maxim Dounin wrote: > >> - if (ls->addr_ntop) { > >> + if (ls->addr_ntop && socklen > sizeof(c->sockaddr->sa_family)) { > >> c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len); > >> if (c->addr_text.data == NULL) { > >> ngx_close_accepted_connection(c); > > > > The patch looks wrong - it doesn't initialize c->addr_text at all, > > while it's requested by a caller. > > > > Thank you for the review! > > How about this? This doesn't look good either. It looks like on Linux a unix sockaddr can't be printed without the socklen argument due to abstract namespace sockets (see [1]).
Therefore the only correct solution seems to be to change the ngx_sock_ntop() interface to accept (and use) socklen. Vladimir looked into this a while ago, and I've just reviewed his latest patch, which he resubmitted due to your attempts to fix the same issue. The patch is good enough and expected to be committed after a few minor fixes. [1] http://man7.org/linux/man-pages/man7/unix.7.html Note "Three types of address are distinguished in this structure..." and below. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Jul 11 12:23:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Jul 2013 16:23:25 +0400 Subject: Request methods with hyphens In-Reply-To: References: <20130710140848.GF66479@mdounin.ru> Message-ID: <20130711122325.GP66479@mdounin.ru> Hello! On Thu, Jul 11, 2013 at 08:38:47AM +0900, Hiroaki Nakamura wrote: > Hi, > > 2013/7/10 Maxim Dounin : > > Hello! > > > > On Wed, Jul 10, 2013 at 10:47:35PM +0900, Hiroaki Nakamura wrote: > > > >> Hi all, > >> > >> I found nginx rejects request methods with hyphens like > >> VERSION-CONTROL with the status code 400. > >> I got the following debug log: > >> > >> 2013/07/10 13:55:29 [info] 79048#0: *4 client sent invalid method > >> while reading client request line, client: 127.0.0.1, server: > >> localhost, request: "VERSION-CONTROL / HTTP/1.1" > >> 2013/07/10 13:55:29 [debug] 79048#0: *4 http finalize request: 400, "?" a:1, c:1 > > > > Is it a method used by some real-world software? > > VERSION-CONTROL is defined in the Versioning Extensions to WebDAV spec. > http://www.webdav.org/specs/rfc3253.html The question still applies. [...] > > As of now nginx rejects anything which isn't uppercase latin > > letters (or underscore) as syntactically invalid (and hence 400). > > According to RFC2616, any CHAR except CTLs or separators is > syntactically valid. For sure. But it doesn't mean that the (more strict) syntax rules as applied by nginx need to be changed (unless there is a good reason).
> > I don't think that current behaviour should be changed unless > > there are good reasons to. If there are good reasons, we probably > > should do something similar to underscores_in_headers, see > > http://nginx.org/r/underscores_in_headers. > > I would like to use limit_except to accept only HEAD, GET and POST methods, > and return 405 (Method Not Allowed) or 501 (Not Implemented) for the > other methods. > Is this a good reason? Doesn't look like a good reason to me. -- Maxim Dounin http://nginx.org/en/donation.html From vl at nginx.com Thu Jul 11 12:44:01 2013 From: vl at nginx.com (Homutov Vladimir) Date: Thu, 11 Jul 2013 12:44:01 +0000 Subject: [nginx] Core: extended ngx_sock_ntop() with socklen parameter. Message-ID: details: http://hg.nginx.org/nginx/rev/05ba5bce31e0 branches: changeset: 5263:05ba5bce31e0 user: Vladimir Homutov date: Thu Jul 11 16:07:25 2013 +0400 description: Core: extended ngx_sock_ntop() with socklen parameter. On Linux, sockaddr length is required to process unix socket addresses properly due to unnamed sockets (which don't have sun_path set at all) and abstract namespace sockets.
diffstat: src/core/ngx_connection.c | 7 ++++--- src/core/ngx_inet.c | 22 +++++++++++++++++----- src/core/ngx_inet.h | 4 ++-- src/event/ngx_event_accept.c | 3 ++- src/event/ngx_event_acceptex.c | 3 ++- src/event/ngx_event_openssl_stapling.c | 3 ++- src/http/modules/ngx_http_realip_module.c | 3 ++- src/http/ngx_http_core_module.c | 4 ++-- src/mail/ngx_mail.c | 6 ++++-- src/mail/ngx_mail_core_module.c | 3 ++- 10 files changed, 39 insertions(+), 19 deletions(-) diffs (216 lines): diff -r 626f288fa5ed -r 05ba5bce31e0 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/core/ngx_connection.c Thu Jul 11 16:07:25 2013 +0400 @@ -41,7 +41,7 @@ ngx_create_listening(ngx_conf_t *cf, voi ls->sockaddr = sa; ls->socklen = socklen; - len = ngx_sock_ntop(sa, text, NGX_SOCKADDR_STRLEN, 1); + len = ngx_sock_ntop(sa, socklen, text, NGX_SOCKADDR_STRLEN, 1); ls->addr_text.len = len; switch (ls->sockaddr->sa_family) { @@ -152,7 +152,8 @@ ngx_set_inherited_sockets(ngx_cycle_t *c return NGX_ERROR; } - len = ngx_sock_ntop(ls[i].sockaddr, ls[i].addr_text.data, len, 1); + len = ngx_sock_ntop(ls[i].sockaddr, ls[i].socklen, + ls[i].addr_text.data, len, 1); if (len == 0) { return NGX_ERROR; } @@ -1068,7 +1069,7 @@ ngx_connection_local_sockaddr(ngx_connec return NGX_OK; } - s->len = ngx_sock_ntop(c->local_sockaddr, s->data, s->len, port); + s->len = ngx_sock_ntop(c->local_sockaddr, len, s->data, s->len, port); return NGX_OK; } diff -r 626f288fa5ed -r 05ba5bce31e0 src/core/ngx_inet.c --- a/src/core/ngx_inet.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/core/ngx_inet.c Thu Jul 11 16:07:25 2013 +0400 @@ -174,7 +174,8 @@ ngx_inet6_addr(u_char *p, size_t len, u_ size_t -ngx_sock_ntop(struct sockaddr *sa, u_char *text, size_t len, ngx_uint_t port) +ngx_sock_ntop(struct sockaddr *sa, socklen_t socklen, u_char *text, size_t len, + ngx_uint_t port) { u_char *p; struct sockaddr_in *sin; @@ -230,9 +231,18 @@ ngx_sock_ntop(struct sockaddr *sa, u_cha case AF_UNIX: 
saun = (struct sockaddr_un *) sa; + /* on Linux sockaddr might not include sun_path at all */ + + if (socklen <= offsetof(struct sockaddr_un, sun_path)) { + p = ngx_snprintf(text, len, "unix:%Z"); + + } else { + p = ngx_snprintf(text, len, "unix:%s%Z", saun->sun_path); + } + /* we do not include trailing zero in address length */ - return ngx_snprintf(text, len, "unix:%s%Z", saun->sun_path) - text - 1; + return (p - text - 1); #endif @@ -1020,7 +1030,7 @@ ngx_inet_resolve_host(ngx_pool_t *pool, goto failed; } - len = ngx_sock_ntop((struct sockaddr *) sin, p, len, 1); + len = ngx_sock_ntop((struct sockaddr *) sin, rp->ai_addrlen, p, len, 1); u->addrs[i].name.len = len; u->addrs[i].name.data = p; @@ -1053,7 +1063,8 @@ ngx_inet_resolve_host(ngx_pool_t *pool, goto failed; } - len = ngx_sock_ntop((struct sockaddr *) sin6, p, len, 1); + len = ngx_sock_ntop((struct sockaddr *) sin6, rp->ai_addrlen, p, + len, 1); u->addrs[i].name.len = len; u->addrs[i].name.data = p; @@ -1138,7 +1149,8 @@ ngx_inet_resolve_host(ngx_pool_t *pool, return NGX_ERROR; } - len = ngx_sock_ntop((struct sockaddr *) sin, p, len, 1); + len = ngx_sock_ntop((struct sockaddr *) sin, + sizeof(struct sockaddr_in), p, len, 1); u->addrs[i].name.len = len; u->addrs[i].name.data = p; diff -r 626f288fa5ed -r 05ba5bce31e0 src/core/ngx_inet.h --- a/src/core/ngx_inet.h Fri Jul 05 11:42:25 2013 +0400 +++ b/src/core/ngx_inet.h Thu Jul 11 16:07:25 2013 +0400 @@ -107,8 +107,8 @@ in_addr_t ngx_inet_addr(u_char *text, si ngx_int_t ngx_inet6_addr(u_char *p, size_t len, u_char *addr); size_t ngx_inet6_ntop(u_char *p, u_char *text, size_t len); #endif -size_t ngx_sock_ntop(struct sockaddr *sa, u_char *text, size_t len, - ngx_uint_t port); +size_t ngx_sock_ntop(struct sockaddr *sa, socklen_t socklen, u_char *text, + size_t len, ngx_uint_t port); size_t ngx_inet_ntop(int family, void *addr, u_char *text, size_t len); ngx_int_t ngx_ptocidr(ngx_str_t *text, ngx_cidr_t *cidr); ngx_int_t ngx_parse_addr(ngx_pool_t *pool, 
ngx_addr_t *addr, u_char *text, diff -r 626f288fa5ed -r 05ba5bce31e0 src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/event/ngx_event_accept.c Thu Jul 11 16:07:25 2013 +0400 @@ -275,7 +275,8 @@ ngx_event_accept(ngx_event_t *ev) return; } - c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->addr_text.data, + c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen, + c->addr_text.data, ls->addr_text_max_len, 0); if (c->addr_text.len == 0) { ngx_close_accepted_connection(c); diff -r 626f288fa5ed -r 05ba5bce31e0 src/event/ngx_event_acceptex.c --- a/src/event/ngx_event_acceptex.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/event/ngx_event_acceptex.c Thu Jul 11 16:07:25 2013 +0400 @@ -68,7 +68,8 @@ ngx_event_acceptex(ngx_event_t *rev) return; } - c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->addr_text.data, + c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen, + c->addr_text.data, ls->addr_text_max_len, 0); if (c->addr_text.len == 0) { /* TODO: close socket */ diff -r 626f288fa5ed -r 05ba5bce31e0 src/event/ngx_event_openssl_stapling.c --- a/src/event/ngx_event_openssl_stapling.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/event/ngx_event_openssl_stapling.c Thu Jul 11 16:07:25 2013 +0400 @@ -878,7 +878,8 @@ ngx_ssl_ocsp_resolve_handler(ngx_resolve goto failed; } - len = ngx_sock_ntop((struct sockaddr *) sin, p, len, 1); + len = ngx_sock_ntop((struct sockaddr *) sin, sizeof(struct sockaddr_in), + p, len, 1); ctx->addrs[i].name.len = len; ctx->addrs[i].name.data = p; diff -r 626f288fa5ed -r 05ba5bce31e0 src/http/modules/ngx_http_realip_module.c --- a/src/http/modules/ngx_http_realip_module.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/http/modules/ngx_http_realip_module.c Thu Jul 11 16:07:25 2013 +0400 @@ -230,7 +230,8 @@ ngx_http_realip_set_addr(ngx_http_reques c = r->connection; - len = ngx_sock_ntop(addr->sockaddr, text, NGX_SOCKADDR_STRLEN, 0); + len = ngx_sock_ntop(addr->sockaddr, addr->socklen, text, + 
NGX_SOCKADDR_STRLEN, 0); if (len == 0) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } diff -r 626f288fa5ed -r 05ba5bce31e0 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/http/ngx_http_core_module.c Thu Jul 11 16:07:25 2013 +0400 @@ -3033,7 +3033,7 @@ ngx_http_core_server(ngx_conf_t *cf, ngx #endif lsopt.wildcard = 1; - (void) ngx_sock_ntop(&lsopt.u.sockaddr, lsopt.addr, + (void) ngx_sock_ntop(&lsopt.u.sockaddr, lsopt.socklen, lsopt.addr, NGX_SOCKADDR_STRLEN, 1); if (ngx_http_add_listen(cf, cscf, &lsopt) != NGX_OK) { @@ -3984,7 +3984,7 @@ ngx_http_core_listen(ngx_conf_t *cf, ngx lsopt.ipv6only = 1; #endif - (void) ngx_sock_ntop(&lsopt.u.sockaddr, lsopt.addr, + (void) ngx_sock_ntop(&lsopt.u.sockaddr, lsopt.socklen, lsopt.addr, NGX_SOCKADDR_STRLEN, 1); for (n = 2; n < cf->args->nelts; n++) { diff -r 626f288fa5ed -r 05ba5bce31e0 src/mail/ngx_mail.c --- a/src/mail/ngx_mail.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/mail/ngx_mail.c Thu Jul 11 16:07:25 2013 +0400 @@ -465,7 +465,8 @@ ngx_mail_add_addrs(ngx_conf_t *cf, ngx_m addrs[i].conf.ssl = addr[i].ssl; #endif - len = ngx_sock_ntop(addr[i].sockaddr, buf, NGX_SOCKADDR_STRLEN, 1); + len = ngx_sock_ntop(addr[i].sockaddr, addr[i].socklen , buf, + NGX_SOCKADDR_STRLEN, 1); p = ngx_pnalloc(cf->pool, len); if (p == NULL) { @@ -513,7 +514,8 @@ ngx_mail_add_addrs6(ngx_conf_t *cf, ngx_ addrs6[i].conf.ssl = addr[i].ssl; #endif - len = ngx_sock_ntop(addr[i].sockaddr, buf, NGX_SOCKADDR_STRLEN, 1); + len = ngx_sock_ntop(addr[i].sockaddr, addr[i].socklen, buf, + NGX_SOCKADDR_STRLEN, 1); p = ngx_pnalloc(cf->pool, len); if (p == NULL) { diff -r 626f288fa5ed -r 05ba5bce31e0 src/mail/ngx_mail_core_module.c --- a/src/mail/ngx_mail_core_module.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/mail/ngx_mail_core_module.c Thu Jul 11 16:07:25 2013 +0400 @@ -439,7 +439,8 @@ ngx_mail_core_listen(ngx_conf_t *cf, ngx ls->bind = 1; } else { - len = ngx_sock_ntop(sa, buf, NGX_SOCKADDR_STRLEN, 1); + 
len = ngx_sock_ntop(sa, ls->socklen, buf, + NGX_SOCKADDR_STRLEN, 1); ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "ipv6only is not supported " From vl at nginx.com Thu Jul 11 16:33:55 2013 From: vl at nginx.com (Homutov Vladimir) Date: Thu, 11 Jul 2013 16:33:55 +0000 Subject: [nginx] Core: fixed possible use of an uninitialized variable. Message-ID: details: http://hg.nginx.org/nginx/rev/b6ffe53f9c3d branches: changeset: 5264:b6ffe53f9c3d user: Vladimir Homutov date: Thu Jul 11 19:50:19 2013 +0400 description: Core: fixed possible use of an uninitialized variable. The call to ngx_sock_ntop() in ngx_connection_local_sockaddr() might be performed with the uninitialized "len" variable. The fix is to initialize variable to the size of corresponding socket address type. The issue was introduced in commit 05ba5bce31e0. diffstat: src/core/ngx_connection.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (19 lines): diff -r 05ba5bce31e0 -r b6ffe53f9c3d src/core/ngx_connection.c --- a/src/core/ngx_connection.c Thu Jul 11 16:07:25 2013 +0400 +++ b/src/core/ngx_connection.c Thu Jul 11 19:50:19 2013 +0400 @@ -1034,6 +1034,7 @@ ngx_connection_local_sockaddr(ngx_connec #if (NGX_HAVE_INET6) case AF_INET6: sin6 = (struct sockaddr_in6 *) c->local_sockaddr; + len = sizeof(struct sockaddr_in6); for (addr = 0, i = 0; addr == 0 && i < 16; i++) { addr |= sin6->sin6_addr.s6_addr[i]; @@ -1044,6 +1045,7 @@ ngx_connection_local_sockaddr(ngx_connec default: /* AF_INET */ sin = (struct sockaddr_in *) c->local_sockaddr; + len = sizeof(struct sockaddr_in); addr = sin->sin_addr.s_addr; break; } From agentzh at gmail.com Thu Jul 11 18:33:24 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 11 Jul 2013 11:33:24 -0700 Subject: [PATCH] Fixing buffer over-read when accepting unix domain sockets In-Reply-To: <20130711111439.GM66479@mdounin.ru> References: <20130710131839.GD66479@mdounin.ru> <20130711111439.GM66479@mdounin.ru> Message-ID: Hello! 
On Thu, Jul 11, 2013 at 4:14 AM, Maxim Dounin wrote: > This doesn't look good either. It looks like on Linux a unix > sockaddr can't be printed without the socklen argument due to abstract > namespace sockets (see [1]). Therefore the only correct solution > seems to be to change ngx_sock_ntop() interface to accept (and > use) socklen. > Yes, this is cleaner. I also thought about this approach but dared not change the ngx_sock_ntop() API, since that requires updating many places in the code base (especially those 3rd-party modules using this API) ;) Regards, -agentzh From mdounin at mdounin.ru Fri Jul 12 10:21:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Jul 2013 10:21:09 +0000 Subject: [nginx] Configure: perl Makefile rebuild after configure. Message-ID: details: http://hg.nginx.org/nginx/rev/9f17e765a21e branches: changeset: 5265:9f17e765a21e user: Maxim Dounin date: Thu Jul 11 20:34:02 2013 +0400 description: Configure: perl Makefile rebuild after configure. The $NGX_AUTO_CONFIG_H is added to the perl module Makefile dependencies to make sure it's always rebuilt after a configure. It is needed as we expand various variables used for Makefile generation during configure (in particular, nginx version). diffstat: auto/lib/perl/make | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/auto/lib/perl/make b/auto/lib/perl/make --- a/auto/lib/perl/make +++ b/auto/lib/perl/make @@ -18,6 +18,7 @@ cat << END $NGX_OBJS/src/http/modules/perl/Makefile: \\ + $NGX_AUTO_CONFIG_H \\ src/core/nginx.h \\ src/http/modules/perl/Makefile.PL \\ src/http/modules/perl/nginx.pm \\ From mdounin at mdounin.ru Fri Jul 12 10:21:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Jul 2013 10:21:10 +0000 Subject: [nginx] Style. Message-ID: details: http://hg.nginx.org/nginx/rev/8e7db77e5d88 branches: changeset: 5266:8e7db77e5d88 user: Maxim Dounin date: Thu Jul 11 20:38:27 2013 +0400 description: Style.
diffstat: src/http/ngx_http_core_module.c | 2 +- src/mail/ngx_mail.c | 2 +- src/os/unix/ngx_process_cycle.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diffs (36 lines): diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -4469,7 +4469,7 @@ static ngx_http_method_name_t ngx_metho { (u_char *) "COPY", (uint32_t) ~NGX_HTTP_COPY }, { (u_char *) "MOVE", (uint32_t) ~NGX_HTTP_MOVE }, { (u_char *) "OPTIONS", (uint32_t) ~NGX_HTTP_OPTIONS }, - { (u_char *) "PROPFIND" , (uint32_t) ~NGX_HTTP_PROPFIND }, + { (u_char *) "PROPFIND", (uint32_t) ~NGX_HTTP_PROPFIND }, { (u_char *) "PROPPATCH", (uint32_t) ~NGX_HTTP_PROPPATCH }, { (u_char *) "LOCK", (uint32_t) ~NGX_HTTP_LOCK }, { (u_char *) "UNLOCK", (uint32_t) ~NGX_HTTP_UNLOCK }, diff --git a/src/mail/ngx_mail.c b/src/mail/ngx_mail.c --- a/src/mail/ngx_mail.c +++ b/src/mail/ngx_mail.c @@ -465,7 +465,7 @@ ngx_mail_add_addrs(ngx_conf_t *cf, ngx_m addrs[i].conf.ssl = addr[i].ssl; #endif - len = ngx_sock_ntop(addr[i].sockaddr, addr[i].socklen , buf, + len = ngx_sock_ntop(addr[i].sockaddr, addr[i].socklen, buf, NGX_SOCKADDR_STRLEN, 1); p = ngx_pnalloc(cf->pool, len); diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c +++ b/src/os/unix/ngx_process_cycle.c @@ -536,7 +536,7 @@ ngx_signal_worker_processes(ngx_cycle_t } ngx_log_debug2(NGX_LOG_DEBUG_CORE, cycle->log, 0, - "kill (%P, %d)" , ngx_processes[i].pid, signo); + "kill (%P, %d)", ngx_processes[i].pid, signo); if (kill(ngx_processes[i].pid, signo) == -1) { err = ngx_errno; From yszhou4tech at gmail.com Fri Jul 12 11:10:33 2013 From: yszhou4tech at gmail.com (Yousong Zhou) Date: Fri, 12 Jul 2013 19:10:33 +0800 Subject: Free unused large block when doubling capacity of ngx_array_t. 
Message-ID: <20130712111031.GB11964@gmail.com> # HG changeset patch # User Yousong Zhou # Date 1373358378 -28800 # Branch t # Node ID b8a53d0bb5c306b89eef767694fcf127a0da8f41 # Parent 626f288fa5ede7ee3cbeffe950cb9dd611e10c52 Free unused large block when doubling capacity of ngx_array_t. When the `nalloc' of an array needs to be doubled, the originally allocated large block should be freed. This leads to gains in two respects: - It avoids unnecessary memory consumption. - The length of the p->large chain is constrained a little, which reduces traversal cost. diff -r 626f288fa5ed -r b8a53d0bb5c3 src/core/ngx_array.c --- a/src/core/ngx_array.c Fri Jul 05 11:42:25 2013 +0400 +++ b/src/core/ngx_array.c Tue Jul 09 16:26:18 2013 +0800 @@ -79,6 +79,7 @@ } ngx_memcpy(new, a->elts, size); + ngx_pfree(p, a->elts); a->elts = new; a->nalloc *= 2; } @@ -129,6 +130,7 @@ } ngx_memcpy(new, a->elts, a->nelts * a->size); + ngx_pfree(p, a->elts); a->elts = new; a->nalloc = nalloc; } From vbart at nginx.com Fri Jul 12 23:27:00 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 12 Jul 2013 23:27:00 +0000 Subject: [nginx] Events: honor NGX_USE_GREEDY_EVENT when kqueue support i... Message-ID: details: http://hg.nginx.org/nginx/rev/13c006f0c40e branches: changeset: 5267:13c006f0c40e user: Valentin Bartenev date: Sat Jul 13 03:24:30 2013 +0400 description: Events: honor NGX_USE_GREEDY_EVENT when kqueue support is enabled. Currently this flag is needed for epoll and rtsig, and though these methods are usually present on different platforms than kqueue, nginx can be compiled to support all of them.
diffstat: src/os/unix/ngx_readv_chain.c | 2 +- src/os/unix/ngx_recv.c | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-) diffs (26 lines): diff -r 8e7db77e5d88 -r 13c006f0c40e src/os/unix/ngx_readv_chain.c --- a/src/os/unix/ngx_readv_chain.c Thu Jul 11 20:38:27 2013 +0400 +++ b/src/os/unix/ngx_readv_chain.c Sat Jul 13 03:24:30 2013 +0400 @@ -136,7 +136,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx return n; } - if (n < size) { + if (n < size && !(ngx_event_flags & NGX_USE_GREEDY_EVENT)) { rev->ready = 0; } diff -r 8e7db77e5d88 -r 13c006f0c40e src/os/unix/ngx_recv.c --- a/src/os/unix/ngx_recv.c Thu Jul 11 20:38:27 2013 +0400 +++ b/src/os/unix/ngx_recv.c Sat Jul 13 03:24:30 2013 +0400 @@ -87,7 +87,9 @@ ngx_unix_recv(ngx_connection_t *c, u_cha return n; } - if ((size_t) n < size) { + if ((size_t) n < size + && !(ngx_event_flags & NGX_USE_GREEDY_EVENT)) + { rev->ready = 0; } From jzefip at gmail.com Sat Jul 13 23:43:22 2013 From: jzefip at gmail.com (Julien Zefi) Date: Sat, 13 Jul 2013 17:43:22 -0600 Subject: handle NGX_AGAIN properly In-Reply-To: References: Message-ID: Hi, On Tue, Jul 9, 2013 at 7:02 PM, Yichun Zhang (agentzh) wrote: > Hello! > > On Tue, Jul 9, 2013 at 5:12 PM, Julien Zefi wrote: > > But if in some triggered callback by the timer the > > ngx_http_output_filter(..) returns NGX_AGAIN *i assume* NginX will send > that > > chain as soon as the socket becomes available again. > > This assumption is not correct. Nginx will only flush out the pending > data for you when r->write_event_handler is set to ngx_http_writer. > This only (automatically) happens in ngx_http_finalize_request (by > calling the ngx_http_set_write_handler function to do the assignment > to r->write_event_handler). > > > But after that happens, > > how can i restore my timer cycle ? > > > > My suggestion is to register your own r->write_event_handler handler > to propagate the pending outputs by calling ngx_http_output_filter > with a NULL chain link pointer yourself. 
And in that handler, you can > also restore your timer cycle etc. when all the pending outputs > have been flushed out (into the system socket send buffers). > > I've been doing something like this in our ngx_lua module. You can > check out the ngx.flush() API function's implementation in particular: > > http://wiki.nginx.org/HttpLuaModule#ngx.flush I have been trying many workarounds without luck; the last one I have is that if the flush in my timer callback returns NGX_AGAIN, I invoke a new handler that sends out an empty chain, but it continues returning NGX_AGAIN and never goes back to NGX_OK. This is how it looks:

static void ngx_http_hls_flush(ngx_event_t *e)
{
    printf("Trying to flush data\n");
    int ret;
    ngx_buf_t *buf;
    ngx_chain_t chain;
    ngx_http_request_t *r;
    ngx_http_hls_event_t *hls_event;

    hls_event = e->data;
    r = hls_event->r;

    buf = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
    buf->pos = NULL;
    buf->last = NULL;
    buf->memory = 0;
    buf->last_buf = 1;

    chain.buf = buf;
    chain.next = NULL;

    ret = ngx_http_output_filter(r, &chain);
    if (ret == NGX_OK) {
        e->handler = ngx_http_hls_timer;
    }
    else if (ret == NGX_AGAIN) {
        e->handler = ngx_http_hls_flush;
    }
    else {
        e->handler = ngx_http_hls_finalize;
    }

    ngx_add_timer(e, 50);
}

It always returns to ngx_http_hls_flush. What else can I do? I have spent a bunch of hours on this; any extra help is welcome. -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Sun Jul 14 06:40:15 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 13 Jul 2013 23:40:15 -0700 Subject: handle NGX_AGAIN properly In-Reply-To: References: Message-ID: Hello!
On Sat, Jul 13, 2013 at 4:43 PM, Julien Zefi wrote: > > I have been trying many workarounds without luck, the last one that i have > is that if in my timer-callback the flush returns NGX_AGAIN, invoke a new > handler that sends out an empty chain, but it continue returning NGX_AGAIN, > it never backs to NGX_OK, this is how it looks: > It seems that what you're doing is very wrong. You'd better take a look at how the ngx_http_writer and ngx_http_set_write_handler functions are implemented in the Nginx core instead of shooting in the darkness. Basically: 1. Flush outputs by r->write_event_handler instead of in your timer handler because you should only retry sending the outputs upon a epoll/poll/select write event. 2. By "sending out an empty chain", I actually mean rc = ngx_http_output_filter(r, NULL); You can see this line in the ngx_http_writer function. You're just sending out a last_buf chain. Regards, -agentzh From jzefip at gmail.com Mon Jul 15 03:43:43 2013 From: jzefip at gmail.com (Julien Zefi) Date: Sun, 14 Jul 2013 21:43:43 -0600 Subject: handle NGX_AGAIN properly In-Reply-To: References: Message-ID: Hi, On Sun, Jul 14, 2013 at 12:40 AM, Yichun Zhang (agentzh) wrote: > Hello! > > On Sat, Jul 13, 2013 at 4:43 PM, Julien Zefi wrote: > > > > I have been trying many workarounds without luck, the last one that i > have > > is that if in my timer-callback the flush returns NGX_AGAIN, invoke a new > > handler that sends out an empty chain, but it continue returning > NGX_AGAIN, > > it never backs to NGX_OK, this is how it looks: > > > > It seems that what you're doing is very wrong. You'd better take a > look at how the ngx_http_writer and ngx_http_set_write_handler > functions are implemented in the Nginx core instead of shooting in the > darkness. Basically: > > 1. Flush outputs by r->write_event_handler instead of in your timer > handler because you should only retry sending the outputs upon a > epoll/poll/select write event. > > 2. 
By "sending out an empty chain", I actually mean > > rc = ngx_http_output_filter(r, NULL); > > You can see this line in the ngx_http_writer function. You're just > sending out a last_buf chain. > > Sorry by bother you again but i still cannot figure out how some internals are not working as i expect. I have take in count your suggestions and wrote a new test case (file attached). My goal in that test case is to let NginX invoke my write_event_handler once the socket is ready to write again when a NGX_AGAIN is found, despite the test case is not perfect for all things that need to be fixed, i have isolated as much as i can. The test case writes 12.3KB of data every 1ms, at some point it will raise NGX_AGAIN but from there is not recovering, it keeps in the same state forever, do you see any specific problem when handling the exception ? thnks -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_case_02.tar.gz Type: application/x-gzip Size: 1372 bytes Desc: not available URL: From agentzh at gmail.com Mon Jul 15 04:57:33 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 14 Jul 2013 21:57:33 -0700 Subject: handle NGX_AGAIN properly In-Reply-To: References: Message-ID: Hello! On Sun, Jul 14, 2013 at 8:43 PM, Julien Zefi wrote: > > Sorry by bother you again but i still cannot figure out how some internals > are not working as i expect. I have take in count your suggestions and wrote > a new test case (file attached). > 1. You should simply call ngx_http_output_filter(r, NULL); in your r->write_event_handler, but you set r->write_event_handler to ngx_http_test_stream_handler which always emits brand new data. I'm guessing you don't really understand how the ngx_http_writer and ngx_http_set_write_handler functions are implemented in the Nginx core. Look harder. 2. 
You should not set r->header_only = 1 in your case because you're actually sending out the response body. Ensure that you know how a flag works before you start using it. 3. Another obvious mistake is that you incorrectly perform r->main->count++; without decrementing it by calling ngx_http_finalize_request, which will certainly lead to a request hang. Ensure that you understand this flag before using it. > The test case writes 12.3KB of data every 1ms, at some point it will raise > NGX_AGAIN but from there is not recovering, it keeps in the same state > forever, do you see any specific problem when handling the exception ? > This is trivial to implement by writing some Lua code using the ngx_lua module:

    location /t {
        content_by_lua '
            local message = "..."
            for i = 1, 100 do
                ngx.print(message)
                ngx.flush(true)
                ngx.sleep(0.001)
            end
        ';
    }

Maybe you can just use ngx_lua for your purposes without all the pain of understanding the nginx internals (you seem to lack a lot of knowledge here). If you insist on writing your own nginx C module, then just check out how ngx_lua implements all the APIs demonstrated in the example above. You can also check out the official documentation of ngx_lua: http://wiki.nginx.org/HttpLuaModule Best regards, -agentzh From maxim at nginx.com Mon Jul 15 13:02:56 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 15 Jul 2013 17:02:56 +0400 Subject: IPv6 support in resolver In-Reply-To: References: <20130617153021.GH72282@mdounin.ru> Message-ID: <51E3F300.3070509@nginx.com> Hi Anton, First of all, thanks for the patches! Just want to let you know that we have a ticket in our internal system and have assigned your patches to a developer for review, but it'll take some time due to other tasks and the complexity of the patch series. On 7/10/13 9:17 PM, ToSHiC wrote: > Hello, > > I've split this big patch into several small patches, taking into > account your comments. I'll send each part in a separate email. Here > is the first one.
> > commit 597d09e7ae9247c5466b18aa2ef3f5892e61b708 > Author: Anton Kortunov > > Date: Wed Jul 10 13:14:52 2013 +0400 > > Added new structure ngx_ipaddr_t > > This structure contains family field > and the union of ipv4/ipv6 structures in_addr_t and in6_addr. > > diff --git a/src/core/ngx_inet.h b/src/core/ngx_inet.h > index 6a5a368..077ed34 100644 > --- a/src/core/ngx_inet.h > +++ b/src/core/ngx_inet.h > @@ -68,6 +68,16 @@ typedef struct { > > > typedef struct { > + ngx_uint_t family; > + union { > + in_addr_t v4; > +#if (NGX_HAVE_INET6) > + struct in6_addr v6; > +#endif > + } u; > +} ngx_ipaddr_t; > + > +typedef struct { > struct sockaddr *sockaddr; > socklen_t socklen; > ngx_str_t name; > > > > On Mon, Jun 17, 2013 at 7:30 PM, Maxim Dounin > wrote: > > Hello! > > On Fri, Jun 14, 2013 at 09:44:46PM +0400, ToSHiC wrote: > > > Hello, > > > > We needed this feature in our company, I found that it is in > milestones of > > version 1.5 but doesn't exist yet. So I've implemented it > based in 1.3 code > > and merged in current 1.5 code. When I wrote this code I > mostly cared about > > minimum intrusion into other parts of nginx. > > > > IPv6 fallback logic is not a straightforward implementation of > suggested by > > RFC. RFC states that IPv6 resolving have priority over IPv4, > and it's not > > very good for Internet we have currently. With this patch you > can specify > > priority, and in upstream and mail modules I've set IPv4 as > preferred > > address family. > > > > Patch is pretty big and I hope it'll not break mailing list or > mail clients. > > You may want to try to split the patch into smaller patches to > simplify review. See also some hints here: > > http://nginx.org/en/docs/contributing_changes.html > > Some quick comments below. > > [...] 
> > > - addr = ntohl(ctx->addr); > > +failed: > > + > > + //addr = ntohl(ctx->addr); > > + inet_ntop(ctx->addr.family, &ctx->addr.u, text, > > NGX_SOCKADDR_STRLEN); > > > > ngx_log_error(NGX_LOG_ALERT, r->log, 0, > > - "could not cancel %ud.%ud.%ud.%ud > resolving", > > - (addr >> 24) & 0xff, (addr >> 16) & 0xff, > > - (addr >> 8) & 0xff, addr & 0xff); > > + "could not cancel %s resolving", text); > > 1. Don't use inet_ntop(), there is ngx_sock_ntop() instead. > > 2. Don't use C++ style ("//") comments. > > 3. If some data is only needed for debug logging, keep relevant > calculations under #if (NGX_DEBUG). > > [...] > > > @@ -334,6 +362,7 @@ > > ngx_http_upstream_create_round_robin_peer(ngx_http_request_t *r, > > peers->peer[i].current_weight = 0; > > peers->peer[i].max_fails = 1; > > peers->peer[i].fail_timeout = 10; > > + > > } > > } > > > > Please avoid unrelated changes. > > [...] > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/services.html From mdounin at mdounin.ru Tue Jul 16 11:41:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:17 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/cf8224619ba7 branches: stable-1.4 changeset: 5268:cf8224619ba7 user: Maxim Dounin date: Fri Jul 12 14:24:07 2013 +0400 description: Version bump. 
diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1004001 -#define NGINX_VERSION "1.4.1" +#define nginx_version 1004002 +#define NGINX_VERSION "1.4.2" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From mdounin at mdounin.ru Tue Jul 16 11:41:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:18 +0000 Subject: [nginx] Perl: extra "return" removed. Message-ID: details: http://hg.nginx.org/nginx/rev/51f6ddbf6d09 branches: stable-1.4 changeset: 5269:51f6ddbf6d09 user: Maxim Dounin date: Sat May 11 18:48:56 2013 +0400 description: Perl: extra "return" removed. diffstat: src/http/modules/perl/nginx.xs | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/http/modules/perl/nginx.xs b/src/http/modules/perl/nginx.xs --- a/src/http/modules/perl/nginx.xs +++ b/src/http/modules/perl/nginx.xs @@ -419,7 +419,7 @@ request_body(r) p = ngx_pnalloc(r->pool, len); if (p == NULL) { - return XSRETURN_UNDEF; + XSRETURN_UNDEF; } data = p; From mdounin at mdounin.ru Tue Jul 16 11:41:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:19 +0000 Subject: [nginx] Fixed build with --with-mail_ssl_module. Message-ID: details: http://hg.nginx.org/nginx/rev/95a30deca8ad branches: stable-1.4 changeset: 5270:95a30deca8ad user: Maxim Dounin date: Sat May 11 18:49:30 2013 +0400 description: Fixed build with --with-mail_ssl_module. If nginx was compiled without --with-http_ssl_module, but with some other module which uses OpenSSL (e.g. --with-mail_ssl_module), insufficient preprocessor check resulted in build failure. The problem was introduced by e0a3714a36f8 (1.3.14). Reported by Roman Arutyunyan. 
diffstat: src/http/ngx_http.h | 2 +- src/http/ngx_http_request.c | 8 ++++---- src/http/ngx_http_request.h | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diffs (63 lines): diff --git a/src/http/ngx_http.h b/src/http/ngx_http.h --- a/src/http/ngx_http.h +++ b/src/http/ngx_http.h @@ -89,7 +89,7 @@ ngx_int_t ngx_http_add_listen(ngx_conf_t void ngx_http_init_connection(ngx_connection_t *c); void ngx_http_close_connection(ngx_connection_t *c); -#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME +#if (NGX_HTTP_SSL && defined SSL_CTRL_SET_TLSEXT_HOSTNAME) int ngx_http_ssl_servername(ngx_ssl_conn_t *ssl_conn, int *ad, void *arg); #endif diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -1955,7 +1955,7 @@ ngx_http_set_virtual_server(ngx_http_req hc = r->http_connection; -#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME +#if (NGX_HTTP_SSL && defined SSL_CTRL_SET_TLSEXT_HOSTNAME) if (hc->ssl_servername) { if (hc->ssl_servername->len == host->len @@ -1986,7 +1986,7 @@ ngx_http_set_virtual_server(ngx_http_req return NGX_ERROR; } -#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME +#if (NGX_HTTP_SSL && defined SSL_CTRL_SET_TLSEXT_HOSTNAME) if (hc->ssl_servername) { ngx_http_ssl_srv_conf_t *sscf; @@ -2053,7 +2053,7 @@ ngx_http_find_virtual_server(ngx_connect sn = virtual_names->regex; -#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME +#if (NGX_HTTP_SSL && defined SSL_CTRL_SET_TLSEXT_HOSTNAME) if (r == NULL) { ngx_http_connection_t *hc; @@ -2085,7 +2085,7 @@ ngx_http_find_virtual_server(ngx_connect return NGX_DECLINED; } -#endif /* SSL_CTRL_SET_TLSEXT_HOSTNAME */ +#endif /* NGX_HTTP_SSL && defined SSL_CTRL_SET_TLSEXT_HOSTNAME */ for (i = 0; i < virtual_names->nregex; i++) { diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h +++ b/src/http/ngx_http_request.h @@ -295,7 +295,7 @@ typedef struct { ngx_http_addr_conf_t *addr_conf; ngx_http_conf_ctx_t *conf_ctx; -#ifdef 
SSL_CTRL_SET_TLSEXT_HOSTNAME +#if (NGX_HTTP_SSL && defined SSL_CTRL_SET_TLSEXT_HOSTNAME) ngx_str_t *ssl_servername; #if (NGX_PCRE) ngx_http_regex_t *ssl_servername_regex; From mdounin at mdounin.ru Tue Jul 16 11:41:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:21 +0000 Subject: [nginx] Proxy: $proxy_internal_body_length fixed. Message-ID: details: http://hg.nginx.org/nginx/rev/8c866e31bc39 branches: stable-1.4 changeset: 5271:8c866e31bc39 user: Maxim Dounin date: Sat May 11 21:12:24 2013 +0400 description: Proxy: $proxy_internal_body_length fixed. The $proxy_internal_body_length value might change during request lifetime, notably if proxy_set_body used, and use of a cached value might result in incorrect upstream requests. Patch by Lanshun Zhou. diffstat: src/http/modules/ngx_http_proxy_module.c | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diffs (13 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -615,7 +615,8 @@ static ngx_http_variable_t ngx_http_pro #endif { ngx_string("proxy_internal_body_length"), NULL, - ngx_http_proxy_internal_body_length_variable, 0, NGX_HTTP_VAR_NOHASH, 0 }, + ngx_http_proxy_internal_body_length_variable, 0, + NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, { ngx_null_string, NULL, NULL, 0, 0, 0 } }; From mdounin at mdounin.ru Tue Jul 16 11:41:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:22 +0000 Subject: [nginx] Removed vestiges of SVN. Message-ID: details: http://hg.nginx.org/nginx/rev/c248b0071507 branches: stable-1.4 changeset: 5272:c248b0071507 user: Ruslan Ermilov date: Thu Apr 25 17:41:45 2013 +0400 description: Removed vestiges of SVN. 
diffstat: misc/GNUmakefile | 31 ++----------------------------- misc/README | 3 --- 2 files changed, 2 insertions(+), 32 deletions(-) diffs (64 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -3,7 +3,6 @@ VER = $(shell grep 'define NGINX_VERSIO | sed -e 's/^.*"\(.*\)".*/\1/') NGINX = nginx-$(VER) TEMP = tmp -REPO = $(shell svn info | sed -n 's/^Repository Root: //p') OBJS = objs.msvc8 OPENSSL = openssl-1.0.1e @@ -38,40 +37,14 @@ release: export export: rm -rf $(TEMP) - - if [ -d .svn ]; then \ - svn export -rHEAD . $(TEMP)/$(NGINX); \ - else \ - hg archive -X '.hg*' $(TEMP)/$(NGINX); \ - fi + hg archive -X '.hg*' $(TEMP)/$(NGINX) RELEASE: - if [ -d .svn ]; then \ - $(MAKE) -f misc/GNUmakefile RELEASE.svn; \ - else \ - $(MAKE) -f misc/GNUmakefile RELEASE.hg; \ - fi - - $(MAKE) -f misc/GNUmakefile release - - -RELEASE.hg: hg ci -m nginx-$(VER)-RELEASE hg tag -m "release-$(VER) tag" release-$(VER) - -RELEASE.svn: - test -d $(TEMP) || mkdir -p $(TEMP) - - echo "nginx-$(VER)-RELEASE" > $(TEMP)/message - svn ci -F $(TEMP)/message - - echo "release-$(VER) tag" > $(TEMP)/message - svn copy $(REPO)/trunk $(REPO)/tags/release-$(VER) \ - -F $(TEMP)/message - - svn up + $(MAKE) -f misc/GNUmakefile release win32: diff --git a/misc/README b/misc/README --- a/misc/README +++ b/misc/README @@ -1,6 +1,3 @@ - -GNUmakefile, in svn it is available since 0.4.0 only. - make -f misc/GNUmakefile release From mdounin at mdounin.ru Tue Jul 16 11:41:23 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:23 +0000 Subject: [nginx] OCSP stapling: fix error logging of successful OCSP resp... Message-ID: details: http://hg.nginx.org/nginx/rev/83d028011ae2 branches: stable-1.4 changeset: 5273:83d028011ae2 user: Piotr Sikora date: Thu May 16 15:37:13 2013 -0700 description: OCSP stapling: fix error logging of successful OCSP responses. 
Due to a bad argument list, nginx worker would crash (SIGSEGV) while trying to log the fact that it received OCSP response with "revoked" or "unknown" certificate status. While there, fix similar (but non-crashing) error a few lines above. Signed-off-by: Piotr Sikora diffstat: src/event/ngx_event_openssl_stapling.c | 5 ++--- 1 files changed, 2 insertions(+), 3 deletions(-) diffs (21 lines): diff --git a/src/event/ngx_event_openssl_stapling.c b/src/event/ngx_event_openssl_stapling.c --- a/src/event/ngx_event_openssl_stapling.c +++ b/src/event/ngx_event_openssl_stapling.c @@ -611,15 +611,14 @@ ngx_ssl_stapling_ocsp_handler(ngx_ssl_oc != 1) { ngx_log_error(NGX_LOG_ERR, ctx->log, 0, - "certificate status not found in the OCSP response", - n, OCSP_response_status_str(n)); + "certificate status not found in the OCSP response"); goto error; } if (n != V_OCSP_CERTSTATUS_GOOD) { ngx_log_error(NGX_LOG_ERR, ctx->log, 0, "certificate status \"%s\" in the OCSP response", - n, OCSP_cert_status_str(n)); + OCSP_cert_status_str(n)); goto error; } From mdounin at mdounin.ru Tue Jul 16 11:41:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:25 +0000 Subject: [nginx] Upstream: fixed fail_timeout and max_fails > 1. Message-ID: details: http://hg.nginx.org/nginx/rev/f06bbc08e457 branches: stable-1.4 changeset: 5274:f06bbc08e457 user: Maxim Dounin date: Tue May 21 21:47:50 2013 +0400 description: Upstream: fixed fail_timeout and max_fails > 1. Due to peer->checked always set since rev. c90801720a0c (1.3.0) by round-robin and least_conn balancers (ip_hash not affected), the code in ngx_http_upstream_free_round_robin_peer() function incorrectly reset peer->fails too often. 
Reported by Dmitry Popov, http://mailman.nginx.org/pipermail/nginx-devel/2013-May/003720.html diffstat: src/http/modules/ngx_http_upstream_least_conn_module.c | 5 ++++- src/http/ngx_http_upstream_round_robin.c | 5 ++++- 2 files changed, 8 insertions(+), 2 deletions(-) diffs (30 lines): diff --git a/src/http/modules/ngx_http_upstream_least_conn_module.c b/src/http/modules/ngx_http_upstream_least_conn_module.c --- a/src/http/modules/ngx_http_upstream_least_conn_module.c +++ b/src/http/modules/ngx_http_upstream_least_conn_module.c @@ -282,7 +282,10 @@ ngx_http_upstream_get_least_conn_peer(ng } best->current_weight -= total; - best->checked = now; + + if (now - best->checked > best->fail_timeout) { + best->checked = now; + } pc->sockaddr = best->sockaddr; pc->socklen = best->socklen; diff --git a/src/http/ngx_http_upstream_round_robin.c b/src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c +++ b/src/http/ngx_http_upstream_round_robin.c @@ -523,7 +523,10 @@ ngx_http_upstream_get_peer(ngx_http_upst rrp->tried[n] |= m; best->current_weight -= total; - best->checked = now; + + if (now - best->checked > best->fail_timeout) { + best->checked = now; + } return best; } From mdounin at mdounin.ru Tue Jul 16 11:41:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:26 +0000 Subject: [nginx] Win32: accept_mutex now always disabled (ticket #362). Message-ID: details: http://hg.nginx.org/nginx/rev/1b70200d83e3 branches: stable-1.4 changeset: 5275:1b70200d83e3 user: Maxim Dounin date: Fri May 31 14:59:26 2013 +0400 description: Win32: accept_mutex now always disabled (ticket #362). Use of accept mutex on win32 may result in a deadlock if there are multiple worker_processes configured and the mutex is grabbed by a process which can't accept connections. 
diffstat: src/event/ngx_event.c | 11 +++++++++++ 1 files changed, 11 insertions(+), 0 deletions(-) diffs (21 lines): diff --git a/src/event/ngx_event.c b/src/event/ngx_event.c --- a/src/event/ngx_event.c +++ b/src/event/ngx_event.c @@ -607,6 +607,17 @@ ngx_event_process_init(ngx_cycle_t *cycl ngx_use_accept_mutex = 0; } +#if (NGX_WIN32) + + /* + * disable accept mutex on win32 as it may cause deadlock if + * grabbed by a process which can't accept connections + */ + + ngx_use_accept_mutex = 0; + +#endif + #if (NGX_THREADS) ngx_posted_events_mutex = ngx_mutex_init(cycle->log, 0); if (ngx_posted_events_mutex == NULL) { From mdounin at mdounin.ru Tue Jul 16 11:41:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:27 +0000 Subject: [nginx] Updated zlib used for win32 builds. Message-ID: details: http://hg.nginx.org/nginx/rev/02a861428c3d branches: stable-1.4 changeset: 5276:02a861428c3d user: Maxim Dounin date: Tue Jun 04 16:16:51 2013 +0400 description: Updated zlib used for win32 builds. diffstat: misc/GNUmakefile | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -6,7 +6,7 @@ TEMP = tmp OBJS = objs.msvc8 OPENSSL = openssl-1.0.1e -ZLIB = zlib-1.2.7 +ZLIB = zlib-1.2.8 PCRE = pcre-8.32 From mdounin at mdounin.ru Tue Jul 16 11:41:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2013 11:41:29 +0000 Subject: [nginx] Perl: fixed r->header_in("Cookie") (ticket #351). Message-ID: details: http://hg.nginx.org/nginx/rev/edc479bf33b1 branches: stable-1.4 changeset: 5277:edc479bf33b1 user: Maxim Dounin date: Mon Jun 10 14:35:00 2013 +0400 description: Perl: fixed r->header_in("Cookie") (ticket #351). It was broken by X-Forwarded-For related changes in f7fe817c92a2 (1.3.14) as hh->offset is no longer 0 for Cookie. 
diffstat: src/http/modules/perl/nginx.xs | 36 +++++++++++++++++++++++++++--------- 1 files changed, 27 insertions(+), 9 deletions(-) diffs (88 lines): diff --git a/src/http/modules/perl/nginx.xs b/src/http/modules/perl/nginx.xs --- a/src/http/modules/perl/nginx.xs +++ b/src/http/modules/perl/nginx.xs @@ -222,10 +222,11 @@ header_in(r, key) dXSTARG; ngx_http_request_t *r; SV *key; - u_char *p, *lowcase_key, *cookie; + u_char *p, *lowcase_key, *value, sep; STRLEN len; ssize_t size; ngx_uint_t i, n, hash; + ngx_array_t *a; ngx_list_part_t *part; ngx_table_elt_t *h, **ph; ngx_http_header_t *hh; @@ -255,6 +256,19 @@ header_in(r, key) hh = ngx_hash_find(&cmcf->headers_in_hash, hash, lowcase_key, len); if (hh) { + + if (hh->offset == offsetof(ngx_http_headers_in_t, cookies)) { + sep = ';'; + goto multi; + } + + #if (NGX_HTTP_X_FORWARDED_FOR) + if (hh->offset == offsetof(ngx_http_headers_in_t, x_forwarded_for)) { + sep = ','; + goto multi; + } + #endif + if (hh->offset) { ph = (ngx_table_elt_t **) ((char *) &r->headers_in + hh->offset); @@ -268,15 +282,19 @@ header_in(r, key) XSRETURN_UNDEF; } - /* Cookie */ + multi: - n = r->headers_in.cookies.nelts; + /* Cookie, X-Forwarded-For */ + + a = (ngx_array_t *) ((char *) &r->headers_in + hh->offset); + + n = a->nelts; if (n == 0) { XSRETURN_UNDEF; } - ph = r->headers_in.cookies.elts; + ph = a->elts; if (n == 1) { ngx_http_perl_set_targ((*ph)->value.data, (*ph)->value.len); @@ -290,12 +308,12 @@ header_in(r, key) size += ph[i]->value.len + sizeof("; ") - 1; } - cookie = ngx_pnalloc(r->pool, size); - if (cookie == NULL) { + value = ngx_pnalloc(r->pool, size); + if (value == NULL) { XSRETURN_UNDEF; } - p = cookie; + p = value; for (i = 0; /* void */ ; i++) { p = ngx_copy(p, ph[i]->value.data, ph[i]->value.len); @@ -304,10 +322,10 @@ header_in(r, key) break; } - *p++ = ';'; *p++ = ' '; + *p++ = sep; *p++ = ' '; } - ngx_http_perl_set_targ(cookie, size); + ngx_http_perl_set_targ(value, size); goto done; } From mdounin at mdounin.ru 
Wed Jul 17 13:23:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jul 2013 13:23:44 +0000 Subject: [nginx] nginx-1.4.2-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/50f065641b4c branches: stable-1.4 changeset: 5278:50f065641b4c user: Maxim Dounin date: Wed Jul 17 16:51:21 2013 +0400 description: nginx-1.4.2-RELEASE diffstat: docs/xml/nginx/changes.xml | 81 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 81 insertions(+), 0 deletions(-) diffs (91 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,87 @@ + + + + +????? $r->header_in() ??????????? ????? ?? ????????? ???????? ????? +"Cookie" ? "X-Forwarded-For" ?? ????????? ???????; +?????? ????????? ? 1.3.14. + + +the $r->header_in() embedded perl method did not return value of the +"Cookie" and "X-Forwarded-For" request header lines; +the bug had appeared in 1.3.14. + + + + + +nginx ?? ????????? ? ??????? ngx_mail_ssl_module, +?? ??? ?????? ngx_http_ssl_module; +?????? ????????? ? 1.3.14. + + +nginx could not be built with the ngx_mail_ssl_module, +but without ngx_http_ssl_module; +the bug had appeared in 1.3.14. + + + + + +? ????????? proxy_set_body.
+??????? Lanshun Zhou. +
+ +in the "proxy_set_body" directive.
+Thanks to Lanshun Zhou. +
+
+ + + +???????? fail_timeout ????????? server +? ????? upstream ??? ?? ????????, +???? ????????????? ???????? max_fails; +?????? ????????? ? 1.3.0. + + +the "fail_timeout" parameter of the "server" directive +in the "upstream" context might not work +if "max_fails" parameter was used; +the bug had appeared in 1.3.0. + + + + + +? ??????? ???????? ??? ????????? segmentation fault, +???? ?????????????? ????????? ssl_stapling.
+??????? Piotr Sikora. +
+ +a segmentation fault might occur in a worker process +if the "ssl_stapling" directive was used.
+Thanks to Piotr Sikora. +
+
+ + + +nginx/Windows ??? ????????? ????????? ??????????, +???? ?????????????? ????????? ??????? ?????????. + + +nginx/Windows might stop accepting connections +if several worker processes were used. + + + +
+ + From mdounin at mdounin.ru Wed Jul 17 13:23:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jul 2013 13:23:45 +0000 Subject: [nginx] release-1.4.2 tag Message-ID: details: http://hg.nginx.org/nginx/rev/fe2d74c60a3b branches: stable-1.4 changeset: 5279:fe2d74c60a3b user: Maxim Dounin date: Wed Jul 17 16:51:21 2013 +0400 description: release-1.4.2 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -356,3 +356,4 @@ cd84e467c72967b9f5fb4d96bfc708c93edeb634 23159600bdea695db8f9d2890aaf73424303e49c release-1.3.16 7809529022b83157067e7d1e2fb65d57db5f4d99 release-1.4.0 0702de638a4c51123d7b97801d393e8e25eb48de release-1.4.1 +50f065641b4c52ced41fae1ce216c73aaf112306 release-1.4.2 From alecdu at gmail.com Wed Jul 17 15:04:35 2013 From: alecdu at gmail.com (Hungpo DU) Date: Wed, 17 Jul 2013 23:04:35 +0800 Subject: A confusion about `slab allocator` 's initialization Message-ID: I find a couple of lines confusing while reading the slab allocator's code. 96 p = (u_char *) pool + sizeof(ngx_slab_pool_t); 97 size = pool->end - p; ... 110 p += n * sizeof(ngx_slab_page_t); 111 112 pages = (ngx_uint_t) (size / (ngx_pagesize + sizeof(ngx_slab_page_t))); 113 114 ngx_memzero(p, pages * sizeof(ngx_slab_page_t)); 115 ... 125 pool->start = (u_char *) 126 ngx_align_ptr((uintptr_t) p + pages * sizeof(ngx_slab_page_t), 127 ngx_pagesize); The `size` takes space occupied by *slots* into account, that'll make `pages` a little bit larger. Then, * because `p` is already advanced by sizeof *slots*, the following `ngx_memzero` will operates on more `ngx_slab_page_t`'s than expected. * also `pool->start` will start at a higher postion before aligned by `ngx_pagesize`. One more page maybe available if `pages` is a little smaller? Can someone please tell me these lines's true intention? I must get it wrong somewhere. Best regards. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From dakota at brokenpipe.ru Wed Jul 17 21:56:03 2013 From: dakota at brokenpipe.ru (Marat Dakota) Date: Thu, 18 Jul 2013 01:56:03 +0400 Subject: Subrequests again Message-ID: Hi, It looks like when a subrequest is completed with an NGX_ERROR result (the post subrequest callback is called with NGX_ERROR status) and I try ngx_http_output_filter for my main request after that, ngx_http_output_filter returns NGX_ERROR too. I need to be able to continue sending my main request body normally. How to achieve that? Thanks. -- Marat -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Jul 17 23:28:43 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 17 Jul 2013 16:28:43 -0700 Subject: Subrequests again In-Reply-To: References: Message-ID: Hello! On Wed, Jul 17, 2013 at 2:56 PM, Marat Dakota wrote: > It looks like when a subrequest is completed with an NGX_ERROR result (the post > subrequest callback is called with NGX_ERROR status) and I try > ngx_http_output_filter for my main request after that, > ngx_http_output_filter returns NGX_ERROR too. > > I need to be able to continue sending my main request body normally. > > How to achieve that? > Easy. Just do not return NGX_ERROR in your post_subrequest handler when the error is not fatal enough to abort the main request. Otherwise, ngx_http_finalize_request will call ngx_http_terminate_request. Regards, -agentzh From jzefip at gmail.com Thu Jul 18 04:37:59 2013 From: jzefip at gmail.com (Julien Zefi) Date: Wed, 17 Jul 2013 22:37:59 -0600 Subject: handle NGX_AGAIN properly In-Reply-To: References: Message-ID: Hi all, thanks for the help, but after more changes and taking your suggestions into account I am still stuck with the problem (it cannot be done in Lua, it must be done in C as I am streaming binary data).
If anyone is interested, I will put up a budget of 100 USD for whoever fixes the test case as required; for more details, send me a private email to discuss the requirements and the expected result. thanks, On Sun, Jul 14, 2013 at 10:57 PM, Yichun Zhang (agentzh) wrote: > Hello! > > On Sun, Jul 14, 2013 at 8:43 PM, Julien Zefi wrote: > > > > Sorry to bother you again, but I still cannot figure out why some > internals > > are not working as I expect. I have taken your suggestions into account and > wrote > > a new test case (file attached). > > > > 1. You should simply call ngx_http_output_filter(r, NULL); in your > r->write_event_handler, but you set r->write_event_handler to > ngx_http_test_stream_handler, which always emits brand-new data. I'm > guessing you don't really understand how the ngx_http_writer and > ngx_http_set_write_handler functions are implemented in the Nginx > core. Look harder. > > 2. You should not set r->header_only = 1 in your case because you're > actually sending out the response body. Ensure that you know how a > flag works before you start using it. > > 3. Another obvious mistake is that you incorrectly perform > > r->main->count++; > > without decrementing it by calling ngx_http_finalize_request, which > will certainly lead to a request hang. Ensure that you understand this > flag before using it. > > > The test case writes 12.3KB of data every 1ms; at some point it > raises > > NGX_AGAIN, but from there it never recovers and stays in the same state > > forever. Do you see any specific problem in how that condition is handled? > > > > This is trivial to implement by writing some Lua code using the ngx_lua module: > > location /t { > content_by_lua ' > local message = "..." > for i = 1, 100 do > ngx.print(message) > ngx.flush(true) > ngx.sleep(0.001) > end > '; > } > > Maybe you can just use ngx_lua for your purposes without all the pain > of understanding the nginx internals (you seem to lack a lot of > knowledge here).
If you insist in writing your own nginx C module, > then just check out how ngx_lua implements all the APIs demonstrated > in the example above. You can also check out the official > documentation of ngx_lua: > > http://wiki.nginx.org/HttpLuaModule > > Best regards, > -agentzh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dakota at brokenpipe.ru Thu Jul 18 22:26:45 2013 From: dakota at brokenpipe.ru (Marat Dakota) Date: Fri, 19 Jul 2013 02:26:45 +0400 Subject: Subrequests again In-Reply-To: References: Message-ID: Hi Yichun, It looks like it's not that simple. I've traced the source a bit, my post subrequest callback is called from ngx_http_finalize_request() and this ngx_http_finalize_request() is called from ngx_http_discarded_request_body_handler() like that (a piece of code from ngx_http_request_body.c): if (rev->timedout) { c->timedout = 1; c->error = 1; ngx_http_finalize_request(r, NGX_ERROR); return; } So, I have a timeout and c->error set to 1 and that means ngx_http_terminate_request(). -- Marat On Thu, Jul 18, 2013 at 3:28 AM, Yichun Zhang (agentzh) wrote: > Hello! > > On Wed, Jul 17, 2013 at 2:56 PM, Marat Dakota wrote: > > It looks like when a subrequest is completed with NGX_ERROR result (post > > subrequest callback is called with NGX_ERROR status) and I try > > ngx_http_output_filter for my main request after that, > > ngx_http_output_filter returns NGX_ERROR too. > > > > I need to be able to continue sending my main request body normally. > > > > How to achieve that? > > > > Easy. Just do not return NGX_ERROR in your post_subrequest handler > when the error is not fatal enough to abort the main request. > Otherwise, ngx_http_finalize_request will call > ngx_http_terminate_request. 
> > Regards, > -agentzh > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dakota at brokenpipe.ru Thu Jul 18 23:05:08 2013 From: dakota at brokenpipe.ru (Marat Dakota) Date: Fri, 19 Jul 2013 03:05:08 +0400 Subject: Subrequests again In-Reply-To: References: Message-ID: Now that I've described the problem, it looks like I've found its cause. I'll recheck it, but it looks like I'm incorrectly setting the subrequest's discard_body value. On Fri, Jul 19, 2013 at 2:26 AM, Marat Dakota wrote: > Hi Yichun, > > It looks like it's not that simple. > > I've traced the source a bit, my post subrequest callback is called > from ngx_http_finalize_request() and this ngx_http_finalize_request() is > called from ngx_http_discarded_request_body_handler() like this (a piece of > code from ngx_http_request_body.c): > > if (rev->timedout) { > c->timedout = 1; > c->error = 1; > ngx_http_finalize_request(r, NGX_ERROR); > return; > } > > So, I have a timeout and c->error set to 1 and that > means ngx_http_terminate_request(). > > -- > Marat > > > On Thu, Jul 18, 2013 at 3:28 AM, Yichun Zhang (agentzh) > wrote: > >> Hello! >> >> On Wed, Jul 17, 2013 at 2:56 PM, Marat Dakota wrote: >> > It looks like when a subrequest is completed with an NGX_ERROR result (the post >> > subrequest callback is called with NGX_ERROR status) and I try >> > ngx_http_output_filter for my main request after that, >> > ngx_http_output_filter returns NGX_ERROR too. >> > >> > I need to be able to continue sending my main request body normally. >> > >> > How to achieve that? >> > >> >> Easy. Just do not return NGX_ERROR in your post_subrequest handler >> when the error is not fatal enough to abort the main request. >> Otherwise, ngx_http_finalize_request will call >> ngx_http_terminate_request.
>> >> Regards, >> -agentzh >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jul 19 12:02:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2013 12:02:33 +0000 Subject: [nginx] Xslt: exsltRegisterAll() moved to preconfiguration. Message-ID: details: http://hg.nginx.org/nginx/rev/e939f6e8548c branches: changeset: 5280:e939f6e8548c user: Maxim Dounin date: Fri Jul 19 15:59:50 2013 +0400 description: Xslt: exsltRegisterAll() moved to preconfiguration. The exsltRegisterAll() needs to be called before XSLT stylesheets are compiled, else stylesheet compilation hooks will not work. This change fixes EXSLT Functions extension. diffstat: src/http/modules/ngx_http_xslt_filter_module.c | 12 ++++++++++-- 1 files changed, 10 insertions(+), 2 deletions(-) diffs (43 lines): diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c +++ b/src/http/modules/ngx_http_xslt_filter_module.c @@ -104,6 +104,7 @@ static void *ngx_http_xslt_filter_create static void *ngx_http_xslt_filter_create_conf(ngx_conf_t *cf); static char *ngx_http_xslt_filter_merge_conf(ngx_conf_t *cf, void *parent, void *child); +static ngx_int_t ngx_http_xslt_filter_preconfiguration(ngx_conf_t *cf); static ngx_int_t ngx_http_xslt_filter_init(ngx_conf_t *cf); static void ngx_http_xslt_filter_exit(ngx_cycle_t *cycle); @@ -163,7 +164,7 @@ static ngx_command_t ngx_http_xslt_filt static ngx_http_module_t ngx_http_xslt_filter_module_ctx = { - NULL, /* preconfiguration */ + ngx_http_xslt_filter_preconfiguration, /* preconfiguration */ ngx_http_xslt_filter_init, /* postconfiguration */ ngx_http_xslt_filter_create_main_conf, /* create main configuration */ @@ -1111,7 
+1112,7 @@ ngx_http_xslt_filter_merge_conf static ngx_int_t -ngx_http_xslt_filter_init(ngx_conf_t *cf) +ngx_http_xslt_filter_preconfiguration(ngx_conf_t *cf) { xmlInitParser(); @@ -1119,6 +1120,13 @@ ngx_http_xslt_filter_init(ngx_conf_t *cf exsltRegisterAll(); #endif + return NGX_OK; +} + + +static ngx_int_t +ngx_http_xslt_filter_init(ngx_conf_t *cf) +{ ngx_http_next_header_filter = ngx_http_top_header_filter; ngx_http_top_header_filter = ngx_http_xslt_header_filter; From mdounin at mdounin.ru Fri Jul 19 13:58:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2013 17:58:26 +0400 Subject: A confusion about `slab allocator` 's initialization In-Reply-To: References: Message-ID: <20130719135826.GO49108@mdounin.ru> Hello! On Wed, Jul 17, 2013 at 11:04:35PM +0800, Hungpo DU wrote: > I find a couple of lines confusing while reading the slab allocator's code. > > 96 p = (u_char *) pool + sizeof(ngx_slab_pool_t); > 97 size = pool->end - p; > ... > 110 p += n * sizeof(ngx_slab_page_t); > 111 > 112 pages = (ngx_uint_t) (size / (ngx_pagesize + > sizeof(ngx_slab_page_t))); > 113 > 114 ngx_memzero(p, pages * sizeof(ngx_slab_page_t)); > 115 > ... > 125 pool->start = (u_char *) > 126 ngx_align_ptr((uintptr_t) p + pages * > sizeof(ngx_slab_page_t), > 127 ngx_pagesize); > > The `size` takes space occupied by *slots* into account, that'll make > `pages` > a little bit larger. Then, > > * because `p` is already advanced by sizeof *slots*, the following > `ngx_memzero` > will operate on more `ngx_slab_page_t`'s than expected. > > * also `pool->start` will start at a higher position before being aligned by > `ngx_pagesize`. One more page may be available if `pages` is a little > smaller? > > > Can someone please tell me these lines' true intention? I must have got it > wrong > somewhere. The slab allocator uses the page as its memory allocation unit, and each memory page must have a corresponding ngx_slab_page_t structure.
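The page/descriptor pairing can be checked with a few lines of standalone arithmetic. This is an illustration only: the 2 MB zone, the 4096-byte page size, and the three-field stand-in for ngx_slab_page_t are assumptions, not values taken from ngx_slab.c.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for ngx_slab_page_t: three pointer-sized fields. */
typedef struct {
    uintptr_t  slab;
    void      *next;
    uintptr_t  prev;
} demo_slab_page_t;

/* Same divisor as in ngx_slab_init(): every usable page must be paired
 * with one page descriptor, so a page effectively "costs"
 * pagesize + sizeof(demo_slab_page_t) bytes of the zone. */
static size_t demo_slab_pages(size_t size, size_t pagesize)
{
    return size / (pagesize + sizeof(demo_slab_page_t));
}
```

On a typical 64-bit build (24-byte descriptor) this gives 509 usable pages for a 2 MB zone rather than 512, the difference paying for the descriptors themselves.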
That is, if "pages" were smaller, the memory available for allocation would be the same, as a memory page without a corresponding ngx_slab_page_t structure can't be used. With "size" reduced by n * sizeof(ngx_slab_page_t) the code might be a bit more readable though, and probably something like this is worth committing: --- a/src/core/ngx_slab.c Sat Jul 13 03:24:30 2013 +0400 +++ b/src/core/ngx_slab.c Fri Jul 19 17:53:11 2013 +0400 @@ -105,6 +105,7 @@ ngx_slab_init(ngx_slab_pool_t *pool) } p += n * sizeof(ngx_slab_page_t); + size -= n * sizeof(ngx_slab_page_t); pages = (ngx_uint_t) (size / (ngx_pagesize + sizeof(ngx_slab_page_t))); Not sure though. -- Maxim Dounin http://nginx.org/en/donation.html From piotr at cloudflare.com Wed Jul 24 02:27:32 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 23 Jul 2013 19:27:32 -0700 Subject: [PATCH] SNI: better server name handling. In-Reply-To: <201305272051.34349.vbart@nginx.com> References: <201305272051.34349.vbart@nginx.com> Message-ID: Hey Valentin, sorry for the long delay. > Nice catch, but I'm not happy with the solution. With your patch, client > will be acknowledged of acceptance even if the server name is not found. Correct, that's the intended behavior. > I believe such behavior isn't consistent with RFC 4366, and it prevents the > client to know that specified virtual host doesn't exist on the server, which > effectively makes it useless. I actually disagree with that statement. From RFC 4366, 3.1. Server Name Indication: A server that receives a client hello containing the "server_name" extension MAY use the information contained in the extension to guide its selection of an appropriate certificate to return to the client, and/or other aspects of security policy. In this event, the server SHALL include an extension of type "server_name" in the (extended) server hello. The "extension_data" field of this extension SHALL be empty.
My interpretation of the above paragraph is that if "server_name" from Client Hello is being used in the decision-making process, then the server should always acknowledge that fact by sending an empty "server_name" in Server Hello, regardless of whether or not the server name was found, i.e. even if the server name wasn't found, we still used that information to decide to serve the certificate from the default server block. ...or do you disagree? > Let me propose a better (from my point of view) patch at the end of my message. Your patch is indeed better and should be committed, simply for the sake of fixing ngx_http_find_virtual_server(). Just keep in mind that it doesn't change behavior in the case when the server name wasn't found. Best regards, Piotr Sikora From sepherosa at gmail.com Wed Jul 24 13:42:18 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Wed, 24 Jul 2013 21:42:18 +0800 Subject: [PATCH] DragonFlyBSD KEEPALIVE_TUNABLE Message-ID: Hi, On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in milliseconds instead of seconds. Following patch fixes this: http://leaf.dragonflybsd.org/~sephe/ngx_keepalive.diff Best Regards, sephe -- Tomorrow Will Never Die From maxim at nginx.com Wed Jul 24 14:32:41 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 24 Jul 2013 18:32:41 +0400 Subject: [PATCH] DragonFlyBSD KEEPALIVE_TUNABLE In-Reply-To: References: Message-ID: <51EFE589.7060507@nginx.com> Hi, On 7/24/13 5:42 PM, Sepherosa Ziehau wrote: > Hi, > > On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in milliseconds > instead of seconds. Following patch fixes this: > http://leaf.dragonflybsd.org/~sephe/ngx_keepalive.diff > Thanks for the patch! Just curious: are there any reasons why these timers have a millisecond resolution and not compatible with other BSD's?
-- Maxim Konovalov +7 (910) 4293178 http://nginx.com/services.html From vbart at nginx.com Wed Jul 24 18:26:20 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 24 Jul 2013 18:26:20 +0000 Subject: [nginx] SPDY: fixed segfault with "client_body_in_file_only" ena... Message-ID: details: http://hg.nginx.org/nginx/rev/7542b72fe4b1 branches: changeset: 5281:7542b72fe4b1 user: Valentin Bartenev date: Wed Jul 24 22:24:25 2013 +0400 description: SPDY: fixed segfault with "client_body_in_file_only" enabled. It is possible to send FLAG_FIN in additional empty data frame, even if it is known from the content-length header that request body is empty. And Firefox actually behaves like this (see ticket #357). To simplify code we sacrificed our microoptimization that did not work right due to missing check in the ngx_http_spdy_state_data() function for rb->buf set to NULL. diffstat: src/http/ngx_http_spdy.c | 11 ++--------- 1 files changed, 2 insertions(+), 9 deletions(-) diffs (30 lines): diff -r e939f6e8548c -r 7542b72fe4b1 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Fri Jul 19 15:59:50 2013 +0400 +++ b/src/http/ngx_http_spdy.c Wed Jul 24 22:24:25 2013 +0400 @@ -2529,13 +2529,6 @@ ngx_http_spdy_init_request_body(ngx_http return NGX_ERROR; } - if (rb->rest == 0) { - buf->in_file = 1; - buf->file = &tf->file; - } else { - rb->buf = buf; - } - } else { if (rb->rest == 0) { @@ -2546,10 +2539,10 @@ ngx_http_spdy_init_request_body(ngx_http if (buf == NULL) { return NGX_ERROR; } - - rb->buf = buf; } + rb->buf = buf; + rb->bufs = ngx_alloc_chain_link(r->pool); if (rb->bufs == NULL) { return NGX_ERROR; From sepherosa at gmail.com Thu Jul 25 01:32:37 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Thu, 25 Jul 2013 09:32:37 +0800 Subject: [PATCH] DragonFlyBSD KEEPALIVE_TUNABLE In-Reply-To: <51EFE589.7060507@nginx.com> References: <51EFE589.7060507@nginx.com> Message-ID: On Wed, Jul 24, 2013 at 10:32 PM, Maxim Konovalov wrote: > Hi, > > On 7/24/13 5:42 PM, 
Sepherosa Ziehau wrote: >> Hi, >> >> On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in milliseconds >> instead of seconds. Following patch fixes this: >> http://leaf.dragonflybsd.org/~sephe/ngx_keepalive.diff >> > Thanks for the patch! > > Just curious: are there any reasons why these timers have a > millisecond resolution and not compatible with other BSD's? When I added TCP_KEEP* sockopts to DragonFlyBSD, I checked the unit of TCP_KEEP* on various systems that had implemented them (FreeBSD did not have that option at that time; not sure about NetBSD). They use different units: some use half a second (e.g. OpenVMS), some use 1 second (e.g. Linux); and no standard specifies which unit should be used. Another reason is that I want to keep the sockopts' unit consistent w/ the sysctls' unit. Well, and I actually used TCP_KEEP* sockopts in my own project at that time, which requires higher resolution TCP_KEEP* ;) Best Regards, sephe -- Tomorrow Will Never Die From ru at nginx.com Thu Jul 25 08:46:31 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 25 Jul 2013 08:46:31 +0000 Subject: [nginx] Style: reuse one int variable in ngx_configure_listening... Message-ID: details: http://hg.nginx.org/nginx/rev/31690d934175 branches: changeset: 5282:31690d934175 user: Ruslan Ermilov date: Thu Jul 25 12:46:02 2013 +0400 description: Style: reuse one int variable in ngx_configure_listening_sockets(). No functional changes.
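Returning to the keepalive thread above: the unit mismatch Sepherosa describes can be captured in a small standalone helper. The names below are hypothetical; nginx's actual fix (the NGX_KEEPALIVE_FACTOR define, shown later in this thread) takes the same shape.

```c
#include <assert.h>

/* Factor between the configured keepalive times (seconds) and the unit
 * TCP_KEEPIDLE/TCP_KEEPINTVL expect on this platform: milliseconds on
 * DragonFlyBSD, seconds elsewhere. */
#ifdef __DragonFly__
#define DEMO_KEEPALIVE_FACTOR  1000
#else
#define DEMO_KEEPALIVE_FACTOR  1
#endif

/* Value to pass to setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, ...)
 * or TCP_KEEPINTVL for a keepalive time configured in seconds. */
static int demo_keepalive_value(int seconds, int factor)
{
    return seconds * factor;
}
```

A compile-time factor keeps the hot setsockopt() path free of runtime platform checks, which is why the committed patch chose a define over a configure-time probe.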
diffstat: src/core/ngx_connection.c | 19 ++++++++----------- 1 files changed, 8 insertions(+), 11 deletions(-) diffs (65 lines): diff -r 7542b72fe4b1 -r 31690d934175 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Wed Jul 24 22:24:25 2013 +0400 +++ b/src/core/ngx_connection.c Thu Jul 25 12:46:02 2013 +0400 @@ -464,16 +464,13 @@ ngx_open_listening_sockets(ngx_cycle_t * void ngx_configure_listening_sockets(ngx_cycle_t *cycle) { - int keepalive; + int value; ngx_uint_t i; ngx_listening_t *ls; #if (NGX_HAVE_DEFERRED_ACCEPT && defined SO_ACCEPTFILTER) struct accept_filter_arg af; #endif -#if (NGX_HAVE_DEFERRED_ACCEPT && defined TCP_DEFER_ACCEPT) - int timeout; -#endif ls = cycle->listening.elts; for (i = 0; i < cycle->listening.nelts; i++) { @@ -503,15 +500,15 @@ ngx_configure_listening_sockets(ngx_cycl } if (ls[i].keepalive) { - keepalive = (ls[i].keepalive == 1) ? 1 : 0; + value = (ls[i].keepalive == 1) ? 1 : 0; if (setsockopt(ls[i].fd, SOL_SOCKET, SO_KEEPALIVE, - (const void *) &keepalive, sizeof(int)) + (const void *) &value, sizeof(int)) == -1) { ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_socket_errno, "setsockopt(SO_KEEPALIVE, %d) %V failed, ignored", - keepalive, &ls[i].addr_text); + value, &ls[i].addr_text); } } @@ -648,20 +645,20 @@ ngx_configure_listening_sockets(ngx_cycl if (ls[i].add_deferred || ls[i].delete_deferred) { if (ls[i].add_deferred) { - timeout = (int) (ls[i].post_accept_timeout / 1000); + value = (int) (ls[i].post_accept_timeout / 1000); } else { - timeout = 0; + value = 0; } if (setsockopt(ls[i].fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, - &timeout, sizeof(int)) + &value, sizeof(int)) == -1) { ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, "setsockopt(TCP_DEFER_ACCEPT, %d) for %V failed, " "ignored", - timeout, &ls[i].addr_text); + value, &ls[i].addr_text); continue; } From ru at nginx.com Thu Jul 25 08:46:32 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 25 Jul 2013 08:46:32 +0000 Subject: [nginx] On DragonFlyBSD, TCP_KEEPIDLE 
and TCP_KEEPINTVL are in m... Message-ID: details: http://hg.nginx.org/nginx/rev/6d73e0dc4f64 branches: changeset: 5283:6d73e0dc4f64 user: Ruslan Ermilov date: Thu Jul 25 12:46:03 2013 +0400 description: On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in msecs. Based on a patch by Sepherosa Ziehau. diffstat: src/core/ngx_connection.c | 20 ++++++++++++++++---- src/os/unix/ngx_freebsd_config.h | 5 +++++ 2 files changed, 21 insertions(+), 4 deletions(-) diffs (59 lines): diff -r 31690d934175 -r 6d73e0dc4f64 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Thu Jul 25 12:46:02 2013 +0400 +++ b/src/core/ngx_connection.c Thu Jul 25 12:46:03 2013 +0400 @@ -515,24 +515,36 @@ ngx_configure_listening_sockets(ngx_cycl #if (NGX_HAVE_KEEPALIVE_TUNABLE) if (ls[i].keepidle) { + value = ls[i].keepidle; + +#if (NGX_KEEPALIVE_FACTOR) + value *= NGX_KEEPALIVE_FACTOR; +#endif + if (setsockopt(ls[i].fd, IPPROTO_TCP, TCP_KEEPIDLE, - (const void *) &ls[i].keepidle, sizeof(int)) + (const void *) &value, sizeof(int)) == -1) { ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_socket_errno, "setsockopt(TCP_KEEPIDLE, %d) %V failed, ignored", - ls[i].keepidle, &ls[i].addr_text); + value, &ls[i].addr_text); } } if (ls[i].keepintvl) { + value = ls[i].keepintvl; + +#if (NGX_KEEPALIVE_FACTOR) + value *= NGX_KEEPALIVE_FACTOR; +#endif + if (setsockopt(ls[i].fd, IPPROTO_TCP, TCP_KEEPINTVL, - (const void *) &ls[i].keepintvl, sizeof(int)) + (const void *) &value, sizeof(int)) == -1) { ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_socket_errno, "setsockopt(TCP_KEEPINTVL, %d) %V failed, ignored", - ls[i].keepintvl, &ls[i].addr_text); + value, &ls[i].addr_text); } } diff -r 31690d934175 -r 6d73e0dc4f64 src/os/unix/ngx_freebsd_config.h --- a/src/os/unix/ngx_freebsd_config.h Thu Jul 25 12:46:02 2013 +0400 +++ b/src/os/unix/ngx_freebsd_config.h Thu Jul 25 12:46:03 2013 +0400 @@ -94,6 +94,11 @@ typedef struct aiocb ngx_aiocb_t; #define NGX_LISTEN_BACKLOG -1 +#ifdef __DragonFly__ +#define 
NGX_KEEPALIVE_FACTOR 1000 +#endif + + #if (__FreeBSD_version < 430000 || __FreeBSD_version < 500012) pid_t rfork_thread(int flags, void *stack, int (*func)(void *arg), void *arg); From ru at nginx.com Thu Jul 25 08:51:03 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 25 Jul 2013 12:51:03 +0400 Subject: [PATCH] DragonFlyBSD KEEPALIVE_TUNABLE In-Reply-To: References: Message-ID: <20130725085103.GF55404@lo0.su> On Wed, Jul 24, 2013 at 09:42:18PM +0800, Sepherosa Ziehau wrote: > On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in milliseconds > instead of seconds. Following patch fixes this: > http://leaf.dragonflybsd.org/~sephe/ngx_keepalive.diff > > Best Regards, > sephe http://hg.nginx.org/nginx/rev/6d73e0dc4f64 From sepherosa at gmail.com Thu Jul 25 08:51:11 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Thu, 25 Jul 2013 16:51:11 +0800 Subject: [nginx] On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in m... In-Reply-To: References: Message-ID: On Thu, Jul 25, 2013 at 4:46 PM, Ruslan Ermilov wrote: > details: http://hg.nginx.org/nginx/rev/6d73e0dc4f64 > branches: > changeset: 5283:6d73e0dc4f64 > user: Ruslan Ermilov > date: Thu Jul 25 12:46:03 2013 +0400 > description: > On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in msecs. > Thank you very much! Best Regards, sephe -- Tomorrow Will Never Die From info at tvdw.eu Thu Jul 25 08:57:31 2013 From: info at tvdw.eu (Tom van der Woerdt) Date: Thu, 25 Jul 2013 10:57:31 +0200 Subject: [PATCH] DragonFlyBSD KEEPALIVE_TUNABLE In-Reply-To: References: <51EFE589.7060507@nginx.com> Message-ID: <8BAA9F61-A3C4-414F-AD87-F3C609C4B713@tvdw.eu> Hi, Wouldn't a better place for this patch be the configure script instead of hardcoding the values? Tom > On 25 jul. 
2013, at 03:32, Sepherosa Ziehau wrote: > >> On Wed, Jul 24, 2013 at 10:32 PM, Maxim Konovalov wrote: >> Hi, >> >>> On 7/24/13 5:42 PM, Sepherosa Ziehau wrote: >>> Hi, >>> >>> On DragonFlyBSD, TCP_KEEPIDLE and TCP_KEEPINTVL are in milliseconds >>> instead of seconds. Following patch fixes this: >>> http://leaf.dragonflybsd.org/~sephe/ngx_keepalive.diff >> Thanks for the patch! >> >> Just curious: are there any reasons why these timers have a >> millisecond resolution and not compatible with other BSD's? > > When I added TCP_KEEP* sockopts to DragonFlyBSD, I checked the unit of > TCP_KEEP* on various systems that had implemented them (FreeBSD did > not have that option at that time; not sure about NetBSD). They are > using different unit, some use half-second (e.g. OpenVMS), some use 1 > second (e.g. Linux); and there is no standard specifies which unit > should be used. Another reason is that I want to keep the sockopts' > unit consistent w/ the sysctls' unit. Well, and I actually used > TCP_KEEP* sockopts in my own project at that time, which requires > higher resolution TCP_KEEP* ;) > > Best Regards, > sephe > > -- > Tomorrow Will Never Die > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Thu Jul 25 11:58:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:36 +0000 Subject: [nginx] Sub filter: stale comments removed. Message-ID: details: http://hg.nginx.org/nginx/rev/eaa9c732a1b9 branches: changeset: 5284:eaa9c732a1b9 user: Maxim Dounin date: Thu Jul 25 14:54:43 2013 +0400 description: Sub filter: stale comments removed. 
diffstat: src/http/modules/ngx_http_sub_filter_module.c | 3 --- 1 files changed, 0 insertions(+), 3 deletions(-) diffs (13 lines): diff --git a/src/http/modules/ngx_http_sub_filter_module.c b/src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c +++ b/src/http/modules/ngx_http_sub_filter_module.c @@ -677,9 +677,6 @@ ngx_http_sub_create_conf(ngx_conf_t *cf) * set by ngx_pcalloc(): * * conf->match = { 0, NULL }; - * conf->sub = { 0, NULL }; - * conf->sub_lengths = NULL; - * conf->sub_values = NULL; * conf->types = { NULL }; * conf->types_keys = NULL; */ From mdounin at mdounin.ru Thu Jul 25 11:58:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:37 +0000 Subject: [nginx] Sub filter: switched to ngx_chain_get_free_buf(). Message-ID: details: http://hg.nginx.org/nginx/rev/d47ef93134e5 branches: changeset: 5285:d47ef93134e5 user: Maxim Dounin date: Thu Jul 25 14:54:45 2013 +0400 description: Sub filter: switched to ngx_chain_get_free_buf(). No functional changes. 
diffstat: src/http/modules/ngx_http_sub_filter_module.c | 83 +++++++------------------- 1 files changed, 22 insertions(+), 61 deletions(-) diffs (139 lines): diff --git a/src/http/modules/ngx_http_sub_filter_module.c b/src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c +++ b/src/http/modules/ngx_http_sub_filter_module.c @@ -268,25 +268,14 @@ ngx_http_sub_body_filter(ngx_http_reques if (ctx->saved.len) { - if (ctx->free) { - cl = ctx->free; - ctx->free = ctx->free->next; - b = cl->buf; - ngx_memzero(b, sizeof(ngx_buf_t)); + cl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (cl == NULL) { + return NGX_ERROR; + } - } else { - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; - } + b = cl->buf; - cl = ngx_alloc_chain_link(r->pool); - if (cl == NULL) { - return NGX_ERROR; - } - - cl->buf = b; - } + ngx_memzero(b, sizeof(ngx_buf_t)); b->pos = ngx_pnalloc(r->pool, ctx->saved.len); if (b->pos == NULL) { @@ -303,24 +292,12 @@ ngx_http_sub_body_filter(ngx_http_reques ctx->saved.len = 0; } - if (ctx->free) { - cl = ctx->free; - ctx->free = ctx->free->next; - b = cl->buf; + cl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (cl == NULL) { + return NGX_ERROR; + } - } else { - b = ngx_alloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; - } - - cl = ngx_alloc_chain_link(r->pool); - if (cl == NULL) { - return NGX_ERROR; - } - - cl->buf = b; - } + b = cl->buf; ngx_memcpy(b, ctx->buf, sizeof(ngx_buf_t)); @@ -335,7 +312,6 @@ ngx_http_sub_body_filter(ngx_http_reques b->file_pos += b->pos - ctx->buf->pos; } - cl->next = NULL; *ctx->last_out = cl; ctx->last_out = &cl->next; } @@ -356,15 +332,14 @@ ngx_http_sub_body_filter(ngx_http_reques /* rc == NGX_OK */ - b = ngx_calloc_buf(r->pool); - if (b == NULL) { + cl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (cl == NULL) { return NGX_ERROR; } - cl = ngx_alloc_chain_link(r->pool); - if (cl == NULL) { - return NGX_ERROR; - } + b = cl->buf; + + 
ngx_memzero(b, sizeof(ngx_buf_t)); slcf = ngx_http_get_module_loc_conf(r, ngx_http_sub_filter_module); @@ -386,8 +361,6 @@ ngx_http_sub_body_filter(ngx_http_reques b->sync = 1; } - cl->buf = b; - cl->next = NULL; *ctx->last_out = cl; ctx->last_out = &cl->next; @@ -398,29 +371,17 @@ ngx_http_sub_body_filter(ngx_http_reques if (ctx->buf->last_buf || ngx_buf_in_memory(ctx->buf)) { if (b == NULL) { - if (ctx->free) { - cl = ctx->free; - ctx->free = ctx->free->next; - b = cl->buf; - ngx_memzero(b, sizeof(ngx_buf_t)); + cl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (cl == NULL) { + return NGX_ERROR; + } - } else { - b = ngx_calloc_buf(r->pool); - if (b == NULL) { - return NGX_ERROR; - } + b = cl->buf; - cl = ngx_alloc_chain_link(r->pool); - if (cl == NULL) { - return NGX_ERROR; - } - - cl->buf = b; - } + ngx_memzero(b, sizeof(ngx_buf_t)); b->sync = 1; - cl->next = NULL; *ctx->last_out = cl; ctx->last_out = &cl->next; } From mdounin at mdounin.ru Thu Jul 25 11:58:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:38 +0000 Subject: [nginx] Sub filter: flush buffers handling. Message-ID: details: http://hg.nginx.org/nginx/rev/819c5b53d8b5 branches: changeset: 5286:819c5b53d8b5 user: Maxim Dounin date: Thu Jul 25 14:54:47 2013 +0400 description: Sub filter: flush buffers handling. 
diffstat: src/http/modules/ngx_http_sub_filter_module.c | 5 ++++- 1 files changed, 4 insertions(+), 1 deletions(-) diffs (22 lines): diff --git a/src/http/modules/ngx_http_sub_filter_module.c b/src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c +++ b/src/http/modules/ngx_http_sub_filter_module.c @@ -369,7 +369,9 @@ ngx_http_sub_body_filter(ngx_http_reques continue; } - if (ctx->buf->last_buf || ngx_buf_in_memory(ctx->buf)) { + if (ctx->buf->last_buf || ctx->buf->flush + || ngx_buf_in_memory(ctx->buf)) + { if (b == NULL) { cl = ngx_chain_get_free_buf(r->pool, &ctx->free); if (cl == NULL) { @@ -387,6 +389,7 @@ ngx_http_sub_body_filter(ngx_http_reques } b->last_buf = ctx->buf->last_buf; + b->flush = ctx->buf->flush; b->shadow = ctx->buf; b->recycled = ctx->buf->recycled; From mdounin at mdounin.ru Thu Jul 25 11:58:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:40 +0000 Subject: [nginx] Sub filter: fixed incomplete last buffer on partial match. Message-ID: details: http://hg.nginx.org/nginx/rev/2dbc5e38b65d branches: changeset: 5287:2dbc5e38b65d user: Maxim Dounin date: Thu Jul 25 14:54:48 2013 +0400 description: Sub filter: fixed incomplete last buffer on partial match. If a pattern was partially matched at a response end, the partially matched string wasn't sent. E.g., a response "fo" was truncated to an empty response if partially matched by a pattern "foo".
diffstat: src/http/modules/ngx_http_sub_filter_module.c | 20 ++++++++++++++++++++ 1 files changed, 20 insertions(+), 0 deletions(-) diffs (30 lines): diff --git a/src/http/modules/ngx_http_sub_filter_module.c b/src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c +++ b/src/http/modules/ngx_http_sub_filter_module.c @@ -369,6 +369,26 @@ ngx_http_sub_body_filter(ngx_http_reques continue; } + if (ctx->buf->last_buf && ctx->looked.len) { + cl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (cl == NULL) { + return NGX_ERROR; + } + + b = cl->buf; + + ngx_memzero(b, sizeof(ngx_buf_t)); + + b->pos = ctx->looked.data; + b->last = b->pos + ctx->looked.len; + b->memory = 1; + + *ctx->last_out = cl; + ctx->last_out = &cl->next; + + ctx->looked.len = 0; + } + if (ctx->buf->last_buf || ctx->buf->flush || ngx_buf_in_memory(ctx->buf)) { From mdounin at mdounin.ru Thu Jul 25 11:58:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:41 +0000 Subject: [nginx] Sub filter: fixed matching after a partial match. Message-ID: details: http://hg.nginx.org/nginx/rev/102d7117ffb8 branches: changeset: 5288:102d7117ffb8 user: Maxim Dounin date: Thu Jul 25 14:54:53 2013 +0400 description: Sub filter: fixed matching after a partial match. After a failed partial match we now check if there is another partial match in previously matched substring to fix cases like "aab" in "aaab". The ctx->saved string is now always sent if it's present on return from the ngx_http_sub_parse() function (and reset accordingly). This allows to release parts of previously matched data. 
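The restart check this commit adds can be sketched standalone. The function below is a hypothetical simplification of the loop added to ngx_http_sub_parse(): after a failed partial match of `looked` bytes, it finds the longest suffix of those bytes that is still a (case-insensitive) prefix of the pattern.

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>

/* Returns how many already-seen bytes remain a valid partial match of
 * `match` after a mismatch, e.g. 2 for buffered "aaa" against pattern
 * "aab" (the trailing "aa" can still grow into "aab"); 0 means the
 * match must restart from scratch. */
static size_t demo_restart_len(const unsigned char *buf, size_t looked,
    const unsigned char *match)
{
    size_t  i, j;

    for (i = 1; i < looked; i++) {
        for (j = 0; j < looked - i; j++) {
            if (tolower(buf[i + j]) != tolower(match[j])) {
                break;
            }
        }

        if (j == looked - i) {
            return looked - i;    /* suffix buf[i..] still matches */
        }
    }

    return 0;
}
```

Without such a check, scanning "aaab" for "aab" discards all three buffered "a"s after the first mismatch and never sees the match; with it, two bytes are kept and the match completes on the following 'b'.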
diffstat: src/http/modules/ngx_http_sub_filter_module.c | 100 +++++++++++++++++-------- 1 files changed, 69 insertions(+), 31 deletions(-) diffs (157 lines): diff --git a/src/http/modules/ngx_http_sub_filter_module.c b/src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c +++ b/src/http/modules/ngx_http_sub_filter_module.c @@ -261,36 +261,36 @@ ngx_http_sub_body_filter(ngx_http_reques return rc; } - if (ctx->copy_start != ctx->copy_end) { + if (ctx->saved.len) { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "saved: \"%V\"", &ctx->saved); - if (ctx->saved.len) { + cl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (cl == NULL) { + return NGX_ERROR; + } - cl = ngx_chain_get_free_buf(r->pool, &ctx->free); - if (cl == NULL) { - return NGX_ERROR; - } + b = cl->buf; - b = cl->buf; + ngx_memzero(b, sizeof(ngx_buf_t)); - ngx_memzero(b, sizeof(ngx_buf_t)); + b->pos = ngx_pnalloc(r->pool, ctx->saved.len); + if (b->pos == NULL) { + return NGX_ERROR; + } - b->pos = ngx_pnalloc(r->pool, ctx->saved.len); - if (b->pos == NULL) { - return NGX_ERROR; - } + ngx_memcpy(b->pos, ctx->saved.data, ctx->saved.len); + b->last = b->pos + ctx->saved.len; + b->memory = 1; - ngx_memcpy(b->pos, ctx->saved.data, ctx->saved.len); - b->last = b->pos + ctx->saved.len; - b->memory = 1; + *ctx->last_out = cl; + ctx->last_out = &cl->next; - *ctx->last_out = cl; - ctx->last_out = &cl->next; + ctx->saved.len = 0; + } - ctx->saved.len = 0; - } + if (ctx->copy_start != ctx->copy_end) { cl = ngx_chain_get_free_buf(r->pool, &ctx->free); if (cl == NULL) { @@ -325,6 +325,11 @@ ngx_http_sub_body_filter(ngx_http_reques ctx->copy_end = NULL; } + if (ctx->looked.len > (size_t) (ctx->pos - ctx->buf->pos)) { + ctx->saved.len = ctx->looked.len - (ctx->pos - ctx->buf->pos); + ngx_memcpy(ctx->saved.data, ctx->looked.data, ctx->saved.len); + } + if (rc == NGX_AGAIN) { continue; } @@ -502,7 +507,7 @@ static ngx_int_t ngx_http_sub_parse(ngx_http_request_t *r, 
ngx_http_sub_ctx_t *ctx) { u_char *p, *last, *copy_end, ch, match; - size_t looked; + size_t looked, i; ngx_http_sub_state_e state; if (ctx->once) { @@ -573,13 +578,11 @@ ngx_http_sub_parse(ngx_http_request_t *r looked++; if (looked == ctx->match.len) { - if ((size_t) (p - ctx->pos) < looked) { - ctx->saved.len = 0; - } ctx->state = sub_start_state; ctx->pos = p + 1; ctx->looked.len = 0; + ctx->saved.len = 0; ctx->copy_end = copy_end; if (ctx->copy_start == NULL && copy_end) { @@ -589,18 +592,53 @@ ngx_http_sub_parse(ngx_http_request_t *r return NGX_OK; } - } else if (ch == ctx->match.data[0]) { - copy_end = p; - ctx->looked.data[0] = *p; - looked = 1; + } else { + /* + * check if there is another partial match in previously + * matched substring to catch cases like "aab" in "aaab" + */ - } else { - copy_end = p; - looked = 0; - state = sub_start_state; + ctx->looked.data[looked] = *p; + looked++; + + for (i = 1; i < looked; i++) { + if (ngx_strncasecmp(ctx->looked.data + i, + ctx->match.data, looked - i) + == 0) + { + break; + } + } + + if (i < looked) { + if (ctx->saved.len > i) { + ctx->saved.len = i; + } + + if ((size_t) (p + 1 - ctx->buf->pos) >= looked - i) { + copy_end = p + 1 - (looked - i); + } + + ngx_memmove(ctx->looked.data, ctx->looked.data + i, looked - i); + looked = looked - i; + + } else { + copy_end = p; + looked = 0; + state = sub_start_state; + } + + if (ctx->saved.len) { + p++; + goto out; + } } } + ctx->saved.len = 0; + +out: + ctx->state = state; ctx->pos = p; ctx->looked.len = looked; From mdounin at mdounin.ru Thu Jul 25 11:58:42 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:42 +0000 Subject: [nginx] Fixed ngx_http_test_reading() to finalize request properly. Message-ID: details: http://hg.nginx.org/nginx/rev/aadfadd5af2b branches: changeset: 5289:aadfadd5af2b user: Maxim Dounin date: Fri Jun 14 20:56:07 2013 +0400 description: Fixed ngx_http_test_reading() to finalize request properly. 
Previous code called ngx_http_finalize_request() with rc = 0. This is ok if a response status was already set, but resulted in "000" being logged if it wasn't. In particular this happened with limit_req if a connection was prematurely closed during limit_req delay. diffstat: src/http/ngx_http_request.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2733,7 +2733,7 @@ closed: ngx_log_error(NGX_LOG_INFO, c->log, err, "client prematurely closed connection"); - ngx_http_finalize_request(r, 0); + ngx_http_finalize_request(r, NGX_HTTP_CLIENT_CLOSED_REQUEST); } From mdounin at mdounin.ru Thu Jul 25 11:58:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:43 +0000 Subject: [nginx] Event pipe: fixed writing cache header to a temp file. Message-ID: details: http://hg.nginx.org/nginx/rev/355779f81491 branches: changeset: 5290:355779f81491 user: Maxim Dounin date: Thu Jul 25 14:55:09 2013 +0400 description: Event pipe: fixed writing cache header to a temp file. With previous code the p->temp_file->offset wasn't adjusted if a temp file was written by the code in ngx_event_pipe_write_to_downstream() after an EOF, resulting in cache not being used with empty scgi and uwsgi responses with Content-Length set to 0. Fix it to call ngx_event_pipe_write_chain_to_temp_file() there instead of calling ngx_write_chain_to_temp_file() directly. 
diffstat: src/event/ngx_event_pipe.c | 11 ++++------- 1 files changed, 4 insertions(+), 7 deletions(-) diffs (29 lines): diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c --- a/src/event/ngx_event_pipe.c +++ b/src/event/ngx_event_pipe.c @@ -454,7 +454,7 @@ ngx_event_pipe_write_to_downstream(ngx_e size_t bsize; ngx_int_t rc; ngx_uint_t flush, flushed, prev_last_shadow; - ngx_chain_t *out, **ll, *cl, file; + ngx_chain_t *out, **ll, *cl; ngx_connection_t *downstream; downstream = p->downstream; @@ -514,13 +514,10 @@ ngx_event_pipe_write_to_downstream(ngx_e } if (p->cacheable && p->buf_to_file) { + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, p->log, 0, + "pipe write chain"); - file.buf = p->buf_to_file; - file.next = NULL; - - if (ngx_write_chain_to_temp_file(p->temp_file, &file) - == NGX_ERROR) - { + if (ngx_event_pipe_write_chain_to_temp_file(p) == NGX_ABORT) { return NGX_ABORT; } } From mdounin at mdounin.ru Thu Jul 25 11:58:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:45 +0000 Subject: [nginx] Gzip: clearing of c->buffered if all data are flushed. Message-ID: details: http://hg.nginx.org/nginx/rev/84155a389bcc branches: changeset: 5291:84155a389bcc user: Maxim Dounin date: Thu Jul 25 14:55:32 2013 +0400 description: Gzip: clearing of c->buffered if all data are flushed. This allows to finalize unfinished responses while still sending as much data as available. 
diffstat: src/http/modules/ngx_http_gzip_filter_module.c | 6 ++++-- 1 files changed, 4 insertions(+), 2 deletions(-) diffs (30 lines): diff --git a/src/http/modules/ngx_http_gzip_filter_module.c b/src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c +++ b/src/http/modules/ngx_http_gzip_filter_module.c @@ -368,6 +368,8 @@ ngx_http_gzip_body_filter(ngx_http_reque if (ngx_chain_add_copy(r->pool, &ctx->in, in) != NGX_OK) { goto failed; } + + r->connection->buffered |= NGX_HTTP_GZIP_BUFFERED; } if (ctx->nomem) { @@ -620,8 +622,6 @@ ngx_http_gzip_filter_deflate_start(ngx_h return NGX_ERROR; } - r->connection->buffered |= NGX_HTTP_GZIP_BUFFERED; - ctx->last_out = &ctx->out; ctx->crc32 = crc32(0L, Z_NULL, 0); ctx->flush = Z_NO_FLUSH; @@ -854,6 +854,8 @@ ngx_http_gzip_filter_deflate(ngx_http_re *ctx->last_out = cl; ctx->last_out = &cl->next; + r->connection->buffered &= ~NGX_HTTP_GZIP_BUFFERED; + return NGX_OK; } From mdounin at mdounin.ru Thu Jul 25 11:58:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:46 +0000 Subject: [nginx] Upstream: stale comments removed. Message-ID: details: http://hg.nginx.org/nginx/rev/8f9da50cf912 branches: changeset: 5292:8f9da50cf912 user: Maxim Dounin date: Thu Jun 13 19:52:31 2013 +0400 description: Upstream: stale comments removed. diffstat: src/http/ngx_http_upstream.c | 12 ------------ 1 files changed, 0 insertions(+), 12 deletions(-) diffs (26 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1472,22 +1472,10 @@ ngx_http_upstream_send_request(ngx_http_ ngx_add_timer(c->read, u->conf->read_timeout); -#if 1 if (c->read->ready) { - - /* post aio operation */ - - /* - * TODO comment - * although we can post aio operation just in the end - * of ngx_http_upstream_connect() CHECK IT !!! 
- * it's better to do here because we postpone header buffer allocation - */ - ngx_http_upstream_process_header(r, u); return; } -#endif u->write_event_handler = ngx_http_upstream_dummy_handler; From mdounin at mdounin.ru Thu Jul 25 11:58:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:48 +0000 Subject: [nginx] Upstream: busy lock remnants removed. Message-ID: details: http://hg.nginx.org/nginx/rev/2c4eb6ecba26 branches: changeset: 5293:2c4eb6ecba26 user: Maxim Dounin date: Thu Jul 25 14:55:59 2013 +0400 description: Upstream: busy lock remnants removed. diffstat: src/http/ngx_http_upstream.c | 14 -------------- 1 files changed, 0 insertions(+), 14 deletions(-) diffs (38 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3026,9 +3026,6 @@ ngx_http_upstream_process_request(ngx_ht if (p->upstream_done || p->upstream_eof || p->upstream_error) { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream exit: %p", p->out); -#if 0 - ngx_http_busy_unlock(u->conf->busy_lock, &u->busy_lock); -#endif ngx_http_upstream_finalize_request(r, u, 0); return; } @@ -3139,10 +3136,6 @@ ngx_http_upstream_next(ngx_http_request_ ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http next upstream, %xi", ft_type); -#if 0 - ngx_http_busy_unlock(u->conf->busy_lock, &u->busy_lock); -#endif - if (u->peer.sockaddr) { if (ft_type == NGX_HTTP_UPSTREAM_FT_HTTP_403 @@ -3256,13 +3249,6 @@ ngx_http_upstream_next(ngx_http_request_ u->peer.connection = NULL; } -#if 0 - if (u->conf->busy_lock && !u->busy_locked) { - ngx_http_upstream_busy_lock(p); - return; - } -#endif - ngx_http_upstream_connect(r, u); } From mdounin at mdounin.ru Thu Jul 25 11:58:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:49 +0000 Subject: [nginx] Upstream: consistent error handling after u->input_filte... 
Message-ID: details: http://hg.nginx.org/nginx/rev/d44c3b36c53f branches: changeset: 5294:d44c3b36c53f user: Maxim Dounin date: Thu Jul 25 14:56:00 2013 +0400 description: Upstream: consistent error handling after u->input_filter_init(). In all cases, ngx_http_upstream_finalize_request() with NGX_ERROR is now used. The previously used NGX_HTTP_INTERNAL_SERVER_ERROR in the subrequest-in-memory case didn't cause any harm, but was inconsistent with other uses. diffstat: src/http/ngx_http_upstream.c | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diffs (13 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1682,8 +1682,7 @@ ngx_http_upstream_process_header(ngx_htt } if (u->input_filter_init(u->input_filter_ctx) == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, - NGX_HTTP_INTERNAL_SERVER_ERROR); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } From mdounin at mdounin.ru Thu Jul 25 11:58:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:50 +0000 Subject: [nginx] Upstream: ngx_http_upstream_finalize_request(NGX_ERROR) ... Message-ID: details: http://hg.nginx.org/nginx/rev/a489c31c9783 branches: changeset: 5295:a489c31c9783 user: Maxim Dounin date: Thu Jul 25 14:56:13 2013 +0400 description: Upstream: ngx_http_upstream_finalize_request(NGX_ERROR) on errors. Previously, ngx_http_upstream_finalize_request(0) was used in most cases after errors. While with the current code there is no difference, use of NGX_ERROR allows passing a bit more information into ngx_http_upstream_finalize_request().
diffstat: src/http/ngx_http_upstream.c | 50 ++++++++++++++++++++++---------------------- 1 files changed, 25 insertions(+), 25 deletions(-) diffs (225 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2144,7 +2144,7 @@ ngx_http_upstream_send_response(ngx_http r->limit_rate = 0; if (u->input_filter_init(u->input_filter_ctx) == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2158,7 +2158,7 @@ ngx_http_upstream_send_response(ngx_http { ngx_connection_error(c, ngx_socket_errno, "setsockopt(TCP_NODELAY) failed"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2173,7 +2173,7 @@ ngx_http_upstream_send_response(ngx_http u->state->response_length += n; if (u->input_filter(u->input_filter_ctx, n) == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2184,7 +2184,7 @@ ngx_http_upstream_send_response(ngx_http u->buffer.last = u->buffer.start; if (ngx_http_send_special(r, NGX_HTTP_FLUSH) == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2208,7 +2208,7 @@ ngx_http_upstream_send_response(ngx_http switch (ngx_http_test_predicates(r, u->conf->no_cache)) { case NGX_ERROR: - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; case NGX_DECLINED: @@ -2224,7 +2224,7 @@ ngx_http_upstream_send_response(ngx_http r->cache->file_cache = u->conf->cache->data; if (ngx_http_file_cache_create(r) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } @@ -2285,7 +2285,7 @@ ngx_http_upstream_send_response(ngx_http p->temp_file = ngx_pcalloc(r->pool, 
sizeof(ngx_temp_file_t)); if (p->temp_file == NULL) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2308,7 +2308,7 @@ ngx_http_upstream_send_response(ngx_http p->preread_bufs = ngx_alloc_chain_link(r->pool); if (p->preread_bufs == NULL) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2322,7 +2322,7 @@ ngx_http_upstream_send_response(ngx_http p->buf_to_file = ngx_calloc_buf(r->pool); if (p->buf_to_file == NULL) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2370,7 +2370,7 @@ ngx_http_upstream_send_response(ngx_http if (u->input_filter_init && u->input_filter_init(p->input_ctx) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2412,7 +2412,7 @@ ngx_http_upstream_upgrade(ngx_http_reque { ngx_connection_error(c, ngx_socket_errno, "setsockopt(TCP_NODELAY) failed"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2428,7 +2428,7 @@ ngx_http_upstream_upgrade(ngx_http_reque { ngx_connection_error(u->peer.connection, ngx_socket_errno, "setsockopt(TCP_NODELAY) failed"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2437,7 +2437,7 @@ ngx_http_upstream_upgrade(ngx_http_reque } if (ngx_http_send_special(r, NGX_HTTP_FLUSH) == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2537,7 +2537,7 @@ ngx_http_upstream_process_upgraded(ngx_h if (b->start == NULL) { b->start = ngx_palloc(r->pool, u->conf->buffer_size); if (b->start == NULL) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2560,7 +2560,7 @@ 
ngx_http_upstream_process_upgraded(ngx_h n = dst->send(dst, b->pos, size); if (n == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2615,7 +2615,7 @@ ngx_http_upstream_process_upgraded(ngx_h if (ngx_handle_write_event(upstream->write, u->conf->send_lowat) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2627,7 +2627,7 @@ ngx_http_upstream_process_upgraded(ngx_h } if (ngx_handle_read_event(upstream->read, 0) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2641,12 +2641,12 @@ ngx_http_upstream_process_upgraded(ngx_h if (ngx_handle_write_event(downstream->write, clcf->send_lowat) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } if (ngx_handle_read_event(downstream->read, 0) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2737,7 +2737,7 @@ ngx_http_upstream_process_non_buffered_r rc = ngx_http_output_filter(r, u->out_bufs); if (rc == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2774,7 +2774,7 @@ ngx_http_upstream_process_non_buffered_r u->state->response_length += n; if (u->input_filter(u->input_filter_ctx, n) == NGX_ERROR) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } @@ -2793,7 +2793,7 @@ ngx_http_upstream_process_non_buffered_r if (ngx_handle_write_event(downstream->write, clcf->send_lowat) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } @@ -2806,7 +2806,7 @@ ngx_http_upstream_process_non_buffered_r } if (ngx_handle_read_event(upstream->read, 0) 
!= NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2897,7 +2897,7 @@ ngx_http_upstream_process_downstream(ngx ngx_add_timer(wev, p->send_timeout); if (ngx_handle_write_event(wev, p->send_lowat) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); } return; @@ -2922,7 +2922,7 @@ ngx_http_upstream_process_downstream(ngx "http downstream delayed"); if (ngx_handle_write_event(wev, p->send_lowat) != NGX_OK) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); } return; From mdounin at mdounin.ru Thu Jul 25 11:58:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:51 +0000 Subject: [nginx] Upstream: NGX_HTTP_GATEWAY_TIME_OUT after upstream timeo... Message-ID: details: http://hg.nginx.org/nginx/rev/1ccdda1f37f3 branches: changeset: 5296:1ccdda1f37f3 user: Maxim Dounin date: Thu Jul 25 14:56:20 2013 +0400 description: Upstream: NGX_HTTP_GATEWAY_TIME_OUT after upstream timeouts. There is no real difference from previously used 0 as NGX_HTTP_* will become 0 in ngx_http_upstream_finalize_request(), but the change preserves information about a timeout a bit longer. Previous use of ETIMEDOUT in one place was just wrong. Note well that with cacheable responses there will be a difference (code in ngx_http_upstream_finalize_request() will store the error in cache), though this change doesn't touch cacheable case. 
diffstat: src/http/ngx_http_upstream.c | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diffs (30 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2018,7 +2018,7 @@ ngx_http_upstream_process_body_in_memory if (rev->timedout) { ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); - ngx_http_upstream_finalize_request(r, u, NGX_ETIMEDOUT); + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_GATEWAY_TIME_OUT); return; } @@ -2514,7 +2514,7 @@ ngx_http_upstream_process_upgraded(ngx_h if (upstream->read->timedout || upstream->write->timedout) { ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_GATEWAY_TIME_OUT); return; } @@ -2701,7 +2701,7 @@ ngx_http_upstream_process_non_buffered_u if (c->read->timedout) { ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out"); - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_GATEWAY_TIME_OUT); return; } From mdounin at mdounin.ru Thu Jul 25 11:58:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:53 +0000 Subject: [nginx] Upstream: NGX_ERROR after pipe errors. Message-ID: details: http://hg.nginx.org/nginx/rev/0ae9a2958886 branches: changeset: 5297:0ae9a2958886 user: Maxim Dounin date: Thu Jul 25 14:56:41 2013 +0400 description: Upstream: NGX_ERROR after pipe errors. 
diffstat: src/http/ngx_http_upstream.c | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diffs (39 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2904,7 +2904,7 @@ ngx_http_upstream_process_downstream(ngx } if (ngx_event_pipe(p, wev->write) == NGX_ABORT) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } @@ -2929,7 +2929,7 @@ ngx_http_upstream_process_downstream(ngx } if (ngx_event_pipe(p, 1) == NGX_ABORT) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } @@ -2957,7 +2957,7 @@ ngx_http_upstream_process_upstream(ngx_h } else { if (ngx_event_pipe(u->pipe, 0) == NGX_ABORT) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } } @@ -3035,7 +3035,7 @@ ngx_http_upstream_process_request(ngx_ht "http upstream downstream error"); if (!u->cacheable && !u->store && u->peer.connection) { - ngx_http_upstream_finalize_request(r, u, 0); + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); } } } From mdounin at mdounin.ru Thu Jul 25 11:58:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:54 +0000 Subject: [nginx] Upstream: replaced u->pipe->temp_file with p->temp_file. Message-ID: details: http://hg.nginx.org/nginx/rev/a7b2db9119e0 branches: changeset: 5298:a7b2db9119e0 user: Maxim Dounin date: Thu Jul 25 14:56:49 2013 +0400 description: Upstream: replaced u->pipe->temp_file with p->temp_file. While here, redundant parentheses removed. No functional changes. 
diffstat: src/http/ngx_http_upstream.c | 10 +++++----- 1 files changed, 5 insertions(+), 5 deletions(-) diffs (40 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2982,11 +2982,11 @@ ngx_http_upstream_process_request(ngx_ht if (p->upstream_eof || p->upstream_done) { - tf = u->pipe->temp_file; + tf = p->temp_file; if (u->headers_in.status_n == NGX_HTTP_OK && (u->headers_in.content_length_n == -1 - || (u->headers_in.content_length_n == tf->offset))) + || u->headers_in.content_length_n == tf->offset)) { ngx_http_upstream_store(r, u); u->store = 0; @@ -2999,11 +2999,11 @@ ngx_http_upstream_process_request(ngx_ht if (u->cacheable) { if (p->upstream_done) { - ngx_http_file_cache_update(r, u->pipe->temp_file); + ngx_http_file_cache_update(r, p->temp_file); } else if (p->upstream_eof) { - tf = u->pipe->temp_file; + tf = p->temp_file; if (u->headers_in.content_length_n == -1 || u->headers_in.content_length_n @@ -3016,7 +3016,7 @@ ngx_http_upstream_process_request(ngx_ht } } else if (p->upstream_error) { - ngx_http_file_cache_free(r->cache, u->pipe->temp_file); + ngx_http_file_cache_free(r->cache, p->temp_file); } } From mdounin at mdounin.ru Thu Jul 25 11:58:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:55 +0000 Subject: [nginx] Upstream: fixed store/cache of unfinished responses. Message-ID: details: http://hg.nginx.org/nginx/rev/b779728b180c branches: changeset: 5299:b779728b180c user: Maxim Dounin date: Thu Jul 25 14:56:59 2013 +0400 description: Upstream: fixed store/cache of unfinished responses. In case of upstream eof, only responses with u->pipe->length == -1 are now cached/stored. This ensures that unfinished chunked responses are not cached. Note well - previously used checks for u->headers_in.content_length_n are preserved. 
This provides an additional level of protection if protocol data disagree with the Content-Length header provided (e.g., a FastCGI response is sent with a wrong Content-Length, or an incomplete SCGI or uwsgi response), as well as protecting against storing responses to HEAD requests. This should be reconsidered if we ever consider caching responses to HEAD requests. diffstat: src/http/ngx_http_upstream.c | 8 +++++--- 1 files changed, 5 insertions(+), 3 deletions(-) diffs (25 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2985,6 +2985,7 @@ ngx_http_upstream_process_request(ngx_ht tf = p->temp_file; if (u->headers_in.status_n == NGX_HTTP_OK + && (p->upstream_done || p->length == -1) && (u->headers_in.content_length_n == -1 || u->headers_in.content_length_n == tf->offset)) { @@ -3005,9 +3006,10 @@ ngx_http_upstream_process_request(ngx_ht tf = p->temp_file; - if (u->headers_in.content_length_n == -1 - || u->headers_in.content_length_n - == tf->offset - (off_t) r->cache->body_start) + if (p->length == -1 + && (u->headers_in.content_length_n == -1 + || u->headers_in.content_length_n + == tf->offset - (off_t) r->cache->body_start)) { ngx_http_file_cache_update(r, tf); From mdounin at mdounin.ru Thu Jul 25 11:58:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:56 +0000 Subject: [nginx] Upstream: u->length now defaults to -1 (API change). Message-ID: details: http://hg.nginx.org/nginx/rev/f538a67c9f77 branches: changeset: 5300:f538a67c9f77 user: Maxim Dounin date: Thu Jul 25 14:58:11 2013 +0400 description: Upstream: u->length now defaults to -1 (API change). That is, by default we assume that the response end is signalled by a connection close. This seems to be a better default, and is in line with u->pipe->length behaviour. The memcached module was modified accordingly.
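The new u->length convention described above (a length of -1 meaning "the response ends at connection close") can be illustrated with a small standalone helper (hypothetical names, not nginx code):

```c
#include <assert.h>

/* Sketch of the length convention after the change (hypothetical):
 *   length > 0  : that many bytes are still expected
 *   length == 0 : the response is complete
 *   length == -1: the response ends at connection close (EOF) */
static int response_complete(long length, int eof)
{
    if (length == 0) {
        return 1;              /* all declared bytes consumed */
    }

    if (length == -1 && eof) {
        return 1;              /* EOF-delimited response finished */
    }

    return 0;                  /* still expecting data */
}
```

With this convention a premature close becomes detectable: EOF while a non-negative length is still outstanding means the response is incomplete, which is exactly what the "added check if a response is complete" changeset later turns into a logged error and NGX_HTTP_BAD_GATEWAY.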
diffstat: src/http/modules/ngx_http_memcached_module.c | 5 ++++- src/http/ngx_http_upstream.c | 2 +- 2 files changed, 5 insertions(+), 2 deletions(-) diffs (28 lines): diff --git a/src/http/modules/ngx_http_memcached_module.c b/src/http/modules/ngx_http_memcached_module.c --- a/src/http/modules/ngx_http_memcached_module.c +++ b/src/http/modules/ngx_http_memcached_module.c @@ -441,8 +441,11 @@ ngx_http_memcached_filter_init(void *dat u = ctx->request->upstream; if (u->headers_in.status_n != 404) { - u->length += NGX_HTTP_MEMCACHED_END; + u->length = u->headers_in.content_length_n + NGX_HTTP_MEMCACHED_END; ctx->rest = NGX_HTTP_MEMCACHED_END; + + } else { + u->length = 0; } return NGX_OK; diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1994,7 +1994,7 @@ ngx_http_upstream_process_headers(ngx_ht r->headers_out.content_length_n = u->headers_in.content_length_n; - u->length = u->headers_in.content_length_n; + u->length = -1; return NGX_OK; } From mdounin at mdounin.ru Thu Jul 25 11:58:58 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:58 +0000 Subject: [nginx] Upstream: added check if a response is complete. Message-ID: details: http://hg.nginx.org/nginx/rev/a50e26148d21 branches: changeset: 5301:a50e26148d21 user: Maxim Dounin date: Thu Jul 25 15:00:12 2013 +0400 description: Upstream: added check if a response is complete. Checks were added to both buffered and unbuffered code paths to detect and complain if a response is incomplete. Appropriate error codes are now passed to ngx_http_upstream_finalize_request(). With this change, in unbuffered mode we now use u->length set to -1 as an indicator that EOF is allowed per protocol and used to indicate response end (much like it is with p->length in buffered mode).
Proxy module was changed to set u->length to 1 (instead of previously used -1) in case of chunked transfer encoding used to comply with the above. diffstat: src/http/modules/ngx_http_proxy_module.c | 2 +- src/http/ngx_http_upstream.c | 33 +++++++++++++++++++++++++++++-- 2 files changed, 31 insertions(+), 4 deletions(-) diffs (67 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -1542,7 +1542,7 @@ ngx_http_proxy_input_filter_init(void *d u->pipe->length = 3; /* "0" LF LF */ u->input_filter = ngx_http_proxy_non_buffered_chunked_filter; - u->length = -1; + u->length = 1; } else if (u->headers_in.content_length_n == 0) { /* empty body: special case as filter won't be called */ diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2748,13 +2748,27 @@ ngx_http_upstream_process_non_buffered_r if (u->busy_bufs == NULL) { if (u->length == 0 - || upstream->read->eof - || upstream->read->error) + || (upstream->read->eof && u->length == -1)) { ngx_http_upstream_finalize_request(r, u, 0); return; } + if (upstream->read->eof) { + ngx_log_error(NGX_LOG_ERR, upstream->log, 0, + "upstream prematurely closed connection"); + + ngx_http_upstream_finalize_request(r, u, + NGX_HTTP_BAD_GATEWAY); + return; + } + + if (upstream->read->error) { + ngx_http_upstream_finalize_request(r, u, + NGX_HTTP_BAD_GATEWAY); + return; + } + b->pos = b->start; b->last = b->start; } @@ -3027,7 +3041,20 @@ ngx_http_upstream_process_request(ngx_ht if (p->upstream_done || p->upstream_eof || p->upstream_error) { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream exit: %p", p->out); - ngx_http_upstream_finalize_request(r, u, 0); + + if (p->upstream_done + || (p->upstream_eof && p->length == -1)) + { + ngx_http_upstream_finalize_request(r, u, 0); + 
return; + } + + if (p->upstream_eof) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "upstream prematurely closed connection"); + } + + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY); return; } } From mdounin at mdounin.ru Thu Jul 25 11:58:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:58:59 +0000 Subject: [nginx] Upstream: NGX_HTTP_CLIENT_CLOSED_REQUEST no longer reset... Message-ID: details: http://hg.nginx.org/nginx/rev/292c92fb05d7 branches: changeset: 5302:292c92fb05d7 user: Maxim Dounin date: Thu Jul 25 15:00:25 2013 +0400 description: Upstream: NGX_HTTP_CLIENT_CLOSED_REQUEST no longer reset to 0. The NGX_HTTP_CLIENT_CLOSED_REQUEST code is allowed to happen after we have started sending a response (much like NGX_HTTP_REQUEST_TIME_OUT), so there is no need to reset the response code to 0 in this case. diffstat: src/http/ngx_http_upstream.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3405,6 +3405,7 @@ ngx_http_upstream_finalize_request(ngx_h if (u->header_sent && rc != NGX_HTTP_REQUEST_TIME_OUT + && rc != NGX_HTTP_CLIENT_CLOSED_REQUEST && (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE)) { rc = 0; From mdounin at mdounin.ru Thu Jul 25 11:59:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:59:01 +0000 Subject: [nginx] Upstream: request finalization rework. Message-ID: details: http://hg.nginx.org/nginx/rev/0fb714d80909 branches: changeset: 5303:0fb714d80909 user: Maxim Dounin date: Thu Jul 25 15:00:29 2013 +0400 description: Upstream: request finalization rework. No semantic changes expected, though some checks are done differently. In particular, the r->cached flag is no longer explicitly checked. Instead, we rely on u->header_sent not being set if a response is sent from a cache.
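The reworked order of checks described above can be condensed into a small pure function (a hypothetical helper with stand-in values for the NGX_* codes, not the actual nginx implementation):

```c
#include <assert.h>

enum {
    RC_ERROR         = -1,    /* stand-in for NGX_ERROR */
    RC_SPECIAL       = 300,   /* stand-in for NGX_HTTP_SPECIAL_RESPONSE */
    RC_TIMEOUT       = 408,   /* NGX_HTTP_REQUEST_TIME_OUT */
    RC_CLIENT_CLOSED = 499    /* NGX_HTTP_CLIENT_CLOSED_REQUEST */
};

/* Returns the code ultimately handed to the generic finalizer,
 * mirroring the reworked check order: pass rc through while the header
 * is unsent (or for the two "client is gone" codes), otherwise collapse
 * errors to 0 so the already-started response is completed normally. */
static int effective_rc(int rc, int header_sent)
{
    if (!header_sent || rc == RC_TIMEOUT || rc == RC_CLIENT_CLOSED) {
        return rc;
    }

    if (rc == RC_ERROR || rc >= RC_SPECIAL) {
        rc = 0;   /* header already sent: finish the response instead */
    }

    return rc;    /* rc == 0 means "send the last buffer" */
}
```

Note how a cached response needs no explicit r->cached check in this scheme: if the response came from the cache, u->header_sent was never set by the upstream code, so the first branch passes rc through unchanged.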
diffstat: src/http/ngx_http_upstream.c | 31 +++++++++++++++++-------------- 1 files changed, 17 insertions(+), 14 deletions(-) diffs (48 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3403,27 +3403,30 @@ ngx_http_upstream_finalize_request(ngx_h #endif - if (u->header_sent - && rc != NGX_HTTP_REQUEST_TIME_OUT - && rc != NGX_HTTP_CLIENT_CLOSED_REQUEST - && (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE)) - { - rc = 0; - } - if (rc == NGX_DECLINED) { return; } r->connection->log->action = "sending to client"; - if (rc == 0 - && !r->header_only -#if (NGX_HTTP_CACHE) - && !r->cached -#endif - ) + if (!u->header_sent + || rc == NGX_HTTP_REQUEST_TIME_OUT + || rc == NGX_HTTP_CLIENT_CLOSED_REQUEST) { + ngx_http_finalize_request(r, rc); + return; + } + + if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) { + rc = 0; + } + + if (r->header_only) { + ngx_http_finalize_request(r, rc); + return; + } + + if (rc == 0) { rc = ngx_http_send_special(r, NGX_HTTP_LAST); } From mdounin at mdounin.ru Thu Jul 25 11:59:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 11:59:02 +0000 Subject: [nginx] Upstream: no last buffer on errors. Message-ID: details: http://hg.nginx.org/nginx/rev/d3eab5e2df5f branches: changeset: 5304:d3eab5e2df5f user: Maxim Dounin date: Thu Jul 25 15:00:41 2013 +0400 description: Upstream: no last buffer on errors. Previously, after sending a header we always sent a last buffer and finalized a request with code 0, even in case of errors. In some cases this resulted in a loss of the ability to detect that the response wasn't complete (e.g. if Content-Length was removed from a response by the gzip filter). This change tries to propagate to the client the information that a response isn't complete in such cases. In particular, with this change we no longer pretend a returned response is complete if we weren't able to create a temporary file.
If an error code suggests the error wasn't fatal, we flush buffered data and disable keepalive, then finalize the request normally. This allows us to propagate information about a problem to the client, while still sending all the data we've got from the upstream. diffstat: src/http/ngx_http_upstream.c | 12 ++++++++++-- 1 files changed, 10 insertions(+), 2 deletions(-) diffs (36 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3297,6 +3297,7 @@ static void ngx_http_upstream_finalize_request(ngx_http_request_t *r, ngx_http_upstream_t *u, ngx_int_t rc) { + ngx_uint_t flush; ngx_time_t *tp; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -3417,8 +3418,11 @@ ngx_http_upstream_finalize_request(ngx_h return; } - if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) { - rc = 0; + flush = 0; + + if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { + rc = NGX_ERROR; + flush = 1; } if (r->header_only) { @@ -3428,6 +3432,10 @@ ngx_http_upstream_finalize_request(ngx_h if (rc == 0) { rc = ngx_http_send_special(r, NGX_HTTP_LAST); + + } else if (flush) { + r->keepalive = 0; + rc = ngx_http_send_special(r, NGX_HTTP_FLUSH); } ngx_http_finalize_request(r, rc); From mdounin at mdounin.ru Thu Jul 25 16:16:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2013 20:16:15 +0400 Subject: [PATCH] Make ngx_http_upstream provide a way to expose errors after sending out the response header In-Reply-To: <20130409003030.GZ62550@mdounin.ru> References: <20120911002335.GW40452@mdounin.ru> <20130408002317.GK62550@mdounin.ru> <20130408223628.GY62550@mdounin.ru> <20130409003030.GZ62550@mdounin.ru> Message-ID: <20130725161615.GJ90722@mdounin.ru> Hello! On Tue, Apr 09, 2013 at 04:30:30AM +0400, Maxim Dounin wrote: > On Mon, Apr 08, 2013 at 04:12:18PM -0700, agentzh wrote: > > > Will you work on the patch directly? This issue keeps bothering me
> > > > Guessing your mind is no easy task for me and I've ended up tweaking > > my patches over and over again without real gains ;) > > I have plans to start working on upstream error handling cleanup, > and on this problem in particular, in about two weeks. TWIMC, I've committed upstream error handling cleanup patch series, see here: http://hg.nginx.org/nginx/rev/d3eab5e2df5f (and previous 20 patches). In case of fatal errors (like memory allocation problems and so on) it now just calls ngx_http_finalize_request(NGX_ERROR), which in turn results in a connection being closed. If nginx detects incomplete response from an upstream server, it now only flushes pending data and then finalizes request normally without sending a last buffer. It's a bit less radical than finalization with NGX_ERROR and ensures that everything we've got from an upstream server is sent to a client. -- Maxim Dounin http://nginx.org/en/donation.html From jzefip at gmail.com Fri Jul 26 06:05:11 2013 From: jzefip at gmail.com (Julien Zefi) Date: Fri, 26 Jul 2013 00:05:11 -0600 Subject: Looking for developer to fix a NginX test case module Message-ID: Hi, As i am not able to fix a problem for a NginX module that i am writing, and after iterate a couple of times in the mailing list, i am still not able to solve the problem. So i would like to offer 100 USD for who is able to help me to fix the problem in my test case module. As a reference please check the following thread: http://mailman.nginx.org/pipermail/nginx-devel/2013-July/003923.html If you think you can fix it, please reply me back in private so we can discuss how to proceed, thanks. Julien. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sepherosa at gmail.com Fri Jul 26 08:07:58 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Fri, 26 Jul 2013 16:07:58 +0800 Subject: [PATCH] SO_REUSEPORT support for listen sockets Message-ID: Hi all, I have the following preliminary patch to enable the SO_REUSEPORT feature on listen sockets: http://leaf.dragonflybsd.org/~sephe/ngx_soreuseport.diff The basic idea of the patch is: - Defer the listen socket creation until worker processes are forked - Each worker process creates its own listen socket and sets SO_REUSEPORT before bind(2) - The accept mutex is no longer needed, since worker processes no longer contend on a single listen socket The SO_REUSEPORT sockopt on Linux: https://lwn.net/Articles/542629/ The SO_REUSEPORT sockopt on DragonFlyBSD: http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/740d1d9f7b7bf9c9c021abb8197718d7a2d441c9 The non-blocking accept(2) w/ SO_REUSEPORT performance improvement on DragonFlyBSD: http://lists.dragonflybsd.org/pipermail/users/2013-July/053632.html A preliminary httperf test shows that "so_reuseport on" gives me a ~33% req/s performance improvement on DragonFlyBSD. httperf is run as: httperf --server=$server_name --wsess=5000,1,1 --max-conn=4 Same testing machines and network configuration as in: http://lists.dragonflybsd.org/pipermail/users/2013-July/053632.html Each client runs 16 instances of the above httperf test, except the box w/ bce, which runs 8. The nginx w/ "so_reuseport on" is doing 49852 reqs/s (4 run avg) and there is 35%~40% idle time on each hyperthread. The nginx w/o "so_reuseport on" is doing 37386 reqs/s (4 run avg). Any feedback is welcome.
Best Regards, sephe -- Tomorrow Will Never Die From info at tvdw.eu Fri Jul 26 08:31:13 2013 From: info at tvdw.eu (Tom van der Woerdt) Date: Fri, 26 Jul 2013 10:31:13 +0200 Subject: [PATCH] SO_REUSEPORT support for listen sockets In-Reply-To: References: Message-ID: <49E152DD-DF28-4AF0-92C2-7E3CFA0F8EB7@tvdw.eu> You might want to add some checks to ensure compilation doesn't fail on platforms that don't define the constant, such as Windows. Tom > On 26 jul. 2013, at 10:07, Sepherosa Ziehau wrote: > > Hi all, > > I have the following preliminary patch to enable SO_REUSEPORT feature > on listen sockets: > http://leaf.dragonflybsd.org/~sephe/ngx_soreuseport.diff > > The basic idea of the above patch is: > - Defer the listen socket creation until work processes are forked > - Work process creates listen socket, and set SO_REUSEPORT before bind(2) > - Accept mutex is no longer needed, since worker process is not > contended on the single listen socket anymore > > > The SO_REUSEPORT sockopt on Linux: > https://lwn.net/Articles/542629/ > > The SO_REUSEPORT sockopt on DragonFlyBSD: > http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/740d1d9f7b7bf9c9c021abb8197718d7a2d441c9 > > The non-blocking accept(2) w/ SO_REUSEPORT performance improvement on > DragonFlyBSD: > http://lists.dragonflybsd.org/pipermail/users/2013-July/053632.html > > > The preliminary httperf test shows w/ "so_reuseport on" gives me ~33% > req/s performance improvement on DragonFlyBSD: > > httperf is running as: > httperf --server=$server_name --wsess=5000,1,1 --max-conn=4 > > Same testing machines and network configuration as in: > http://lists.dragonflybsd.org/pipermail/users/2013-July/053632.html > > Each client runs 16 above httperf test, except the box w/ bce, which > runs 8 above httperf. > > The nginx w/ "so_reuseport on" is doing 49852 reqs/s (4 run avg) and > there are 35%~40% idle time on each hyperthread. > The nginx w/o "so_reuseport on" is doing 37386 reqs/s (4 run avg). 
> > > Any feedbacks are welcome. > > Best Regards, > sephe > > -- > Tomorrow Will Never Die > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From piotr at cloudflare.com Fri Jul 26 10:59:47 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 26 Jul 2013 03:59:47 -0700 Subject: [PATCH] SO_REUSEPORT support for listen sockets In-Reply-To: References: Message-ID: Hey, > @@ -76,6 +78,13 @@ > 0, > NULL }, > > + { ngx_string("so_reuseport"), > + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, > + ngx_set_so_reuseport, > + 0, > + 0, > + NULL }, > + > { ngx_string("debug_points"), > NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, > ngx_conf_set_enum_slot, > @@ -1361,3 +1370,24 @@ > > return NGX_CONF_OK; > } > + > + > +static char * > +ngx_set_so_reuseport(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > +{ > + ngx_str_t *value; > + ngx_core_conf_t *ccf; > + > + ccf = (ngx_core_conf_t *) conf; > + > + value = (ngx_str_t *) cf->args->elts; > + > + if (ngx_strcmp(value[1].data, "on") == 0) { > + ccf->so_reuseport = 1; > + } else if (ngx_strcmp(value[1].data, "off") == 0) { > + ccf->so_reuseport = 0; > + } else { > + return "invalid value"; > + } > + return NGX_CONF_OK; > +} This can be replaced with ngx_conf_set_flag_slot(), i.e.: + { ngx_string("so_reuseport"), + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + 0, + offsetof(ngx_core_conf_t, so_reuseport), + NULL }, Also: 1) like Tom said, you definitely need to guard code to make sure SO_REUSEPORT is available, 2) this feature should be disabled on DragonFly versions prior to the 740d1d9 commit, because it clearly wouldn't do any good there, 3) it might make sense to expose this as an option of "listen" directive, instead of a global setting, 4) how does this (OS-level sharding) play with nginx's upgrade process (forking of new binary and passing listening fds)? 
Are there any side-effects of this change that could result in dropped packets / requests? Best regards, Piotr Sikora From mdounin at mdounin.ru Fri Jul 26 11:24:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Jul 2013 15:24:56 +0400 Subject: [PATCH] SO_REUSEPORT support for listen sockets In-Reply-To: References: Message-ID: <20130726112456.GO90722@mdounin.ru> Hello! On Fri, Jul 26, 2013 at 03:59:47AM -0700, Piotr Sikora wrote: > Hey, > > > @@ -76,6 +78,13 @@ > > 0, > > NULL }, > > > > + { ngx_string("so_reuseport"), > > + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, > > + ngx_set_so_reuseport, > > + 0, > > + 0, > > + NULL }, > > + > > { ngx_string("debug_points"), > > NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, > > ngx_conf_set_enum_slot, > > @@ -1361,3 +1370,24 @@ > > > > return NGX_CONF_OK; > > } > > + > > + > > +static char * > > +ngx_set_so_reuseport(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > > +{ > > + ngx_str_t *value; > > + ngx_core_conf_t *ccf; > > + > > + ccf = (ngx_core_conf_t *) conf; > > + > > + value = (ngx_str_t *) cf->args->elts; > > + > > + if (ngx_strcmp(value[1].data, "on") == 0) { > > + ccf->so_reuseport = 1; > > + } else if (ngx_strcmp(value[1].data, "off") == 0) { > > + ccf->so_reuseport = 0; > > + } else { > > + return "invalid value"; > > + } > > + return NGX_CONF_OK; > > +} > > This can be replaced with ngx_conf_set_flag_slot(), i.e.: > > + { ngx_string("so_reuseport"), > + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_FLAG, > + ngx_conf_set_flag_slot, > + 0, > + offsetof(ngx_core_conf_t, so_reuseport), > + NULL }, If it's kept as a global setting, it would be good idea to move this into events module if possible. 
> Also: > 1) like Tom said, you definitely need to guard code to make sure > SO_REUSEPORT is available, > 2) this feature should be disabled on DragonFly versions prior to the > 740d1d9 commit, because it clearly wouldn't do any good there, I believe SO_REUSEPORT doesn't do accept() load balancing on many OSes right now (e.g. FreeBSD doesn't do that), and it might not be a good idea to track this in nginx code. It might be better to just allow users to decide whether to use it or not. Not sure though. > 3) it might make sense to expose this as an option of "listen" > directive, instead of a global setting, Agree. > 4) how does this (OS-level sharding) play with nginx's upgrade process > (forking of new binary and passing listening fds)? Are there any > side-effects of this change that could result in dropped packets / > requests? And obvious downside I see is that with the SO_REUSEPORT causes OS to allow duplicate bindings from different processes, which makes it possible to unintentionally run 2 copies of nginx. It might be also possible that configuration test will start to do bad things as a result. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Jul 26 13:31:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Jul 2013 17:31:36 +0400 Subject: Looking for developer to fix a NginX test case module In-Reply-To: References: Message-ID: <20130726133136.GP90722@mdounin.ru> Hello! On Fri, Jul 26, 2013 at 12:05:11AM -0600, Julien Zefi wrote: > Hi, > > As i am not able to fix a problem for a NginX module that i am writing, and > after iterate a couple of times in the mailing list, i am still not able to > solve the problem. So i would like to offer 100 USD for who is able to help > me to fix the problem in my test case module. 
>
> As a reference please check the following thread:
>
> http://mailman.nginx.org/pipermail/nginx-devel/2013-July/003923.html
>
> If you think you can fix it, please reply me back in private so we can
> discuss how to proceed,

Below is a working example of what you've tried to do. It still lacks
various things like request body handling, which a real module should
have here, but it's expected to work.

Notable problems in your code:

1) You tried to set low-level event handlers. That's not what you are
   supposed to do.

2) On the other hand, after handling a request's write event you are
   responsible for calling ngx_handle_write_event().

3) The r->count++ operation was done too many times; that's just wrong.

4) For some unknown reason r->header_only was set. That's just wrong.

5) The return code from ngx_http_send_header() wasn't handled.

6) The code returned from ngx_http_test_handler() was NGX_OK instead of
   NGX_DONE, which is expected to be used if you want to continue
   processing.

7) ... (there were many others, but I'm too lazy to remember all of them)

Overall, I would recommend you to read and understand at least several
standard modules _before_ you start further coding.
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>


typedef struct {
    ngx_buf_t  *buf;
} ngx_http_test_ctx_t;


static void ngx_http_test_stream_handler(ngx_http_request_t *r);
static char *ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);


static u_char message[] =
    "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
    "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
    "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
    "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
    "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
    "\r\n";


static ngx_command_t  ngx_http_test_case_commands[] = {

    { ngx_string("test_module"),
      NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS,
      ngx_http_test,
      0,
      0,
      NULL },

      ngx_null_command
};


static ngx_http_module_t  ngx_http_test_case_ctx = {
    NULL,                          /* preconfiguration */
    NULL,                          /* postconfiguration */

    NULL,                          /* create main configuration */
    NULL,                          /* init main configuration */

    NULL,                          /* create server configuration */
    NULL,                          /* merge server configuration */

    NULL,                          /* create location configuration */
    NULL                           /* merge location configuration */
};


ngx_module_t  ngx_http_test_module = {
    NGX_MODULE_V1,
    &ngx_http_test_case_ctx,       /* module context */
    ngx_http_test_case_commands,   /* module directives */
    NGX_HTTP_MODULE,               /* module type */
    NULL,                          /* init master */
    NULL,                          /* init module */
    NULL,                          /* init process */
    NULL,                          /* init thread */
    NULL,                          /* exit thread */
    NULL,                          /* exit process */
    NULL,                          /* exit master */
    NGX_MODULE_V1_PADDING
};


static void
ngx_http_test_stream_handler(ngx_http_request_t *r)
{
    ngx_int_t             rc;
    ngx_buf_t            *b;
    ngx_chain_t          *out, cl;
    ngx_http_test_ctx_t  *ctx;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "test stream handler");

    ctx = ngx_http_get_module_ctx(r, ngx_http_test_module);

    if (ctx == NULL) {

        /*
         * create new context and a buffer we will use to send
         * our data
         */

        ctx = ngx_palloc(r->pool, sizeof(ngx_http_test_ctx_t));
        if (ctx == NULL) {
            ngx_http_finalize_request(r, NGX_ERROR);
            return;
        }

        ctx->buf = ngx_create_temp_buf(r->pool, sizeof(message));

        ngx_http_set_ctx(r, ctx, ngx_http_test_module);
    }

    out = NULL;
    b = ctx->buf;

    if (ngx_buf_size(b) == 0) {

        /* new buffer or everything sent, start over */

        b->pos = b->start;
        b->last = ngx_cpymem(b->pos, message, sizeof(message) - 1);
        b->flush = 1;

        cl.buf = b;
        cl.next = NULL;

        out = &cl;
    }

    rc = ngx_http_output_filter(r, out);

    if (rc == NGX_ERROR) {
        ngx_http_finalize_request(r, rc);
        return;
    }

    if (ngx_buf_size(b) == 0) {
        ngx_add_timer(r->connection->write, 1);
    }

    if (ngx_handle_write_event(r->connection->write, 0) != NGX_OK) {
        ngx_http_finalize_request(r, NGX_ERROR);
        return;
    }
}


static ngx_int_t
ngx_http_test_handler(ngx_http_request_t *r)
{
    ngx_int_t  rc;

    /* set response headers */

    r->headers_out.content_type.len = sizeof("video/mp2t") - 1;
    r->headers_out.content_type.data = (u_char *) "video/mp2t";
    r->headers_out.status = NGX_HTTP_OK;

    r->write_event_handler = ngx_http_test_stream_handler;

    rc = ngx_http_send_header(r);

    if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
        return rc;
    }

    r->main->count++;

    ngx_http_test_stream_handler(r);

    return NGX_DONE;
}


static char *
ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_http_core_loc_conf_t  *clcf;

    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
    clcf->handler = ngx_http_test_handler;

    return NGX_CONF_OK;
}

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From witekfl at gazeta.pl  Sat Jul 27 08:49:45 2013
From: witekfl at gazeta.pl (Witold Filipczyk)
Date: Sat, 27 Jul 2013 10:49:45 +0200
Subject: appcache
Message-ID: 

Hi,

could you add

    text/cache-manifest    appcache;

to the mime-types? TIA.

http://www.w3schools.com/html/html5_app_cache.asp

From wandenberg at gmail.com  Sat Jul 27 19:10:51 2013
From: wandenberg at gmail.com (Wandenberg Peixoto)
Date: Sat, 27 Jul 2013 16:10:51 -0300
Subject: Help with shared memory usage
In-Reply-To: <20130701113629.GO20717@mdounin.ru>
References: <20130701113629.GO20717@mdounin.ru>
Message-ID: 

Hello Maxim.
I've been looking into those functions and, guided by your comments,
made the following patch to merge contiguous blocks of memory.
Can you check if it is ok? Comments are welcome.

--- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300
+++ src/core/ngx_slab.c 2013-07-27 15:54:55.316995223 -0300
@@ -687,6 +687,25 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
     page->next->prev = (uintptr_t) page;

     pool->free.next = page;
+
+    for (page = pool->pages; ((page->slab > 0) && (&page[page->slab] < (ngx_slab_page_t *) (pool->start - sizeof(ngx_slab_page_t))));) {
+        ngx_slab_page_t *neighbour = &page[page->slab];
+        if (((ngx_slab_page_t *) page->prev != NULL) && (page->next != NULL) && ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) &&
+            ((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
+
+            page->slab += neighbour->slab;
+
+            ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
+            neighbour->next->prev = neighbour->prev;
+
+            neighbour->slab = NGX_SLAB_PAGE_FREE;
+            neighbour->prev = (uintptr_t) &pool->free;
+            neighbour->next = &pool->free;
+            continue;
+        }
+
+        page += ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) ? page->slab : 1;
+    }
 }

Regards,
Wandenberg

On Mon, Jul 1, 2013 at 8:36 AM, Maxim Dounin wrote:

> Hello!
>
> On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote:
>
> > Hi,
> >
> > I'm trying to understand how the shared memory pool works inside the
> Nginx.
> > To do that, I made a very small module which create a shared memory zone
> > with 2097152 bytes,
> > and allocating and freeing blocks of memory, starting from 0 and
> increasing
> > by 1kb until the allocation fails.
> > > > The strange parts to me were: > > - the maximum block I could allocate was 128000 bytes > > - each time the allocation fails, I started again from 0, but the maximum > > allocated block changed with the following profile > > 128000 > > 87040 > > 70656 > > 62464 > > 58368 > > 54272 > > 50176 > > 46080 > > 41984 > > 37888 > > 33792 > > 29696 > > > > This is the expected behavior? > > Can anyone help me explaining how shared memory works? > > I have another module which do an intensive shared memory usage, and > > understanding this can help me improve it solving some "no memory" > messages. > > > > I put the code in attach. > > I've looked into this, and the behaviour is expected as per > nginx slab allocator code and the way you do allocations in your > test. > > Increasing allocations of large blocks immediately followed by > freeing them result in free memory blocks split into smaller > blocks, eventually resulting in at most page size allocations > being possible. Take a look at ngx_slab_alloc_pages() and > ngx_slab_free_pages() for details. > > Note that slab allocator nginx uses for allocations in shared > memory is designed mostly for small allocations. It works well > for allocations less than page size, but large allocations support > is very simple. Probably it should be improved, but as of now > nothing in nginx uses large allocations in shared memory. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: unfrag_slab_memory.patch Type: application/octet-stream Size: 1210 bytes Desc: not available URL: From sepherosa at gmail.com Sun Jul 28 13:11:26 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Sun, 28 Jul 2013 21:11:26 +0800 Subject: [PATCH] SO_REUSEPORT support for listen sockets In-Reply-To: References: Message-ID: On Fri, Jul 26, 2013 at 6:59 PM, Piotr Sikora wrote: > Hey, > >> @@ -76,6 +78,13 @@ >> 0, >> NULL }, >> >> + { ngx_string("so_reuseport"), >> + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, >> + ngx_set_so_reuseport, >> + 0, >> + 0, >> + NULL }, >> + >> { ngx_string("debug_points"), >> NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, >> ngx_conf_set_enum_slot, >> @@ -1361,3 +1370,24 @@ >> >> return NGX_CONF_OK; >> } >> + >> + >> +static char * >> +ngx_set_so_reuseport(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) >> +{ >> + ngx_str_t *value; >> + ngx_core_conf_t *ccf; >> + >> + ccf = (ngx_core_conf_t *) conf; >> + >> + value = (ngx_str_t *) cf->args->elts; >> + >> + if (ngx_strcmp(value[1].data, "on") == 0) { >> + ccf->so_reuseport = 1; >> + } else if (ngx_strcmp(value[1].data, "off") == 0) { >> + ccf->so_reuseport = 0; >> + } else { >> + return "invalid value"; >> + } >> + return NGX_CONF_OK; >> +} > > This can be replaced with ngx_conf_set_flag_slot(), i.e.: > > + { ngx_string("so_reuseport"), > + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_FLAG, > + ngx_conf_set_flag_slot, > + 0, > + offsetof(ngx_core_conf_t, so_reuseport), > + NULL }, > > Also: > 1) like Tom said, you definitely need to guard code to make sure > SO_REUSEPORT is available, Yeah. I will take care of it. > 2) this feature should be disabled on DragonFly versions prior to the > 740d1d9 commit, because it clearly wouldn't do any good there, On DragonFlyBSD, I could use a sysctl node to detect this feature. However, this obviously is OS specific, I am not quite sure about where to put that code. Any hint on this? 
> 3) it might make sense to expose this as an option of "listen" > directive, instead of a global setting, I tried to do so_reuseport as "listen" option. However, the accept mutex to be disabled is actually global, as far as I understand the code. That's why I currently implement so_reuseport as global option instead of "listen" option. > 4) how does this (OS-level sharding) play with nginx's upgrade process > (forking of new binary and passing listening fds)? Are there any > side-effects of this change that could result in dropped packets / > requests? I am not quite sure about how nginx handles upgrade. But if so_reuseport is enabled and worker process exits (assuming new worker is not forked by old worker), any pending sockets on listen socket's completion queue but not yet accept(2)'ed will be dropped (at least this is the case in DragonFlyBSD). Best Regards, sephe -- Tomorrow Will Never Die From sepherosa at gmail.com Sun Jul 28 13:21:51 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Sun, 28 Jul 2013 21:21:51 +0800 Subject: [PATCH] SO_REUSEPORT support for listen sockets In-Reply-To: <20130726112456.GO90722@mdounin.ru> References: <20130726112456.GO90722@mdounin.ru> Message-ID: On Fri, Jul 26, 2013 at 7:24 PM, Maxim Dounin wrote: > Hello! 
Hi, > > On Fri, Jul 26, 2013 at 03:59:47AM -0700, Piotr Sikora wrote: > >> Hey, >> >> > @@ -76,6 +78,13 @@ >> > 0, >> > NULL }, >> > >> > + { ngx_string("so_reuseport"), >> > + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, >> > + ngx_set_so_reuseport, >> > + 0, >> > + 0, >> > + NULL }, >> > + >> > { ngx_string("debug_points"), >> > NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, >> > ngx_conf_set_enum_slot, >> > @@ -1361,3 +1370,24 @@ >> > >> > return NGX_CONF_OK; >> > } >> > + >> > + >> > +static char * >> > +ngx_set_so_reuseport(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) >> > +{ >> > + ngx_str_t *value; >> > + ngx_core_conf_t *ccf; >> > + >> > + ccf = (ngx_core_conf_t *) conf; >> > + >> > + value = (ngx_str_t *) cf->args->elts; >> > + >> > + if (ngx_strcmp(value[1].data, "on") == 0) { >> > + ccf->so_reuseport = 1; >> > + } else if (ngx_strcmp(value[1].data, "off") == 0) { >> > + ccf->so_reuseport = 0; >> > + } else { >> > + return "invalid value"; >> > + } >> > + return NGX_CONF_OK; >> > +} >> >> This can be replaced with ngx_conf_set_flag_slot(), i.e.: >> >> + { ngx_string("so_reuseport"), >> + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_FLAG, >> + ngx_conf_set_flag_slot, >> + 0, >> + offsetof(ngx_core_conf_t, so_reuseport), >> + NULL }, > > If it's kept as a global setting, it would be good idea to move > this into events module if possible. > >> Also: >> 1) like Tom said, you definitely need to guard code to make sure >> SO_REUSEPORT is available, >> 2) this feature should be disabled on DragonFly versions prior to the >> 740d1d9 commit, because it clearly wouldn't do any good there, > > I believe SO_REUSEPORT doesn't do accept() load balancing on many > OSes right now (e.g. FreeBSD doesn't do that), and it might not be > a good idea to track this in nginx code. It might be better to > just allow users to decide whether to use it or not. Not sure though. Since so_reuseport is off by default, I don't think it will do any harm. 
Users of newer DragonFlyBSD (it will be in the 3.6 release) and of Linux
kernel >= 3.10 could turn it on by themselves (here I assume they know
what they are doing).

>
>> 3) it might make sense to expose this as an option of "listen"
>> directive, instead of a global setting,
>
> Agree.

See my previous reply. Mainly because the accept mutex is global (if I
didn't misunderstand the code), and the accept mutex is useless if
SO_REUSEPORT is used. If making so_reuseport a "listen" option is a
must, I think we will have to make the accept mutex per-listen-socket
first.

>
>> 4) how does this (OS-level sharding) play with nginx's upgrade process
>> (forking of new binary and passing listening fds)? Are there any
>> side-effects of this change that could result in dropped packets /
>> requests?
>
> And obvious downside I see is that with the SO_REUSEPORT causes OS
> to allow duplicate bindings from different processes, which makes
> it possible to unintentionally run 2 copies of nginx. It might be

I think nginx is using the pid file to make sure only one copy of nginx
is running.

> also possible that configuration test will start to do bad things
> as a result.

Yeah, if so_reuseport is specified, my patch causes nginx not to create
the listen socket during a configuration test (well, again, this is
based on my understanding of the original code).

Best Regards,
sephe

-- 
Tomorrow Will Never Die

From yurnerola at gmail.com  Mon Jul 29 06:29:54 2013
From: yurnerola at gmail.com (yurnerola at gmail.com)
Date: Mon, 29 Jul 2013 14:29:54 +0800
Subject: when EPOLLIN and EPOLLOUT returned
Message-ID: <201307291429516518268@gmail.com>

Hi all,
I find the following code in the function ngx_epoll_process_events hard
to understand.
    if ((revents & (EPOLLERR|EPOLLHUP))
        && (revents & (EPOLLIN|EPOLLOUT)) == 0)
    {
        /*
         * if the error events were returned without EPOLLIN or EPOLLOUT,
         * then add these flags to handle the events at least in one
         * active handler
         */

        revents |= EPOLLIN|EPOLLOUT;
    }

The comment says the flags are added so the events are handled in at
least one active handler. But why not check
if ((revents & (EPOLLERR|EPOLLHUP)) == 0) instead, so that we can simply
end the connection when EPOLLERR or EPOLLHUP is returned? Help...

yurnerola at gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sepherosa at gmail.com  Mon Jul 29 09:52:36 2013
From: sepherosa at gmail.com (Sepherosa Ziehau)
Date: Mon, 29 Jul 2013 17:52:36 +0800
Subject: [PATCH] SO_REUSEPORT support for listen sockets
In-Reply-To: 
References: <20130726112456.GO90722@mdounin.ru>
Message-ID: 

Hi all,

Sorry for the top post; here is the patch in the second round:
http://leaf.dragonflybsd.org/~sephe/ngx_reuseport2.diff

Two problems were addressed based on the feedback:
- Guard the code that directly operates on SO_REUSEPORT
- Don't enable so_reuseport if '-t' (i.e. ngx_test_config==1) is
  specified on the command line

Best Regards,
sephe

On Sun, Jul 28, 2013 at 9:21 PM, Sepherosa Ziehau wrote:
> On Fri, Jul 26, 2013 at 7:24 PM, Maxim Dounin wrote:
>> Hello!
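The epoll flag adjustment quoted above is small enough to lift into a pure helper for illustration (an editorial sketch mirroring the quoted logic, not nginx's actual code): folding a lone EPOLLERR/EPOLLHUP into EPOLLIN|EPOLLOUT means whichever read/write handler is active for the connection will run and observe the error from its own read()/write() call, instead of the event loop special-casing errors.

```c
#include <stdint.h>
#include <sys/epoll.h>

/*
 * Mirror of the revents adjustment in ngx_epoll_process_events():
 * if only error events were reported, pretend the descriptor is both
 * readable and writable so at least one active handler runs and
 * notices the error.
 */
uint32_t
adjust_revents(uint32_t revents)
{
    if ((revents & (EPOLLERR|EPOLLHUP))
        && (revents & (EPOLLIN|EPOLLOUT)) == 0)
    {
        revents |= EPOLLIN|EPOLLOUT;
    }

    return revents;
}
```

Note that when EPOLLERR or EPOLLHUP arrives *together* with EPOLLIN or EPOLLOUT, the mask is left untouched: a handler will run anyway.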
> > Hi, > >> >> On Fri, Jul 26, 2013 at 03:59:47AM -0700, Piotr Sikora wrote: >> >>> Hey, >>> >>> > @@ -76,6 +78,13 @@ >>> > 0, >>> > NULL }, >>> > >>> > + { ngx_string("so_reuseport"), >>> > + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, >>> > + ngx_set_so_reuseport, >>> > + 0, >>> > + 0, >>> > + NULL }, >>> > + >>> > { ngx_string("debug_points"), >>> > NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, >>> > ngx_conf_set_enum_slot, >>> > @@ -1361,3 +1370,24 @@ >>> > >>> > return NGX_CONF_OK; >>> > } >>> > + >>> > + >>> > +static char * >>> > +ngx_set_so_reuseport(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) >>> > +{ >>> > + ngx_str_t *value; >>> > + ngx_core_conf_t *ccf; >>> > + >>> > + ccf = (ngx_core_conf_t *) conf; >>> > + >>> > + value = (ngx_str_t *) cf->args->elts; >>> > + >>> > + if (ngx_strcmp(value[1].data, "on") == 0) { >>> > + ccf->so_reuseport = 1; >>> > + } else if (ngx_strcmp(value[1].data, "off") == 0) { >>> > + ccf->so_reuseport = 0; >>> > + } else { >>> > + return "invalid value"; >>> > + } >>> > + return NGX_CONF_OK; >>> > +} >>> >>> This can be replaced with ngx_conf_set_flag_slot(), i.e.: >>> >>> + { ngx_string("so_reuseport"), >>> + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_FLAG, >>> + ngx_conf_set_flag_slot, >>> + 0, >>> + offsetof(ngx_core_conf_t, so_reuseport), >>> + NULL }, >> >> If it's kept as a global setting, it would be good idea to move >> this into events module if possible. >> >>> Also: >>> 1) like Tom said, you definitely need to guard code to make sure >>> SO_REUSEPORT is available, >>> 2) this feature should be disabled on DragonFly versions prior to the >>> 740d1d9 commit, because it clearly wouldn't do any good there, >> >> I believe SO_REUSEPORT doesn't do accept() load balancing on many >> OSes right now (e.g. FreeBSD doesn't do that), and it might not be >> a good idea to track this in nginx code. It might be better to >> just allow users to decide whether to use it or not. Not sure though. 
> > Since so_reuseport is off by default, I don't think it will do any > harm. Any users of newer DragonFlyBSD (it will be in 3.6 release) and > Linux kernel >= 3.10, could turn it on by themselves (here I assume, > they know what they are doing). > >> >>> 3) it might make sense to expose this as an option of "listen" >>> directive, instead of a global setting, >> >> Agree. > > See my previous reply. Mainly because accept mutex is global (If I > didn't misunderstand the code) and accept mutex is useless if > SO_REUSEPORT is used. If making so_reuseport a "listen" option is a > must, I think we will have to make accept mutex per-listen socket > first. > >> >>> 4) how does this (OS-level sharding) play with nginx's upgrade process >>> (forking of new binary and passing listening fds)? Are there any >>> side-effects of this change that could result in dropped packets / >>> requests? >> >> And obvious downside I see is that with the SO_REUSEPORT causes OS >> to allow duplicate bindings from different processes, which makes >> it possible to unintentionally run 2 copies of nginx. It might be > > I think nginx is using pid file to make sure there are only one copy > of nginx is running. > >> also possible that configuration test will start to do bad things >> as a result. > > Yeah, if so_reuseport is specified, my patch causes nginx not creating > listen socket during configuration test (well, again, this is based on > my understanding of the original code). > > Best Regards, > sephe > > -- > Tomorrow Will Never Die -- Tomorrow Will Never Die From ru at nginx.com Mon Jul 29 10:55:09 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 29 Jul 2013 10:55:09 +0000 Subject: [nginx] Upstream: reliably detect connection failures with SSL p... Message-ID: details: http://hg.nginx.org/nginx/rev/12b750d35162 branches: changeset: 5305:12b750d35162 user: Ruslan Ermilov date: Mon Jul 29 13:23:16 2013 +0400 description: Upstream: reliably detect connection failures with SSL peers. 
diffstat:

 src/http/ngx_http_upstream.c |  5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diffs (15 lines):

diff -r d3eab5e2df5f -r 12b750d35162 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Thu Jul 25 15:00:41 2013 +0400
+++ b/src/http/ngx_http_upstream.c	Mon Jul 29 13:23:16 2013 +0400
@@ -1282,6 +1282,11 @@ ngx_http_upstream_ssl_init_connection(ng
 {
     ngx_int_t   rc;

+    if (ngx_http_upstream_test_connect(c) != NGX_OK) {
+        ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
+        return;
+    }
+
     if (ngx_ssl_create_connection(u->conf->ssl, c,
                                   NGX_SSL_BUFFER|NGX_SSL_CLIENT)
         != NGX_OK

From mdounin at mdounin.ru  Mon Jul 29 13:16:29 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 29 Jul 2013 17:16:29 +0400
Subject: when EPOLLIN and EPOLLOUT returned
In-Reply-To: <201307291429516518268@gmail.com>
References: <201307291429516518268@gmail.com>
Message-ID: <20130729131629.GC48301@mdounin.ru>

Hello!

On Mon, Jul 29, 2013 at 02:29:54PM +0800, yurnerola at gmail.com wrote:

> Hi,all
> I find it hard to understand in function ngx_epoll_process_events as following.
> if ((revents & (EPOLLERR|EPOLLHUP))
> && (revents & (EPOLLIN|EPOLLOUT)) == 0)
> {
> /*
> * if the error events were returned without EPOLLIN or EPOLLOUT,
> * then add these flags to handle the events at least in one
> * active handler
> */
>
> revents |= EPOLLIN|EPOLLOUT;
> }
> As the comment said we should check if ((revents & (EPOLLERR|EPOLLHUP))==0) that we can end the connection when EPOLLERR and EPOLLHUP is returned.
> ,but why not this? Help...

Sorry, but I failed to understand your question, and likely others
failed too. It might be a good idea to clarify what you want to know.

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From pluknet at nginx.com  Mon Jul 29 14:46:52 2013
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Mon, 29 Jul 2013 14:46:52 +0000
Subject: [nginx] Perl: fixed syntax usage for C preprocessor directives.
Message-ID: details: http://hg.nginx.org/nginx/rev/43900b822890 branches: changeset: 5306:43900b822890 user: Sergey Kandaurov date: Mon Jul 29 17:30:01 2013 +0400 description: Perl: fixed syntax usage for C preprocessor directives. As per perlxs, C preprocessor directives should be at the first non-whitespace of a line to avoid interpreting them as comments. #if and #endif are moved so that there are no blank lines before them to retain them as part of the function body. diffstat: src/http/modules/perl/nginx.xs | 11 ++++------- 1 files changed, 4 insertions(+), 7 deletions(-) diffs (39 lines): diff -r 12b750d35162 -r 43900b822890 src/http/modules/perl/nginx.xs --- a/src/http/modules/perl/nginx.xs Mon Jul 29 13:23:16 2013 +0400 +++ b/src/http/modules/perl/nginx.xs Mon Jul 29 17:30:01 2013 +0400 @@ -261,13 +261,12 @@ header_in(r, key) sep = ';'; goto multi; } - - #if (NGX_HTTP_X_FORWARDED_FOR) +#if (NGX_HTTP_X_FORWARDED_FOR) if (hh->offset == offsetof(ngx_http_headers_in_t, x_forwarded_for)) { sep = ','; goto multi; } - #endif +#endif if (hh->offset) { @@ -898,8 +897,7 @@ variable(r, name, value = NULL) var.len = len; var.data = lowcase; - - #if (NGX_DEBUG) +#if (NGX_DEBUG) if (value) { ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -908,8 +906,7 @@ variable(r, name, value = NULL) ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "perl variable: \"%V\"", &var); } - - #endif +#endif vv = ngx_http_get_variable(r, &var, hash); if (vv == NULL) { From mdounin at mdounin.ru Mon Jul 29 14:57:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Jul 2013 18:57:08 +0400 Subject: [PATCH] SO_REUSEPORT support for listen sockets In-Reply-To: References: Message-ID: <20130729145708.GF48301@mdounin.ru> Hello! On Sun, Jul 28, 2013 at 09:11:26PM +0800, Sepherosa Ziehau wrote: [...] 
> > 2) this feature should be disabled on DragonFly versions prior to the > > 740d1d9 commit, because it clearly wouldn't do any good there, > > On DragonFlyBSD, I could use a sysctl node to detect this feature. > However, this obviously is OS specific, I am not quite sure about > where to put that code. Any hint on this? DragonFly is currently handled as a variant of FreeBSD (see auto/os/conf, src/core/ngx_config.h, src/os/unix/ngx_freebsd_config.h, src/os/unix/ngx_freebsd_init.c). If you want to do sysctl runtime checks, correct place would be in src/os/unix/ngx_freebsd_init.c, ngx_os_specific_init(). I'm not sure it worth the effort though, probably just assuming user knows better will be ok. [...] > > 4) how does this (OS-level sharding) play with nginx's upgrade process > > (forking of new binary and passing listening fds)? Are there any > > side-effects of this change that could result in dropped packets / > > requests? > > I am not quite sure about how nginx handles upgrade. But if > so_reuseport is enabled and worker process exits (assuming new worker > is not forked by old worker), any pending sockets on listen socket's > completion queue but not yet accept(2)'ed will be dropped (at least > this is the case in DragonFlyBSD). By "pending sockets completion queue" you mean exactly one socket's queue, as created by a worker process? I.e., there is no mechanism to pass unaccepted connections from one socket to other sockets listening on the same address if the socket is closed (e.g. due to process exit)? This sounds bad, as it will result in connections being lost not only on binary upgrades but also on normal configuration reloads. 
Just for reference, the upgrade process works as follows:

- the old master fork()'s and then uses execve() to start a new master
  process, with the listening socket descriptors enumerated in the
  environment

- the new master process parses the configuration (using the inherited
  listening socket fds it got from the old master) and fork()'s new
  worker processes

- the old master asks the old worker processes to exit gracefully (to
  stop accepting new connections and to exit once processing of
  currently active requests is complete)

Configuration reload works as follows:

- the master process loads the new configuration

- the master process fork()'s new worker processes (with the new
  configuration)

- the master process asks the old worker processes to exit gracefully

Some documentation is here:
http://nginx.org/en/docs/control.html

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From mdounin at mdounin.ru  Mon Jul 29 17:11:10 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 29 Jul 2013 21:11:10 +0400
Subject: Help with shared memory usage
In-Reply-To: 
References: <20130701113629.GO20717@mdounin.ru>
Message-ID: <20130729171109.GA2130@mdounin.ru>

Hello!

On Sat, Jul 27, 2013 at 04:10:51PM -0300, Wandenberg Peixoto wrote:

> Hello Maxim.
>
> I've been looking into those functions and guided by your comments
> made the following patch to merge continuous block of memory.
> Can you check if it is ok?
> Comments are welcome.
>
> --- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300
> +++ src/core/ngx_slab.c 2013-07-27 15:54:55.316995223 -0300
> @@ -687,6 +687,25 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo
>      page->next->prev = (uintptr_t) page;
>
>      pool->free.next = page;
> +
> +    for (page = pool->pages; ((page->slab > 0) && (&page[page->slab] < (ngx_slab_page_t *) (pool->start - sizeof(ngx_slab_page_t))));) {
> +        ngx_slab_page_t *neighbour = &page[page->slab];
> +        if (((ngx_slab_page_t *) page->prev != NULL) && (page->next != NULL) && ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) &&
> +            ((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) {
> +
> +            page->slab += neighbour->slab;
> +
> +            ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next;
> +            neighbour->next->prev = neighbour->prev;
> +
> +            neighbour->slab = NGX_SLAB_PAGE_FREE;
> +            neighbour->prev = (uintptr_t) &pool->free;
> +            neighbour->next = &pool->free;
> +            continue;
> +        }
> +
> +        page += ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) ? page->slab : 1;
> +    }
>  }

The patch doesn't look right (well, maybe it works - but at least
it's not something I would like to see committed).

The pool->pages isn't something you should iterate through, it's
just a preallocated storage space for ngx_slab_page_t structures.

Additionally, doing a full merge of all free blocks on a free
operation looks like too much. It might be something we want to do on
allocation failure, but not on a normal path in
ngx_slab_free_pages(). And/or something lightweight may be done
in ngx_slab_free_pages(), e.g., checking if the pages following the
pages we are freeing are free too, and merging them in this case.

>
>
>
> Regards,
> Wandenberg
>
>
> On Mon, Jul 1, 2013 at 8:36 AM, Maxim Dounin wrote:
>
> > Hello!
> > > > On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote: > > > > > Hi, > > > > > > I'm trying to understand how the shared memory pool works inside the > > Nginx. > > > To do that, I made a very small module which create a shared memory zone > > > with 2097152 bytes, > > > and allocating and freeing blocks of memory, starting from 0 and > > increasing > > > by 1kb until the allocation fails. > > > > > > The strange parts to me were: > > > - the maximum block I could allocate was 128000 bytes > > > - each time the allocation fails, I started again from 0, but the maximum > > > allocated block changed with the following profile > > > 128000 > > > 87040 > > > 70656 > > > 62464 > > > 58368 > > > 54272 > > > 50176 > > > 46080 > > > 41984 > > > 37888 > > > 33792 > > > 29696 > > > > > > This is the expected behavior? > > > Can anyone help me explaining how shared memory works? > > > I have another module which do an intensive shared memory usage, and > > > understanding this can help me improve it solving some "no memory" > > messages. > > > > > > I put the code in attach. > > > > I've looked into this, and the behaviour is expected as per > > nginx slab allocator code and the way you do allocations in your > > test. > > > > Increasing allocations of large blocks immediately followed by > > freeing them result in free memory blocks split into smaller > > blocks, eventually resulting in at most page size allocations > > being possible. Take a look at ngx_slab_alloc_pages() and > > ngx_slab_free_pages() for details. > > > > Note that slab allocator nginx uses for allocations in shared > > memory is designed mostly for small allocations. It works well > > for allocations less than page size, but large allocations support > > is very simple. Probably it should be improved, but as of now > > nothing in nginx uses large allocations in shared memory. 
> > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/en/donation.html From wandenberg at gmail.com Mon Jul 29 19:01:37 2013 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Mon, 29 Jul 2013 16:01:37 -0300 Subject: Help with shared memory usage In-Reply-To: <20130729171109.GA2130@mdounin.ru> References: <20130701113629.GO20717@mdounin.ru> <20130729171109.GA2130@mdounin.ru> Message-ID: Hello! I see your point, and I will split the patch to do both actions, on ngx_slab_free_pages() and on allocation when has a failure. What would be an alternative to not loop on pool->pages? Regards, Wandenberg On Mon, Jul 29, 2013 at 2:11 PM, Maxim Dounin wrote: > Hello! > > On Sat, Jul 27, 2013 at 04:10:51PM -0300, Wandenberg Peixoto wrote: > > > Hello Maxim. > > > > I've been looking into those functions and guided by your comments > > made the following patch to merge continuous block of memory. > > Can you check if it is ok? > > Comments are welcome. 
> > > > --- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300 > > +++ src/core/ngx_slab.c 2013-07-27 15:54:55.316995223 -0300 > > @@ -687,6 +687,25 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo > > page->next->prev = (uintptr_t) page; > > > > pool->free.next = page; > > + > > + for (page = pool->pages; ((page->slab > 0) && (&page[page->slab] < > > (ngx_slab_page_t *) (pool->start - sizeof(ngx_slab_page_t))));) { > > + ngx_slab_page_t *neighbour = &page[page->slab]; > > + if (((ngx_slab_page_t *) page->prev != NULL) && (page->next != > > NULL) && ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) && > > + ((ngx_slab_page_t *) neighbour->prev != NULL) && > > (neighbour->next != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == > > NGX_SLAB_PAGE)) { > > + > > + page->slab += neighbour->slab; > > + > > + ((ngx_slab_page_t *) neighbour->prev)->next = > neighbour->next; > > + neighbour->next->prev = neighbour->prev; > > + > > + neighbour->slab = NGX_SLAB_PAGE_FREE; > > + neighbour->prev = (uintptr_t) &pool->free; > > + neighbour->next = &pool->free; > > + continue; > > + } > > + > > + page += ((page->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE) ? > > page->slab : 1; > > + } > > } > > The patch doesn't look right (well, may be it works - but at least it's > not something I would like to see committed). > > The pool->pages isn't something you should iterate though, it's > just a preallocated storage space for ngx_slab_page_t structures. > > Additionally, doing a full merge of all free blocks on a free > operation looks too much. It might be something we want to do on > allocation failure, but not on a normal path in > ngx_slab_free_pages(). And/or something lightweight may be done > in ngx_slab_free_pages(), e.g., checking if pages following pages > we are freeing are free too, and merging them in this case. > > > > > > > > > Regards, > > Wandenberg > > > > > > On Mon, Jul 1, 2013 at 8:36 AM, Maxim Dounin wrote: > > > > > Hello! 
> > > > > > On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote: > > > > > > > Hi, > > > > > > > > I'm trying to understand how the shared memory pool works inside the > > > Nginx. > > > > To do that, I made a very small module which create a shared memory > zone > > > > with 2097152 bytes, > > > > and allocating and freeing blocks of memory, starting from 0 and > > > increasing > > > > by 1kb until the allocation fails. > > > > > > > > The strange parts to me were: > > > > - the maximum block I could allocate was 128000 bytes > > > > - each time the allocation fails, I started again from 0, but the > maximum > > > > allocated block changed with the following profile > > > > 128000 > > > > 87040 > > > > 70656 > > > > 62464 > > > > 58368 > > > > 54272 > > > > 50176 > > > > 46080 > > > > 41984 > > > > 37888 > > > > 33792 > > > > 29696 > > > > > > > > This is the expected behavior? > > > > Can anyone help me explaining how shared memory works? > > > > I have another module which do an intensive shared memory usage, and > > > > understanding this can help me improve it solving some "no memory" > > > messages. > > > > > > > > I put the code in attach. > > > > > > I've looked into this, and the behaviour is expected as per > > > nginx slab allocator code and the way you do allocations in your > > > test. > > > > > > Increasing allocations of large blocks immediately followed by > > > freeing them result in free memory blocks split into smaller > > > blocks, eventually resulting in at most page size allocations > > > being possible. Take a look at ngx_slab_alloc_pages() and > > > ngx_slab_free_pages() for details. > > > > > > Note that slab allocator nginx uses for allocations in shared > > > memory is designed mostly for small allocations. It works well > > > for allocations less than page size, but large allocations support > > > is very simple. Probably it should be improved, but as of now > > > nothing in nginx uses large allocations in shared memory. 
> > > > > > -- > > > Maxim Dounin > > > http://nginx.org/en/donation.html > > > > > > _______________________________________________ > > > nginx-devel mailing list > > > nginx-devel at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzefip at gmail.com Tue Jul 30 01:07:10 2013 From: jzefip at gmail.com (Julien Zefi) Date: Mon, 29 Jul 2013 19:07:10 -0600 Subject: Looking for developer to fix a NginX test case module In-Reply-To: <20130726133136.GP90722@mdounin.ru> References: <20130726133136.GP90722@mdounin.ru> Message-ID: hi Maxim, thanks so much for the code provided, i have merged that code in my module and it worked as expected!. Would you please send me the details to send you the money ? thanks On Fri, Jul 26, 2013 at 7:31 AM, Maxim Dounin wrote: > Hello! > > On Fri, Jul 26, 2013 at 12:05:11AM -0600, Julien Zefi wrote: > > > Hi, > > > > As i am not able to fix a problem for a NginX module that i am writing, > and > > after iterate a couple of times in the mailing list, i am still not able > to > > solve the problem. So i would like to offer 100 USD for who is able to > help > > me to fix the problem in my test case module. > > > > As a reference please check the following thread: > > > > http://mailman.nginx.org/pipermail/nginx-devel/2013-July/003923.html > > > > If you think you can fix it, please reply me back in private so we can > > discuss how to proceed, > > Below is working example of what you've tried to do. 
It still
> lacks various things like request body handling which would be
> needed in a real module, but it's expected to work.
>
> Notable problems in your code:
>
> 1) You tried to set low-level event handlers. That's not what
> you are supposed to do.
>
> 2) On the other hand, after handling a request's write event you
> are responsible to call ngx_handle_write_event().
>
> 3) The r->count++ operation was done too many times, it's just wrong.
>
> 4) For some unknown reason r->header_only was set. It's just wrong.
>
> 5) Return code from ngx_http_send_header() wasn't handled.
>
> 6) Code returned from the ngx_http_test_handler() was NGX_OK
> instead of NGX_DONE, which is expected to be used if you want to
> continue processing.
>
> 7) ... (there were many others, but I'm too lazy to remember all of
> them)
>
> Overall, I would recommend you to read and understand at least
> several standard modules _before_ you start further coding.
>
>
> #include <ngx_config.h>
> #include <ngx_core.h>
> #include <ngx_http.h>
>
>
> typedef struct {
>     ngx_buf_t  *buf;
> } ngx_http_test_ctx_t;
>
>
> static void ngx_http_test_stream_handler(ngx_http_request_t *r);
> static char *ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
>
>
> static u_char message[] =
>     "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
>     "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
>     "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
>     "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
>     "AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFF"
>     "\r\n";
>
>
> static ngx_command_t  ngx_http_test_case_commands[] = {
>
>     { ngx_string("test_module"),
>       NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS,
>       ngx_http_test,
>       0,
>       0,
>       NULL },
>
>       ngx_null_command
> };
>
>
> static ngx_http_module_t  ngx_http_test_case_ctx = {
>     NULL,                          /* preconfiguration */
>     NULL,                          /* postconfiguration */
>
>     NULL,                          /* create main configuration */
>     NULL,                          /* init main configuration */
>
>     NULL,                          /* create server configuration */
>     NULL,                          /* merge server configuration */
>
>     NULL,                          /* create location configuration */
>     NULL                           /* merge location configuration */
> };
>
>
> ngx_module_t  ngx_http_test_module = {
>     NGX_MODULE_V1,
>     &ngx_http_test_case_ctx,       /* module context */
>     ngx_http_test_case_commands,   /* module directives */
>     NGX_HTTP_MODULE,               /* module type */
>     NULL,                          /* init master */
>     NULL,                          /* init module */
>     NULL,                          /* init process */
>     NULL,                          /* init thread */
>     NULL,                          /* exit thread */
>     NULL,                          /* exit process */
>     NULL,                          /* exit master */
>     NGX_MODULE_V1_PADDING
> };
>
>
> static void
> ngx_http_test_stream_handler(ngx_http_request_t *r)
> {
>     ngx_int_t             rc;
>     ngx_buf_t            *b;
>     ngx_chain_t          *out, cl;
>     ngx_http_test_ctx_t  *ctx;
>
>     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
>                    "test stream handler");
>
>     ctx = ngx_http_get_module_ctx(r, ngx_http_test_module);
>
>     if (ctx == NULL) {
>         /*
>          * create new context and a buffer we will use to send
>          * our data
>          */
>
>         ctx = ngx_palloc(r->pool, sizeof(ngx_http_test_ctx_t));
>         if (ctx == NULL) {
>             ngx_http_finalize_request(r, NGX_ERROR);
>             return;
>         }
>
>         ctx->buf = ngx_create_temp_buf(r->pool, sizeof(message));
>
>         ngx_http_set_ctx(r, ctx, ngx_http_test_module);
>     }
>
>     out = NULL;
>     b = ctx->buf;
>
>     if (ngx_buf_size(b) == 0) {
>         /* new buffer or everything sent, start over */
>
>         b->pos = b->start;
>         b->last = ngx_cpymem(b->pos, message, sizeof(message) - 1);
>         b->flush = 1;
>
>         cl.buf = b;
>         cl.next = NULL;
>
>         out = &cl;
>     }
>
>     rc = ngx_http_output_filter(r, out);
>
>     if (rc == NGX_ERROR) {
>         ngx_http_finalize_request(r, rc);
>         return;
>     }
>
>     if (ngx_buf_size(b) == 0) {
>         ngx_add_timer(r->connection->write, 1);
>     }
>
>     if (ngx_handle_write_event(r->connection->write, 0) != NGX_OK) {
>         ngx_http_finalize_request(r, NGX_ERROR);
>         return;
>     }
> }
>
>
> static ngx_int_t
> ngx_http_test_handler(ngx_http_request_t *r)
> {
>     ngx_int_t  rc;
>
>     /* set response headers */
>
>     r->headers_out.content_type.len = sizeof("video/mp2t") - 1;
>     r->headers_out.content_type.data = (u_char *) "video/mp2t";
>     r->headers_out.status = NGX_HTTP_OK;
>
>     r->write_event_handler = ngx_http_test_stream_handler;
>
>     rc = ngx_http_send_header(r);
>
>     if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
>         return rc;
>     }
>
>     r->main->count++;
>     ngx_http_test_stream_handler(r);
>
>     return NGX_DONE;
> }
>
>
> static char *
> ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
> {
>     ngx_http_core_loc_conf_t  *clcf;
>
>     clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
>     clcf->handler = ngx_http_test_handler;
>
>     return NGX_CONF_OK;
> }
>
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sepherosa at gmail.com Tue Jul 30 08:22:33 2013
From: sepherosa at gmail.com (Sepherosa Ziehau)
Date: Tue, 30 Jul 2013 16:22:33 +0800
Subject: [PATCH] SO_REUSEPORT support for listen sockets
In-Reply-To: <20130729145708.GF48301@mdounin.ru>
References: <20130729145708.GF48301@mdounin.ru>
Message-ID:

On Mon, Jul 29, 2013 at 10:57 PM, Maxim Dounin wrote:
> Hello!
>
> On Sun, Jul 28, 2013 at 09:11:26PM +0800, Sepherosa Ziehau wrote:
>
> [...]
>
>> > 2) this feature should be disabled on DragonFly versions prior to the
>> > 740d1d9 commit, because it clearly wouldn't do any good there,
>>
>> On DragonFlyBSD, I could use a sysctl node to detect this feature.
>> However, this obviously is OS specific, I am not quite sure about
>> where to put that code. Any hint on this?
>
> DragonFly is currently handled as a variant of FreeBSD (see
> auto/os/conf, src/core/ngx_config.h, src/os/unix/ngx_freebsd_config.h,
> src/os/unix/ngx_freebsd_init.c).
>
> If you want to do sysctl runtime checks, correct place would be in
> src/os/unix/ngx_freebsd_init.c, ngx_os_specific_init().
I'm not > sure it worth the effort though, probably just assuming user knows > better will be ok. Agree. We will just trust users. >> I am not quite sure about how nginx handles upgrade. But if >> so_reuseport is enabled and worker process exits (assuming new worker >> is not forked by old worker), any pending sockets on listen socket's >> completion queue but not yet accept(2)'ed will be dropped (at least >> this is the case in DragonFlyBSD). > > By "pending sockets completion queue" you mean exactly one > socket's queue, as created by a worker process? I.e., there is no > mechanism to pass unaccepted connections from one socket to other > sockets listening on the same address if the socket is closed > (e.g. due to process exit)? > > This sounds bad, as it will result in connections being lost not > only on binary upgrades but also on normal configuration reloads. > > Just for reference, upgrade process works as follows: > > - old master fork()'s and then uses execve() to start a new master > process, with listenings sockets descriptors enumerated in the > environment > > - new master process parses configuration (using inherited > listening sockets fds it got from old master) and fork()'s new > worker processes > > - old master asks old worker processes to exit gracefully (to stop > accepting new connections and to exit once processing of > currently active requests is complete) > > Configuration reload works as follows: > > - master process loads new configuration > > - master process fork()'s new worker processes (with a new configuration) > > - master proces asks old worker processes to exit gracefully > > Some documentation is here: > > http://nginx.org/en/docs/control.html Thank you very much for the hint! The patch needs some changes to handle this, as well as DragonFly's kernel. 
After I have done the DragonFly kernel part, I would do the following changes to the current patch: If so_reuseport is enabled, the master will open the listen sockets and after forking the first worker, the master closes all opened listen sockets (they are inherited by the first worker); so the kernel would have at least one listen socket to migrate the completed-but-not-yet-accepted sockets of the to-be-closed listen sockets. Best Regards, sephe -- Tomorrow Will Never Die From mdounin at mdounin.ru Tue Jul 30 09:34:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Jul 2013 13:34:54 +0400 Subject: Looking for developer to fix a NginX test case module In-Reply-To: References: <20130726133136.GP90722@mdounin.ru> Message-ID: <20130730093454.GB2130@mdounin.ru> Hello! On Mon, Jul 29, 2013 at 07:07:10PM -0600, Julien Zefi wrote: > hi Maxim, > > thanks so much for the code provided, i have merged that code in my module > and it worked as expected!. Would you please send me the details to send > you the money ? Please use donations form here: http://nginx.org/en/donation.html -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Jul 30 10:09:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Jul 2013 14:09:32 +0400 Subject: Help with shared memory usage In-Reply-To: References: <20130701113629.GO20717@mdounin.ru> <20130729171109.GA2130@mdounin.ru> Message-ID: <20130730100931.GD2130@mdounin.ru> Hello! On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote: [...] > What would be an alternative to not loop on pool->pages? Free memory blocks are linked in pool->free list, it should be enough to look there. [...] 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Jul 30 13:36:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Jul 2013 13:36:18 +0000 Subject: [nginx] nginx-1.5.3-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/644a07952629 branches: changeset: 5307:644a07952629 user: Maxim Dounin date: Tue Jul 30 17:27:55 2013 +0400 description: nginx-1.5.3-RELEASE diffstat: docs/xml/nginx/changes.xml | 75 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 75 insertions(+), 0 deletions(-) diffs (85 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,81 @@ + + + + +????????? ?? ?????????? API: +?????? ??? ?????????????????? ?????? ? ????????? +u->length ?? ????????? ??????????????? ? -1. + + +Change in internal API: +now u->length defaults to -1 +if working with backends in unbuffered mode. + + + + + +?????? ??? ????????? ????????? ?????? ?? ??????? +nginx ?????????? ?????????? ????? ??????, +????? ???? ????????? ?????????? ? ????????. + + +now after receiving an incomplete response from a backend server +nginx tries to send an available part of the response to a client, +and then closes client connection. + + + + + +? ??????? ???????? ??? ????????? segmentation fault, +???? ????????????? ?????? ngx_http_spdy_module +? ????????? client_body_in_file_only. + + +a segmentation fault might occur in a worker process +if the ngx_http_spdy_module was used +with the "client_body_in_file_only" directive. + + + + + +???????? so_keepalive ????????? listen +??? ???????? ??????????? ?? DragonFlyBSD.
+??????? Sepherosa Ziehau. +
+ +the "so_keepalive" parameter of the "listen" directive +might be handled incorrectly on DragonFlyBSD.
+Thanks to Sepherosa Ziehau. +
+
+ + + +? ?????? ngx_http_xslt_filter_module. + + +in the ngx_http_xslt_filter_module. + + + + + +? ?????? ngx_http_sub_filter_module. + + +in the ngx_http_sub_filter_module. + + + +
+ +

From mdounin at mdounin.ru Tue Jul 30 13:36:19 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 30 Jul 2013 13:36:19 +0000
Subject: [nginx] release-1.5.3 tag
Message-ID:

details: http://hg.nginx.org/nginx/rev/0ff3dc9081a1
branches:
changeset: 5308:0ff3dc9081a1
user: Maxim Dounin
date: Tue Jul 30 17:27:55 2013 +0400
description: release-1.5.3 tag

diffstat:

.hgtags | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diffs (8 lines):

diff --git a/.hgtags b/.hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -358,3 +358,4 @@ 7809529022b83157067e7d1e2fb65d57db5f4d99
 48a84bc3ff074a65a63e353b9796ff2b14239699 release-1.5.0
 99eed1a88fc33f32d66e2ec913874dfef3e12fcc release-1.5.1
 5bdca4812974011731e5719a6c398b54f14a6d61 release-1.5.2
+644a079526295aca11c52c46cb81e3754e6ad4ad release-1.5.3

From wandenberg at gmail.com Wed Jul 31 03:28:02 2013
From: wandenberg at gmail.com (Wandenberg Peixoto)
Date: Wed, 31 Jul 2013 00:28:02 -0300
Subject: Help with shared memory usage
In-Reply-To: <20130730100931.GD2130@mdounin.ru>
References: <20130701113629.GO20717@mdounin.ru> <20130729171109.GA2130@mdounin.ru> <20130730100931.GD2130@mdounin.ru>
Message-ID:

Hello!

Thanks for your help. I hope the patch is OK now.
I don't know whether the function and variable names follow the nginx
naming conventions; feel free to change the patch.
If you have any other points to raise before accepting it,
it will be a pleasure to fix them.
--- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300 +++ src/core/ngx_slab.c 2013-07-31 00:21:08.043034442 -0300 @@ -615,6 +615,26 @@ fail: static ngx_slab_page_t * +ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t *page) +{ + ngx_slab_page_t *neighbour = &page[page->slab]; + if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) { + page->slab += neighbour->slab; + + ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next; + neighbour->next->prev = neighbour->prev; + + neighbour->slab = NGX_SLAB_PAGE_FREE; + neighbour->prev = (uintptr_t) &pool->free; + neighbour->next = &pool->free; + + return page; + } + return NULL; +} + + +static ngx_slab_page_t * ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages) { ngx_slab_page_t *page, *p; @@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po } } + ngx_flag_t retry = 0; + for (page = pool->free.next; page != &pool->free;) { + if (ngx_slab_merge_with_neighbour(pool, page)) { + retry = 1; + } else { + page = page->next; + } + } + + if (retry) { + return ngx_slab_alloc_pages(pool, pages); + } + ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no memory"); return NULL; @@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo page->next->prev = (uintptr_t) page; pool->free.next = page; + + ngx_slab_merge_with_neighbour(pool, page); } On Tue, Jul 30, 2013 at 7:09 AM, Maxim Dounin wrote: > Hello! > > On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote: > > [...] > > > What would be an alternative to not loop on pool->pages? > > Free memory blocks are linked in pool->free list, it should be > enough to look there. > > [...] 
> > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: unfrag_slab_memory2.patch Type: application/octet-stream Size: 1631 bytes Desc: not available URL: From pluknet at nginx.com Wed Jul 31 14:18:01 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 31 Jul 2013 14:18:01 +0000 Subject: [nginx] Configure: fixed autotest cleanup commands. Message-ID: details: http://hg.nginx.org/nginx/rev/434548349838 branches: changeset: 5309:434548349838 user: Sergey Kandaurov date: Wed Jul 31 18:16:40 2013 +0400 description: Configure: fixed autotest cleanup commands. Previously, if configured with --with-cc="clang -g", the autotest.dSYM directories were left unremoved. 
diffstat: auto/cc/sunc | 2 +- auto/endianness | 4 ++-- auto/feature | 2 +- auto/include | 2 +- auto/lib/test | 2 +- auto/types/sizeof | 2 +- auto/types/typedef | 2 +- auto/types/uintptr_t | 2 +- 8 files changed, 9 insertions(+), 9 deletions(-) diffs (91 lines): diff -r 0ff3dc9081a1 -r 434548349838 auto/cc/sunc --- a/auto/cc/sunc Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/cc/sunc Wed Jul 31 18:16:40 2013 +0400 @@ -30,7 +30,7 @@ if [ -x $NGX_AUTOTEST ]; then ngx_sunc_ver=`$NGX_AUTOTEST` fi -rm $NGX_AUTOTEST* +rm -rf $NGX_AUTOTEST* # 1424 == 0x590, Sun Studio 12 diff -r 0ff3dc9081a1 -r 434548349838 auto/endianness --- a/auto/endianness Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/endianness Wed Jul 31 18:16:40 2013 +0400 @@ -34,10 +34,10 @@ if [ -x $NGX_AUTOTEST ]; then echo " big endian" fi - rm $NGX_AUTOTEST* + rm -rf $NGX_AUTOTEST* else - rm $NGX_AUTOTEST* + rm -rf $NGX_AUTOTEST* echo echo "$0: error: cannot detect system byte ordering" diff -r 0ff3dc9081a1 -r 434548349838 auto/feature --- a/auto/feature Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/feature Wed Jul 31 18:16:40 2013 +0400 @@ -120,4 +120,4 @@ else echo "----------" >> $NGX_AUTOCONF_ERR fi -rm $NGX_AUTOTEST* +rm -rf $NGX_AUTOTEST* diff -r 0ff3dc9081a1 -r 434548349838 auto/include --- a/auto/include Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/include Wed Jul 31 18:16:40 2013 +0400 @@ -58,4 +58,4 @@ else echo "----------" >> $NGX_AUTOCONF_ERR fi -rm $NGX_AUTOTEST* +rm -rf $NGX_AUTOTEST* diff -r 0ff3dc9081a1 -r 434548349838 auto/lib/test --- a/auto/lib/test Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/lib/test Wed Jul 31 18:16:40 2013 +0400 @@ -37,4 +37,4 @@ else echo " not found" fi -rm $NGX_AUTOTEST* +rm -rf $NGX_AUTOTEST* diff -r 0ff3dc9081a1 -r 434548349838 auto/types/sizeof --- a/auto/types/sizeof Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/types/sizeof Wed Jul 31 18:16:40 2013 +0400 @@ -45,7 +45,7 @@ if [ -x $NGX_AUTOTEST ]; then fi -rm -f $NGX_AUTOTEST +rm -rf $NGX_AUTOTEST* case $ngx_size in diff -r 
0ff3dc9081a1 -r 434548349838 auto/types/typedef --- a/auto/types/typedef Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/types/typedef Wed Jul 31 18:16:40 2013 +0400 @@ -49,7 +49,7 @@ END fi fi - rm -f $NGX_AUTOTEST + rm -rf $NGX_AUTOTEST* if [ $ngx_found = no ]; then echo $ngx_n " $ngx_try not found$ngx_c" diff -r 0ff3dc9081a1 -r 434548349838 auto/types/uintptr_t --- a/auto/types/uintptr_t Tue Jul 30 17:27:55 2013 +0400 +++ b/auto/types/uintptr_t Wed Jul 31 18:16:40 2013 +0400 @@ -33,7 +33,7 @@ else echo $ngx_n " uintptr_t not found" $ngx_c fi -rm $NGX_AUTOTEST* +rm -rf $NGX_AUTOTEST* if [ $found = no ]; then From pluknet at nginx.com Wed Jul 31 14:37:54 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 31 Jul 2013 14:37:54 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/fa07bec738ac branches: changeset: 5310:fa07bec738ac user: Sergey Kandaurov date: Wed Jul 31 18:35:57 2013 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 434548349838 -r fa07bec738ac src/core/nginx.h --- a/src/core/nginx.h Wed Jul 31 18:16:40 2013 +0400 +++ b/src/core/nginx.h Wed Jul 31 18:35:57 2013 +0400 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1005003 -#define NGINX_VERSION "1.5.3" +#define nginx_version 1005004 +#define NGINX_VERSION "1.5.4" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From vbart at nginx.com Wed Jul 31 19:41:36 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 31 Jul 2013 19:41:36 +0000 Subject: [nginx] MIME: use "application/javascript" for .js files. Message-ID: details: http://hg.nginx.org/nginx/rev/ae3fd1ca62e0 branches: changeset: 5311:ae3fd1ca62e0 user: Valentin Bartenev date: Wed Jul 31 23:40:46 2013 +0400 description: MIME: use "application/javascript" for .js files. 
Though there are several MIME types commonly used for JavaScript nowadays, the most common being "text/javascript", "application/javascript", and currently used by nginx "application/x-javascript", RFC 4329 prefers "application/javascript". The "charset_types" directive's default value was adjusted accordingly. diffstat: conf/mime.types | 2 +- src/http/modules/ngx_http_charset_filter_module.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diffs (24 lines): diff -r fa07bec738ac -r ae3fd1ca62e0 conf/mime.types --- a/conf/mime.types Wed Jul 31 18:35:57 2013 +0400 +++ b/conf/mime.types Wed Jul 31 23:40:46 2013 +0400 @@ -5,7 +5,7 @@ types { text/xml xml; image/gif gif; image/jpeg jpeg jpg; - application/x-javascript js; + application/javascript js; application/atom+xml atom; application/rss+xml rss; diff -r fa07bec738ac -r ae3fd1ca62e0 src/http/modules/ngx_http_charset_filter_module.c --- a/src/http/modules/ngx_http_charset_filter_module.c Wed Jul 31 18:35:57 2013 +0400 +++ b/src/http/modules/ngx_http_charset_filter_module.c Wed Jul 31 23:40:46 2013 +0400 @@ -128,7 +128,7 @@ ngx_str_t ngx_http_charset_default_type ngx_string("text/xml"), ngx_string("text/plain"), ngx_string("text/vnd.wap.wml"), - ngx_string("application/x-javascript"), + ngx_string("application/javascript"), ngx_string("application/rss+xml"), ngx_null_string }; From jzefip at gmail.com Wed Jul 31 21:33:03 2013 From: jzefip at gmail.com (Julien Zefi) Date: Wed, 31 Jul 2013 15:33:03 -0600 Subject: Looking for developer to fix a NginX test case module In-Reply-To: <20130730093454.GB2130@mdounin.ru> References: <20130726133136.GP90722@mdounin.ru> <20130730093454.GB2130@mdounin.ru> Message-ID: On Tue, Jul 30, 2013 at 3:34 AM, Maxim Dounin wrote: > Hello! > > On Mon, Jul 29, 2013 at 07:07:10PM -0600, Julien Zefi wrote: > > > hi Maxim, > > > > thanks so much for the code provided, i have merged that code in my > module > > and it worked as expected!. 
Would you please send me the details to send
> > you the money ?
>
> Please use donations form here:
>
> http://nginx.org/en/donation.html
>

Thanks, I will be transferring the money this Friday.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: