From jfhamlin at cotap.com Mon Mar 2 02:24:28 2015 From: jfhamlin at cotap.com (James Hamlin) Date: Sun, 1 Mar 2015 18:24:28 -0800 Subject: SSL+ProxyProtocol: Fix connection hang when a header-only packet is received Message-ID: # HG changeset patch # User James Hamlin # Date 1425260813 28800 # Sun Mar 01 17:46:53 2015 -0800 # Branch fix-deferred-with-proxy-protocol # Node ID 3835928c9e046bab0f6bc8d35d3ede468b6a07ce # Parent 6a7c6973d6fc3b628b38e000f0ed192c99bdfc49 SSL+ProxyProtocol: Fix conn. hang when header-only packet received This is a fix for a bug exposed when using deferred accept, SSL, and the proxy protocol. When accept deferral is enabled (the "deferred" option on "listen" directives), the "ready" bit is preemptively set on the connection's "read" event. If the data first received contains _only_ the proxy protocol header, then the "ready" bit will not be cleared by the call to ngx_recv(), since the call does not attempt to read more than the header itself. If the first byte from the client has not been received by the time the posted event is run, the call to ngx_handle_read_event will do nothing, as "ready" will still be set, and the connection will time out despite later receipt of the bytes. The fix is to clear the "ready" bit from within ngx_http_ssl_handshake when it is known that only the header was available. This is not a problem when using KQUEUE, as the "ready" bit is cleared based on available byte tracking. diff -r 6a7c6973d6fc -r 3835928c9e04 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Fri Feb 27 16:28:31 2015 +0300 +++ b/src/http/ngx_http_request.c Sun Mar 01 17:46:53 2015 -0800 @@ -691,6 +691,12 @@ c->log->action = "SSL handshaking"; if (n == (ssize_t) size) { +#if (NGX_HAVE_KQUEUE) + if ((ngx_event_flags & NGX_USE_KQUEUE_EVENT) == 0) +#endif + { + rev->ready = 0; + } ngx_post_event(rev, &ngx_posted_events); return; } From tigran.bayburtsyan at gmail.com Mon Mar 2 09:07:45 2015 From: tigran.bayburtsyan at gmail.com (Tigran Bayburtsyan) Date: Mon, 2 Mar 2015 13:07:45 +0400 Subject: Get ngx_http_request_t as a char array In-Reply-To: <20150227122344.GC19012@mdounin.ru> References: <1e1801d0527c$1562e750$4028b5f0$@gmail.com> <20150227122344.GC19012@mdounin.ru> Message-ID: <002001d054c8$59f20510$0dd60f30$@gmail.com> Hi All. I've asked this question, and didn't get any solution for this. So I decided to write my own module to solve my own problem :) Checkout my module https://github.com/flaxtonio/nginx-flaxton-logger-module , and let me know if you will have some suggestions for it. Maxim Dounin. Thanks for your response. To transfer Request into a char * array I've crated functionality for it in this module. Let me know if it works wrong for you. Thanks. -----Original Message----- From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: Friday, February 27, 2015 4:24 PM To: nginx-devel at nginx.org Subject: Re: Get ngx_http_request_t as a char array Hello! On Fri, Feb 27, 2015 at 02:56:46PM +0400, Tigran Bayburtsyan wrote: > Hi. > > I'm trying to make a smart logging module for Nginx and I need to get > all HTTP request from client as a string (char *). > > I know that ngx_http_request_t contains all HTTP request data , but I > don't need to make a loop through all headers_in parameters or request > structure parameters. > > I want to get all request with body as a char * array, like Nginx is > receiving from tcp socket. 
There are two problems here: - nginx is not receiving a request as a string from tcp socket, even if you talk about headers only; - consequently, it is not available as a string within nginx. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel --- This email has been checked for viruses by Avast antivirus software. http://www.avast.com From info at phpgangsta.de Mon Mar 2 12:12:44 2015 From: info at phpgangsta.de (Michael Kliewe) Date: Mon, 2 Mar 2015 13:12:44 +0100 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <54EDEAE2.80904@phpgangsta.de> References: <51fd90f96449c23af007.1394099969@HPC> <20140306162718.GL34696@mdounin.ru> <877FD2F6-57CD-4C14-9F2B-4C9E909C3488@phpgangsta.de> <53D9AAB0.5060501@phpgangsta.de> <20140801185919.GU1849@mdounin.ru> <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> Message-ID: <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> Hi Maxim, with your changes there is a problem: nginx now just sends the header if the connection is encrypted. If the connection is not encrypted, then there is no header sent to the auth script. In the auth script I cannot distinguish between "user did not use encryption" and "nginx doesn't have the feature" (because of mixed nginx versions). With the original version of the patch this was possible. Kind regards Michael On Feb 25, 2015, at 4:31 PM, Michael Kliewe wrote: > Hi Maxim, > > thank you very much, that helps a lot! Then we can use the unpatched nginx version again instead of self-compiling it every time ;-) > > Michael > > Am 25.02.2015 um 16:28 schrieb Maxim Dounin: >> Hello! >> >> On Thu, Feb 05, 2015 at 04:00:28PM +0300, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Wed, Feb 04, 2015 at 10:07:50PM +0100, Michael Kliewe wrote: >>> >>>> Hi Maxim, >>>> >>>> I would like to remind again this feature patch. It would help a lot to get >>>> this information about transport encryption into the auth script. It does >>>> not hurt the performance, and is a very tiny patch. >>>> >>>> You can rename the header name and values as you like. It would be very nice >>>> if you could please merge it into nginx. >>> I'm planning to look into this patch and other mail SSL >>> improvements once I've done with unbuffered upload feature I'm >>> currently working on. >> Just an update: a patch to address this was committed, see >> http://hg.nginx.org/nginx/rev/3b3f789655dc. >> >> Thanks Filipe for the original patch, and thanks Michael for >> prodding this. 
>> > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From fdasilvayy at gmail.com Mon Mar 2 13:12:48 2015 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Mon, 2 Mar 2015 14:12:48 +0100 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> References: <51fd90f96449c23af007.1394099969@HPC> <20140306162718.GL34696@mdounin.ru> <877FD2F6-57CD-4C14-9F2B-4C9E909C3488@phpgangsta.de> <53D9AAB0.5060501@phpgangsta.de> <20140801185919.GU1849@mdounin.ru> <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> Message-ID: 2015-03-02 13:12 GMT+01:00 Michael Kliewe : > Hi Maxim, > > with your changes there is a problem: > nginx now just sends the header if the connection is encrypted. If the connection is not encrypted, then there is no header sent to the auth script. > In the auth script I cannot distinguish between "user did not use encryption" and "nginx doesn't have the feature" (because of mixed nginx versions). > With the original version of the patch this was possible. > > Kind regards > Michael > > On Feb 25, 2015, at 4:31 PM, Michael Kliewe wrote: > >> Hi Maxim, >> >> thank you very much, that helps a lot! Then we can use the unpatched nginx version again instead of self-compiling it every time ;-) >> >> Michael >> >> Am 25.02.2015 um 16:28 schrieb Maxim Dounin: >>> Hello! >>> >>> On Thu, Feb 05, 2015 at 04:00:28PM +0300, Maxim Dounin wrote: >>> >>>> Hello! >>>> >>>> On Wed, Feb 04, 2015 at 10:07:50PM +0100, Michael Kliewe wrote: >>>> >>>>> Hi Maxim, >>>>> >>>>> I would like to remind again this feature patch. It would help a lot to get >>>>> this information about transport encryption into the auth script. It does >>>>> not hurt the performance, and is a very tiny patch. >>>>> >>>>> You can rename the header name and values as you like. It would be very nice >>>>> if you could please merge it into nginx. >>>> I'm planning to look into this patch and other mail SSL >>>> improvements once I've done with unbuffered upload feature I'm >>>> currently working on. >>> Just an update: a patch to address this was committed, see >>> http://hg.nginx.org/nginx/rev/3b3f789655dc. >>> >>> Thanks Filipe for the original patch, and thanks Michael for >>> prodding this. >>> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Mon Mar 2 14:14:52 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 17:14:52 +0300 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> References: <877FD2F6-57CD-4C14-9F2B-4C9E909C3488@phpgangsta.de> <53D9AAB0.5060501@phpgangsta.de> <20140801185919.GU1849@mdounin.ru> <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> Message-ID: <20150302141452.GT19012@mdounin.ru> Hello! 
On Mon, Mar 02, 2015 at 01:12:44PM +0100, Michael Kliewe wrote: > with your changes there is a problem: > nginx now just sends the header if the connection is encrypted. > If the connection is not encrypted, then there is no header sent > to the auth script. > In the auth script I cannot distinguish between "user did not > use encryption" and "nginx doesn't have the feature" (because of > mixed nginx versions). > With the original version of the patch this was possible. Try updating all your nginx instances before using the header for something limiting, it is expected to resolve your problem. Either way, the only safe thing to do if "nginx doesn't have the feature" is to assume there is no SSL if SSL matters. And that's what current behaviour encourages. -- Maxim Dounin http://nginx.org/ From info at phpgangsta.de Mon Mar 2 14:32:03 2015 From: info at phpgangsta.de (Michael Kliewe) Date: Mon, 2 Mar 2015 15:32:03 +0100 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <20150302141452.GT19012@mdounin.ru> References: <877FD2F6-57CD-4C14-9F2B-4C9E909C3488@phpgangsta.de> <53D9AAB0.5060501@phpgangsta.de> <20140801185919.GU1849@mdounin.ru> <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> Message-ID: Hi Maxim, On Mar 2, 2015, at 3:14 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 02, 2015 at 01:12:44PM +0100, Michael Kliewe wrote: > >> with your changes there is a problem: >> nginx now just sends the header if the connection is encrypted. >> If the connection is not encrypted, then there is no header sent >> to the auth script. >> In the auth script I cannot distinguish between "user did not >> use encryption" and "nginx doesn't have the feature" (because of >> mixed nginx versions). >> With the original version of the patch this was possible. > > Try updating all your nginx instances before using the header for > something limiting, it is expected to resolve your problem. > > Either way, the only safe thing to do if "nginx doesn't have the > feature" is to assume there is no SSL if SSL matters. And that's > what current behaviour encourages. You are kind of right, but currently I'm distinguishing between "encrypted", "not-encrypted" and "unknown", because we have different versions of nginx in different setups. I cannot update all nginx versions in parallel in all setups. That's why your tip does not help me ;-/ I need to distinguish between "not-encrypted" and "unknown", because I want to warn all users still using not-encrypted connections. With your patch I cannot distinguish between them, and would send false warnings... Would it be complicated to send "Auth-SSL: off" in case there was no encryption? It's just one "else" more, and solves all problems. else { b->last = ngx_cpymem(b->last, "Auth-SSL: off" CRLF, sizeof("Auth-SSL: off" CRLF) - 1); } That would really help me, and would replace the old patch from Filipe that I'm using since 6 months (which also sends the header in case there is no encryption)... 
Thanks Michael From mdounin at mdounin.ru Mon Mar 2 14:56:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 17:56:55 +0300 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: References: <53D9AAB0.5060501@phpgangsta.de> <20140801185919.GU1849@mdounin.ru> <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> Message-ID: <20150302145655.GV19012@mdounin.ru> Hello! On Mon, Mar 02, 2015 at 03:32:03PM +0100, Michael Kliewe wrote: > Hi Maxim, > > On Mar 2, 2015, at 3:14 PM, Maxim Dounin wrote: > > > Hello! > > > > On Mon, Mar 02, 2015 at 01:12:44PM +0100, Michael Kliewe > > wrote: > > > >> with your changes there is a problem: > >> nginx now just sends the header if the connection is > >> encrypted. If the connection is not encrypted, then there is > >> no header sent to the auth script. > >> In the auth script I cannot distinguish between "user did not > >> use encryption" and "nginx doesn't have the feature" (because > >> of mixed nginx versions). > >> With the original version of the patch this was possible. > > > > Try updating all your nginx instances before using the header > > for something limiting, it is expected to resolve your > > problem. > > > > Either way, the only safe thing to do if "nginx doesn't have > > the feature" is to assume there is no SSL if SSL matters. And > > that's what current behaviour encourages. > > You are kind of right, but currently I'm distinguishing between > "encrypted", "not-encrypted" and "unknown", because we have > different versions of nginx in different setups. I cannot update > all nginx versions in parallel in all setups. That's why your > tip does not help me ;-/ > I need to distinguish between "not-encrypted" and "unknown", > because I want to warn all users still using not-encrypted > connections. With your patch I cannot distinguish between them, > and would send false warnings... So switch off warnings till the update is complete. That's an easy way to go. Alternatively, you may use the "auth_http_header" directive (http://nginx.org/r/auth_http_header) to distinguish between various installations. > Would it be complicated to send "Auth-SSL: off" in case there > was no encryption? It's just one "else" more, and solves all > problems. You are trying to solve your particular deployment problem by introducing the flag which will be here for all users forever. This doesn't looks like a good solution to me. -- Maxim Dounin http://nginx.org/ From arut at nginx.com Mon Mar 2 15:42:39 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 02 Mar 2015 15:42:39 +0000 Subject: [nginx] Upstream hash: speedup consistent hash init. Message-ID: details: http://hg.nginx.org/nginx/rev/435ee290c2e1 branches: changeset: 5991:435ee290c2e1 user: Roman Arutyunyan date: Mon Mar 02 18:41:29 2015 +0300 description: Upstream hash: speedup consistent hash init. Repeatedly calling ngx_http_upstream_add_chash_point() to create the points array in sorted order, is O(n^2) to the total weight. This can cause nginx startup and reconfigure to be substantially delayed. For example, when total weight is 1000, startup takes 5s on a modern laptop. Replace this with a linear insertion followed by QuickSort and duplicates removal. Startup for total weight of 1000 reduces to 40ms. Based on a patch by Wai Keen Woon. 
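For readers who do not want to dig through the module source, the sort-then-deduplicate pattern described above can be sketched outside of nginx roughly as follows. This is only an illustration: it uses libc qsort() and a stripped-down point_t instead of ngx_qsort() and ngx_http_upstream_chash_point_t, and the hash values are arbitrary.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* stripped-down stand-in for ngx_http_upstream_chash_point_t */
    typedef struct {
        uint32_t  hash;
    } point_t;

    static int
    cmp_points(const void *one, const void *two)
    {
        const point_t  *a = one, *b = two;

        /* do not use "a->hash - b->hash": it can wrap for 32-bit hashes */
        if (a->hash < b->hash) { return -1; }
        if (a->hash > b->hash) { return 1; }
        return 0;
    }

    int
    main(void)
    {
        point_t  points[] = { {42}, {7}, {42}, {1000}, {7}, {7} };
        size_t   i, j, n = sizeof(points) / sizeof(points[0]);

        /* points were appended unsorted in O(n); one sort is O(n log n) */
        qsort(points, n, sizeof(point_t), cmp_points);

        /* drop adjacent duplicates in place, keeping the first of each run */
        for (i = 0, j = 1; j < n; j++) {
            if (points[i].hash != points[j].hash) {
                points[++i] = points[j];
            }
        }
        n = i + 1;

        for (i = 0; i < n; i++) {
            printf("%u\n", (unsigned) points[i].hash);
        }

        return 0;
    }

Compared to inserting every point into an already-sorted array (one memmove per insertion, O(n^2) overall), this touches each point once and sorts once, which is where the 5s-to-40ms startup improvement mentioned above comes from.
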
diffstat: src/http/modules/ngx_http_upstream_hash_module.c | 52 ++++++++++++++--------- 1 files changed, 31 insertions(+), 21 deletions(-) diffs (85 lines): diff -r 6a7c6973d6fc -r 435ee290c2e1 src/http/modules/ngx_http_upstream_hash_module.c --- a/src/http/modules/ngx_http_upstream_hash_module.c Fri Feb 27 16:28:31 2015 +0300 +++ b/src/http/modules/ngx_http_upstream_hash_module.c Mon Mar 02 18:41:29 2015 +0300 @@ -49,8 +49,8 @@ static ngx_int_t ngx_http_upstream_get_h static ngx_int_t ngx_http_upstream_init_chash(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us); -static void ngx_http_upstream_add_chash_point( - ngx_http_upstream_chash_points_t *points, uint32_t hash, ngx_str_t *server); +static int ngx_libc_cdecl + ngx_http_upstream_chash_cmp_points(const void *one, const void *two); static ngx_uint_t ngx_http_upstream_find_chash_point( ngx_http_upstream_chash_points_t *points, uint32_t hash); static ngx_int_t ngx_http_upstream_init_chash_peer(ngx_http_request_t *r, @@ -360,12 +360,27 @@ ngx_http_upstream_init_chash(ngx_conf_t ngx_crc32_update(&hash, (u_char *) &prev_hash, sizeof(uint32_t)); ngx_crc32_final(hash); - ngx_http_upstream_add_chash_point(points, hash, &peer->server); + points->point[points->number].hash = hash; + points->point[points->number].server = server; + points->number++; prev_hash = hash; } } + ngx_qsort(points->point, + points->number, + sizeof(ngx_http_upstream_chash_point_t), + ngx_http_upstream_chash_cmp_points); + + for (i = 0, j = 1; j < points->number; j++) { + if (points->point[i].hash != points->point[j].hash) { + points->point[++i] = points->point[j]; + } + } + + points->number = i + 1; + hcf = ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_hash_module); hcf->points = points; @@ -373,28 +388,23 @@ ngx_http_upstream_init_chash(ngx_conf_t } -static void -ngx_http_upstream_add_chash_point(ngx_http_upstream_chash_points_t *points, - uint32_t hash, ngx_str_t *server) +static int ngx_libc_cdecl +ngx_http_upstream_chash_cmp_points(const void *one, const void *two) { - size_t size; - ngx_uint_t i; - ngx_http_upstream_chash_point_t *point; + ngx_http_upstream_chash_point_t *first = + (ngx_http_upstream_chash_point_t *) one; + ngx_http_upstream_chash_point_t *second = + (ngx_http_upstream_chash_point_t *) two; - i = ngx_http_upstream_find_chash_point(points, hash); - point = &points->point[i]; + if (first->hash < second->hash) { + return -1; - if (point->hash == hash) { - return; + } else if (first->hash > second->hash) { + return 1; + + } else { + return 0; } - - size = (points->number - i) * sizeof(ngx_http_upstream_chash_point_t); - - ngx_memmove(point + 1, point, size); - - points->number++; - point->hash = hash; - point->server = server; } From arut at nginx.com Mon Mar 2 15:46:40 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 2 Mar 2015 18:46:40 +0300 Subject: [PATCH] Upstream hash: speedup consistent hash init In-Reply-To: <54E5758F.8070009@onapp.com> References: <54E48088.5070203@onapp.com> <5FDA6B3E-63FC-4A05-A259-9B334923D151@nginx.com> <54E5758F.8070009@onapp.com> Message-ID: <20150302154640.GA4865@Romans-MacBook-Air.local> Hello! On Thu, Feb 19, 2015 at 01:33:03PM +0800, Wai Keen Woon wrote: > On 2/18/2015 8:49 PM, Roman Arutyunyan wrote: > >>Note that in the original implementation, if there are points > >>with duplicate hash, only the first is kept. In this change, all > >>are included. > >This is the intended behaviour. Consistent hash array is build over > >server entries, but not addresses resolved from them. 
Duplicate points > >are ignored since most likely they refer to multiple addresses of > >the same host. > I see. I could add a loop to remove duplicate hashes and maybe adjacent > points referencing the same server too. Or do you prefer to take some time > to look into it in more detail first? Thanks for your work. We have committed a slightly modified version of the patch. -- Roman Arutyunyan From arut at nginx.com Mon Mar 2 16:48:56 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 02 Mar 2015 16:48:56 +0000 Subject: [nginx] Cache: do not inherit last_modified and etag from stale ... Message-ID: details: http://hg.nginx.org/nginx/rev/174512857ccf branches: changeset: 5992:174512857ccf user: Roman Arutyunyan date: Mon Mar 02 19:47:13 2015 +0300 description: Cache: do not inherit last_modified and etag from stale response. When replacing a stale cache entry, its last_modified and etag could be inherited from the old entry if the response code is not 200 or 206. Moreover, etag could be inherited with any response code if it's missing in the new response. As a result, the cache entry is left with invalid last_modified or etag which could lead to broken revalidation. For example, when a file is deleted from backend, its last_modified is copied to the new 404 cache entry and is used later for revalidation. Once the old file appears again with its original timestamp, revalidation succeeds and the cached 404 response is sent to client instead of the file. The problem appeared with etags in 44b9ab7752e3 (1.7.3) and affected last_modified in 1573fc7875fa (1.7.9). diffstat: src/http/ngx_http_file_cache.c | 2 -- src/http/ngx_http_upstream.c | 7 +++++++ 2 files changed, 7 insertions(+), 2 deletions(-) diffs (30 lines): diff -r 435ee290c2e1 -r 174512857ccf src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Mon Mar 02 18:41:29 2015 +0300 +++ b/src/http/ngx_http_file_cache.c Mon Mar 02 19:47:13 2015 +0300 @@ -181,8 +181,6 @@ ngx_http_file_cache_new(ngx_http_request c->file.log = r->connection->log; c->file.fd = NGX_INVALID_FILE; - c->last_modified = -1; - return NGX_OK; } diff -r 435ee290c2e1 -r 174512857ccf src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Mar 02 18:41:29 2015 +0300 +++ b/src/http/ngx_http_upstream.c Mon Mar 02 19:47:13 2015 +0300 @@ -2635,7 +2635,14 @@ ngx_http_upstream_send_response(ngx_http if (u->headers_in.etag) { r->cache->etag = u->headers_in.etag->value; + + } else { + ngx_str_null(&r->cache->etag); } + + } else { + r->cache->last_modified = -1; + ngx_str_null(&r->cache->etag); } if (ngx_http_file_cache_set_header(r, u->buffer.start) != NGX_OK) { From arut at nginx.com Mon Mar 2 16:53:27 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 2 Mar 2015 19:53:27 +0300 Subject: cache revalidation bug In-Reply-To: <1A3D6C2F-F553-420A-A387-112DED7EDDB5@isix.nl> References: <1A3D6C2F-F553-420A-A387-112DED7EDDB5@isix.nl> Message-ID: <20150302165327.GB4865@Romans-MacBook-Air.local> Hello Jeffrey K, On Fri, Feb 27, 2015 at 09:16:27AM +0100, Jeffrey K. wrote: > I?m experiencing an issue that cached 404 responses are revalidated when the requested file are available again on the backend server with an older time stamp. Hereby the details of my issue. 
> > > > Nginx version/build details > > # nginx -V > nginx version: nginx/1.7.10 > built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) > TLS SNI support enabled > configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/tmp --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-aio_module --with-file-aio --with-http_spdy_module --with-debug > > > vhost is configured with > > server { > listen 80; > > server_name test.domain.tld; > set $origin backend.domain.tld; > > expires off; > > location / > { > client_max_body_size 0; > client_body_buffer_size 8k; > proxy_connect_timeout 60; > proxy_send_timeout 60; > proxy_read_timeout 60; > proxy_buffer_size 16k; > proxy_buffers 256 16k; > proxy_buffering on; > proxy_max_temp_file_size 1m; > proxy_ignore_client_abort on; > proxy_intercept_errors on; > proxy_next_upstream error timeout invalid_header; > > proxy_cache one; > proxy_cache_min_uses 1; > proxy_cache_lock off; > proxy_cache_lock_timeout 5s; > > proxy_cache_valid 200 302 301 1m; > proxy_cache_valid 404 5s; > proxy_cache_revalidate on; > > proxy_set_header Host $origin; > proxy_pass_header Set-Cookie; > > proxy_set_header Range ""; > proxy_set_header Request-Range ""; > proxy_set_header If-Range ""; > > proxy_cache_key "$scheme://$host$uri"; > proxy_pass http://$origin$uri; > proxy_redirect off; > } > } > > > > > Log format used > > ?$bytes_sent?$remote_addr?$msec?$status?$http_referer?$http_user_agent?$request_time?$request_method $request_uri $server_protocol?$server_port?$upstream_cache_status?$upstream_status?$upstream_response_time?$request_completion?$backend_server? > > > > Requesting non-existen file. 404 will be cached for 5 second > after 5 seconds its expires, file is fetched from backend that gives 404 again > > ?469?[remote.ip.addr]?1424963103.586?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?MISS?404?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963104.605?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963108.679?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963109.724?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?EXPIRED?404?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963110.742?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963114.815?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963115.860?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?EXPIRED?404?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963116.879?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963120.952?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? 
> > > [Placed file on backend server (move), file has timestamp of year 2013] > 404 expires, file is fetched from backend and cached > > ?49166?[remote.ip.addr]?1424963122.033?200?-?curl/7.35.0?0.063?GET /pica.jpg HTTP/1.1?80?EXPIRED?200?0.063?OK?[backend.server]? > ?49162?[remote.ip.addr]?1424963123.037?200?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?49162?[remote.ip.addr]?1424963181.036?200?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > > > File expires, revalidation is send to backend and cached file is updated > > ?49170?[remote.ip.addr]?1424963182.081?200?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?REVALIDATED?304?0.027?OK?[backend.server]? > ?49162?[remote.ip.addr]?1424963183.098?200?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?49162?[remote.ip.addr]?1424963242.103?200?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > > > [File is removed from backend] > File expires, file is fetched from backend that gives 404 > > ?469?[remote.ip.addr]?1424963243.148?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?EXPIRED?404?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963244.167?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963248.240?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963249.285?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?EXPIRED?404?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963250.304?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963254.376?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > > > [Placed file back on backend server (move), file has timestamp of year 2013] > 404 expires, revalidation is done, because times stamp of file is older then time the 404 was fetched/cached it revalidates?? > - bug? it should not revalidate 404, just expire and fetch actual file! > > ?469?[remote.ip.addr]?1424963255.421?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?REVALIDATED?304?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963256.439?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963260.514?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963261.559?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?REVALIDATED?304?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963262.577?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963266.650?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963267.697?404?-?curl/7.35.0?0.029?GET /pica.jpg HTTP/1.1?80?REVALIDATED?304?0.029?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963268.715?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963272.788?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > > > [File is removed from backend] > Now the File expires, file is fetched from backend that gives 404 (probably because timestamp is newer?) > > ?469?[remote.ip.addr]?1424963291.224?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?EXPIRED?404?0.027?OK?[backend.server]? 
> ?469?[remote.ip.addr]?1424963292.242?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963296.315?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963297.360?404?-?curl/7.35.0?0.027?GET /pica.jpg HTTP/1.1?80?EXPIRED?404?0.027?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963298.379?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > ?469?[remote.ip.addr]?1424963302.457?404?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > > > [Placed file back on backend server (copy), file has timestamp of current time > 404 expires, revalidation is done??, because times stamp of file is newer then time the 404 was fetched/cached it actually fetches the file > > ?49166?[remote.ip.addr]?1424963303.537?200?-?curl/7.35.0?0.059?GET /pica.jpg HTTP/1.1?80?EXPIRED?200?0.059?OK?[backend.server]? > ?49162?[remote.ip.addr]?1424963304.544?200?-?curl/7.35.0?0.000?GET /pica.jpg HTTP/1.1?80?HIT?-?-?OK?[backend.server]? > > > Regards, > > Jeffrey K. > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel Thanks for your report. We've just committed a fix for this. -- Roman Arutyunyan From mail at isix.nl Mon Mar 2 17:37:39 2015 From: mail at isix.nl (Jeffrey K.) Date: Mon, 2 Mar 2015 18:37:39 +0100 Subject: cache revalidation bug In-Reply-To: <20150302165327.GB4865@Romans-MacBook-Air.local> References: <1A3D6C2F-F553-420A-A387-112DED7EDDB5@isix.nl> <20150302165327.GB4865@Romans-MacBook-Air.local> Message-ID: <2956A6ED-E7CB-42BA-87F2-85F724C04958@isix.nl> Hello Roman, I?ve applied he fix to my build and can confirm that the problem has been solved. Thanks of the fix! > On 02 Mar 2015, at 17:53, Roman Arutyunyan wrote: > > Hello Jeffrey K, > > > Thanks for your report. > > We've just committed a fix for this. > > -- > Roman Arutyunyan > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From arut at nginx.com Mon Mar 2 18:28:25 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 02 Mar 2015 18:28:25 +0000 Subject: [nginx] SSL: reset ready flag if recv(MSG_PEEK) found no bytes i... Message-ID: details: http://hg.nginx.org/nginx/rev/5b549cc7f698 branches: changeset: 5993:5b549cc7f698 user: Roman Arutyunyan date: Mon Mar 02 21:15:46 2015 +0300 description: SSL: reset ready flag if recv(MSG_PEEK) found no bytes in socket. Previously, connection hung after calling ngx_http_ssl_handshake() with rev->ready set and no bytes in socket to read. It's possible in at least the following cases: - when processing a connection with expired TCP_DEFER_ACCEPT on Linux - after parsing PROXY protocol header if it arrived in a separate TCP packet Thanks to James Hamlin. 
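For context, the second case above corresponds to a front end that terminates SSL itself but sits behind a PROXY-protocol load balancer, with deferred accept enabled. A hypothetical configuration of that shape is sketched below; the names, addresses and certificate paths are placeholders, not taken from anyone's setup in this thread.

    server {
        # SSL, deferred accept and PROXY protocol on one listener
        listen 443 ssl deferred proxy_protocol;

        server_name          example.com;              # placeholder
        ssl_certificate      /path/to/cert.pem;        # placeholder
        ssl_certificate_key  /path/to/key.pem;         # placeholder

        # take the real client address from the PROXY protocol header
        # (needs the realip module compiled in)
        set_real_ip_from  10.0.0.0/8;                  # load balancer range, placeholder
        real_ip_header    proxy_protocol;

        location / {
            return 200 "ok\n";
        }
    }

With such a listener, if the load balancer sends the PROXY protocol header in its own packet and the TLS ClientHello arrives a moment later, the pre-patch code could leave rev->ready set with nothing left to read, and the handshake would stall until the timeout. The one-line change below resets rev->ready when recv(MSG_PEEK) finds nothing, so the read event is re-armed instead.
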
diffstat: src/http/ngx_http_request.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 174512857ccf -r 5b549cc7f698 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Mon Mar 02 19:47:13 2015 +0300 +++ b/src/http/ngx_http_request.c Mon Mar 02 21:15:46 2015 +0300 @@ -652,6 +652,7 @@ ngx_http_ssl_handshake(ngx_event_t *rev) if (n == -1) { if (err == NGX_EAGAIN) { + rev->ready = 0; if (!rev->timer_set) { ngx_add_timer(rev, c->listening->post_accept_timeout); From arut at nginx.com Mon Mar 2 18:36:03 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 2 Mar 2015 21:36:03 +0300 Subject: SSL+ProxyProtocol: Fix connection hang when a header-only packet is received In-Reply-To: References: Message-ID: <20150302183603.GC4865@Romans-MacBook-Air.local> Hello James, On Sun, Mar 01, 2015 at 06:24:28PM -0800, James Hamlin wrote: > # HG changeset patch > # User James Hamlin > # Date 1425260813 28800 > # Sun Mar 01 17:46:53 2015 -0800 > # Branch fix-deferred-with-proxy-protocol > # Node ID 3835928c9e046bab0f6bc8d35d3ede468b6a07ce > # Parent 6a7c6973d6fc3b628b38e000f0ed192c99bdfc49 > SSL+ProxyProtocol: Fix conn. hang when header-only packet received > > This is a fix for a bug exposed when using deferred accept, SSL, and the proxy > protocol. > > When accept deferral is enabled (the "deferred" option on "listen" > directives), the "ready" bit is preemptively set on the connection's "read" > event. If the data first received contains _only_ the proxy protocol header, > then the "ready" bit will not be cleared by the call to ngx_recv(), since the > call does not attempt to read more than the header itself. If the first byte > from the client has not been received by the time the posted event is run, the > call to ngx_handle_read_event will do nothing, as "ready" will still be set, > and the connection will time out despite later receipt of the bytes. > > The fix is to clear the "ready" bit from within ngx_http_ssl_handshake when > it is known that only the header was available. > > This is not a problem when using KQUEUE, as the "ready" bit is cleared based > on available byte tracking. > > diff -r 6a7c6973d6fc -r 3835928c9e04 src/http/ngx_http_request.c > --- a/src/http/ngx_http_request.c Fri Feb 27 16:28:31 2015 +0300 > +++ b/src/http/ngx_http_request.c Sun Mar 01 17:46:53 2015 -0800 > @@ -691,6 +691,12 @@ > c->log->action = "SSL handshaking"; > > if (n == (ssize_t) size) { > +#if (NGX_HAVE_KQUEUE) > + if ((ngx_event_flags & NGX_USE_KQUEUE_EVENT) == 0) > +#endif > + { > + rev->ready = 0; > + } > ngx_post_event(rev, &ngx_posted_events); > return; > } Thanks for reporting the issue. We've committed a slightly different solution for this. http://hg.nginx.org/nginx/rev/5b549cc7f698 -- Roman From mdounin at mdounin.ru Mon Mar 2 19:04:41 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 02 Mar 2015 19:04:41 +0000 Subject: [nginx] Upstream: avoid duplicate finalization. Message-ID: details: http://hg.nginx.org/nginx/rev/5abf5af257a7 branches: changeset: 5994:5abf5af257a7 user: Maxim Dounin date: Mon Mar 02 21:44:32 2015 +0300 description: Upstream: avoid duplicate finalization. A request may be already finalized when ngx_http_upstream_finalize_request() is called, due to filter finalization: after filter finalization upstream can be finalized via ngx_http_upstream_cleanup(), either from ngx_http_terminate_request(), or because a new request was initiated to an upstream. 
Then the upstream code will see an error returned from the filter chain and will call the ngx_http_upstream_finalize_request() function again. To prevent corruption of various upstream data in this situation, make sure to do nothing but merely call ngx_http_finalize_request(). Prodded by Yichun Zhang, for details see the thread at http://nginx.org/pipermail/nginx-devel/2015-February/006539.html. diffstat: src/http/ngx_http_upstream.c | 12 ++++++++---- 1 files changed, 8 insertions(+), 4 deletions(-) diffs (22 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3751,10 +3751,14 @@ ngx_http_upstream_finalize_request(ngx_h ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "finalize http upstream request: %i", rc); - if (u->cleanup) { - *u->cleanup = NULL; - u->cleanup = NULL; - } + if (u->cleanup == NULL) { + /* the request was already finalized */ + ngx_http_finalize_request(r, NGX_DONE); + return; + } + + *u->cleanup = NULL; + u->cleanup = NULL; if (u->resolved && u->resolved->ctx) { ngx_resolve_name_done(u->resolved->ctx); From mdounin at mdounin.ru Mon Mar 2 19:04:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 02 Mar 2015 19:04:44 +0000 Subject: [nginx] Upstream: upstream argument in ngx_http_upstream_process... Message-ID: details: http://hg.nginx.org/nginx/rev/5f179f344096 branches: changeset: 5995:5f179f344096 user: Maxim Dounin date: Mon Mar 02 21:44:42 2015 +0300 description: Upstream: upstream argument in ngx_http_upstream_process_request(). In case of filter finalization, r->upstream might be changed during the ngx_event_pipe() call. Added an argument to preserve it while calling the ngx_http_upstream_process_request() function. diffstat: src/http/ngx_http_upstream.c | 14 +++++++------- 1 files changed, 7 insertions(+), 7 deletions(-) diffs (45 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -76,7 +76,8 @@ static ngx_int_t ngx_http_upstream_non_b static void ngx_http_upstream_process_downstream(ngx_http_request_t *r); static void ngx_http_upstream_process_upstream(ngx_http_request_t *r, ngx_http_upstream_t *u); -static void ngx_http_upstream_process_request(ngx_http_request_t *r); +static void ngx_http_upstream_process_request(ngx_http_request_t *r, + ngx_http_upstream_t *u); static void ngx_http_upstream_store(ngx_http_request_t *r, ngx_http_upstream_t *u); static void ngx_http_upstream_dummy_handler(ngx_http_request_t *r, @@ -3349,7 +3350,7 @@ ngx_http_upstream_process_downstream(ngx } } - ngx_http_upstream_process_request(r); + ngx_http_upstream_process_request(r, u); } @@ -3417,18 +3418,17 @@ ngx_http_upstream_process_upstream(ngx_h } } - ngx_http_upstream_process_request(r); + ngx_http_upstream_process_request(r, u); } static void -ngx_http_upstream_process_request(ngx_http_request_t *r) +ngx_http_upstream_process_request(ngx_http_request_t *r, + ngx_http_upstream_t *u) { ngx_temp_file_t *tf; ngx_event_pipe_t *p; - ngx_http_upstream_t *u; - - u = r->upstream; + p = u->pipe; if (u->peer.connection) { From mdounin at mdounin.ru Mon Mar 2 19:09:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Mar 2015 22:09:20 +0300 Subject: [PATCH] Upstream: fixed $upstream_response_time for filter_finalize + error_page. In-Reply-To: References: <20150213150512.GG19012@mdounin.ru> Message-ID: <20150302190920.GL19012@mdounin.ru> Hello! 
On Sun, Feb 15, 2015 at 02:04:03PM -0800, Yichun Zhang (agentzh) wrote: > Hello! > > On Fri, Feb 13, 2015 at 7:05 AM, Maxim Dounin wrote: > > Rather, I would suggest something like this: > > > > --- a/src/http/ngx_http_upstream.c > > +++ b/src/http/ngx_http_upstream.c > > @@ -3744,10 +3744,13 @@ ngx_http_upstream_finalize_request(ngx_h > > ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > > "finalize http upstream request: %i", rc); > > > > - if (u->cleanup) { > > - *u->cleanup = NULL; > > - u->cleanup = NULL; > > - } > > + if (u->cleanup == NULL) { > > + /* the request was already finalized */ > > + ngx_http_finalize_request(r, NGX_DONE); > + return > > + } > > + > > + *u->cleanup = NULL; > > + u->cleanup = NULL; > > > > This patch works for me and yeah it's better. Will you commit it? I've committed this and another patch related to filter finalization, see here: http://hg.nginx.org/nginx/rev/5abf5af257a7 http://hg.nginx.org/nginx/rev/5f179f344096 In the particular case you've described in the commit log of your patch, I would also recommend to avoid using filter finalization. When in header filter, it should be enough to just return appropriate code instead. Filter finalization is needed when working with a response body, not headers. -- Maxim Dounin http://nginx.org/ From tigran.bayburtsyan at gmail.com Mon Mar 2 19:11:40 2015 From: tigran.bayburtsyan at gmail.com (Tigran Bayburtsyan) Date: Mon, 2 Mar 2015 23:11:40 +0400 Subject: Control workers in module Message-ID: <54f4b5f9.21eac20a.3b59.ffffa3fd@mx.google.com> Hi all. I have a quick question. Is it possible to create new worker and kill existing one from Nginx module ? Let me know if anyone can explain. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfhamlin at cotap.com Mon Mar 2 19:27:42 2015 From: jfhamlin at cotap.com (James Hamlin) Date: Mon, 2 Mar 2015 11:27:42 -0800 Subject: SSL+ProxyProtocol: Fix connection hang when a header-only packet is received In-Reply-To: <20150302183603.GC4865@Romans-MacBook-Air.local> References: <20150302183603.GC4865@Romans-MacBook-Air.local> Message-ID: Hi, Roman, On Mon, Mar 2, 2015 at 10:36 AM, Roman Arutyunyan wrote: > Hello James, > > On Sun, Mar 01, 2015 at 06:24:28PM -0800, James Hamlin wrote: >> # HG changeset patch >> # User James Hamlin >> # Date 1425260813 28800 >> # Sun Mar 01 17:46:53 2015 -0800 >> # Branch fix-deferred-with-proxy-protocol >> # Node ID 3835928c9e046bab0f6bc8d35d3ede468b6a07ce >> # Parent 6a7c6973d6fc3b628b38e000f0ed192c99bdfc49 >> SSL+ProxyProtocol: Fix conn. hang when header-only packet received >> >> This is a fix for a bug exposed when using deferred accept, SSL, and the proxy >> protocol. >> >> When accept deferral is enabled (the "deferred" option on "listen" >> directives), the "ready" bit is preemptively set on the connection's "read" >> event. If the data first received contains _only_ the proxy protocol header, >> then the "ready" bit will not be cleared by the call to ngx_recv(), since the >> call does not attempt to read more than the header itself. If the first byte >> from the client has not been received by the time the posted event is run, the >> call to ngx_handle_read_event will do nothing, as "ready" will still be set, >> and the connection will time out despite later receipt of the bytes. >> >> The fix is to clear the "ready" bit from within ngx_http_ssl_handshake when >> it is known that only the header was available. 
>> >> This is not a problem when using KQUEUE, as the "ready" bit is cleared based >> on available byte tracking. >> >> diff -r 6a7c6973d6fc -r 3835928c9e04 src/http/ngx_http_request.c >> --- a/src/http/ngx_http_request.c Fri Feb 27 16:28:31 2015 +0300 >> +++ b/src/http/ngx_http_request.c Sun Mar 01 17:46:53 2015 -0800 >> @@ -691,6 +691,12 @@ >> c->log->action = "SSL handshaking"; >> >> if (n == (ssize_t) size) { >> +#if (NGX_HAVE_KQUEUE) >> + if ((ngx_event_flags & NGX_USE_KQUEUE_EVENT) == 0) >> +#endif >> + { >> + rev->ready = 0; >> + } >> ngx_post_event(rev, &ngx_posted_events); >> return; >> } > > Thanks for reporting the issue. > We've committed a slightly different solution for this. Thanks for the quick resolution with the improved fix! I wasn't aware of TCP_DEFER_ACCEPT expiration; very good to know. Cheers, James From agentzh at gmail.com Mon Mar 2 19:37:33 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 2 Mar 2015 11:37:33 -0800 Subject: [PATCH] Upstream: fixed $upstream_response_time for filter_finalize + error_page. In-Reply-To: <20150302190920.GL19012@mdounin.ru> References: <20150213150512.GG19012@mdounin.ru> <20150302190920.GL19012@mdounin.ru> Message-ID: Hi Maxim On Mon, Mar 2, 2015 at 11:09 AM, Maxim Dounin wrote: > I've committed this and another patch related to filter > finalization, see here: > > http://hg.nginx.org/nginx/rev/5abf5af257a7 > http://hg.nginx.org/nginx/rev/5f179f344096 > Great. Thanks! > In the particular case you've described in the commit log of your > patch, I would also recommend to avoid using filter finalization. > When in header filter, it should be enough to just return > appropriate code instead. Filter finalization is needed when > working with a response body, not headers. > Oh I was not aware of that. Thanks for the suggestion and clarification! I'll adjust my module code accordingly :) Thanks! -agentzh From mdounin at mdounin.ru Mon Mar 2 22:30:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 02 Mar 2015 22:30:34 +0000 Subject: [nginx] Style. Message-ID: details: http://hg.nginx.org/nginx/rev/ab660d7c9980 branches: changeset: 5996:ab660d7c9980 user: Maxim Dounin date: Tue Mar 03 01:15:21 2015 +0300 description: Style. Noted by Ruslan Ermilov. diffstat: src/http/ngx_http_upstream.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3426,8 +3426,8 @@ static void ngx_http_upstream_process_request(ngx_http_request_t *r, ngx_http_upstream_t *u) { - ngx_temp_file_t *tf; - ngx_event_pipe_t *p; + ngx_temp_file_t *tf; + ngx_event_pipe_t *p; p = u->pipe; From alex at cooperi.net Tue Mar 3 04:49:44 2015 From: alex at cooperi.net (Alex Wilson) Date: Tue, 3 Mar 2015 14:49:44 +1000 Subject: [PATCH] set $https for use behind SSL-stripping load-balancer In-Reply-To: <556B82D2-1214-4F53-9424-8EA18BAB65B1@cooperi.net> References: <556B82D2-1214-4F53-9424-8EA18BAB65B1@cooperi.net> Message-ID: <476A2E60-3DA6-4598-8AB2-DA9D5CE8002D@cooperi.net> > On 28 Jan 2015, at 2:35 pm, Alex Wilson wrote: > > Currently when using nginx behind an SSL-stripping load-balancer, there is no way to control the scheme used when generating directory redirects?. > Has anyone got any feedback for me on this patch by any chance? Is this something that?s just not wanted, or is my code terrible, or?? Sorry to be a nag, I was hoping to at least get a ?no, go away? 
:) From mdounin at mdounin.ru Tue Mar 3 12:01:01 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Mar 2015 15:01:01 +0300 Subject: [PATCH] set $https for use behind SSL-stripping load-balancer In-Reply-To: <476A2E60-3DA6-4598-8AB2-DA9D5CE8002D@cooperi.net> References: <556B82D2-1214-4F53-9424-8EA18BAB65B1@cooperi.net> <476A2E60-3DA6-4598-8AB2-DA9D5CE8002D@cooperi.net> Message-ID: <20150303120101.GX19012@mdounin.ru> Hello! On Tue, Mar 03, 2015 at 02:49:44PM +1000, Alex Wilson wrote: > > > On 28 Jan 2015, at 2:35 pm, Alex Wilson wrote: > > > > Currently when using nginx behind an SSL-stripping load-balancer, there is no way to control the scheme used when generating directory redirects?. > > > > Has anyone got any feedback for me on this patch by any chance? Is this something that?s just not wanted, or is my code terrible, or?? > > Sorry to be a nag, I was hoping to at least get a ?no, go away? :) While the problem you are trying to solve is clear enough, I can't say I like the patch suggested. If we are going to address this, I would prefer something in line with server_name_in_redirect (may be something like "relative_redirects" will do the trick, it's allowed as per RFC7231 now) and/or realip module. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Mar 3 13:12:13 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 03 Mar 2015 13:12:13 +0000 Subject: [nginx] Refactored ngx_linux_sendfile_chain() even more. Message-ID: details: http://hg.nginx.org/nginx/rev/c901f2764c27 branches: changeset: 5997:c901f2764c27 user: Valentin Bartenev date: Fri Feb 27 19:19:08 2015 +0300 description: Refactored ngx_linux_sendfile_chain() even more. The code that calls sendfile() was cut into a separate function. This simplifies EINTR processing, yet is needed for the following changes that add threads support. 
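As a standalone illustration of what "simplifies EINTR processing" means here, the helper's job can be sketched with the plain Linux sendfile(2) call outside of nginx: retry immediately when a signal interrupts the call, report "try again later" on EAGAIN, and treat anything else as a hard error. The wrapper name and the SF_* return conventions below are invented for the sketch; the real function uses NGX_AGAIN/NGX_ERROR and nginx's logging.

    #include <errno.h>
    #include <sys/sendfile.h>
    #include <sys/types.h>

    #define SF_AGAIN  (-2)   /* socket buffer full, come back later */
    #define SF_ERROR  (-1)   /* hard error, caller should close */

    static ssize_t
    sendfile_retry(int sock, int fd, off_t *offset, size_t size)
    {
        ssize_t  n;

        for ( ;; ) {
            n = sendfile(sock, fd, offset, size);

            if (n >= 0) {
                return n;             /* bytes actually queued to the socket */
            }

            switch (errno) {
            case EINTR:
                continue;             /* interrupted by a signal: just retry */

            case EAGAIN:
                return SF_AGAIN;

            default:
                return SF_ERROR;
            }
        }
    }

Keeping this loop inside one small function is what lets the caller in ngx_linux_sendfile_chain() drop the old eintr flag and the "send = prev_send; continue;" dance.
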
diffstat: src/os/unix/ngx_linux_sendfile_chain.c | 106 ++++++++++++++++++-------------- 1 files changed, 60 insertions(+), 46 deletions(-) diffs (155 lines): diff -r ab660d7c9980 -r c901f2764c27 src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Tue Mar 03 01:15:21 2015 +0300 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Fri Feb 27 19:19:08 2015 +0300 @@ -10,6 +10,10 @@ #include +static ssize_t ngx_linux_sendfile(ngx_connection_t *c, ngx_buf_t *file, + size_t size); + + /* * On Linux up to 2.4.21 sendfile() (syscall #187) works with 32-bit * offsets only, and the including breaks the compiling, @@ -36,16 +40,10 @@ ngx_linux_sendfile_chain(ngx_connection_ ssize_t n; ngx_err_t err; ngx_buf_t *file; - ngx_uint_t eintr; ngx_event_t *wev; ngx_chain_t *cl; ngx_iovec_t header; struct iovec headers[NGX_IOVS_PREALLOCATE]; -#if (NGX_HAVE_SENDFILE64) - off_t offset; -#else - int32_t offset; -#endif wev = c->write; @@ -67,7 +65,6 @@ ngx_linux_sendfile_chain(ngx_connection_ header.nalloc = NGX_IOVS_PREALLOCATE; for ( ;; ) { - eintr = 0; prev_send = send; /* create the iovec and coalesce the neighbouring bufs */ @@ -161,43 +158,13 @@ ngx_linux_sendfile_chain(ngx_connection_ return NGX_CHAIN_ERROR; } #endif -#if (NGX_HAVE_SENDFILE64) - offset = file->file_pos; -#else - offset = (int32_t) file->file_pos; -#endif + n = ngx_linux_sendfile(c, file, file_size); - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "sendfile: @%O %uz", file->file_pos, file_size); - - n = sendfile(c->fd, file->file->fd, &offset, file_size); - - if (n == -1) { - err = ngx_errno; - - switch (err) { - case NGX_EAGAIN: - break; - - case NGX_EINTR: - eintr = 1; - break; - - default: - wev->error = 1; - ngx_connection_error(c, err, "sendfile() failed"); - return NGX_CHAIN_ERROR; - } - - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, - "sendfile() is not ready"); + if (n == NGX_ERROR) { + return NGX_CHAIN_ERROR; } - sent = n > 0 ? n : 0; - - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, - "sendfile: %z, @%O %O:%uz", - n, file->file_pos, sent, file_size); + sent = (n == NGX_AGAIN) ? 
0 : n; } else { n = ngx_writev(c, &header); @@ -213,11 +180,6 @@ ngx_linux_sendfile_chain(ngx_connection_ in = ngx_chain_update_sent(in, sent); - if (eintr) { - send = prev_send; - continue; - } - if (send - prev_send != sent) { wev->ready = 0; return in; @@ -228,3 +190,55 @@ ngx_linux_sendfile_chain(ngx_connection_ } } } + + +static ssize_t +ngx_linux_sendfile(ngx_connection_t *c, ngx_buf_t *file, size_t size) +{ +#if (NGX_HAVE_SENDFILE64) + off_t offset; +#else + int32_t offset; +#endif + ssize_t n; + ngx_err_t err; + +#if (NGX_HAVE_SENDFILE64) + offset = file->file_pos; +#else + offset = (int32_t) file->file_pos; +#endif + +eintr: + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "sendfile: @%O %uz", file->file_pos, size); + + n = sendfile(c->fd, file->file->fd, &offset, size); + + if (n == -1) { + err = ngx_errno; + + switch (err) { + case NGX_EAGAIN: + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, + "sendfile() is not ready"); + return NGX_AGAIN; + + case NGX_EINTR: + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, + "sendfile() was interrupted"); + goto eintr; + + default: + c->write->error = 1; + ngx_connection_error(c, err, "sendfile() failed"); + return NGX_ERROR; + } + } + + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, "sendfile: %z of %uz @%O", + n, size, file->file_pos); + + return n; +} From info at phpgangsta.de Tue Mar 3 14:14:50 2015 From: info at phpgangsta.de (Michael Kliewe) Date: Tue, 3 Mar 2015 15:14:50 +0100 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <20150302145655.GV19012@mdounin.ru> References: <53D9AAB0.5060501@phpgangsta.de> <20140801185919.GU1849@mdounin.ru> <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> <20150302145655.GV19012@mdounin.ru> Message-ID: Hi again, On Mar 2, 2015, at 3:56 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 02, 2015 at 03:32:03PM +0100, Michael Kliewe wrote: > >> Hi Maxim, >> >> On Mar 2, 2015, at 3:14 PM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Mon, Mar 02, 2015 at 01:12:44PM +0100, Michael Kliewe >>> wrote: >>> >>>> with your changes there is a problem: >>>> nginx now just sends the header if the connection is >>>> encrypted. If the connection is not encrypted, then there is >>>> no header sent to the auth script. >>>> In the auth script I cannot distinguish between "user did not >>>> use encryption" and "nginx doesn't have the feature" (because >>>> of mixed nginx versions). >>>> With the original version of the patch this was possible. >>> >>> Try updating all your nginx instances before using the header >>> for something limiting, it is expected to resolve your >>> problem. >>> >>> Either way, the only safe thing to do if "nginx doesn't have >>> the feature" is to assume there is no SSL if SSL matters. And >>> that's what current behaviour encourages. >> >> You are kind of right, but currently I'm distinguishing between >> "encrypted", "not-encrypted" and "unknown", because we have >> different versions of nginx in different setups. I cannot update >> all nginx versions in parallel in all setups. That's why your >> tip does not help me ;-/ >> I need to distinguish between "not-encrypted" and "unknown", >> because I want to warn all users still using not-encrypted >> connections. With your patch I cannot distinguish between them, >> and would send false warnings... 
> > So switch off warnings till the update is complete. That's an > easy way to go. > > Alternatively, you may use the "auth_http_header" directive > (http://nginx.org/r/auth_http_header) to distinguish between > various installations. I'm sorry, I don't really want to repeat my arguments, but as I said I don't have control over all nginx servers that are used. Some will be "older", some will be newer. And I cannot force "them" to introduce the auth_http_header to just send the nginx version or capability of sending Auth-SSL header or not... Filipe's patch is working fine since > 6 month, it's either sending 0 or 1. The 0 is an important information and should not be dropped. Can you tell me the disadvantage of sending "off" in case the connection is unencrypted? I don't really see the problem at the moment why you don't add the else branch, you are dropping information that is needed (and that was there in the original patch)... It's just 3 lines more code and doesn't hurt anybody, but provides important information to the auth script. Kind regards Michael From ru at nginx.com Tue Mar 3 15:10:19 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 03 Mar 2015 15:10:19 +0000 Subject: [nginx] Events: simplified ngx_event_aio_t definition. Message-ID: details: http://hg.nginx.org/nginx/rev/ea58dfd07782 branches: changeset: 5998:ea58dfd07782 user: Ruslan Ermilov date: Tue Mar 03 18:09:13 2015 +0300 description: Events: simplified ngx_event_aio_t definition. No functional changes. diffstat: src/event/ngx_event.h | 6 ++---- 1 files changed, 2 insertions(+), 4 deletions(-) diffs (17 lines): diff -r c901f2764c27 -r ea58dfd07782 src/event/ngx_event.h --- a/src/event/ngx_event.h Fri Feb 27 19:19:08 2015 +0300 +++ b/src/event/ngx_event.h Tue Mar 03 18:09:13 2015 +0300 @@ -176,11 +176,9 @@ struct ngx_event_aio_s { #if (NGX_HAVE_EVENTFD) int64_t res; -#if (NGX_TEST_BUILD_EPOLL) - ngx_err_t err; - size_t nbytes; #endif -#else + +#if !(NGX_HAVE_EVENTFD) || (NGX_TEST_BUILD_EPOLL) ngx_err_t err; size_t nbytes; #endif From vbart at nginx.com Tue Mar 3 15:46:43 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Tue, 03 Mar 2015 15:46:43 +0000 Subject: [nginx] Upstream keepalive: drop ready flag on EAGAIN from recv(... Message-ID: details: http://hg.nginx.org/nginx/rev/4d8936b1fc32 branches: changeset: 5999:4d8936b1fc32 user: Valentin Bartenev date: Tue Mar 03 17:48:57 2015 +0300 description: Upstream keepalive: drop ready flag on EAGAIN from recv(MSG_PEEK). Keeping the ready flag in this case might results in missing notification of broken connection until nginx tried to use it again. While there, stale comment about stale event was removed since this function is also can be called directly. 
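The check itself is a common idiom: peek one byte from an idle connection without consuming it; EAGAIN means the connection is quietly idle and still usable, anything else (data, EOF, or an error) means it should be dropped from the cache. A rough standalone sketch, with an invented function name and return convention, assuming a non-blocking socket:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* returns 1 if an idle cached connection still looks usable, 0 otherwise */
    static int
    idle_connection_usable(int fd)
    {
        char     buf[1];
        ssize_t  n;

        n = recv(fd, buf, 1, MSG_PEEK);

        if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* nothing to read: the idle upstream connection is fine;
             * this is the point where the patch below now also clears
             * ev->ready before re-arming the read event */
            return 1;
        }

        /* n > 0: unexpected data; n == 0: peer closed; n == -1: real error */
        return 0;
    }
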
diffstat: src/http/modules/ngx_http_upstream_keepalive_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r ea58dfd07782 -r 4d8936b1fc32 src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c Tue Mar 03 18:09:13 2015 +0300 +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c Tue Mar 03 17:48:57 2015 +0300 @@ -387,7 +387,7 @@ ngx_http_upstream_keepalive_close_handle n = recv(c->fd, buf, 1, MSG_PEEK); if (n == -1 && ngx_socket_errno == NGX_EAGAIN) { - /* stale event */ + ev->ready = 0; if (ngx_handle_read_event(c->read, 0) != NGX_OK) { goto close; From mdounin at mdounin.ru Tue Mar 3 15:50:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Mar 2015 18:50:29 +0300 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: References: <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> <20150302145655.GV19012@mdounin.ru> Message-ID: <20150303155028.GB19012@mdounin.ru> Hello! On Tue, Mar 03, 2015 at 03:14:50PM +0100, Michael Kliewe wrote: > Hi again, > > On Mar 2, 2015, at 3:56 PM, Maxim Dounin wrote: > > > Hello! > > > > On Mon, Mar 02, 2015 at 03:32:03PM +0100, Michael Kliewe wrote: > > > >> Hi Maxim, > >> > >> On Mar 2, 2015, at 3:14 PM, Maxim Dounin wrote: > >> > >>> Hello! > >>> > >>> On Mon, Mar 02, 2015 at 01:12:44PM +0100, Michael Kliewe > >>> wrote: > >>> > >>>> with your changes there is a problem: > >>>> nginx now just sends the header if the connection is > >>>> encrypted. If the connection is not encrypted, then there is > >>>> no header sent to the auth script. > >>>> In the auth script I cannot distinguish between "user did not > >>>> use encryption" and "nginx doesn't have the feature" (because > >>>> of mixed nginx versions). > >>>> With the original version of the patch this was possible. > >>> > >>> Try updating all your nginx instances before using the header > >>> for something limiting, it is expected to resolve your > >>> problem. > >>> > >>> Either way, the only safe thing to do if "nginx doesn't have > >>> the feature" is to assume there is no SSL if SSL matters. And > >>> that's what current behaviour encourages. > >> > >> You are kind of right, but currently I'm distinguishing between > >> "encrypted", "not-encrypted" and "unknown", because we have > >> different versions of nginx in different setups. I cannot update > >> all nginx versions in parallel in all setups. That's why your > >> tip does not help me ;-/ > >> I need to distinguish between "not-encrypted" and "unknown", > >> because I want to warn all users still using not-encrypted > >> connections. With your patch I cannot distinguish between them, > >> and would send false warnings... > > > > So switch off warnings till the update is complete. That's an > > easy way to go. > > > > Alternatively, you may use the "auth_http_header" directive > > (http://nginx.org/r/auth_http_header) to distinguish between > > various installations. > > I'm sorry, I don't really want to repeat my arguments, but as I > said I don't have control over all nginx servers that are used. > Some will be "older", some will be newer. And I cannot force > "them" to introduce the auth_http_header to just send the nginx > version or capability of sending Auth-SSL header or not... 
If you can't, than just switch off warnings till the update is complete, as already suggested. > Filipe's patch is working fine since > 6 month, it's either > sending 0 or 1. The 0 is an important information and should not > be dropped. > > Can you tell me the disadvantage of sending "off" in case the > connection is unencrypted? I don't really see the problem at the > moment why you don't add the else branch, you are dropping > information that is needed (and that was there in the original > patch)... It's just 3 lines more code and doesn't hurt anybody, > but provides important information to the auth script. As already explained, the problem is that the header will be added forever for all setups, and it will be waste of resources in all these setups. It will be waste of resources in your setup as well after the transition period. -- Maxim Dounin http://nginx.org/ From info at phpgangsta.de Tue Mar 3 16:28:13 2015 From: info at phpgangsta.de (Michael Kliewe) Date: Tue, 3 Mar 2015 17:28:13 +0100 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <20150303155028.GB19012@mdounin.ru> References: <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> <20150302145655.GV19012@mdounin.ru> <20150303155028.GB19012@mdounin.ru> Message-ID: <0657E6C2-E795-4880-83DA-3D9EE5C0C655@phpgangsta.de> Hi Maxim, On Mar 3, 2015, at 4:50 PM, Maxim Dounin wrote: > Hello! > > On Tue, Mar 03, 2015 at 03:14:50PM +0100, Michael Kliewe wrote: > >> Hi again, >> >> On Mar 2, 2015, at 3:56 PM, Maxim Dounin wrote: >> >> I'm sorry, I don't really want to repeat my arguments, but as I >> said I don't have control over all nginx servers that are used. >> Some will be "older", some will be newer. And I cannot force >> "them" to introduce the auth_http_header to just send the nginx >> version or capability of sending Auth-SSL header or not... > > If you can't, than just switch off warnings till the update is > complete, as already suggested. That might take months or years, some are out of my control as I said. And we are already sending warnings currently because of the patch from Filipe, which works fine. I cannot use your modified patch, I still have to patch Filipes version manually then. > >> Filipe's patch is working fine since > 6 month, it's either >> sending 0 or 1. The 0 is an important information and should not >> be dropped. >> >> Can you tell me the disadvantage of sending "off" in case the >> connection is unencrypted? I don't really see the problem at the >> moment why you don't add the else branch, you are dropping >> information that is needed (and that was there in the original >> patch)... It's just 3 lines more code and doesn't hurt anybody, >> but provides important information to the auth script. > > As already explained, the problem is that the header will be added > forever for all setups, and it will be waste of resources in all > these setups. It will be waste of resources in your setup as well > after the transition period. But you are already adding the header in case it is an encrypted connection, which currently is >90% of all cases, at least here in Germany. 
If you call that "waste of ressources", you are already doing that for 90% of all IMAP/POP3 connections, I'm just asking to do that for the last 10% that are unencrypted (and will fade away during the next years, as more and more providers disallow unencrypted connections). I'm just asking for the last 10% of connections, which are the important ones, if you need that feature. Otherwise I still have to use the patch from Filipe everywhere, because it allows slow migration and distinction between "encrypted", "unencrypted" and "unknown" in the auth script. If you want to be as efficient as possible, you should send just "AUTH_SSL: off" in case of an unencrypted connection, and no header at all for an encrypted connection. That would be a lot better, because >90% of all IMAP/POP3 connections are encrypted today. Michael From shawgoff at amazon.com Tue Mar 3 21:28:36 2015 From: shawgoff at amazon.com (Shawn J. Goff) Date: Tue, 3 Mar 2015 13:28:36 -0800 Subject: Alternative to propagate_connection_close Message-ID: <54F62784.20005@amazon.com> I wrote the propagate_connection_close patch previously and Maxim had concerns that there is no way to close the downstream connection without also closing the upstream connection, which isn't ideal. I'll have time to do more work on it, so I'd like to know if this suggestion is worth implementing. If this suggestion is palatable, I'll make a more formal definition and coordinate with HAProxy and possibly other projects. The quick background is that a proxy server is frequently used just for HTTPS termination, with the plaintext request being forwarded to a port on the localhost where an upstream server is listening. With this setup, when the upstream issues a "Connection: close", only the connection to the local proxy is closed; this is not useful. There should be a way to treat the proxy as part of the same service and ask it to close the downstream connection. I would like to introduce two new headers, Close-Before and Close-After. The Connection header can reference either of those headers. Both headers take a comma-delimited list of node-identifiers [1]. If the Close-Before header is present and the next hop is a host in the associated node-identifiers list, the Connection" header will be set to "close", the underlying connection will be closed, and the Close-Before header is removed. The Close-After header is similar except the condition is when the current host is in the list. Example: Connection: Close-After Close-After: 127.0.0.1 [1] node-identifiers: http://tools.ietf.org/html/rfc7239#section-6 From ganzhi at gmail.com Wed Mar 4 06:56:55 2015 From: ganzhi at gmail.com (James Gan) Date: Tue, 3 Mar 2015 22:56:55 -0800 Subject: Best place to store per-request state ? Message-ID: I'm learning how to develop module for nginx. In one of my testing module, I'm trying to save several per-request int/long states for each http request in my module. The easiest approach seems to add these variables to ngx_http_request_s struct. Though I don't really want to modify core modules. Is there a recommended approach for saving per-request state without modifying core http modules? Thanks a lot! -- Many Thanks! Best Regards James Gan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Wed Mar 4 08:29:14 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 4 Mar 2015 11:29:14 +0300 Subject: Best place to store per-request state ? 
In-Reply-To: References: Message-ID: <20150304082914.GG50767@lo0.su> On Tue, Mar 03, 2015 at 10:56:55PM -0800, James Gan wrote: > I'm learning how to develop module for nginx. > > In one of my testing module, I'm trying to save several per-request > int/long states for each http request in my module. The easiest approach > seems to add these variables to ngx_http_request_s struct. Though I don't > really want to modify core modules. > > Is there a recommended approach for saving per-request state without > modifying core http modules? > > Thanks a lot! See ngx_http_set_ctx() and ngx_http_get_module_ctx(). From johnzeng2013 at yahoo.com Wed Mar 4 13:27:56 2015 From: johnzeng2013 at yahoo.com (johnzeng) Date: Wed, 04 Mar 2015 21:27:56 +0800 Subject: Whether nginx can cache video file and large file via the way ( monitoring Port mirroring + send 302 http, packet to redirect ) ? In-Reply-To: <54F707C7.1090304@yahoo.com> References: <54F707C7.1090304@yahoo.com> Message-ID: <54F7085C.7080005@yahoo.com> Hi , i have a switch , and i hope to redirect video traffic to Cache via using Port mirroring feature , and monitoring network traffic that involves forwarding a copy of each packet from one network switch. Whether nginx can listen and identify mirroring data packet ? maybe we can use gor ( https://github.com/buger/gor/blob/master/README.md ) if nginx can identify , i hope to match video part and send 302 http packet to end user via url_rewrite_access and redirect the user's request to Cache Whether my thought is correct way ? please give me some advisement and i am reading the detail http://xathrya.web.id/blog/2013/05/14/caching-youtube-video-with-squid-and-nginx/ http://blog.multiplay.co.uk/2014/04/lancache-dynamically-caching-game-installs-at-lans-using-nginx/ http://blog.multiplay.co.uk/2013/04/caching-steam-downloads-lans/ From skaurus at gmail.com Wed Mar 4 14:18:49 2015 From: skaurus at gmail.com (=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?=) Date: Wed, 4 Mar 2015 17:18:49 +0300 Subject: Making new server parameter inside upstream block In-Reply-To: <20150213131216.GD19012@mdounin.ru> References: <20150212133642.GU19012@mdounin.ru> <20150212145303.GW19012@mdounin.ru> <20150213131216.GD19012@mdounin.ru> Message-ID: Thanks! Turned out my code had differences with Cache::Memcached::Fast hashing anyway and I decided to fix them while also moving from ips to names. Best regards, Dmitriy Shalashov 2015-02-13 16:12 GMT+03:00 Maxim Dounin : > Hello! > > On Thu, Feb 12, 2015 at 11:11:00PM +0300, ??????? ??????? wrote: > > > > Just use names in the configuration. > > > > You mean local DNS? > > I mean names, as resolvable by gethostbyname()/getaddrinfo() > functions on your OS. It's up to you and your OS how these names > will be resolved. In most simple cases even /etc/hosts will be > enough. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Mar 4 16:21:46 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 04 Mar 2015 16:21:46 +0000 Subject: [nginx] Log: use ngx_cpymem() in a couple of places, no function... 
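To illustrate the ngx_http_set_ctx()/ngx_http_get_module_ctx() answer above: a minimal sketch of keeping per-request state in a module context instead of touching ngx_http_request_s, assuming a hypothetical ngx_http_foo_module; the struct fields and names are placeholders, not an existing module:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

typedef struct {
    ngx_uint_t  calls;
    off_t       bytes_seen;
} ngx_http_foo_ctx_t;

static ngx_int_t
ngx_http_foo_handler(ngx_http_request_t *r)
{
    ngx_http_foo_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r, ngx_http_foo_module);

    if (ctx == NULL) {
        ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_foo_ctx_t));
        if (ctx == NULL) {
            return NGX_ERROR;
        }

        /* attach the state to this request; it is allocated from
         * r->pool, so it is released together with the request */
        ngx_http_set_ctx(r, ctx, ngx_http_foo_module);
    }

    ctx->calls++;

    return NGX_DECLINED;
}

Every module gets its own context slot per request, which is the same mechanism the bundled filter modules use for their per-request bookkeeping.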
Message-ID: details: http://hg.nginx.org/nginx/rev/93fee708f168 branches: changeset: 6000:93fee708f168 user: Valentin Bartenev date: Wed Mar 04 19:20:30 2015 +0300 description: Log: use ngx_cpymem() in a couple of places, no functional changes. diffstat: src/core/ngx_log.c | 9 +++------ 1 files changed, 3 insertions(+), 6 deletions(-) diffs (27 lines): diff -r 4d8936b1fc32 -r 93fee708f168 src/core/ngx_log.c --- a/src/core/ngx_log.c Tue Mar 03 17:48:57 2015 +0300 +++ b/src/core/ngx_log.c Wed Mar 04 19:20:30 2015 +0300 @@ -97,10 +97,8 @@ ngx_log_error_core(ngx_uint_t level, ngx last = errstr + NGX_MAX_ERROR_STR; - ngx_memcpy(errstr, ngx_cached_err_log_time.data, - ngx_cached_err_log_time.len); - - p = errstr + ngx_cached_err_log_time.len; + p = ngx_cpymem(errstr, ngx_cached_err_log_time.data, + ngx_cached_err_log_time.len); p = ngx_slprintf(p, last, " [%V] ", &err_levels[level]); @@ -248,9 +246,8 @@ ngx_log_stderr(ngx_err_t err, const char u_char errstr[NGX_MAX_ERROR_STR]; last = errstr + NGX_MAX_ERROR_STR; - p = errstr + 7; - ngx_memcpy(errstr, "nginx: ", 7); + p = ngx_cpymem(errstr, "nginx: ", 7); va_start(args, fmt); p = ngx_vslprintf(p, last, fmt, args); From ru at nginx.com Thu Mar 5 06:17:16 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 05 Mar 2015 06:17:16 +0000 Subject: [nginx] Style: use %*s format, as in 68d21fd1dc64. Message-ID: details: http://hg.nginx.org/nginx/rev/add12ee1d01c branches: changeset: 6001:add12ee1d01c user: Ruslan Ermilov date: Wed Mar 04 08:05:38 2015 +0300 description: Style: use %*s format, as in 68d21fd1dc64. diffstat: src/mail/ngx_mail_auth_http_module.c | 11 +++-------- 1 files changed, 3 insertions(+), 8 deletions(-) diffs (21 lines): diff -r 93fee708f168 -r add12ee1d01c src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Wed Mar 04 19:20:30 2015 +0300 +++ b/src/mail/ngx_mail_auth_http_module.c Wed Mar 04 08:05:38 2015 +0300 @@ -1394,14 +1394,9 @@ ngx_mail_auth_http_create_request(ngx_ma *b->last++ = CR; *b->last++ = LF; #if (NGX_DEBUG_MAIL_PASSWD) - { - ngx_str_t l; - - l.len = b->last - b->pos; - l.data = b->pos; - ngx_log_debug1(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, - "mail auth http header:%N\"%V\"", &l); - } + ngx_log_debug2(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, + "mail auth http header:%N\"%*s\"", + (size_t) (b->last - b->pos), b->pos); #endif return b; From ru at nginx.com Thu Mar 5 06:17:19 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 05 Mar 2015 06:17:19 +0000 Subject: [nginx] Style: moved ngx_http_ephemeral() macro to ngx_http_requ... Message-ID: details: http://hg.nginx.org/nginx/rev/f8ee988cfe6d branches: changeset: 6002:f8ee988cfe6d user: Ruslan Ermilov date: Wed Mar 04 08:10:40 2015 +0300 description: Style: moved ngx_http_ephemeral() macro to ngx_http_request.h. 
diffstat: src/http/ngx_http.h | 3 --- src/http/ngx_http_request.h | 3 +++ 2 files changed, 3 insertions(+), 3 deletions(-) diffs (26 lines): diff -r add12ee1d01c -r f8ee988cfe6d src/http/ngx_http.h --- a/src/http/ngx_http.h Wed Mar 04 08:05:38 2015 +0300 +++ b/src/http/ngx_http.h Wed Mar 04 08:10:40 2015 +0300 @@ -131,9 +131,6 @@ void ngx_http_empty_handler(ngx_event_t void ngx_http_request_empty_handler(ngx_http_request_t *r); -#define ngx_http_ephemeral(r) (void *) (&r->uri_start) - - #define NGX_HTTP_LAST 1 #define NGX_HTTP_FLUSH 2 diff -r add12ee1d01c -r f8ee988cfe6d src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h Wed Mar 04 08:05:38 2015 +0300 +++ b/src/http/ngx_http_request.h Wed Mar 04 08:10:40 2015 +0300 @@ -577,6 +577,9 @@ typedef struct { } ngx_http_ephemeral_t; +#define ngx_http_ephemeral(r) (void *) (&r->uri_start) + + extern ngx_http_header_t ngx_http_headers_in[]; extern ngx_http_header_out_t ngx_http_headers_out[]; From ru at nginx.com Thu Mar 5 06:17:21 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 05 Mar 2015 06:17:21 +0000 Subject: [nginx] Proxy: use an appropriate error on memory allocation fai... Message-ID: details: http://hg.nginx.org/nginx/rev/cf2f8d91cf09 branches: changeset: 6003:cf2f8d91cf09 user: Ruslan Ermilov date: Wed Mar 04 08:12:53 2015 +0300 description: Proxy: use an appropriate error on memory allocation failure. diffstat: src/http/modules/ngx_http_proxy_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r f8ee988cfe6d -r cf2f8d91cf09 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Mar 04 08:10:40 2015 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Wed Mar 04 08:12:53 2015 +0300 @@ -812,7 +812,7 @@ ngx_http_proxy_handler(ngx_http_request_ ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_proxy_ctx_t)); if (ctx == NULL) { - return NGX_ERROR; + return NGX_HTTP_INTERNAL_SERVER_ERROR; } ngx_http_set_ctx(r, ctx, ngx_http_proxy_module); From hungnv at opensource.com.vn Thu Mar 5 08:48:01 2015 From: hungnv at opensource.com.vn (hungnv at opensource.com.vn) Date: Thu, 5 Mar 2015 15:48:01 +0700 Subject: Recommend way to write module to do expensive job Message-ID: Hello, We are about to write a module that?s similar to image filter module, the difference is image filter module processes JPEG, PNG? format, we work on video file, so it?s much bigger and video processing is too much longer. As in nginx mp4 module ( we use h264) too, client just need to receive moov atom data to start playback, my module should be able to produce moov atom and send those bytes to client while processing other part of video file. Maybe I ask is there any recommend way to do this? Thanks. -- H?ng Email: hungnv at opensource.com.vn From vozlt at vozlt.com Thu Mar 5 15:52:13 2015 From: vozlt at vozlt.com (YoungJoo.Kim) Date: Fri, 6 Mar 2015 00:52:13 +0900 (KST) Subject: [request] create account for wiki.nginx.org Message-ID: <51ed9bda4919132c704930c9be4b5d@cvweb07.wmail.nhnsystem.com> Hi folks, I have refered link to http://forum.nginx.org/read.php?29,249919,249933. A few days ago, I opened an nginx module to output the status of virtual host traffic(including upstreams) at http://github.com/vozlt/nginx-module-vts.I'd like to register into http://wiki.nginx.org/3rdPartyModules to listen more feedback. So I have been a request one to obtain a user account and confirmed email address last week. The requested account is vozlt.When can I get the account? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tommywatson+nginx-devel at gmail.com Thu Mar 5 16:26:34 2015 From: tommywatson+nginx-devel at gmail.com (tommy watson) Date: Thu, 5 Mar 2015 11:26:34 -0500 Subject: Terminating requests In-Reply-To: References: Message-ID: I have investigated this further and found that ngx_finalize_connection() is being called recursively and on the third recursive call, called by ngx_upstream_finalize_request(), sometimes the call to set_lingering_close() calls ngx_http_close_request() which runs the log handler then closes the connection. This frees up r->pool and during the unwinding of the stack ngx_http_log_request()/ngx_http_log_handler() are called a second time which end up calling ngx_pnalloc() with a null r->pool pointer here: http://lxr.nginx.org/source/src/http/modules/ngx_http_log_module.c#0349 This is reproducible with the module linked below when setup with an upstream and nikto pointed at nginx. I have found a fix by setting r->keepalive to 0 before finalising the request, if you revert this commit nginx will stop coring and the issue seems to be taken care of. https://github.com/tommywatson/nginx-hello-world-module/commit/1d94b065be875d26e11ff14257c411076aa79eaa Any help on a better solution would be great. Cheers. On Fri, Feb 13, 2015 at 8:44 PM, tommy watson < tommywatson+nginx-devel at gmail.com> wrote: > Hello, > I'm trying to continue or cancel an ngx_http_request_t after a slight > delay but am failing miserably, I keep getting crashes and am not sure what > I'm doing wrong. > The code is here https://github.com/tommywatson/nginx-hello-world-module > (borrowed from https://www.ruby-forum.com/topic/5564332) basically it > pauses and fires and event to continue or finalize the request. Firing > nikto at it brings the dump below. > Any help/insight appreciated. > > Cheers. > > Program terminated with signal SIGSEGV, Segmentation fault. 
> #0 0x0000000000406af2 in ngx_pnalloc (pool=0x0, size=181) at > src/core/ngx_palloc.c:155 > 155 if (size <= pool->max) { > (gdb) where > #0 0x0000000000406af2 in ngx_pnalloc (pool=0x0, size=181) at > src/core/ngx_palloc.c:155 > #1 0x0000000000452692 in ngx_http_log_handler (r=0x6676b50) at > src/http/modules/ngx_http_log_module.c:349 > #2 0x000000000044c385 in ngx_http_log_request (r=0x6676b50) at > src/http/ngx_http_request.c:3510 > #3 0x000000000044c1f2 in ngx_http_free_request (r=0x6676b50, rc=0) at > src/http/ngx_http_request.c:3457 > #4 0x000000000044b297 in ngx_http_set_keepalive (r=0x6676b50) at > src/http/ngx_http_request.c:2895 > #5 0x000000000044a994 in ngx_http_finalize_connection (r=0x6676b50) at > src/http/ngx_http_request.c:2532 > #6 0x000000000044a10b in ngx_http_finalize_request (r=0x6676b50, rc=-4) > at src/http/ngx_http_request.c:2262 > #7 0x000000000043cb18 in ngx_http_core_content_phase (r=0x6676b50, > ph=0x60b7798) at src/http/ngx_http_core_module.c:1407 > #8 0x000000000043b911 in ngx_http_core_run_phases (r=0x6676b50) at > src/http/ngx_http_core_module.c:888 > #9 0x00000000004af101 in hack_event (e=0x6677bc8) at > ../nginx-hello-world-module/ngx_http_hello_world_module.c:85 > #10 0x000000000042afac in ngx_event_expire_timers () at > src/event/ngx_event_timer.c:94 > #11 0x00000000004290a7 in ngx_process_events_and_timers (cycle=0x608f310) > at src/event/ngx_event.c:262 > #12 0x000000000043493f in ngx_worker_process_cycle (cycle=0x608f310, > data=0x0) at src/os/unix/ngx_process_cycle.c:824 > #13 0x000000000043176d in ngx_spawn_process (cycle=0x608f310, > proc=0x43476b , data=0x0, name=0x4b3180 "worker > process", respawn=-3) at src/os/unix/ngx_process.c:198 > #14 0x0000000000433a71 in ngx_start_worker_processes (cycle=0x608f310, > n=1, type=-3) at src/os/unix/ngx_process_cycle.c:368 > #15 0x00000000004331cd in ngx_master_process_cycle (cycle=0x608f310) at > src/os/unix/ngx_process_cycle.c:140 > #16 0x00000000004037c6 in main (argc=1, argv=0xffefffbe8) at > src/core/nginx.c:407 > (gdb) quit > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2256:273:396 **1** ++++++++++++ [2256] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connecting to upstrea 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2257:273:396 **1** || RC -4 || r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connecting to upstream, clien 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_finalize_connection:2524:273:396 **1** ++++++++++++ [2524] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connectin 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_finalize_connection:2548:273:396 **1** || Close rq|| r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connecting to u 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_close_request:3427:273:396 **1** ++++++++++++ [3427] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connecting to u 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_close_request:3427:273:396 **1** ------------ [3441] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connecting to u 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_finalize_connection:2524:273:396 **1** ------------ [2550] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connectin 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2256:273:396 **1** ------------ [2267] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while connecting to upstrea 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2256:273:396 **1** ++++++++++++ [2256] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending request to up 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2257:273:396 **1** || RC 404 || r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending request to upstream, 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2256:273:396 **2** ++++++++++++ [2256] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending request to up 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2257:273:396 **2** || RC 0 || r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending request to upstream, c 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_finalize_connection:2524:273:396 **1** ++++++++++++ [2524] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending r 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_set_keepalive:2868:273:396 **1** ++++++++++++ [2868] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending request 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_free_request:3468:273:396 **1** ++++++++++++ [3468] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while closing request, 2015/02/17 16:27:53 [alert] 3537#0: *381 +++ handler 000000000043F237 0000000001CA1640 +++ while closing request, client: 127.0.0.1, server: localhost, request: "GET /web800fo/ HTTP 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_upstream_cleanup:3730:273:396 **1** ++++++++++++ [3730] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while closing requ 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_upstream_finalize_request:3745:273:396 **59** ++++++++++++ [3745] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while cl 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_upstream_finalize_request:3768:273:396 **59** || +++ U->FINALIZE|| r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while c 2015/02/17 16:27:53 [error] 3537#0: *381 
ngx_http_upstream_finalize_request:3770:273:396 **59** || --- U->FINALIZE|| r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while c 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_upstream_finalize_request:3867:273:396 **59** || +++ HEADER SENT +++|| r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 whi 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2256:273:396 **3** ++++++++++++ [2256] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending to client, cl 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2257:273:396 **3** || RC -4 || r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending to client, client: 12 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_finalize_connection:2524:273:396 **2** ++++++++++++ [2524] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending t 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_set_lingering_close:3235:273:396 **1** ++++++++++++ [3235] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending t 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_close_request:3427:273:396 **1** ++++++++++++ [3427] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending to clie 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_free_request:3468:273:396 **2** ++++++++++++ [3468] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while sending to clien 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_free_request:3509:273:396 **2** || +++ LOG REQUEST|| r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while logging request 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_log_request:3558:273:396 **1** ++++++++++++ [3558] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while logging request, 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_log_request:3558:273:396 **1** ------------ [3567] r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while logging request, 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_free_request:3511:273:396 **2** || --- LOG REQUEST|| r:0000000001CA1640 p:0000000001CA15F0 c:00007F95F5976280 while logging request 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_free_request:3468:273:396 **2** ------------ [3548] r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while closing request, 2015/02/17 16:27:53 [error] 3537#0: *381 ***** CLOSE CONN while closing request, client: 127.0.0.1, server: 0.0.0.0:8089 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_close_request:3427:273:396 **1** ------------ [3455] r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while closing request 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_set_lingering_close:3235:273:396 **1** ------------ [3274] r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while closing r 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_finalize_connection:2524:273:396 **2** ------------ [2571] r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while closing r 2015/02/17 16:27:53 [error] 3537#0: *381 finalize_request:2256:273:396 **3** ------------ [2267] r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while closing request, clie 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_upstream_finalize_request:3869:273:396 **59** || --- HEADER SENT ---|| r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 whi 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_upstream_cleanup:3730:273:396 **1** ------------ [3735] r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while closing requ 2015/02/17 
16:27:53 [alert] 3537#0: *381 --- handler 0000000000000000 0000000001CA1640 --- while closing request, client: 127.0.0.1, server: 0.0.0.0:8089 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_free_request:3509:273:396 **1** || +++ LOG REQUEST|| r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while logging request 2015/02/17 16:27:53 [error] 3537#0: *381 ngx_http_log_request:3558:273:396 **1** ++++++++++++ [3558] r:0000000001CA1640 p:0000000000000000 c:00007F95F5976280 while logging request From hungnv at opensource.com.vn Fri Mar 6 03:06:30 2015 From: hungnv at opensource.com.vn (hungnv at opensource.com.vn) Date: Fri, 6 Mar 2015 10:06:30 +0700 Subject: [request] create account for wiki.nginx.org In-Reply-To: <51ed9bda4919132c704930c9be4b5d@cvweb07.wmail.nhnsystem.com> References: <51ed9bda4919132c704930c9be4b5d@cvweb07.wmail.nhnsystem.com> Message-ID: off topic but your module look very good to me. Thanks too much for this :) -- H?ng Email: hungnv at opensource.com.vn > On Mar 5, 2015, at 10:52 PM, YoungJoo.Kim wrote: > > Hi folks, I have refered link to http://forum.nginx.org/read.php?29,249919,249933. > > A few days ago, I opened an nginx module to output the status of virtual host traffic(including upstreams) at http://github.com/vozlt/nginx-module-vts. > I'd like to register into http://wiki.nginx.org/3rdPartyModules to listen more feedback. > So I have been a request one to obtain a user account and confirmed email address last week. > The requested account is vozlt. > When can I get the account? > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From hungnv at opensource.com.vn Fri Mar 6 10:49:50 2015 From: hungnv at opensource.com.vn (hungnv at opensource.com.vn) Date: Fri, 6 Mar 2015 17:49:50 +0700 Subject: Recommend way to write module to do expensive job In-Reply-To: References: Message-ID: <4B52BFC7-E6CA-48DD-9BF6-8B3C2B8F27A6@opensource.com.vn> Hello again, I tried to modify our lib to make write job more efficient. At this time current lib can do this: When user request a file, nginx will pass request to intermediate lib (after doing some check), then the libraries will do very expensive job, read the input, process the video. But during processing phase, it will write output which is the data that can be read by browser to temp buffer, and append to output buffer (which is ngx_buf_t). The problem is at current state, I just can send response to client when the output buffer was completely filled, which takes a lot of time. But if I can write output to socket just after intermediate library write to temp buffer, it will be much faster. Is there any way to do this? -- H?ng Email: hungnv at opensource.com.vn > On Mar 5, 2015, at 3:48 PM, hungnv at opensource.com.vn wrote: > > Hello, > > We are about to write a module that?s similar to image filter module, the difference is image filter module processes JPEG, PNG? format, we work on video file, so it?s much bigger and video processing is too much longer. > > As in nginx mp4 module ( we use h264) too, client just need to receive moov atom data to start playback, my module should be able to produce moov atom and send those bytes to client while processing other part of video file. Maybe I ask is there any recommend way to do this? > > Thanks. 
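A minimal sketch of the incremental-output pattern asked about above, assuming the response status and headers have already been sent with ngx_http_send_header() and the content length was left unset so the transfer goes out chunked; the function name, the foo prefix and the data/len/last parameters are illustrative, not part of any existing module:

static ngx_int_t
ngx_http_foo_send_part(ngx_http_request_t *r, u_char *data, size_t len,
    ngx_uint_t last)
{
    ngx_buf_t    *b;
    ngx_chain_t   out;

    b = ngx_calloc_buf(r->pool);
    if (b == NULL) {
        return NGX_ERROR;
    }

    b->pos = data;
    b->last = data + len;
    b->memory = 1;       /* data must stay valid until it is sent */
    b->flush = 1;        /* push this piece to the client now */
    b->last_buf = last;  /* set only on the final piece */

    out.buf = b;
    out.next = NULL;

    /* each call hands another piece down the filter chain; the total
     * size does not have to be known in advance */
    return ngx_http_output_filter(r, &out);
}

This only helps if the library can hand back partial output (for example, the moov atom first), and the expensive processing itself still has to be split up (timers, thread tasks, etc.) so it does not block the worker between pieces.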
> > -- > H?ng > Email: hungnv at opensource.com.vn > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_3549.JPG Type: image/jpeg Size: 2125010 bytes Desc: not available URL: From git at cfware.com Fri Mar 6 17:19:29 2015 From: git at cfware.com (Corey Farrell) Date: Fri, 6 Mar 2015 12:19:29 -0500 Subject: include directive not allowed within upstream context Message-ID: Hello everyone, I'm new to nginx development, so I'm not sure the procedure for bug fixes. I submitted http://trac.nginx.org/nginx/ticket/635 a few months ago, how can I bring attention to this bug report and have the patch considered for merge? Thank you, Corey From fdasilvayy at gmail.com Sat Mar 7 10:34:35 2015 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Sat, 7 Mar 2015 11:34:35 +0100 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <0657E6C2-E795-4880-83DA-3D9EE5C0C655@phpgangsta.de> References: <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> <20150302145655.GV19012@mdounin.ru> <20150303155028.GB19012@mdounin.ru> <0657E6C2-E795-4880-83DA-3D9EE5C0C655@phpgangsta.de> Message-ID: I think that the half way solution is this one attached : - when an SSL connection is active : "Auth-SSL: on" ( current code status) - else when it could have been active (using STARTTLS): "Auth-SSL: off" - else SSL was disabled: there is nothing to send. Regards, Filipe DA SILVA. 2015-03-03 17:28 GMT+01:00 Michael Kliewe : > Hi Maxim, > > On Mar 3, 2015, at 4:50 PM, Maxim Dounin wrote: > >> Hello! >> >> On Tue, Mar 03, 2015 at 03:14:50PM +0100, Michael Kliewe wrote: >> >>> Hi again, >>> >>> On Mar 2, 2015, at 3:56 PM, Maxim Dounin wrote: >>> >>> I'm sorry, I don't really want to repeat my arguments, but as I >>> said I don't have control over all nginx servers that are used. >>> Some will be "older", some will be newer. And I cannot force >>> "them" to introduce the auth_http_header to just send the nginx >>> version or capability of sending Auth-SSL header or not... >> >> If you can't, than just switch off warnings till the update is >> complete, as already suggested. > > That might take months or years, some are out of my control as I said. > And we are already sending warnings currently because of the patch from Filipe, which works fine. > I cannot use your modified patch, I still have to patch Filipes version manually then. > >> >>> Filipe's patch is working fine since > 6 month, it's either >>> sending 0 or 1. The 0 is an important information and should not >>> be dropped. >>> >>> Can you tell me the disadvantage of sending "off" in case the >>> connection is unencrypted? I don't really see the problem at the >>> moment why you don't add the else branch, you are dropping >>> information that is needed (and that was there in the original >>> patch)... It's just 3 lines more code and doesn't hurt anybody, >>> but provides important information to the auth script. >> >> As already explained, the problem is that the header will be added >> forever for all setups, and it will be waste of resources in all >> these setups. It will be waste of resources in your setup as well >> after the transition period. 
> > But you are already adding the header in case it is an encrypted connection, which currently is >90% of all cases, at least here in Germany. If you call that "waste of ressources", you are already doing that for 90% of all IMAP/POP3 connections, I'm just asking to do that for the last 10% that are unencrypted (and will fade away during the next years, as more and more providers disallow unencrypted connections). > I'm just asking for the last 10% of connections, which are the important ones, if you need that feature. > > Otherwise I still have to use the patch from Filipe everywhere, because it allows slow migration and distinction between "encrypted", "unencrypted" and "unknown" in the auth script. > > If you want to be as efficient as possible, you should send just "AUTH_SSL: off" in case of an unencrypted connection, and no header at all for an encrypted connection. That would be a lot better, because >90% of all IMAP/POP3 connections are encrypted today. > > Michael > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- # HG changeset patch # Parent b3dc82de188c6954b5f761d11900309165e77813 Mail: Modify Auth-SSL header to indicate when SSL is not used when it could be (STARTTLS enabled). diff -r b3dc82de188c -r 9aecb997009e src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Sat Mar 07 10:54:11 2015 +0100 +++ b/src/mail/ngx_mail_auth_http_module.c Sat Mar 07 11:04:39 2015 +0100 @@ -1244,7 +1244,7 @@ ngx_mail_auth_http_create_request(ngx_ma + sizeof("Auth-SMTP-From: ") - 1 + s->smtp_from.len + sizeof(CRLF) - 1 + sizeof("Auth-SMTP-To: ") - 1 + s->smtp_to.len + sizeof(CRLF) - 1 #if (NGX_MAIL_SSL) - + sizeof("Auth-SSL: on" CRLF) - 1 + + sizeof("Auth-SSL: off" CRLF) - 1 + sizeof("Auth-SSL-Verify: ") - 1 + verify.len + sizeof(CRLF) - 1 + sizeof("Auth-SSL-Subject: ") - 1 + subject.len + sizeof(CRLF) - 1 + sizeof("Auth-SSL-Issuer: ") - 1 + issuer.len + sizeof(CRLF) - 1 @@ -1383,7 +1383,12 @@ ngx_mail_auth_http_create_request(ngx_ma *b->last++ = CR; *b->last++ = LF; } } - + else if ( s-> starttls ) + { + /* SSL isn't used when it could be. */ + b->last = ngx_cpymem(b->last, "Auth-SSL: off" CRLF, + sizeof("Auth-SSL: off" CRLF) - 1); + } #endif if (ahcf->header.len) { From mdounin at mdounin.ru Sat Mar 7 15:53:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 7 Mar 2015 18:53:39 +0300 Subject: include directive not allowed within upstream context In-Reply-To: References: Message-ID: <20150307155339.GX97191@mdounin.ru> Hello! On Fri, Mar 06, 2015 at 12:19:29PM -0500, Corey Farrell wrote: > Hello everyone, > > I'm new to nginx development, so I'm not sure the procedure for bug > fixes. I submitted http://trac.nginx.org/nginx/ticket/635 a few > months ago, how can I bring attention to this bug report and have the > patch considered for merge? 
http://nginx.org/en/docs/contributing_changes.html -- Maxim Dounin http://nginx.org/ From fdasilvayy at gmail.com Sat Mar 7 15:56:21 2015 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Sat, 7 Mar 2015 16:56:21 +0100 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: References: <53DBF531.2010308@phpgangsta.de> <54D28A26.60903@phpgangsta.de> <20150205130027.GE99511@mdounin.ru> <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> <20150302145655.GV19012@mdounin.ru> <20150303155028.GB19012@mdounin.ru> <0657E6C2-E795-4880-83DA-3D9EE5C0C655@phpgangsta.de> Message-ID: Hi, There is small issue, in my previous patch. This one is looking for the right flag. Rgs, Filipe 2015-03-07 11:34 GMT+01:00 Filipe Da Silva : > I think that the half way solution is this one attached : > > - when an SSL connection is active : "Auth-SSL: on" ( current code status) > - else when it could have been active (using STARTTLS): "Auth-SSL: off" > - else SSL was disabled: there is nothing to send. > > Regards, > Filipe DA SILVA. > > 2015-03-03 17:28 GMT+01:00 Michael Kliewe : >> Hi Maxim, >> >> On Mar 3, 2015, at 4:50 PM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Tue, Mar 03, 2015 at 03:14:50PM +0100, Michael Kliewe wrote: >>> >>>> Hi again, >>>> >>>> On Mar 2, 2015, at 3:56 PM, Maxim Dounin wrote: >>>> >>>> I'm sorry, I don't really want to repeat my arguments, but as I >>>> said I don't have control over all nginx servers that are used. >>>> Some will be "older", some will be newer. And I cannot force >>>> "them" to introduce the auth_http_header to just send the nginx >>>> version or capability of sending Auth-SSL header or not... >>> >>> If you can't, than just switch off warnings till the update is >>> complete, as already suggested. >> >> That might take months or years, some are out of my control as I said. >> And we are already sending warnings currently because of the patch from Filipe, which works fine. >> I cannot use your modified patch, I still have to patch Filipes version manually then. >> >>> >>>> Filipe's patch is working fine since > 6 month, it's either >>>> sending 0 or 1. The 0 is an important information and should not >>>> be dropped. >>>> >>>> Can you tell me the disadvantage of sending "off" in case the >>>> connection is unencrypted? I don't really see the problem at the >>>> moment why you don't add the else branch, you are dropping >>>> information that is needed (and that was there in the original >>>> patch)... It's just 3 lines more code and doesn't hurt anybody, >>>> but provides important information to the auth script. >>> >>> As already explained, the problem is that the header will be added >>> forever for all setups, and it will be waste of resources in all >>> these setups. It will be waste of resources in your setup as well >>> after the transition period. >> >> But you are already adding the header in case it is an encrypted connection, which currently is >90% of all cases, at least here in Germany. If you call that "waste of ressources", you are already doing that for 90% of all IMAP/POP3 connections, I'm just asking to do that for the last 10% that are unencrypted (and will fade away during the next years, as more and more providers disallow unencrypted connections). >> I'm just asking for the last 10% of connections, which are the important ones, if you need that feature. 
>> >> Otherwise I still have to use the patch from Filipe everywhere, because it allows slow migration and distinction between "encrypted", "unencrypted" and "unknown" in the auth script. >> >> If you want to be as efficient as possible, you should send just "AUTH_SSL: off" in case of an unencrypted connection, and no header at all for an encrypted connection. That would be a lot better, because >90% of all IMAP/POP3 connections are encrypted today. >> >> Michael >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- # HG changeset patch # Parent ec01b1d1fff12468fe1a2a1ee8e385c514358356 ssl: remove some magic numbers about SSL verify setting . diff -r ec01b1d1fff1 -r c3b52156de53 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Wed Feb 25 17:48:05 2015 +0300 +++ b/src/event/ngx_event_openssl.h Thu Feb 26 14:06:24 2015 +0100 @@ -114,6 +114,11 @@ typedef struct { #define NGX_SSL_TLSv1_2 0x0020 +#define NGX_SSL_VERIFY_OFF 0 +#define NGX_SSL_VERIFY_ON 1 +#define NGX_SSL_VERIFY_OPTIONAL 2 +#define NGX_SSL_VERIFY_OPTIONAL_NO_CA 3 + #define NGX_SSL_BUFFER 1 #define NGX_SSL_CLIENT 2 diff -r ec01b1d1fff1 -r c3b52156de53 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Wed Feb 25 17:48:05 2015 +0300 +++ b/src/http/modules/ngx_http_ssl_module.c Thu Feb 26 14:06:24 2015 +0100 @@ -62,10 +62,10 @@ static ngx_conf_bitmask_t ngx_http_ssl_ static ngx_conf_enum_t ngx_http_ssl_verify[] = { - { ngx_string("off"), 0 }, - { ngx_string("on"), 1 }, - { ngx_string("optional"), 2 }, - { ngx_string("optional_no_ca"), 3 }, + { ngx_string("off"), NGX_SSL_VERIFY_OFF }, + { ngx_string("on"), NGX_SSL_VERIFY_ON }, + { ngx_string("optional"), NGX_SSL_VERIFY_OPTIONAL }, + { ngx_string("optional_no_ca"), NGX_SSL_VERIFY_OPTIONAL_NO_CA }, { ngx_null_string, 0 } }; @@ -567,7 +567,7 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, NGX_SSL_BUFSIZE); - ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); + ngx_conf_merge_uint_value(conf->verify, prev->verify, NGX_SSL_VERIFY_OFF); ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); @@ -684,7 +684,7 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * if (conf->verify) { - if (conf->client_certificate.len == 0 && conf->verify != 3) { + if (conf->client_certificate.len == 0 && conf->verify != NGX_SSL_VERIFY_OPTIONAL_NO_CA) { ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "no ssl_client_certificate for ssl_client_verify"); return NGX_CONF_ERROR; diff -r ec01b1d1fff1 -r c3b52156de53 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Wed Feb 25 17:48:05 2015 +0300 +++ b/src/http/ngx_http_request.c Thu Feb 26 14:06:24 2015 +0100 @@ -1849,7 +1849,8 @@ ngx_http_process_request(ngx_http_reques rc = SSL_get_verify_result(c->ssl->connection); if (rc != X509_V_OK - && (sscf->verify != 3 || !ngx_ssl_verify_error_optional(rc))) + && (sscf->verify != NGX_SSL_VERIFY_OPTIONAL_NO_CA + || !ngx_ssl_verify_error_optional(rc))) { ngx_log_error(NGX_LOG_INFO, c->log, 0, "client SSL certificate verify error: (%l:%s)", @@ -1862,7 +1863,7 @@ ngx_http_process_request(ngx_http_reques return; } - if (sscf->verify == 1) { + if (sscf->verify == NGX_SSL_VERIFY_ON) { cert = SSL_get_peer_certificate(c->ssl->connection); if (cert == NULL) { diff -r ec01b1d1fff1 -r c3b52156de53 
src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Wed Feb 25 17:48:05 2015 +0300 +++ b/src/mail/ngx_mail_handler.c Thu Feb 26 14:06:24 2015 +0100 @@ -291,7 +291,8 @@ ngx_mail_verify_cert(ngx_mail_session_t rc = SSL_get_verify_result(c->ssl->connection); if (rc != X509_V_OK - && (sslcf->verify != 3 || !ngx_ssl_verify_error_optional(rc))) + && (sslcf->verify != NGX_SSL_VERIFY_OPTIONAL_NO_CA + || !ngx_ssl_verify_error_optional(rc))) { ngx_log_error(NGX_LOG_INFO, c->log, 0, "client SSL certificate verify error: (%l:%s)", diff -r ec01b1d1fff1 -r c3b52156de53 src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Wed Feb 25 17:48:05 2015 +0300 +++ b/src/mail/ngx_mail_ssl_module.c Thu Feb 26 14:06:24 2015 +0100 @@ -47,10 +47,10 @@ static ngx_conf_bitmask_t ngx_mail_ssl_ static ngx_conf_enum_t ngx_mail_ssl_verify[] = { - { ngx_string("off"), 0 }, - { ngx_string("on"), 1 }, - { ngx_string("optional"), 2 }, - { ngx_string("optional_no_ca"), 3 }, + { ngx_string("off"), NGX_SSL_VERIFY_OFF }, + { ngx_string("on"), NGX_SSL_VERIFY_ON }, + { ngx_string("optional"), NGX_SSL_VERIFY_OPTIONAL }, + { ngx_string("optional_no_ca"), NGX_SSL_VERIFY_OPTIONAL_NO_CA }, { ngx_null_string, 0 } }; @@ -287,7 +287,7 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); - ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); + ngx_conf_merge_uint_value(conf->verify, prev->verify, NGX_SSL_VERIFY_OFF); ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); From dakota at brokenpipe.ru Sat Mar 7 19:55:36 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Sat, 7 Mar 2015 22:55:36 +0300 Subject: Process HTTP-erroneous subrequests as normal subrequests Message-ID: Hi, My module makes subrequests and eats the responses in header and body filters. This works fine, when subrequests return 200. But if the subrequest returns, for example 404, filters are not being called (I just have my post subrequest callback called with 404 code and no filters callbacks). I want to read the response headers and body for the subrequests with 4xx and 5xx too. How to do that? Thanks. -- Marat -------------- next part -------------- An HTML attachment was scrubbed... URL: From wandenberg at gmail.com Sun Mar 8 00:56:13 2015 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Sat, 7 Mar 2015 21:56:13 -0300 Subject: How to start a new process? Message-ID: Hi, I would like to know what is the right way to start a new process like the "cache manager" to execute jobs non related with directly with user requests. Can you help me? Regards, Wandenberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 10 11:46:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Mar 2015 14:46:34 +0300 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: References: <20150225152823.GL19012@mdounin.ru> <54EDEAE2.80904@phpgangsta.de> <15D40182-9191-4CD1-9FE5-0503AEAE847E@phpgangsta.de> <20150302141452.GT19012@mdounin.ru> <20150302145655.GV19012@mdounin.ru> <20150303155028.GB19012@mdounin.ru> <0657E6C2-E795-4880-83DA-3D9EE5C0C655@phpgangsta.de> Message-ID: <20150310114634.GA88631@mdounin.ru> Hello! 
On Sat, Mar 07, 2015 at 11:34:35AM +0100, Filipe Da Silva wrote: > I think that the half way solution is this one attached : > > - when an SSL connection is active : "Auth-SSL: on" ( current code status) > - else when it could have been active (using STARTTLS): "Auth-SSL: off" > - else SSL was disabled: there is nothing to send. No, thanks. -- Maxim Dounin http://nginx.org/ From ru at nginx.com Thu Mar 12 17:06:56 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 12 Mar 2015 17:06:56 +0000 Subject: [nginx] Deprecated "aio sendfile". Message-ID: details: http://hg.nginx.org/nginx/rev/2dac6ae6d703 branches: changeset: 6004:2dac6ae6d703 user: Ruslan Ermilov date: Thu Mar 12 20:06:04 2015 +0300 description: Deprecated "aio sendfile". Specifying "sendfile on" along with "aio on" activates the aio pre-loading mode for sendfile(). diffstat: src/http/ngx_http_copy_filter_module.c | 10 +++------- src/http/ngx_http_core_module.c | 2 +- src/http/ngx_http_core_module.h | 1 - 3 files changed, 4 insertions(+), 9 deletions(-) diffs (44 lines): diff -r cf2f8d91cf09 -r 2dac6ae6d703 src/http/ngx_http_copy_filter_module.c --- a/src/http/ngx_http_copy_filter_module.c Wed Mar 04 08:12:53 2015 +0300 +++ b/src/http/ngx_http_copy_filter_module.c Thu Mar 12 20:06:04 2015 +0300 @@ -121,14 +121,10 @@ ngx_http_copy_filter(ngx_http_request_t ctx->filter_ctx = r; #if (NGX_HAVE_FILE_AIO) - if (ngx_file_aio) { - if (clcf->aio) { - ctx->aio_handler = ngx_http_copy_aio_handler; - } + if (ngx_file_aio && clcf->aio) { + ctx->aio_handler = ngx_http_copy_aio_handler; #if (NGX_HAVE_AIO_SENDFILE) - if (clcf->aio == NGX_HTTP_AIO_SENDFILE) { - ctx->aio_preload = ngx_http_copy_aio_sendfile_preload; - } + ctx->aio_preload = ngx_http_copy_aio_sendfile_preload; #endif } #endif diff -r cf2f8d91cf09 -r 2dac6ae6d703 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Wed Mar 04 08:12:53 2015 +0300 +++ b/src/http/ngx_http_core_module.c Thu Mar 12 20:06:04 2015 +0300 @@ -120,7 +120,7 @@ static ngx_conf_enum_t ngx_http_core_ai { ngx_string("off"), NGX_HTTP_AIO_OFF }, { ngx_string("on"), NGX_HTTP_AIO_ON }, #if (NGX_HAVE_AIO_SENDFILE) - { ngx_string("sendfile"), NGX_HTTP_AIO_SENDFILE }, + { ngx_string("sendfile"), NGX_HTTP_AIO_ON }, #endif { ngx_null_string, 0 } }; diff -r cf2f8d91cf09 -r 2dac6ae6d703 src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Wed Mar 04 08:12:53 2015 +0300 +++ b/src/http/ngx_http_core_module.h Thu Mar 12 20:06:04 2015 +0300 @@ -27,7 +27,6 @@ #define NGX_HTTP_AIO_OFF 0 #define NGX_HTTP_AIO_ON 1 -#define NGX_HTTP_AIO_SENDFILE 2 #define NGX_HTTP_SATISFY_ALL 0 From ru at nginx.com Thu Mar 12 20:03:19 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 12 Mar 2015 20:03:19 +0000 Subject: [nginx] Events: fixed typo in the error message. Message-ID: details: http://hg.nginx.org/nginx/rev/d84f0abd4a53 branches: changeset: 6005:d84f0abd4a53 user: Ruslan Ermilov date: Thu Mar 12 23:03:03 2015 +0300 description: Events: fixed typo in the error message. 
diffstat: src/event/modules/ngx_eventport_module.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 2dac6ae6d703 -r d84f0abd4a53 src/event/modules/ngx_eventport_module.c --- a/src/event/modules/ngx_eventport_module.c Thu Mar 12 20:06:04 2015 +0300 +++ b/src/event/modules/ngx_eventport_module.c Thu Mar 12 23:03:03 2015 +0300 @@ -581,7 +581,7 @@ ngx_eventport_process_events(ngx_cycle_t default: ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, - "unexpected even_port object %d", + "unexpected eventport object %d", event_list[i].portev_object); continue; } From ru at nginx.com Fri Mar 13 13:43:34 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 13 Mar 2015 13:43:34 +0000 Subject: [nginx] The "aio" directive parser made smarter. Message-ID: details: http://hg.nginx.org/nginx/rev/942283a53c28 branches: changeset: 6006:942283a53c28 user: Ruslan Ermilov date: Fri Mar 13 16:42:52 2015 +0300 description: The "aio" directive parser made smarter. It now prints meaningful warnings on all platforms. No functional changes. diffstat: src/http/ngx_http_core_module.c | 77 +++++++++++++++++++++++++++------------- src/http/ngx_http_core_module.h | 2 - 2 files changed, 52 insertions(+), 27 deletions(-) diffs (140 lines): diff -r d84f0abd4a53 -r 942283a53c28 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Thu Mar 12 23:03:03 2015 +0300 +++ b/src/http/ngx_http_core_module.c Fri Mar 13 16:42:52 2015 +0300 @@ -54,6 +54,8 @@ static char *ngx_http_core_server_name(n static char *ngx_http_core_root(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_core_limit_except(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_http_core_set_aio(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); static char *ngx_http_core_directio(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_core_error_page(ngx_conf_t *cf, ngx_command_t *cmd, @@ -114,20 +116,6 @@ static ngx_conf_enum_t ngx_http_core_re }; -#if (NGX_HAVE_FILE_AIO) - -static ngx_conf_enum_t ngx_http_core_aio[] = { - { ngx_string("off"), NGX_HTTP_AIO_OFF }, - { ngx_string("on"), NGX_HTTP_AIO_ON }, -#if (NGX_HAVE_AIO_SENDFILE) - { ngx_string("sendfile"), NGX_HTTP_AIO_ON }, -#endif - { ngx_null_string, 0 } -}; - -#endif - - static ngx_conf_enum_t ngx_http_core_satisfy[] = { { ngx_string("all"), NGX_HTTP_SATISFY_ALL }, { ngx_string("any"), NGX_HTTP_SATISFY_ANY }, @@ -423,16 +411,12 @@ static ngx_command_t ngx_http_core_comm offsetof(ngx_http_core_loc_conf_t, sendfile_max_chunk), NULL }, -#if (NGX_HAVE_FILE_AIO) - { ngx_string("aio"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, - ngx_conf_set_enum_slot, + ngx_http_core_set_aio, NGX_HTTP_LOC_CONF_OFFSET, - offsetof(ngx_http_core_loc_conf_t, aio), - &ngx_http_core_aio }, - -#endif + 0, + NULL }, { ngx_string("read_ahead"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, @@ -3639,9 +3623,7 @@ ngx_http_core_create_loc_conf(ngx_conf_t clcf->internal = NGX_CONF_UNSET; clcf->sendfile = NGX_CONF_UNSET; clcf->sendfile_max_chunk = NGX_CONF_UNSET_SIZE; -#if (NGX_HAVE_FILE_AIO) clcf->aio = NGX_CONF_UNSET; -#endif clcf->read_ahead = NGX_CONF_UNSET_SIZE; clcf->directio = NGX_CONF_UNSET; clcf->directio_alignment = NGX_CONF_UNSET; @@ -3857,9 +3839,7 @@ ngx_http_core_merge_loc_conf(ngx_conf_t ngx_conf_merge_value(conf->sendfile, prev->sendfile, 0); ngx_conf_merge_size_value(conf->sendfile_max_chunk, prev->sendfile_max_chunk, 0); -#if (NGX_HAVE_FILE_AIO) ngx_conf_merge_value(conf->aio, 
prev->aio, NGX_HTTP_AIO_OFF); -#endif ngx_conf_merge_size_value(conf->read_ahead, prev->read_ahead, 0); ngx_conf_merge_off_value(conf->directio, prev->directio, NGX_OPEN_FILE_DIRECTIO_OFF); @@ -4654,6 +4634,53 @@ ngx_http_core_limit_except(ngx_conf_t *c static char * +ngx_http_core_set_aio(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_core_loc_conf_t *clcf = conf; + + ngx_str_t *value; + + if (clcf->aio != NGX_CONF_UNSET) { + return "is duplicate"; + } + + value = cf->args->elts; + + if (ngx_strcmp(value[1].data, "off") == 0) { + clcf->aio = NGX_HTTP_AIO_OFF; + return NGX_CONF_OK; + } + + if (ngx_strcmp(value[1].data, "on") == 0) { +#if (NGX_HAVE_FILE_AIO) + clcf->aio = NGX_HTTP_AIO_ON; + return NGX_CONF_OK; +#else + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"aio on\" " + "is unsupported on this platform"); + return NGX_CONF_ERROR; +#endif + } + +#if (NGX_HAVE_AIO_SENDFILE) + + if (ngx_strcmp(value[1].data, "sendfile") == 0) { + clcf->aio = NGX_HTTP_AIO_ON; + + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "the \"sendfile\" parameter of " + "the \"aio\" directive is deprecated"); + return NGX_CONF_OK; + } + +#endif + + return "invalid value"; +} + + +static char * ngx_http_core_directio(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) { ngx_http_core_loc_conf_t *clcf = conf; diff -r d84f0abd4a53 -r 942283a53c28 src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Thu Mar 12 23:03:03 2015 +0300 +++ b/src/http/ngx_http_core_module.h Fri Mar 13 16:42:52 2015 +0300 @@ -395,9 +395,7 @@ struct ngx_http_core_loc_conf_s { /* client_body_in_singe_buffer */ ngx_flag_t internal; /* internal */ ngx_flag_t sendfile; /* sendfile */ -#if (NGX_HAVE_FILE_AIO) ngx_flag_t aio; /* aio */ -#endif ngx_flag_t tcp_nopush; /* tcp_nopush */ ngx_flag_t tcp_nodelay; /* tcp_nodelay */ ngx_flag_t reset_timedout_connection; /* reset_timedout_connection */ From ru at nginx.com Fri Mar 13 13:43:37 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 13 Mar 2015 13:43:37 +0000 Subject: [nginx] Configure: removed redundant auto/have call. Message-ID: details: http://hg.nginx.org/nginx/rev/79b473d5381d branches: changeset: 6007:79b473d5381d user: Ruslan Ermilov date: Fri Mar 13 16:43:01 2015 +0300 description: Configure: removed redundant auto/have call. The auto/feature call above is enough to set NGX_HAVE_SENDFILE. diffstat: auto/os/darwin | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diffs (11 lines): diff -r 942283a53c28 -r 79b473d5381d auto/os/darwin --- a/auto/os/darwin Fri Mar 13 16:42:52 2015 +0300 +++ b/auto/os/darwin Fri Mar 13 16:43:01 2015 +0300 @@ -100,7 +100,6 @@ ngx_feature_test="int s = 0, fd = 1; . auto/feature if [ $ngx_found = yes ]; then - have=NGX_HAVE_SENDFILE . auto/have CORE_SRCS="$CORE_SRCS $DARWIN_SENDFILE_SRCS" fi From kibrahim at getpantheon.com Sun Mar 15 11:07:11 2015 From: kibrahim at getpantheon.com (Kyle Ibrahim) Date: Sun, 15 Mar 2015 04:07:11 -0700 Subject: [PATCH] Added support for client_scheme_in_redirect directive Message-ID: Currently, there is no way way to control the scheme which will be used in nginx-issued redirects. This is a problem when the client is potentially using a different scheme than nginx due to a SSL terminating load balancer. As some client requests may have started over http and some over https, we'd like to way to dynamically set the proper client scheme. This is a patch which adds a directive `client_scheme_in_redirect` to complement `server_name_in_redirect` and `port_in_redirect`. 
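For example, behind an SSL-terminating load balancer that passes the original scheme in an X-Forwarded-Proto header, the directive could be used along these lines (the header name and the $http_x_forwarded_proto variable are just one possible deployment choice, not something the patch itself requires):

    server {
        listen 80;

        # build the scheme of nginx-issued redirects from the scheme the
        # client originally used, as reported by the load balancer
        client_scheme_in_redirect $http_x_forwarded_proto;

        location /app/ {
            # a redirect issued here (e.g. the trailing-slash redirect)
            # will now say http:// or https:// depending on that header
            root /var/www;
        }
    }

When the directive is not set the behaviour is unchanged: https is used for SSL connections and http otherwise.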
A suggested documentation block is included in the commit. # HG changeset patch # User Kyle Ibrahim # Date 1426414581 25200 # Sun Mar 15 03:16:21 2015 -0700 # Node ID 9785f13c006025f180b354bdeac2de5d8cc9af8e # Parent 79b473d5381d85f79ab71b7aa85ecf9be1caf9fb Added support for client_scheme_in_redirect directive Syntax: client_scheme_in_redirect scheme; Default: -- Context: http, server, location The client_scheme_in_redirect directive defines the scheme in redirects issued by nginx. When not specified, the scheme will be https if the current connection is over ssl and http otherwise. The scheme value can contain variables. diff -r 79b473d5381d -r 9785f13c0060 contrib/vim/syntax/nginx.vim --- a/contrib/vim/syntax/nginx.vim Fri Mar 13 16:43:01 2015 +0300 +++ b/contrib/vim/syntax/nginx.vim Sun Mar 15 03:16:21 2015 -0700 @@ -96,6 +96,7 @@ syn keyword ngxDirective client_header_buffer_size syn keyword ngxDirective client_header_timeout syn keyword ngxDirective client_max_body_size +syn keyword ngxDirective client_scheme_in_redirect syn keyword ngxDirective connection_pool_size syn keyword ngxDirective create_full_put_path syn keyword ngxDirective daemon diff -r 79b473d5381d -r 9785f13c0060 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Fri Mar 13 16:43:01 2015 +0300 +++ b/src/http/ngx_http_core_module.c Sun Mar 15 03:16:21 2015 -0700 @@ -72,6 +72,8 @@ void *conf); static char *ngx_http_core_resolver(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_http_client_scheme_in_redirect(ngx_conf_t *cf, + ngx_command_t *cmd, void *conf); #if (NGX_HTTP_GZIP) static ngx_int_t ngx_http_gzip_accept_encoding(ngx_str_t *ae); static ngx_uint_t ngx_http_gzip_quantity(u_char *p, u_char *last); @@ -560,6 +562,13 @@ offsetof(ngx_http_core_loc_conf_t, reset_timedout_connection), NULL }, + { ngx_string("client_scheme_in_redirect"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_http_client_scheme_in_redirect, + NGX_HTTP_LOC_CONF_OFFSET, + 0, + NULL }, + { ngx_string("server_name_in_redirect"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -3642,6 +3651,7 @@ clcf->lingering_timeout = NGX_CONF_UNSET_MSEC; clcf->resolver_timeout = NGX_CONF_UNSET_MSEC; clcf->reset_timedout_connection = NGX_CONF_UNSET; + clcf->client_scheme_in_redirect = NGX_CONF_UNSET_PTR; clcf->server_name_in_redirect = NGX_CONF_UNSET; clcf->port_in_redirect = NGX_CONF_UNSET; clcf->msie_padding = NGX_CONF_UNSET; @@ -3898,6 +3908,8 @@ ngx_conf_merge_value(conf->reset_timedout_connection, prev->reset_timedout_connection, 0); + ngx_conf_merge_ptr_value(conf->client_scheme_in_redirect, + prev->client_scheme_in_redirect, NULL); ngx_conf_merge_value(conf->server_name_in_redirect, prev->server_name_in_redirect, 0); ngx_conf_merge_value(conf->port_in_redirect, prev->port_in_redirect, 1); @@ -5066,6 +5078,38 @@ return NGX_CONF_OK; } +static char * +ngx_http_client_scheme_in_redirect(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_core_loc_conf_t *clcf = conf; + + ngx_str_t *value; + ngx_http_compile_complex_value_t ccv; + + if (clcf->client_scheme_in_redirect != NGX_CONF_UNSET_PTR) { + return "is duplicate"; + } + + value = cf->args->elts; + + ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t)); + + ccv.cf = cf; + ccv.value = &value[1]; + ccv.complex_value = ngx_palloc(cf->pool, + sizeof(ngx_http_complex_value_t)); + if (ccv.complex_value == NULL) { + return NGX_CONF_ERROR; + } + + if 
(ngx_http_compile_complex_value(&ccv) != NGX_OK) { + return NGX_CONF_ERROR; + } + + clcf->client_scheme_in_redirect = ccv.complex_value; + + return NGX_CONF_OK; +} #if (NGX_HTTP_GZIP) diff -r 79b473d5381d -r 9785f13c0060 src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Fri Mar 13 16:43:01 2015 +0300 +++ b/src/http/ngx_http_core_module.h Sun Mar 15 03:16:21 2015 -0700 @@ -399,6 +399,7 @@ ngx_flag_t tcp_nopush; /* tcp_nopush */ ngx_flag_t tcp_nodelay; /* tcp_nodelay */ ngx_flag_t reset_timedout_connection; /* reset_timedout_connection */ + ngx_http_complex_value_t *client_scheme_in_redirect; /* client_scheme_in_redirect */ ngx_flag_t server_name_in_redirect; /* server_name_in_redirect */ ngx_flag_t port_in_redirect; /* port_in_redirect */ ngx_flag_t msie_padding; /* msie_padding */ diff -r 79b473d5381d -r 9785f13c0060 src/http/ngx_http_header_filter_module.c --- a/src/http/ngx_http_header_filter_module.c Fri Mar 13 16:43:01 2015 +0300 +++ b/src/http/ngx_http_header_filter_module.c Sun Mar 15 03:16:21 2015 -0700 @@ -152,7 +152,7 @@ { u_char *p; size_t len; - ngx_str_t host, *status_line; + ngx_str_t client_scheme, host, *status_line; ngx_buf_t *b; ngx_uint_t status, i, port; ngx_chain_t out; @@ -317,6 +317,15 @@ { r->headers_out.location->hash = 0; + if (clcf->client_scheme_in_redirect) { + if (ngx_http_complex_value(r, clcf->client_scheme_in_redirect, &client_scheme) != NGX_OK) { + return NGX_ERROR; + } + + } else { + ngx_str_null(&client_scheme); + } + if (clcf->server_name_in_redirect) { cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module); host = cscf->server_name; @@ -352,7 +361,12 @@ break; } - len += sizeof("Location: https://") - 1 + if (client_scheme.len) { + len += client_scheme.len; + } else { + len += sizeof("https") - 1; + } + len += sizeof("Location: ://") - 1 + host.len + r->headers_out.location->value.len + 2; @@ -374,6 +388,7 @@ } } else { + ngx_str_null(&client_scheme); ngx_str_null(&host); port = 0; } @@ -521,14 +536,19 @@ p = b->last + sizeof("Location: ") - 1; - b->last = ngx_cpymem(b->last, "Location: http", - sizeof("Location: http") - 1); + b->last = ngx_cpymem(b->last, "Location: ", + sizeof("Location: ") - 1); + if (client_scheme.len) { + b->last = ngx_copy(b->last, client_scheme.data, client_scheme.len); + } else { + b->last = ngx_cpymem(b->last, "http", sizeof("http") - 1); #if (NGX_HTTP_SSL) - if (c->ssl) { - *b->last++ ='s'; + if (c->ssl) { + *b->last++ ='s'; + } +#endif } -#endif *b->last++ = ':'; *b->last++ = '/'; *b->last++ = '/'; b->last = ngx_copy(b->last, host.data, host.len); diff -r 79b473d5381d -r 9785f13c0060 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Fri Mar 13 16:43:01 2015 +0300 +++ b/src/http/ngx_http_spdy_filter_module.c Sun Mar 15 03:16:21 2015 -0700 @@ -99,7 +99,7 @@ size_t len; u_char *p, *buf, *last; ngx_buf_t *b; - ngx_str_t host; + ngx_str_t client_scheme, host; ngx_uint_t i, j, count, port; ngx_chain_t *cl; ngx_list_part_t *part, *pt; @@ -217,6 +217,15 @@ { r->headers_out.location->hash = 0; + if (clcf->client_scheme_in_redirect) { + if (ngx_http_complex_value(r, clcf->client_scheme_in_redirect, &client_scheme) != NGX_OK) { + return NGX_ERROR; + } + + } else { + ngx_str_null(&client_scheme); + } + if (clcf->server_name_in_redirect) { cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module); host = cscf->server_name; @@ -252,8 +261,14 @@ break; } + if (client_scheme.len) { + len += client_scheme.len; + } else { + len += ngx_http_spdy_nv_vsize("https"); + } + len += 
ngx_http_spdy_nv_nsize("location") - + ngx_http_spdy_nv_vsize("https://") + + ngx_http_spdy_nv_vsize("://") + host.len + r->headers_out.location->value.len; @@ -275,6 +290,7 @@ } } else { + ngx_str_null(&client_scheme); ngx_str_null(&host); port = 0; } @@ -411,13 +427,16 @@ p = last + NGX_SPDY_NV_VLEN_SIZE; - last = ngx_cpymem(p, "http", sizeof("http") - 1); - + if (client_scheme.len) { + last = ngx_cpymem(p, client_scheme.data, client_scheme.len); + } else { + last = ngx_cpymem(p, "http", sizeof("http") - 1); #if (NGX_HTTP_SSL) - if (c->ssl) { - *last++ ='s'; + if (c->ssl) { + *last++ ='s'; + } +#endif } -#endif *last++ = ':'; *last++ = '/'; *last++ = '/'; -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangxiaochen0 at gmail.com Sun Mar 15 13:53:14 2015 From: wangxiaochen0 at gmail.com (Xiaochen Wang) Date: Sun, 15 Mar 2015 21:53:14 +0800 Subject: [PATCH] SPDY: fixed format specifiers in logging. Message-ID: <20150315135314.GB60773@gmail.com> # HG changeset patch # User Xiaochen Wang # Date 1426427181 -28800 # Node ID ec3b9c4277e33bfc9b25bbee67b74d5ee528366a # Parent 79b473d5381d85f79ab71b7aa85ecf9be1caf9fb SPDY: fixed format specifiers in logging. diff -r 79b473d5381d -r ec3b9c4277e3 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Fri Mar 13 16:43:01 2015 +0300 +++ b/src/http/ngx_http_spdy.c Sun Mar 15 21:46:21 2015 +0800 @@ -1353,7 +1353,7 @@ ngx_http_spdy_state_window_update(ngx_ht pos += NGX_SPDY_DELTA_SIZE; ngx_log_debug2(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, - "spdy WINDOW_UPDATE sid:%ui delta:%ui", sid, delta); + "spdy WINDOW_UPDATE sid:%ui delta:%uz", sid, delta); if (sid) { stream = ngx_http_spdy_get_stream_by_id(sc, sid); From dakota at brokenpipe.ru Mon Mar 16 00:05:37 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 03:05:37 +0300 Subject: When r != r->connection->data Message-ID: Hello again. I'm still struggling with the subrequests. And it looks like I've solved the most of everything, except for this. In a few modules I've found a trick: if (r != r->connection->data) r->connection->data = r; I've put this code to my post subrequest callback and before ngx_http_output_filter() call. If I remove this code, the request processing will hang dead. With this code my simple tests work fine. But when I try to make a subrequest from a subrequest I get something like this in the error log: 2015/03/16 02:55:39 [alert] 73485#0: *2 subrequest: "/lalala/?" logged again, client: 127.0.0.1, server: localhost, request: "POST /test1 HTTP/1.1", subrequest: "/lalala/", host: "localhost" And it looks like the subrequest processing is called twice. My question is about the proper scenario to handle the situation and about what exactly r != r->connection->data means. Thanks. -- Marat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dakota at brokenpipe.ru Mon Mar 16 12:14:15 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 15:14:15 +0300 Subject: No headers filter for a subrequest Message-ID: Hi, I was digging the reason why add_header in nginx config doesn't work for a subrequest and I've found this in ngx_http_headers_filter_module.c: if ((conf->expires == NGX_HTTP_EXPIRES_OFF && conf->headers == NULL) || r != r->main) { return ngx_http_next_header_filter(r); } Is there a particular reason to skip the filter for everything that's not a main request? -- Marat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Mar 16 12:23:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Mar 2015 15:23:38 +0300 Subject: No headers filter for a subrequest In-Reply-To: References: Message-ID: <20150316122338.GW88631@mdounin.ru> Hello! On Mon, Mar 16, 2015 at 03:14:15PM +0300, Marat Dakota wrote: > Hi, > > I was digging the reason why add_header in nginx config doesn't work for a > subrequest and I've found this in ngx_http_headers_filter_module.c: > > if ((conf->expires == NGX_HTTP_EXPIRES_OFF && conf->headers == NULL) > || r != r->main) > { > return ngx_http_next_header_filter(r); > } > > Is there a particular reason to skip the filter for everything that's not a > main request? The main reason is that there are no headers in subrequest responses. Headers are returned by the main request only. -- Maxim Dounin http://nginx.org/ From dakota at brokenpipe.ru Mon Mar 16 12:27:44 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 15:27:44 +0300 Subject: No headers filter for a subrequest In-Reply-To: <20150316122338.GW88631@mdounin.ru> References: <20150316122338.GW88631@mdounin.ru> Message-ID: But if I have: /location1 { ... } /location2 { add_header XXX-Some-Header Ololo; ... } And I do a subrequest from /location1 handler to /location2? On Mon, Mar 16, 2015 at 3:23 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 16, 2015 at 03:14:15PM +0300, Marat Dakota wrote: > > > Hi, > > > > I was digging the reason why add_header in nginx config doesn't work for > a > > subrequest and I've found this in ngx_http_headers_filter_module.c: > > > > if ((conf->expires == NGX_HTTP_EXPIRES_OFF && conf->headers == NULL) > > || r != r->main) > > { > > return ngx_http_next_header_filter(r); > > } > > > > Is there a particular reason to skip the filter for everything that's > not a > > main request? > > The main reason is that there are no headers in subrequest > responses. Headers are returned by the main request only. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dakota at brokenpipe.ru Mon Mar 16 12:32:30 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 15:32:30 +0300 Subject: No headers filter for a subrequest In-Reply-To: References: <20150316122338.GW88631@mdounin.ru> Message-ID: I also might have: /location3 { proxy_pass https://www.blabla; } In this case my subrequest has the response headers from www.blabla. This means that the statement that there are no headers in subrequest responses is not completely correct. On Mon, Mar 16, 2015 at 3:27 PM, Marat Dakota wrote: > But if I have: > > /location1 { > ... > } > > /location2 { > add_header XXX-Some-Header Ololo; > ... > } > > And I do a subrequest from /location1 handler to /location2? > > On Mon, Mar 16, 2015 at 3:23 PM, Maxim Dounin wrote: > >> Hello! 
>> >> On Mon, Mar 16, 2015 at 03:14:15PM +0300, Marat Dakota wrote: >> >> > Hi, >> > >> > I was digging the reason why add_header in nginx config doesn't work >> for a >> > subrequest and I've found this in ngx_http_headers_filter_module.c: >> > >> > if ((conf->expires == NGX_HTTP_EXPIRES_OFF && conf->headers == NULL) >> > || r != r->main) >> > { >> > return ngx_http_next_header_filter(r); >> > } >> > >> > Is there a particular reason to skip the filter for everything that's >> not a >> > main request? >> >> The main reason is that there are no headers in subrequest >> responses. Headers are returned by the main request only. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 16 12:39:08 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Mar 2015 15:39:08 +0300 Subject: No headers filter for a subrequest In-Reply-To: References: <20150316122338.GW88631@mdounin.ru> Message-ID: <20150316123908.GY88631@mdounin.ru> Hello! On Mon, Mar 16, 2015 at 03:32:30PM +0300, Marat Dakota wrote: > I also might have: > > /location3 { > proxy_pass https://www.blabla; > } > > In this case my subrequest has the response headers from www.blabla. This > means that the statement that there are no headers in subrequest responses > is not completely correct. In either case headers returned to the client are ones returned by the main request. Anything else, if present for some unrelated reasons, will be thrown away. [...] -- Maxim Dounin http://nginx.org/ From dakota at brokenpipe.ru Mon Mar 16 12:48:45 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 15:48:45 +0300 Subject: No headers filter for a subrequest In-Reply-To: <20150316123908.GY88631@mdounin.ru> References: <20150316122338.GW88631@mdounin.ru> <20150316123908.GY88631@mdounin.ru> Message-ID: Well, my problem is that I need these headers (I eat them from my module's header filter). I use subrequests for API calls. Does this mean that subrequests are the wrong way to do so? Do I need to use upstreams instead? On Mon, Mar 16, 2015 at 3:39 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 16, 2015 at 03:32:30PM +0300, Marat Dakota wrote: > > > I also might have: > > > > /location3 { > > proxy_pass https://www.blabla; > > } > > > > In this case my subrequest has the response headers from www.blabla. This > > means that the statement that there are no headers in subrequest > responses > > is not completely correct. > > In either case headers returned to the client are ones returned by > the main request. Anything else, if present for some unrelated > reasons, will be thrown away. > > [...] > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 16 13:00:27 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Mar 2015 16:00:27 +0300 Subject: No headers filter for a subrequest In-Reply-To: References: <20150316122338.GW88631@mdounin.ru> <20150316123908.GY88631@mdounin.ru> Message-ID: <20150316130027.GZ88631@mdounin.ru> Hello! 
On Mon, Mar 16, 2015 at 03:48:45PM +0300, Marat Dakota wrote: > Well, my problem is that I need these headers (I eat them from my module's > header filter). > I use subrequests for API calls. Does this mean that subrequests are the > wrong way to do so? Do I need to use upstreams instead? You can use/test headers that are already present in a response. But the headers filter module won't add additional headers as it's goal is to add headers visible in a response. -- Maxim Dounin http://nginx.org/ From dakota at brokenpipe.ru Mon Mar 16 13:11:04 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 16:11:04 +0300 Subject: No headers filter for a subrequest In-Reply-To: <20150316130027.GZ88631@mdounin.ru> References: <20150316122338.GW88631@mdounin.ru> <20150316123908.GY88631@mdounin.ru> <20150316130027.GZ88631@mdounin.ru> Message-ID: But I might copy-paste the code from ngx_http_headers_filter() to my header filter, right? Or something will blow up eventually? On Mon, Mar 16, 2015 at 4:00 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 16, 2015 at 03:48:45PM +0300, Marat Dakota wrote: > > > Well, my problem is that I need these headers (I eat them from my > module's > > header filter). > > I use subrequests for API calls. Does this mean that subrequests are the > > wrong way to do so? Do I need to use upstreams instead? > > You can use/test headers that are already present in a response. > But the headers filter module won't add additional headers as it's > goal is to add headers visible in a response. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 16 13:16:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Mar 2015 16:16:44 +0300 Subject: No headers filter for a subrequest In-Reply-To: References: <20150316122338.GW88631@mdounin.ru> <20150316123908.GY88631@mdounin.ru> <20150316130027.GZ88631@mdounin.ru> Message-ID: <20150316131644.GA88631@mdounin.ru> Hello! On Mon, Mar 16, 2015 at 04:11:04PM +0300, Marat Dakota wrote: > But I might copy-paste the code from ngx_http_headers_filter() to my header > filter, right? Or something will blow up eventually? Yes (with appropriate changes). Nothing will blow up if done properly. -- Maxim Dounin http://nginx.org/ From dakota at brokenpipe.ru Mon Mar 16 13:27:56 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 16:27:56 +0300 Subject: No headers filter for a subrequest In-Reply-To: <20150316131644.GA88631@mdounin.ru> References: <20150316122338.GW88631@mdounin.ru> <20150316123908.GY88631@mdounin.ru> <20150316130027.GZ88631@mdounin.ru> <20150316131644.GA88631@mdounin.ru> Message-ID: And one more question. I've noticed that I'll have to copy-paste ngx_http_header_val_s definition too (because it is defined in .c file) and this is a risk to have different structures when something is changed in nginx and my module still have the previous version. Another idea is to mock r->main and put it back after. Something like this in my header filter: main = r->main; r->main = r; rc = ngx_http_next_header_filter(r); r->main = main; This works for my current simple tests. Is this a completely bad idea? Thanks. On Mon, Mar 16, 2015 at 4:16 PM, Maxim Dounin wrote: > Hello! 
> > On Mon, Mar 16, 2015 at 04:11:04PM +0300, Marat Dakota wrote: > > > But I might copy-paste the code from ngx_http_headers_filter() to my > header > > filter, right? Or something will blow up eventually? > > Yes (with appropriate changes). Nothing will blow up if done > properly. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 16 13:58:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Mar 2015 16:58:20 +0300 Subject: No headers filter for a subrequest In-Reply-To: References: <20150316122338.GW88631@mdounin.ru> <20150316123908.GY88631@mdounin.ru> <20150316130027.GZ88631@mdounin.ru> <20150316131644.GA88631@mdounin.ru> Message-ID: <20150316135819.GB88631@mdounin.ru> Hello! On Mon, Mar 16, 2015 at 04:27:56PM +0300, Marat Dakota wrote: > And one more question. I've noticed that I'll have to > copy-paste ngx_http_header_val_s definition too (because it is defined in > .c file) and this is a risk to have different structures when something is > changed in nginx and my module still have the previous version. That's an internal structure to store the module configuration information. As long as you are adding your own module, you may use a different structure for this, or avoid it at all if it's not needed by your module. > Another idea is to mock r->main and put it back after. Something like this > in my header filter: > > main = r->main; > r->main = r; > rc = ngx_http_next_header_filter(r); > r->main = main; > > This works for my current simple tests. Is this a completely bad idea? This is a dirty hack and will result in undefined behaviour. -- Maxim Dounin http://nginx.org/ From agentzh at gmail.com Mon Mar 16 19:11:56 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 16 Mar 2015 12:11:56 -0700 Subject: When r != r->connection->data In-Reply-To: References: Message-ID: Hello! On Sun, Mar 15, 2015 at 5:05 PM, Marat Dakota wrote: > In a few modules I've found a trick: > > if (r != r->connection->data) > r->connection->data = r; > Careful. This is a common hack to cheat nginx's ngx_http_postpone_filter_module when the in-stock subrequest model cannot serve us well. When the currently serving (sub)request is not the request doing output, r is not equal to r->connection->data. The latter means the currently *active* request. This is needed for the postpone filter module mentioned above. You need to carefully study the filter module (and ngx_http_finalize_request) before doing any serious subrequest programming. And nginx subrequests are really a mess for nontrivial things IMHO (no offense to the official designer) and better avoid them :) Regards, -agentzh From dakota at brokenpipe.ru Mon Mar 16 20:50:35 2015 From: dakota at brokenpipe.ru (Marat Dakota) Date: Mon, 16 Mar 2015 23:50:35 +0300 Subject: When r != r->connection->data In-Reply-To: References: Message-ID: Thanks. I'll try to read the postpone filter code. It's too late to give up :). -- Marat On Mon, Mar 16, 2015 at 10:11 PM, Yichun Zhang (agentzh) wrote: > Hello! > > On Sun, Mar 15, 2015 at 5:05 PM, Marat Dakota wrote: > > In a few modules I've found a trick: > > > > if (r != r->connection->data) > > r->connection->data = r; > > > > Careful. 
This is a common hack to cheat nginx's > ngx_http_postpone_filter_module when the in-stock subrequest model > cannot serve us well. > > When the currently serving (sub)request is not the request doing > output, r is not equal to r->connection->data. The latter means the > currently *active* request. This is needed for the postpone filter > module mentioned above. You need to carefully study the filter module > (and ngx_http_finalize_request) before doing any serious subrequest > programming. And nginx subrequests are really a mess for nontrivial > things IMHO (no offense to the official designer) and better avoid > them :) > > Regards, > -agentzh > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Mar 16 20:55:21 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 16 Mar 2015 13:55:21 -0700 Subject: No headers filter for a subrequest In-Reply-To: References: Message-ID: Hello! On Mon, Mar 16, 2015 at 5:14 AM, Marat Dakota wrote: > I was digging the reason why add_header in nginx config doesn't work for a > subrequest and I've found this in ngx_http_headers_filter_module.c: > Try using my ngx_headers_more module for adding and overriding subrequests' response headers: https://github.com/openresty/headers-more-nginx-module#readme We've been doing this sort of thing for years :) Regards, -agentzh From hungnv at opensource.com.vn Tue Mar 17 06:56:04 2015 From: hungnv at opensource.com.vn (hungnv at opensource.com.vn) Date: Tue, 17 Mar 2015 13:56:04 +0700 Subject: Couple questions about module behaviour In-Reply-To: <20150213131536.GE19012@mdounin.ru> References: <20150212135252.GV19012@mdounin.ru> <20150213131536.GE19012@mdounin.ru> Message-ID: <686D71EE-06F4-4AB4-BDB3-209F26D9FD67@opensource.com.vn> Hello, > No, it means that a server doesn't know how many bytes a client > > actually received. So can we know how many bytes the server actually sent (write to the socket)? Thanks. -- Hùng Email: hungnv at opensource.com.vn > On Feb 13, 2015, at 8:15 PM, Maxim Dounin wrote: > > Hello! > > On Fri, Feb 13, 2015 at 09:49:08AM +0700, hungnv at opensource.com.vn wrote: > >> Well, this means there's another parameter in log module which > >> actually logs the number of bytes the client received (other than > >> $body_bytes_sent or $bytes_sent). ... > > No, it means that a server doesn't know how many bytes a client > > actually received. > > -- > > Maxim Dounin > > http://nginx.org/ > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel From a.marinov at ucdn.com Tue Mar 17 06:57:24 2015 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Tue, 17 Mar 2015 08:57:24 +0200 Subject: Couple questions about module behaviour In-Reply-To: <686D71EE-06F4-4AB4-BDB3-209F26D9FD67@opensource.com.vn> References: <20150212135252.GV19012@mdounin.ru> <20150213131536.GE19012@mdounin.ru> <686D71EE-06F4-4AB4-BDB3-209F26D9FD67@opensource.com.vn> Message-ID: r->connection->sent ? On Tue, Mar 17, 2015 at 8:56 AM, hungnv at opensource.com.vn < hungnv at opensource.com.vn> wrote: > Hello, > > No, it means that a server doesn't know how many bytes a client > > actually received. > > So can we know how many bytes the server actually sent (write to the socket)? > > Thanks. > > --
> > Hùng > Email: hungnv at opensource.com.vn > > > > > On Feb 13, 2015, at 8:15 PM, Maxim Dounin wrote: > > > > Hello! > > > > On Fri, Feb 13, 2015 at 09:49:08AM +0700, hungnv at opensource.com.vn > wrote: > > > >> Well, this means there's another parameter in log module which > >> actually logs the number of bytes the client received (other than > >> $body_bytes_sent or $bytes_sent). ... > > > > No, it means that a server doesn't know how many bytes a client > > actually received. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hungnv at opensource.com.vn Tue Mar 17 07:29:21 2015 From: hungnv at opensource.com.vn (hungnv at opensource.com.vn) Date: Tue, 17 Mar 2015 14:29:21 +0700 Subject: Couple questions about module behaviour In-Reply-To: References: <20150212135252.GV19012@mdounin.ru> <20150213131536.GE19012@mdounin.ru> <686D71EE-06F4-4AB4-BDB3-209F26D9FD67@opensource.com.vn> Message-ID: Hello, No, it's not. If you take a look at ngx_http_log_module.c, r->connection->sent is used by the function ngx_http_log_bytes_sent, which is later logged as the $bytes_sent variable, and it's actually the file size. -- Hùng Email: hungnv at opensource.com.vn > On Mar 17, 2015, at 1:57 PM, Anatoli Marinov wrote: > > r->connection->sent ? > > > On Tue, Mar 17, 2015 at 8:56 AM, hungnv at opensource.com.vn > wrote: > Hello, > > No, it means that a server doesn't know how many bytes a client > > actually received. > > So can we know how many bytes the server actually sent (write to the socket)? > > Thanks. > > -- > > Hùng > Email: hungnv at opensource.com.vn > > > > > On Feb 13, 2015, at 8:15 PM, Maxim Dounin > wrote: > > > > Hello! > > > > On Fri, Feb 13, 2015 at 09:49:08AM +0700, hungnv at opensource.com.vn wrote: > > > >> Well, this means there's another parameter in log module which > >> actually logs the number of bytes the client received (other than > >> $body_bytes_sent or $bytes_sent). ... > > > > No, it means that a server doesn't know how many bytes a client > > actually received. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Tue Mar 17 09:59:36 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Mar 2015 09:59:36 +0000 Subject: [nginx] Core: expose maximum values of time_t and ngx_int_t. Message-ID: details: http://hg.nginx.org/nginx/rev/b92d5a26d55f branches: changeset: 6008:b92d5a26d55f user: Ruslan Ermilov date: Tue Mar 17 00:24:34 2015 +0300 description: Core: expose maximum values of time_t and ngx_int_t. These are needed to detect overflows.
diffstat: auto/unix | 1 + src/core/ngx_config.h | 3 +++ src/os/win32/ngx_win32_config.h | 2 ++ 3 files changed, 6 insertions(+), 0 deletions(-) diffs (45 lines): diff -r 79b473d5381d -r b92d5a26d55f auto/unix --- a/auto/unix Fri Mar 13 16:43:01 2015 +0300 +++ b/auto/unix Tue Mar 17 00:24:34 2015 +0300 @@ -510,6 +510,7 @@ ngx_param=NGX_OFF_T_LEN; ngx_value=$ngx_ ngx_type="time_t"; . auto/types/sizeof ngx_param=NGX_TIME_T_SIZE; ngx_value=$ngx_size; . auto/types/value ngx_param=NGX_TIME_T_LEN; ngx_value=$ngx_max_len; . auto/types/value +ngx_param=NGX_MAX_TIME_T_VALUE; ngx_value=$ngx_max_value; . auto/types/value # syscalls, libc calls and some features diff -r 79b473d5381d -r b92d5a26d55f src/core/ngx_config.h --- a/src/core/ngx_config.h Fri Mar 13 16:43:01 2015 +0300 +++ b/src/core/ngx_config.h Tue Mar 17 00:24:34 2015 +0300 @@ -85,8 +85,11 @@ typedef intptr_t ngx_flag_t; #if (NGX_PTR_SIZE == 4) #define NGX_INT_T_LEN NGX_INT32_LEN +#define NGX_MAX_INT_T_VALUE 2147483647 + #else #define NGX_INT_T_LEN NGX_INT64_LEN +#define NGX_MAX_INT_T_VALUE 9223372036854775807 #endif diff -r 79b473d5381d -r b92d5a26d55f src/os/win32/ngx_win32_config.h --- a/src/os/win32/ngx_win32_config.h Fri Mar 13 16:43:01 2015 +0300 +++ b/src/os/win32/ngx_win32_config.h Tue Mar 17 00:24:34 2015 +0300 @@ -196,6 +196,7 @@ typedef int sig_atomic_t #define NGX_MAX_SIZE_T_VALUE 9223372036854775807 #define NGX_TIME_T_LEN (sizeof("-9223372036854775808") - 1) #define NGX_TIME_T_SIZE 8 +#define NGX_MAX_TIME_T_VALUE 9223372036854775807 #else @@ -204,6 +205,7 @@ typedef int sig_atomic_t #define NGX_MAX_SIZE_T_VALUE 2147483647 #define NGX_TIME_T_LEN (sizeof("-2147483648") - 1) #define NGX_TIME_T_SIZE 4 +#define NGX_MAX_TIME_T_VALUE 2147483647 #endif From ru at nginx.com Tue Mar 17 09:59:44 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Mar 2015 09:59:44 +0000 Subject: [nginx] Core: overflow detection in number parsing functions. Message-ID: details: http://hg.nginx.org/nginx/rev/15a15f6ae3a2 branches: changeset: 6009:15a15f6ae3a2 user: Ruslan Ermilov date: Tue Mar 17 00:26:15 2015 +0300 description: Core: overflow detection in number parsing functions. 
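The idea behind the cutoff/cutlim pairs in the diff below: for a maximum value M and base b, precompute cutoff = M / b and cutlim = M % b; then value * b + digit exceeds M exactly when value > cutoff, or when value == cutoff and digit > cutlim, so the overflow can be rejected before the multiplication ever happens. A minimal standalone illustration of the same check, not part of the patch (plain C for a long):

    #include <limits.h>
    #include <stddef.h>

    /* returns -1 on a non-digit or on overflow, the parsed value otherwise */
    static long
    parse_decimal(const char *line, size_t n)
    {
        long  value, cutoff, cutlim;

        cutoff = LONG_MAX / 10;
        cutlim = LONG_MAX % 10;

        for (value = 0; n--; line++) {
            if (*line < '0' || *line > '9') {
                return -1;
            }

            if (value >= cutoff && (value > cutoff || *line - '0' > cutlim)) {
                /* value * 10 + (*line - '0') would exceed LONG_MAX */
                return -1;
            }

            value = value * 10 + (*line - '0');
        }

        return value;
    }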
diffstat: src/core/ngx_parse.c | 25 +++++++----- src/core/ngx_string.c | 99 +++++++++++++++++++++++++++++--------------------- 2 files changed, 72 insertions(+), 52 deletions(-) diffs (truncated from 317 to 300 lines): diff -r b92d5a26d55f -r 15a15f6ae3a2 src/core/ngx_parse.c --- a/src/core/ngx_parse.c Tue Mar 17 00:24:34 2015 +0300 +++ b/src/core/ngx_parse.c Tue Mar 17 00:26:15 2015 +0300 @@ -12,10 +12,9 @@ ssize_t ngx_parse_size(ngx_str_t *line) { - u_char unit; - size_t len; - ssize_t size; - ngx_int_t scale; + u_char unit; + size_t len; + ssize_t size, scale, max; len = line->len; unit = line->data[len - 1]; @@ -24,21 +23,24 @@ ngx_parse_size(ngx_str_t *line) case 'K': case 'k': len--; + max = NGX_MAX_SIZE_T_VALUE / 1024; scale = 1024; break; case 'M': case 'm': len--; + max = NGX_MAX_SIZE_T_VALUE / (1024 * 1024); scale = 1024 * 1024; break; default: + max = NGX_MAX_SIZE_T_VALUE; scale = 1; } size = ngx_atosz(line->data, len); - if (size == NGX_ERROR) { + if (size == NGX_ERROR || size > max) { return NGX_ERROR; } @@ -51,10 +53,9 @@ ngx_parse_size(ngx_str_t *line) off_t ngx_parse_offset(ngx_str_t *line) { - u_char unit; - off_t offset; - size_t len; - ngx_int_t scale; + u_char unit; + off_t offset, scale, max; + size_t len; len = line->len; unit = line->data[len - 1]; @@ -63,27 +64,31 @@ ngx_parse_offset(ngx_str_t *line) case 'K': case 'k': len--; + max = NGX_MAX_OFF_T_VALUE / 1024; scale = 1024; break; case 'M': case 'm': len--; + max = NGX_MAX_OFF_T_VALUE / (1024 * 1024); scale = 1024 * 1024; break; case 'G': case 'g': len--; + max = NGX_MAX_OFF_T_VALUE / (1024 * 1024 * 1024); scale = 1024 * 1024 * 1024; break; default: + max = NGX_MAX_OFF_T_VALUE; scale = 1; } offset = ngx_atoof(line->data, len); - if (offset == NGX_ERROR) { + if (offset == NGX_ERROR || offset > max) { return NGX_ERROR; } diff -r b92d5a26d55f -r 15a15f6ae3a2 src/core/ngx_string.c --- a/src/core/ngx_string.c Tue Mar 17 00:24:34 2015 +0300 +++ b/src/core/ngx_string.c Tue Mar 17 00:26:15 2015 +0300 @@ -901,26 +901,28 @@ ngx_filename_cmp(u_char *s1, u_char *s2, ngx_int_t ngx_atoi(u_char *line, size_t n) { - ngx_int_t value; + ngx_int_t value, cutoff, cutlim; if (n == 0) { return NGX_ERROR; } + cutoff = NGX_MAX_INT_T_VALUE / 10; + cutlim = NGX_MAX_INT_T_VALUE % 10; + for (value = 0; n--; line++) { if (*line < '0' || *line > '9') { return NGX_ERROR; } + if (value >= cutoff && (value > cutoff || *line - '0' > cutlim)) { + return NGX_ERROR; + } + value = value * 10 + (*line - '0'); } - if (value < 0) { - return NGX_ERROR; - - } else { - return value; - } + return value; } @@ -929,13 +931,16 @@ ngx_atoi(u_char *line, size_t n) ngx_int_t ngx_atofp(u_char *line, size_t n, size_t point) { - ngx_int_t value; + ngx_int_t value, cutoff, cutlim; ngx_uint_t dot; if (n == 0) { return NGX_ERROR; } + cutoff = NGX_MAX_INT_T_VALUE / 10; + cutlim = NGX_MAX_INT_T_VALUE % 10; + dot = 0; for (value = 0; n--; line++) { @@ -957,98 +962,107 @@ ngx_atofp(u_char *line, size_t n, size_t return NGX_ERROR; } + if (value >= cutoff && (value > cutoff || *line - '0' > cutlim)) { + return NGX_ERROR; + } + value = value * 10 + (*line - '0'); point -= dot; } while (point--) { + if (value > cutoff) { + return NGX_ERROR; + } + value = value * 10; } - if (value < 0) { - return NGX_ERROR; - - } else { - return value; - } + return value; } ssize_t ngx_atosz(u_char *line, size_t n) { - ssize_t value; + ssize_t value, cutoff, cutlim; if (n == 0) { return NGX_ERROR; } + cutoff = NGX_MAX_SIZE_T_VALUE / 10; + cutlim = NGX_MAX_SIZE_T_VALUE % 10; + for (value = 0; 
n--; line++) { if (*line < '0' || *line > '9') { return NGX_ERROR; } + if (value >= cutoff && (value > cutoff || *line - '0' > cutlim)) { + return NGX_ERROR; + } + value = value * 10 + (*line - '0'); } - if (value < 0) { - return NGX_ERROR; - - } else { - return value; - } + return value; } off_t ngx_atoof(u_char *line, size_t n) { - off_t value; + off_t value, cutoff, cutlim; if (n == 0) { return NGX_ERROR; } + cutoff = NGX_MAX_OFF_T_VALUE / 10; + cutlim = NGX_MAX_OFF_T_VALUE % 10; + for (value = 0; n--; line++) { if (*line < '0' || *line > '9') { return NGX_ERROR; } + if (value >= cutoff && (value > cutoff || *line - '0' > cutlim)) { + return NGX_ERROR; + } + value = value * 10 + (*line - '0'); } - if (value < 0) { - return NGX_ERROR; - - } else { - return value; - } + return value; } time_t ngx_atotm(u_char *line, size_t n) { - time_t value; + time_t value, cutoff, cutlim; if (n == 0) { return NGX_ERROR; } + cutoff = NGX_MAX_TIME_T_VALUE / 10; + cutlim = NGX_MAX_TIME_T_VALUE % 10; + for (value = 0; n--; line++) { if (*line < '0' || *line > '9') { return NGX_ERROR; } + if (value >= cutoff && (value > cutoff || *line - '0' > cutlim)) { + return NGX_ERROR; + } + value = value * 10 + (*line - '0'); } - if (value < 0) { - return NGX_ERROR; - - } else { - return value; - } + return value; } @@ -1056,13 +1070,19 @@ ngx_int_t ngx_hextoi(u_char *line, size_t n) { u_char c, ch; - ngx_int_t value; + ngx_int_t value, cutoff; if (n == 0) { return NGX_ERROR; } + cutoff = NGX_MAX_INT_T_VALUE / 16; + for (value = 0; n--; line++) { + if (value > cutoff) { + return NGX_ERROR; + } + From ru at nginx.com Tue Mar 17 09:59:47 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Mar 2015 09:59:47 +0000 Subject: [nginx] Refactored ngx_parse_time(). Message-ID: details: http://hg.nginx.org/nginx/rev/040e2736e8dc branches: changeset: 6010:040e2736e8dc user: Ruslan Ermilov date: Tue Mar 17 00:26:18 2015 +0300 description: Refactored ngx_parse_time(). No functional changes. diffstat: src/core/ngx_parse.c | 4 +--- 1 files changed, 1 insertions(+), 3 deletions(-) diffs (28 lines): diff -r 15a15f6ae3a2 -r 040e2736e8dc src/core/ngx_parse.c --- a/src/core/ngx_parse.c Tue Mar 17 00:26:15 2015 +0300 +++ b/src/core/ngx_parse.c Tue Mar 17 00:26:18 2015 +0300 @@ -121,7 +121,6 @@ ngx_parse_time(ngx_str_t *line, ngx_uint value = 0; total = 0; step = is_sec ? st_start : st_month; - scale = is_sec ? 1 : 1000; p = line->data; last = p + line->len; @@ -239,7 +238,6 @@ ngx_parse_time(ngx_str_t *line, ngx_uint } value = 0; - scale = is_sec ? 1 : 1000; while (p < last && *p == ' ') { p++; @@ -247,7 +245,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint } if (valid) { - return total + value * scale; + return total + value * (is_sec ? 1 : 1000); } return NGX_ERROR; From ru at nginx.com Tue Mar 17 09:59:49 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Mar 2015 09:59:49 +0000 Subject: [nginx] Core: overflow detection in ngx_parse_time() (ticket #732). Message-ID: details: http://hg.nginx.org/nginx/rev/429a8c65f0a7 branches: changeset: 6011:429a8c65f0a7 user: Ruslan Ermilov date: Tue Mar 17 00:26:20 2015 +0300 description: Core: overflow detection in ngx_parse_time() (ticket #732). 
diffstat: src/core/ngx_parse.c | 53 ++++++++++++++++++++++++++++++++++++--------------- 1 files changed, 37 insertions(+), 16 deletions(-) diffs (161 lines): diff -r 040e2736e8dc -r 429a8c65f0a7 src/core/ngx_parse.c --- a/src/core/ngx_parse.c Tue Mar 17 00:26:18 2015 +0300 +++ b/src/core/ngx_parse.c Tue Mar 17 00:26:20 2015 +0300 @@ -103,7 +103,8 @@ ngx_parse_time(ngx_str_t *line, ngx_uint { u_char *p, *last; ngx_int_t value, total, scale; - ngx_uint_t max, valid; + ngx_int_t max, cutoff, cutlim; + ngx_uint_t valid; enum { st_start = 0, st_year, @@ -120,6 +121,8 @@ ngx_parse_time(ngx_str_t *line, ngx_uint valid = 0; value = 0; total = 0; + cutoff = NGX_MAX_INT_T_VALUE / 10; + cutlim = NGX_MAX_INT_T_VALUE % 10; step = is_sec ? st_start : st_month; p = line->data; @@ -128,6 +131,10 @@ ngx_parse_time(ngx_str_t *line, ngx_uint while (p < last) { if (*p >= '0' && *p <= '9') { + if (value >= cutoff && (value > cutoff || *p - '0' > cutlim)) { + return NGX_ERROR; + } + value = value * 10 + (*p++ - '0'); valid = 1; continue; @@ -140,7 +147,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_year; - max = NGX_MAX_INT32_VALUE / (60 * 60 * 24 * 365); + max = NGX_MAX_INT_T_VALUE / (60 * 60 * 24 * 365); scale = 60 * 60 * 24 * 365; break; @@ -149,7 +156,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_month; - max = NGX_MAX_INT32_VALUE / (60 * 60 * 24 * 30); + max = NGX_MAX_INT_T_VALUE / (60 * 60 * 24 * 30); scale = 60 * 60 * 24 * 30; break; @@ -158,7 +165,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_week; - max = NGX_MAX_INT32_VALUE / (60 * 60 * 24 * 7); + max = NGX_MAX_INT_T_VALUE / (60 * 60 * 24 * 7); scale = 60 * 60 * 24 * 7; break; @@ -167,7 +174,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_day; - max = NGX_MAX_INT32_VALUE / (60 * 60 * 24); + max = NGX_MAX_INT_T_VALUE / (60 * 60 * 24); scale = 60 * 60 * 24; break; @@ -176,7 +183,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_hour; - max = NGX_MAX_INT32_VALUE / (60 * 60); + max = NGX_MAX_INT_T_VALUE / (60 * 60); scale = 60 * 60; break; @@ -187,7 +194,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint } p++; step = st_msec; - max = NGX_MAX_INT32_VALUE; + max = NGX_MAX_INT_T_VALUE; scale = 1; break; } @@ -196,7 +203,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_min; - max = NGX_MAX_INT32_VALUE / 60; + max = NGX_MAX_INT_T_VALUE / 60; scale = 60; break; @@ -205,7 +212,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_sec; - max = NGX_MAX_INT32_VALUE; + max = NGX_MAX_INT_T_VALUE; scale = 1; break; @@ -214,7 +221,7 @@ ngx_parse_time(ngx_str_t *line, ngx_uint return NGX_ERROR; } step = st_last; - max = NGX_MAX_INT32_VALUE; + max = NGX_MAX_INT_T_VALUE; scale = 1; break; @@ -227,16 +234,18 @@ ngx_parse_time(ngx_str_t *line, ngx_uint max /= 1000; } - if ((ngx_uint_t) value > max) { + if (value > max) { return NGX_ERROR; } - total += value * scale; + value *= scale; - if ((ngx_uint_t) total > NGX_MAX_INT32_VALUE) { + if (total > NGX_MAX_INT_T_VALUE - value) { return NGX_ERROR; } + total += value; + value = 0; while (p < last && *p == ' ') { @@ -244,9 +253,21 @@ ngx_parse_time(ngx_str_t *line, ngx_uint } } - if (valid) { - return total + value * (is_sec ? 
1 : 1000); + if (!valid) { + return NGX_ERROR; } - return NGX_ERROR; + if (!is_sec) { + if (value > NGX_MAX_INT_T_VALUE / 1000) { + return NGX_ERROR; + } + + value *= 1000; + } + + if (total > NGX_MAX_INT_T_VALUE - value) { + return NGX_ERROR; + } + + return total + value; } From ru at nginx.com Tue Mar 17 09:59:52 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Mar 2015 09:59:52 +0000 Subject: [nginx] Overflow detection in ngx_inet_addr(). Message-ID: details: http://hg.nginx.org/nginx/rev/550212836c8f branches: changeset: 6012:550212836c8f user: Ruslan Ermilov date: Tue Mar 17 00:26:22 2015 +0300 description: Overflow detection in ngx_inet_addr(). diffstat: src/core/ngx_inet.c | 8 ++++++-- 1 files changed, 6 insertions(+), 2 deletions(-) diffs (32 lines): diff -r 429a8c65f0a7 -r 550212836c8f src/core/ngx_inet.c --- a/src/core/ngx_inet.c Tue Mar 17 00:26:20 2015 +0300 +++ b/src/core/ngx_inet.c Tue Mar 17 00:26:22 2015 +0300 @@ -27,6 +27,10 @@ ngx_inet_addr(u_char *text, size_t len) for (p = text; p < text + len; p++) { + if (octet > 255) { + return INADDR_NONE; + } + c = *p; if (c >= '0' && c <= '9') { @@ -34,7 +38,7 @@ ngx_inet_addr(u_char *text, size_t len) continue; } - if (c == '.' && octet < 256) { + if (c == '.') { addr = (addr << 8) + octet; octet = 0; n++; @@ -44,7 +48,7 @@ ngx_inet_addr(u_char *text, size_t len) return INADDR_NONE; } - if (n == 3 && octet < 256) { + if (n == 3) { addr = (addr << 8) + octet; return htonl(addr); } From ru at nginx.com Tue Mar 17 09:59:54 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Mar 2015 09:59:54 +0000 Subject: [nginx] Overflow detection in ngx_http_range_parse(). Message-ID: details: http://hg.nginx.org/nginx/rev/9653092a79fd branches: changeset: 6013:9653092a79fd user: Ruslan Ermilov date: Tue Mar 17 00:26:24 2015 +0300 description: Overflow detection in ngx_http_range_parse(). diffstat: src/http/modules/ngx_http_range_filter_module.c | 13 ++++++++++++- 1 files changed, 12 insertions(+), 1 deletions(-) diffs (44 lines): diff -r 550212836c8f -r 9653092a79fd src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Tue Mar 17 00:26:22 2015 +0300 +++ b/src/http/modules/ngx_http_range_filter_module.c Tue Mar 17 00:26:24 2015 +0300 @@ -274,7 +274,7 @@ ngx_http_range_parse(ngx_http_request_t ngx_uint_t ranges) { u_char *p; - off_t start, end, size, content_length; + off_t start, end, size, content_length, cutoff, cutlim; ngx_uint_t suffix; ngx_http_range_t *range; @@ -282,6 +282,9 @@ ngx_http_range_parse(ngx_http_request_t size = 0; content_length = r->headers_out.content_length_n; + cutoff = NGX_MAX_OFF_T_VALUE / 10; + cutlim = NGX_MAX_OFF_T_VALUE % 10; + for ( ;; ) { start = 0; end = 0; @@ -295,6 +298,10 @@ ngx_http_range_parse(ngx_http_request_t } while (*p >= '0' && *p <= '9') { + if (start >= cutoff && (start > cutoff || *p - '0' > cutlim)) { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + start = start * 10 + *p++ - '0'; } @@ -321,6 +328,10 @@ ngx_http_range_parse(ngx_http_request_t } while (*p >= '0' && *p <= '9') { + if (end >= cutoff && (end > cutoff || *p - '0' > cutlim)) { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + end = end * 10 + *p++ - '0'; } From ru at nginx.com Tue Mar 17 10:00:09 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Mar 2015 10:00:09 +0000 Subject: [nginx] Overflow detection in ngx_http_parse_chunked(). 
Message-ID: details: http://hg.nginx.org/nginx/rev/e370c5fdf4c8 branches: changeset: 6014:e370c5fdf4c8 user: Ruslan Ermilov date: Tue Mar 17 00:26:27 2015 +0300 description: Overflow detection in ngx_http_parse_chunked(). diffstat: src/http/ngx_http_parse.c | 12 ++++++++---- 1 files changed, 8 insertions(+), 4 deletions(-) diffs (36 lines): diff -r 9653092a79fd -r e370c5fdf4c8 src/http/ngx_http_parse.c --- a/src/http/ngx_http_parse.c Tue Mar 17 00:26:24 2015 +0300 +++ b/src/http/ngx_http_parse.c Tue Mar 17 00:26:27 2015 +0300 @@ -2155,6 +2155,10 @@ ngx_http_parse_chunked(ngx_http_request_ goto invalid; case sw_chunk_size: + if (ctx->size > NGX_MAX_OFF_T_VALUE / 16) { + goto invalid; + } + if (ch >= '0' && ch <= '9') { ctx->size = ctx->size * 16 + (ch - '0'); break; } @@ -2304,6 +2308,10 @@ data: ctx->state = state; b->pos = pos; + if (ctx->size > NGX_MAX_OFF_T_VALUE - 5) { + goto invalid; + } + switch (state) { case sw_chunk_start: @@ -2340,10 +2348,6 @@ data: } - if (ctx->size < 0 || ctx->length < 0) { - goto invalid; - } - return rc; done: From mdounin at mdounin.ru Tue Mar 17 13:46:48 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Mar 2015 16:46:48 +0300 Subject: Couple questions about module behaviour In-Reply-To: References: <20150212135252.GV19012@mdounin.ru> <20150213131536.GE19012@mdounin.ru> <686D71EE-06F4-4AB4-BDB3-209F26D9FD67@opensource.com.vn> Message-ID: <20150317134648.GM88631@mdounin.ru> Hello! On Tue, Mar 17, 2015 at 02:29:21PM +0700, hungnv at opensource.com.vn wrote: > Hello, > > No, it's not. If you take a look at ngx_http_log_module.c, > r->connection->sent is used by the function ngx_http_log_bytes_sent, > which is later logged as the $bytes_sent variable, and it's actually > the file size. No. The $bytes_sent variable is a number of bytes nginx wrote to a socket, not a file size. Note though, that as long as socket buffers are big enough the whole response will fit into the send buffer, and you won't see the difference. -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Tue Mar 17 18:22:38 2015 From: kyprizel at gmail.com (kyprizel) Date: Tue, 17 Mar 2015 21:22:38 +0300 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 Message-ID: Hi, Sorry for spamming - the previous message was sent to the wrong mailing list and possibly included a broken patch. This patch mostly finishes Rob Stradling's patch discussed in the thread http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004475.html Multiple certificate support works only with OpenSSL >= 1.0.2. Only certificates with different crypto algorithms (ECC/RSA/DSA) can be used b/c of OpenSSL limitations, otherwise (RSA+SHA-256 / RSA-SHA-1 for example) only the last one specified in the config will be used. Can you please review it. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx_multicert1.patch Type: application/octet-stream Size: 48228 bytes Desc: not available URL: From albertcasademont at gmail.com Tue Mar 17 18:27:20 2015 From: albertcasademont at gmail.com (Albert Casademont Filella) Date: Tue, 17 Mar 2015 19:27:20 +0100 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 In-Reply-To: References: Message-ID: This would be a very nice addition indeed, thanks!! I guess it needs quite a lot of testing though, as ECC certs are still not really common these days.
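For anyone who wants to try it: assuming the patch lets ssl_certificate and ssl_certificate_key be specified more than once per server (which is what its array-based ngx_ssl_certificates() interface suggests), a dual-certificate server would presumably be configured along these lines, with the file names being placeholders:

    server {
        listen 443 ssl;
        server_name example.com;

        # RSA certificate and key for older clients
        ssl_certificate     example.com.rsa.crt;
        ssl_certificate_key example.com.rsa.key;

        # ECDSA certificate and key for clients that can negotiate it
        ssl_certificate     example.com.ecdsa.crt;
        ssl_certificate_key example.com.ecdsa.key;
    }

With OpenSSL >= 1.0.2 the library then selects the certificate whose key type matches what the client negotiated.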
BTW and before some of the core devs says it patches should be sent in the email body, not as an attachment. It is much more convenient for reviewing it ;) On Tue, Mar 17, 2015 at 7:22 PM, kyprizel wrote: > Hi, > Sorry for spamming - previous message was sent to wrong mailing list and > possibly included broken patch. > > This patch is mostly finishing of Rob Stradlings patch discussed in thread > http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004475.html > > Multi certificate support works only for OpenSSL >= 1.0.2. > Only certificates with different crypto algorithms (ECC/RSA/DSA) can be > used b/c of OpenSSL limitations, otherwise (RSA+SHA-256 / RSA-SHA-1 for > example) only last specified in the config will be used. > Can you please review it. > > Thank you. > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyprizel at gmail.com Tue Mar 17 18:38:42 2015 From: kyprizel at gmail.com (kyprizel) Date: Tue, 17 Mar 2015 21:38:42 +0300 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 In-Reply-To: References: Message-ID: Sure it should be tested (there are can be some memory leaks). Need to know if it's idologically acceptable. Nginx with dual cert support can be tested at https://ctftime.org. Patch in body inline: # HG changeset patch # User Eldar Zaitov # Date 1426616118 -10800 # Node ID 83b0f57fbcb514ffd74bb89070580473bacd286e # Parent e370c5fdf4c8edc2e8d33d7170c1b1cc74a2ecb6 Multiple SSL certificate support with OpenSSL >= 1.0.2 diff -r e370c5fdf4c8 -r 83b0f57fbcb5 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Tue Mar 17 00:26:27 2015 +0300 +++ b/src/event/ngx_event_openssl.c Tue Mar 17 21:15:18 2015 +0300 @@ -286,198 +286,282 @@ ngx_int_t -ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, - ngx_str_t *key, ngx_array_t *passwords) +ngx_ssl_certificates(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_array_t *certs, + ngx_array_t *keys, ngx_array_t *passwords) { - BIO *bio; - X509 *x509; - u_long n; - ngx_str_t *pwd; - ngx_uint_t tries; - - if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { + ngx_str_t *pwd; + ngx_uint_t tries; + ngx_str_t *cert; + ngx_str_t *key; + ngx_uint_t i, j; + u_long n; + BIO *bio; + EVP_PKEY *pkey; + X509 *x509; + X509 *x509_ca; + STACK_OF(X509) *chain; + ngx_array_t *certificates; + ngx_ssl_certificate_t *cert_info; + + bio = NULL; + pkey = NULL; + x509 = NULL; + x509_ca = NULL; + + cert = certs->elts; + key = keys->elts; + + certificates = ngx_array_create(cf->pool, certs->nelts, + sizeof(ngx_ssl_certificate_t)); + if (certificates == NULL) { return NGX_ERROR; } - /* - * we can't use SSL_CTX_use_certificate_chain_file() as it doesn't - * allow to access certificate later from SSL_CTX, so we reimplement - * it here - */ - - bio = BIO_new_file((char *) cert->data, "r"); - if (bio == NULL) { - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "BIO_new_file(\"%s\") failed", cert->data); - return NGX_ERROR; - } - - x509 = PEM_read_bio_X509_AUX(bio, NULL, NULL, NULL); - if (x509 == NULL) { - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "PEM_read_bio_X509_AUX(\"%s\") failed", cert->data); - BIO_free(bio); - return NGX_ERROR; - } - - if (SSL_CTX_use_certificate(ssl->ctx, x509) == 0) { - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "SSL_CTX_use_certificate(\"%s\") failed", cert->data); - X509_free(x509); - 
BIO_free(bio); - return NGX_ERROR; - } - - if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_certificate_index, x509) - == 0) - { - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "SSL_CTX_set_ex_data() failed"); - X509_free(x509); - BIO_free(bio); - return NGX_ERROR; - } - - X509_free(x509); - - /* read rest of the chain */ - - for ( ;; ) { - - x509 = PEM_read_bio_X509(bio, NULL, NULL, NULL); + for (i = 0; i < certs->nelts; i++) { + + /* load server certificate */ + + if (ngx_conf_full_name(cf->cycle, &cert[i], 1) != NGX_OK) { + goto failed; + } + + /* + * we can't use SSL_CTX_use_certificate_chain_file() as it doesn't + * allow to access certificate later from SSL_CTX, so we reimplement + * it here + */ + + bio = BIO_new_file((char *) cert[i].data, "r"); + if (bio == NULL) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "BIO_new_file(\"%V\") failed", &cert[i]); + goto failed; + } + + x509 = PEM_read_bio_X509_AUX(bio, NULL, NULL, NULL); if (x509 == NULL) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "PEM_read_bio_X509_AUX(\"%V\") failed", &cert[i]); + goto failed; + } + + if (SSL_CTX_use_certificate(ssl->ctx, x509) == 0) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_use_certificate(\"%V\") failed", &cert[i]); + goto failed; + } + + + /* read rest of the chain */ + + for (j = 0; ; j++) { + + x509_ca = PEM_read_bio_X509(bio, NULL, NULL, NULL); n = ERR_peek_last_error(); - if (ERR_GET_LIB(n) == ERR_LIB_PEM - && ERR_GET_REASON(n) == PEM_R_NO_START_LINE) - { - /* end of file */ - ERR_clear_error(); + if (x509_ca == NULL) { + + if (ERR_GET_LIB(n) == ERR_LIB_PEM + && ERR_GET_REASON(n) == PEM_R_NO_START_LINE) + { + /* end of file */ + ERR_clear_error(); + break; + } + + /* some real error */ + + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "PEM_read_bio_X509(\"%V\") failed", &cert[i]); + goto failed; + } + +#ifdef SSL_CTX_add0_chain_cert + /* OpenSSL >=1.0.2 allows multiple server certificates in a single + * SSL_CTX to each have a different chain + */ + if (SSL_CTX_add0_chain_cert(ssl->ctx, x509_ca) == 0) { + + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_add0_chain_cert(\"%V\") failed", + &cert[i]); + goto failed; + } +#else + if (i == 0) { + if (SSL_CTX_add_extra_chain_cert(ssl->ctx, x509_ca) == 0) { + + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_add_extra_chain_cert() failed"); + goto failed; + } break; } - - /* some real error */ +#endif + + } + + BIO_free(bio); + bio = NULL; + + + /* load private key */ + + if (ngx_strncmp(key[i].data, "engine:", sizeof("engine:") - 1) == 0) { +#ifndef OPENSSL_NO_ENGINE + + u_char *p, *last; + ENGINE *engine; + + p = key[i].data + sizeof("engine:") - 1; + last = (u_char *) ngx_strchr(p, ':'); + + if (last == NULL) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid syntax in \"%V\"", &(key[i])); + goto failed; + } + + + *last = '\0'; + + engine = ENGINE_by_id((char *) p); + + if (engine == NULL) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "ENGINE_by_id(\"%s\") failed", p); + goto failed; + } + + *last++ = ':'; + + pkey = ENGINE_load_private_key(engine, (char *) last, 0, 0); + + if (pkey == NULL) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "ENGINE_load_private_key(\"%s\") failed", last); + ENGINE_free(engine); + goto failed; + } + + ENGINE_free(engine); + + if (SSL_CTX_use_PrivateKey(ssl->ctx, pkey) == 0) { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_use_PrivateKey(\"%s\") failed", last); + goto failed; + } + + EVP_PKEY_free(pkey); + pkey = NULL; + + continue; +#else + + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + 
"loading \"engine:...\" certificate keys " + "is not supported"); + goto failed; + +#endif + } + + if (ngx_conf_full_name(cf->cycle, &key[i], 1) != NGX_OK) { + goto failed; + } + + if (passwords) { + tries = passwords->nelts; + pwd = passwords->elts; + + SSL_CTX_set_default_passwd_cb(ssl->ctx, ngx_ssl_password_callback); + SSL_CTX_set_default_passwd_cb_userdata(ssl->ctx, pwd); + } else { + tries = 1; +#if (NGX_SUPPRESS_WARN) + pwd = NULL; +#endif + } + + + for ( ;; ) { + if (SSL_CTX_use_PrivateKey_file(ssl->ctx, (char *) key[i].data, + SSL_FILETYPE_PEM) + != 0) + { + break; + } + + if (--tries) { + ERR_clear_error(); + SSL_CTX_set_default_passwd_cb_userdata(ssl->ctx, ++pwd); + continue; + } ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "PEM_read_bio_X509(\"%s\") failed", cert->data); - BIO_free(bio); - return NGX_ERROR; + "SSL_CTX_use_PrivateKey_file(\"%s\") failed", + key[i].data); + goto failed; } - if (SSL_CTX_add_extra_chain_cert(ssl->ctx, x509) == 0) { + if (SSL_CTX_check_private_key(ssl->ctx) < 1) { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "SSL_CTX_add_extra_chain_cert(\"%s\") failed", - cert->data); - X509_free(x509); - BIO_free(bio); - return NGX_ERROR; + "PrivateKey \"%V\" does not match \"%V\" failed", + &key[i], &cert[i]); + goto failed; } - } - - BIO_free(bio); - - if (ngx_strncmp(key->data, "engine:", sizeof("engine:") - 1) == 0) { - -#ifndef OPENSSL_NO_ENGINE - - u_char *p, *last; - ENGINE *engine; - EVP_PKEY *pkey; - - p = key->data + sizeof("engine:") - 1; - last = (u_char *) ngx_strchr(p, ':'); - - if (last == NULL) { - ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, - "invalid syntax in \"%V\"", key); - return NGX_ERROR; + + SSL_CTX_set_default_passwd_cb(ssl->ctx, NULL); + + +#ifdef SSL_CTX_get0_chain_certs + SSL_CTX_get0_chain_certs(ssl->ctx, &chain); +#else +#if OPENSSL_VERSION_NUMBER >= 0x10001000L + SSL_CTX_get_extra_chain_certs(ssl->ctx, &chain); +#else + chain = ssl->ctx->extra_certs; +#endif +#endif + + cert_info = ngx_array_push(certificates); + if (cert_info == NULL) { + goto failed; } - *last = '\0'; - - engine = ENGINE_by_id((char *) p); - - if (engine == NULL) { - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "ENGINE_by_id(\"%s\") failed", p); - return NGX_ERROR; - } - - *last++ = ':'; - - pkey = ENGINE_load_private_key(engine, (char *) last, 0, 0); - - if (pkey == NULL) { - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "ENGINE_load_private_key(\"%s\") failed", last); - ENGINE_free(engine); - return NGX_ERROR; - } - - ENGINE_free(engine); - - if (SSL_CTX_use_PrivateKey(ssl->ctx, pkey) == 0) { - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "SSL_CTX_use_PrivateKey(\"%s\") failed", last); - EVP_PKEY_free(pkey); - return NGX_ERROR; - } - + cert_info->issuer = NULL; + cert_info->cert = x509; + CRYPTO_add(&x509->references, 1, CRYPTO_LOCK_X509); + + cert_info->chain = chain; + + + } + + /* store cert info for future use in stapling and sessions */ + + if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_certificate_index, certificates) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_set_ex_data() failed"); + goto failed; + } + + return NGX_OK; + +failed: + + if (bio) + BIO_free(bio); + if (pkey) EVP_PKEY_free(pkey); - - return NGX_OK; - -#else - - ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, - "loading \"engine:...\" certificate keys " - "is not supported"); - return NGX_ERROR; - -#endif - } - - if (ngx_conf_full_name(cf->cycle, key, 1) != NGX_OK) { - return NGX_ERROR; - } - - if (passwords) { - tries = passwords->nelts; - pwd = passwords->elts; - - 
SSL_CTX_set_default_passwd_cb(ssl->ctx, ngx_ssl_password_callback); - SSL_CTX_set_default_passwd_cb_userdata(ssl->ctx, pwd); - - } else { - tries = 1; -#if (NGX_SUPPRESS_WARN) - pwd = NULL; -#endif - } - - for ( ;; ) { - - if (SSL_CTX_use_PrivateKey_file(ssl->ctx, (char *) key->data, - SSL_FILETYPE_PEM) - != 0) - { - break; - } - - if (--tries) { - ERR_clear_error(); - SSL_CTX_set_default_passwd_cb_userdata(ssl->ctx, ++pwd); - continue; - } - - ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, - "SSL_CTX_use_PrivateKey_file(\"%s\") failed", key->data); - return NGX_ERROR; - } - - SSL_CTX_set_default_passwd_cb(ssl->ctx, NULL); - - return NGX_OK; + if (x509) + X509_free(x509); + if (x509_ca) + X509_free(x509_ca); + + return NGX_ERROR; } @@ -2111,13 +2195,14 @@ static ngx_int_t ngx_ssl_session_id_context(ngx_ssl_t *ssl, ngx_str_t *sess_ctx) { - int n, i; - X509 *cert; - X509_NAME *name; - EVP_MD_CTX md; - unsigned int len; - STACK_OF(X509_NAME) *list; - u_char buf[EVP_MAX_MD_SIZE]; + int n, i; + X509_NAME *name; + EVP_MD_CTX md; + unsigned int len; + STACK_OF(X509_NAME) *list; + u_char buf[EVP_MAX_MD_SIZE]; + ngx_array_t *certs; + ngx_ssl_certificate_t *cert_info; /* * Session ID context is set based on the string provided, @@ -2138,9 +2223,14 @@ goto failed; } - cert = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_certificate_index); - - if (X509_digest(cert, EVP_sha1(), buf, &len) == 0) { + certs = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_certificate_index); + if (!certs || certs->nelts == 0) { + goto failed; + } + + cert_info = certs->elts; + + if (X509_digest((&cert_info[0])->cert, EVP_sha1(), buf, &len) == 0) { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "X509_digest() failed"); goto failed; diff -r e370c5fdf4c8 -r 83b0f57fbcb5 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Tue Mar 17 00:26:27 2015 +0300 +++ b/src/event/ngx_event_openssl.h Tue Mar 17 21:15:18 2015 +0300 @@ -45,6 +45,13 @@ typedef struct { + X509 *cert; + X509 *issuer; + STACK_OF(X509) *chain; +} ngx_ssl_certificate_t; + + +typedef struct { ngx_ssl_conn_t *connection; ngx_int_t last; @@ -122,15 +129,15 @@ ngx_int_t ngx_ssl_init(ngx_log_t *log); ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data); -ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, - ngx_str_t *cert, ngx_str_t *key, ngx_array_t *passwords); +ngx_int_t ngx_ssl_certificates(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_array_t *certs, ngx_array_t *keys, ngx_array_t *passwords); ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth); ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth); ngx_int_t ngx_ssl_crl(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *crl); ngx_int_t ngx_ssl_stapling(ngx_conf_t *cf, ngx_ssl_t *ssl, - ngx_str_t *file, ngx_str_t *responder, ngx_uint_t verify); + ngx_array_t *files, ngx_array_t *responders, ngx_uint_t verify); ngx_int_t ngx_ssl_stapling_resolver(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_resolver_t *resolver, ngx_msec_t resolver_timeout); RSA *ngx_ssl_rsa512_key_callback(ngx_ssl_conn_t *ssl_conn, int is_export, diff -r e370c5fdf4c8 -r 83b0f57fbcb5 src/event/ngx_event_openssl_stapling.c --- a/src/event/ngx_event_openssl_stapling.c Tue Mar 17 00:26:27 2015 +0300 +++ b/src/event/ngx_event_openssl_stapling.c Tue Mar 17 21:15:18 2015 +0300 @@ -28,8 +28,7 @@ SSL_CTX *ssl_ctx; - X509 *cert; - X509 *issuer; + ngx_ssl_certificate_t *cert_info; time_t valid; @@ -83,10 +82,11 @@ static ngx_int_t 
ngx_ssl_stapling_file(ngx_conf_t *cf, ngx_ssl_t *ssl, - ngx_str_t *file); -static ngx_int_t ngx_ssl_stapling_issuer(ngx_conf_t *cf, ngx_ssl_t *ssl); + ngx_ssl_stapling_t *staple, ngx_str_t *file); +static ngx_int_t ngx_ssl_stapling_issuer(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_ssl_stapling_t *staple); static ngx_int_t ngx_ssl_stapling_responder(ngx_conf_t *cf, ngx_ssl_t *ssl, - ngx_str_t *responder); + ngx_ssl_stapling_t *staple, ngx_str_t *responder); static int ngx_ssl_certificate_status_callback(ngx_ssl_conn_t *ssl_conn, void *data); @@ -115,15 +115,29 @@ ngx_int_t -ngx_ssl_stapling(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file, - ngx_str_t *responder, ngx_uint_t verify) +ngx_ssl_stapling(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_array_t *files, + ngx_array_t *responders, ngx_uint_t verify) { - ngx_int_t rc; - ngx_pool_cleanup_t *cln; - ngx_ssl_stapling_t *staple; + ngx_uint_t i; + ngx_array_t *staples; + ngx_array_t *certificates; + ngx_pool_cleanup_t *cln; + ngx_str_t *responder, *file; + ngx_str_t empty_responder = ngx_null_string; + ngx_ssl_stapling_t *staple; + ngx_ssl_certificate_t *cert_info; - staple = ngx_pcalloc(cf->pool, sizeof(ngx_ssl_stapling_t)); - if (staple == NULL) { + responder = NULL; + file = NULL; + + certificates = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_certificate_index); + if (certificates == NULL || (certificates->nelts == 0)) { + return NGX_ERROR; + } + + staples = ngx_array_create(cf->pool, certificates->nelts, + sizeof(ngx_ssl_stapling_t)); + if (staples == NULL) { return NGX_ERROR; } @@ -133,9 +147,66 @@ } cln->handler = ngx_ssl_stapling_cleanup; - cln->data = staple; + cln->data = staples; - if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_stapling_index, staple) + + cert_info = certificates->elts; + staple = staples->elts; + + if (responders) + responder = responders->elts; + + if (files) + file = files->elts; + + for (i = 0; i < certificates->nelts; i++) { + + staple = ngx_array_push(staples); + staple->timeout = 60000; + staple->ssl_ctx = ssl->ctx; + CRYPTO_add(&ssl->ctx->references, 1, CRYPTO_LOCK_X509); + + staple->cert_info = &cert_info[i]; + if (ngx_ssl_stapling_issuer(cf, ssl, staple) == NGX_ERROR) { + return NGX_ERROR; + } + staple->verify = verify; + + if (responder && responders->nelts > i) { + + if (ngx_ssl_stapling_responder(cf, ssl, staple, &responder[i]) + != NGX_OK) + { + return NGX_ERROR; + } + + } else { + + if (ngx_ssl_stapling_responder(cf, ssl, staple, &empty_responder) + == NGX_ERROR) + { + return NGX_ERROR; + } + empty_responder.len = 0; + empty_responder.data = NULL; + + } + + if (!file || files->nelts <= i) + continue; + + if (file[i].len) { + + /* use OCSP response from the file */ + + if (ngx_ssl_stapling_file(cf, ssl, staple, &file[i]) != NGX_OK) { + return NGX_ERROR; + } + } + } + + + if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_stapling_index, staples) == 0) { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, @@ -143,59 +214,21 @@ return NGX_ERROR; } - staple->ssl_ctx = ssl->ctx; - staple->timeout = 60000; - staple->verify = verify; - - if (file->len) { - /* use OCSP response from the file */ - - if (ngx_ssl_stapling_file(cf, ssl, file) != NGX_OK) { - return NGX_ERROR; - } - - goto done; - } - - rc = ngx_ssl_stapling_issuer(cf, ssl); - - if (rc == NGX_DECLINED) { - return NGX_OK; - } - - if (rc != NGX_OK) { - return NGX_ERROR; - } - - rc = ngx_ssl_stapling_responder(cf, ssl, responder); - - if (rc == NGX_DECLINED) { - return NGX_OK; - } - - if (rc != NGX_OK) { - return NGX_ERROR; - } - -done: - SSL_CTX_set_tlsext_status_cb(ssl->ctx, 
ngx_ssl_certificate_status_callback); - SSL_CTX_set_tlsext_status_arg(ssl->ctx, staple); + SSL_CTX_set_tlsext_status_arg(ssl->ctx, staples); return NGX_OK; } static ngx_int_t -ngx_ssl_stapling_file(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file) +ngx_ssl_stapling_file(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_ssl_stapling_t *staple, ngx_str_t *file) { BIO *bio; int len; u_char *p, *buf; OCSP_RESPONSE *response; - ngx_ssl_stapling_t *staple; - - staple = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_stapling_index); if (ngx_conf_full_name(cf->cycle, file, 1) != NGX_OK) { return NGX_ERROR; @@ -255,23 +288,32 @@ static ngx_int_t -ngx_ssl_stapling_issuer(ngx_conf_t *cf, ngx_ssl_t *ssl) +ngx_ssl_stapling_issuer(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_ssl_stapling_t *staple) { - int i, n, rc; - X509 *cert, *issuer; - X509_STORE *store; - X509_STORE_CTX *store_ctx; - STACK_OF(X509) *chain; - ngx_ssl_stapling_t *staple; + int i, n, rc; + X509 *issuer; + X509_STORE *store; + X509_STORE_CTX *store_ctx; + ngx_ssl_certificate_t *cert_info = staple->cert_info; + X509 *cert; + STACK_OF(X509) *chain; - staple = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_stapling_index); - cert = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_certificate_index); -#if OPENSSL_VERSION_NUMBER >= 0x10001000L - SSL_CTX_get_extra_chain_certs(ssl->ctx, &chain); -#else - chain = ssl->ctx->extra_certs; -#endif + if (!cert_info || !cert_info->cert) { + ngx_log_error(NGX_LOG_WARN, ssl->log, 0, + "\"ssl_stapling\" ignored, no certificate info found"); + return NGX_ERROR; + } + + if (!cert_info->chain) { + ngx_log_error(NGX_LOG_WARN, ssl->log, 0, + "\"ssl_stapling\" ignored, no certificate chain found"); + return NGX_ERROR; + } + + cert = cert_info->cert; + chain = cert_info->chain; n = sk_X509_num(chain); @@ -286,8 +328,7 @@ ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ssl->log, 0, "SSL get issuer: found %p in extra certs", issuer); - staple->cert = cert; - staple->issuer = issuer; + cert_info->issuer = issuer; return NGX_OK; } @@ -334,28 +375,25 @@ ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ssl->log, 0, "SSL get issuer: found %p in cert store", issuer); - staple->cert = cert; - staple->issuer = issuer; + cert_info->issuer = issuer; return NGX_OK; } static ngx_int_t -ngx_ssl_stapling_responder(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *responder) +ngx_ssl_stapling_responder(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_ssl_stapling_t *staple, ngx_str_t *responder) { ngx_url_t u; char *s; - ngx_ssl_stapling_t *staple; STACK_OF(OPENSSL_STRING) *aia; - staple = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_stapling_index); - if (responder->len == 0) { /* extract OCSP responder URL from certificate */ - aia = X509_get1_ocsp(staple->cert); + aia = X509_get1_ocsp(staple->cert_info->cert); if (aia == NULL) { ngx_log_error(NGX_LOG_WARN, ssl->log, 0, "\"ssl_stapling\" ignored, " @@ -434,12 +472,21 @@ ngx_ssl_stapling_resolver(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_resolver_t *resolver, ngx_msec_t resolver_timeout) { + + ngx_uint_t i; + ngx_array_t *staples; ngx_ssl_stapling_t *staple; - staple = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_stapling_index); + staples = SSL_CTX_get_ex_data(ssl->ctx, ngx_ssl_stapling_index); + if (staples == NULL) { + return NGX_ERROR; + } - staple->resolver = resolver; - staple->resolver_timeout = resolver_timeout; + staple = staples->elts; + for (i = 0; i < staples->nelts; i++) { + staple[i].resolver = resolver; + staple[i].resolver_timeout = resolver_timeout; + } return NGX_OK; } @@ -448,19 +495,46 @@ static int ngx_ssl_certificate_status_callback(ngx_ssl_conn_t *ssl_conn, 
void *data) { - int rc; - u_char *p; - ngx_connection_t *c; - ngx_ssl_stapling_t *staple; + int rc; + ngx_uint_t i; + u_char *p; + ngx_connection_t *c; + X509 *cert; + ngx_array_t *staples = data; + ngx_ssl_stapling_t *staple, *staples_elm; + ngx_ssl_certificate_t *cert_info; + + staple = NULL; c = ngx_ssl_get_connection(ssl_conn); ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL certificate status callback"); - staple = data; + + cert = SSL_get_certificate(ssl_conn); + rc = SSL_TLSEXT_ERR_NOACK; + /* find a staple for current certificate */ + + staples_elm = staples->elts; + for (i = 0; i < staples->nelts; i++) { + + cert_info = staples_elm[i].cert_info; + if (!cert_info) + continue; + + if (cert == cert_info->cert) { + staple = &staples_elm[i]; + break; + } + } + + if (staple == NULL) { + return rc; + } + if (staple->staple.len) { /* we have to copy ocsp response as OpenSSL will free it by itself */ @@ -486,7 +560,8 @@ static void ngx_ssl_stapling_update(ngx_ssl_stapling_t *staple) { - ngx_ssl_ocsp_ctx_t *ctx; + ngx_ssl_ocsp_ctx_t *ctx; + ngx_ssl_certificate_t *cert_info; if (staple->host.len == 0 || staple->loading || staple->valid >= ngx_time()) @@ -494,6 +569,11 @@ return; } + cert_info = staple->cert_info; + if (!cert_info) { + return; + } + staple->loading = 1; ctx = ngx_ssl_ocsp_start(); @@ -501,8 +581,8 @@ return; } - ctx->cert = staple->cert; - ctx->issuer = staple->issuer; + ctx->cert = cert_info->cert; + ctx->issuer = cert_info->issuer; ctx->addrs = staple->addrs; ctx->host = staple->host; @@ -528,17 +608,17 @@ #if OPENSSL_VERSION_NUMBER >= 0x0090707fL const #endif - u_char *p; - int n; - size_t len; - ngx_str_t response; - X509_STORE *store; - STACK_OF(X509) *chain; - OCSP_CERTID *id; - OCSP_RESPONSE *ocsp; - OCSP_BASICRESP *basic; - ngx_ssl_stapling_t *staple; - ASN1_GENERALIZEDTIME *thisupdate, *nextupdate; + u_char *p; + int n; + size_t len; + ngx_str_t response; + X509_STORE *store; + OCSP_CERTID *id; + OCSP_RESPONSE *ocsp; + OCSP_BASICRESP *basic; + ngx_ssl_stapling_t *staple; + ASN1_GENERALIZEDTIME *thisupdate, *nextupdate; + ngx_ssl_certificate_t *cert_info; staple = ctx->data; ocsp = NULL; @@ -584,22 +664,28 @@ goto error; } -#if OPENSSL_VERSION_NUMBER >= 0x10001000L - SSL_CTX_get_extra_chain_certs(staple->ssl_ctx, &chain); -#else - chain = staple->ssl_ctx->extra_certs; + cert_info = staple->cert_info; + if (!cert_info) { + ngx_ssl_error(NGX_LOG_ERR, ctx->log, 0, + "No certificate information found"); + goto error; + } + + if (OCSP_basic_verify(basic, cert_info->chain, store, (staple->verify) + ? OCSP_TRUSTOTHER + : OCSP_NOVERIFY +#if OPENSSL_VERSION_NUMBER < 0x10000000L + /* ECDSA/SHA-2 signature verification not supported */ + | OCSP_NOSIGS #endif - - if (OCSP_basic_verify(basic, chain, store, - staple->verify ? 
OCSP_TRUSTOTHER : OCSP_NOVERIFY) - != 1) + ) != 1) { ngx_ssl_error(NGX_LOG_ERR, ctx->log, 0, "OCSP_basic_verify() failed"); goto error; } - id = OCSP_cert_to_id(NULL, ctx->cert, ctx->issuer); + id = OCSP_cert_to_id(NULL, cert_info->cert, ctx->issuer); if (id == NULL) { ngx_ssl_error(NGX_LOG_CRIT, ctx->log, 0, "OCSP_cert_to_id() failed"); @@ -685,14 +771,28 @@ static void ngx_ssl_stapling_cleanup(void *data) { - ngx_ssl_stapling_t *staple = data; + ngx_uint_t i; + ngx_ssl_stapling_t *staple; + ngx_ssl_certificate_t *cert_info; + ngx_array_t *staples = data; - if (staple->issuer) { - X509_free(staple->issuer); - } + staple = staples->elts; + for (i = 0; i < staples->nelts; i++) { + if (staple[i].staple.data) { + ngx_free(staple[i].staple.data); + } - if (staple->staple.data) { - ngx_free(staple->staple.data); + cert_info = staple[i].cert_info; + if (!cert_info) + continue; + + if (cert_info->issuer) { + X509_free(cert_info->issuer); + } + + if (cert_info->chain) { + sk_X509_free(cert_info->chain); + } } } @@ -1742,8 +1842,8 @@ ngx_int_t -ngx_ssl_stapling(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file, - ngx_str_t *responder, ngx_uint_t verify) +ngx_ssl_stapling(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_array_t *files, + ngx_array_t *responders, ngx_uint_t verify) { ngx_log_error(NGX_LOG_WARN, ssl->log, 0, "\"ssl_stapling\" ignored, not supported"); diff -r e370c5fdf4c8 -r 83b0f57fbcb5 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Tue Mar 17 00:26:27 2015 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Tue Mar 17 21:15:18 2015 +0300 @@ -97,8 +97,8 @@ ngx_uint_t ssl_verify_depth; ngx_str_t ssl_trusted_certificate; ngx_str_t ssl_crl; - ngx_str_t ssl_certificate; - ngx_str_t ssl_certificate_key; + ngx_array_t *ssl_certificates; + ngx_array_t *ssl_certificate_keys; ngx_array_t *ssl_passwords; #endif } ngx_http_proxy_loc_conf_t; @@ -657,16 +657,16 @@ { ngx_string("proxy_ssl_certificate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_LOC_CONF_OFFSET, - offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate), + offsetof(ngx_http_proxy_loc_conf_t, ssl_certificates), NULL }, { ngx_string("proxy_ssl_certificate_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_LOC_CONF_OFFSET, - offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate_key), + offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate_keys), NULL }, { ngx_string("proxy_ssl_password_file"), @@ -2625,6 +2625,8 @@ conf->upstream.ssl_verify = NGX_CONF_UNSET; conf->ssl_verify_depth = NGX_CONF_UNSET_UINT; conf->ssl_passwords = NGX_CONF_UNSET_PTR; + conf->ssl_certificates = NGX_CONF_UNSET_PTR; + conf->ssl_certificate_keys = NGX_CONF_UNSET_PTR; #endif /* "proxy_cyclic_temp_file" is disabled */ @@ -2953,10 +2955,11 @@ prev->ssl_trusted_certificate, ""); ngx_conf_merge_str_value(conf->ssl_crl, prev->ssl_crl, ""); - ngx_conf_merge_str_value(conf->ssl_certificate, - prev->ssl_certificate, ""); - ngx_conf_merge_str_value(conf->ssl_certificate_key, - prev->ssl_certificate_key, ""); + ngx_conf_merge_ptr_value(conf->ssl_certificates, + prev->ssl_certificates, NULL); + ngx_conf_merge_ptr_value(conf->ssl_certificate_keys, + prev->ssl_certificate_keys, NULL); + ngx_conf_merge_ptr_value(conf->ssl_passwords, prev->ssl_passwords, NULL); if (conf->ssl && ngx_http_proxy_set_ssl(cf, conf) != NGX_OK) { @@ -4043,6 +4046,7 @@ 
ngx_http_proxy_set_ssl(ngx_conf_t *cf, ngx_http_proxy_loc_conf_t *plcf) { ngx_pool_cleanup_t *cln; + ngx_str_t *oddkey; plcf->upstream.ssl = ngx_pcalloc(cf->pool, sizeof(ngx_ssl_t)); if (plcf->upstream.ssl == NULL) { @@ -4065,18 +4069,43 @@ cln->handler = ngx_ssl_cleanup_ctx; cln->data = plcf->upstream.ssl; - if (plcf->ssl_certificate.len) { - - if (plcf->ssl_certificate_key.len == 0) { + if (plcf->ssl_certificates && (plcf->ssl_certificates->nelts > 0)) { + + if ((!plcf->ssl_certificate_keys) + || (plcf->ssl_certificate_keys->nelts + < plcf->ssl_certificates->nelts)) + { + + oddkey = plcf->ssl_certificates->elts; + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, - "no \"proxy_ssl_certificate_key\" is defined " - "for certificate \"%V\"", &plcf->ssl_certificate); + "no \"proxy_ssl_certificate_key\" is defined for " + "ssl certificate \"%V\"", + oddkey[(plcf->ssl_certificate_keys) + ? plcf->ssl_certificate_keys->nelts + : 0]); + return NGX_ERROR; } - if (ngx_ssl_certificate(cf, plcf->upstream.ssl, &plcf->ssl_certificate, - &plcf->ssl_certificate_key, plcf->ssl_passwords) - != NGX_OK) +#ifndef SSL_CTX_add0_chain_cert + if (plcf->ssl_certificates->nelts > 1) { + /* + * no multiple certificates support for OpenSSL < 1.0.2, + * so we need to alarm user + */ + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "Multiple certificate configured " + "in \"proxy_ssl_certificate\", " + "but OpenSSL version < 1.0.2 used"); + return NGX_ERROR; + } +#endif + + if (ngx_ssl_certificates(cf, plcf->upstream.ssl, plcf->ssl_certificates, + plcf->ssl_certificate_keys, + plcf->ssl_passwords) + != NGX_OK) { return NGX_ERROR; } diff -r e370c5fdf4c8 -r 83b0f57fbcb5 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Tue Mar 17 00:26:27 2015 +0300 +++ b/src/http/modules/ngx_http_ssl_module.c Tue Mar 17 21:15:18 2015 +0300 @@ -81,16 +81,16 @@ { ngx_string("ssl_certificate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_SRV_CONF_OFFSET, - offsetof(ngx_http_ssl_srv_conf_t, certificate), + offsetof(ngx_http_ssl_srv_conf_t, certificates), NULL }, { ngx_string("ssl_certificate_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_SRV_CONF_OFFSET, - offsetof(ngx_http_ssl_srv_conf_t, certificate_key), + offsetof(ngx_http_ssl_srv_conf_t, certificate_keys), NULL }, { ngx_string("ssl_password_file"), @@ -214,16 +214,16 @@ { ngx_string("ssl_stapling_file"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_SRV_CONF_OFFSET, - offsetof(ngx_http_ssl_srv_conf_t, stapling_file), + offsetof(ngx_http_ssl_srv_conf_t, stapling_files), NULL }, { ngx_string("ssl_stapling_responder"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_SRV_CONF_OFFSET, - offsetof(ngx_http_ssl_srv_conf_t, stapling_responder), + offsetof(ngx_http_ssl_srv_conf_t, stapling_responders), NULL }, { ngx_string("ssl_stapling_verify"), @@ -505,8 +505,6 @@ * set by ngx_pcalloc(): * * sscf->protocols = 0; - * sscf->certificate = { 0, NULL }; - * sscf->certificate_key = { 0, NULL }; * sscf->dhparam = { 0, NULL }; * sscf->ecdh_curve = { 0, NULL }; * sscf->client_certificate = { 0, NULL }; @@ -514,12 +512,12 @@ * sscf->crl = { 0, NULL }; * sscf->ciphers = { 0, NULL }; * sscf->shm_zone = NULL; - * sscf->stapling_file = { 0, NULL }; - * sscf->stapling_responder = { 0, NULL }; */ 
sscf->enable = NGX_CONF_UNSET; sscf->prefer_server_ciphers = NGX_CONF_UNSET; + sscf->certificates = NGX_CONF_UNSET_PTR; + sscf->certificate_keys = NGX_CONF_UNSET_PTR; sscf->buffer_size = NGX_CONF_UNSET_SIZE; sscf->verify = NGX_CONF_UNSET_UINT; sscf->verify_depth = NGX_CONF_UNSET_UINT; @@ -530,6 +528,8 @@ sscf->session_ticket_keys = NGX_CONF_UNSET_PTR; sscf->stapling = NGX_CONF_UNSET; sscf->stapling_verify = NGX_CONF_UNSET; + sscf->stapling_files = NGX_CONF_UNSET_PTR; + sscf->stapling_responders = NGX_CONF_UNSET_PTR; return sscf; } @@ -570,8 +570,10 @@ ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); - ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); - ngx_conf_merge_str_value(conf->certificate_key, prev->certificate_key, ""); + ngx_conf_merge_ptr_value(conf->certificates, prev->certificates, + NULL); + ngx_conf_merge_ptr_value(conf->certificate_keys, prev->certificate_keys, + NULL); ngx_conf_merge_ptr_value(conf->passwords, prev->passwords, NULL); @@ -590,15 +592,18 @@ ngx_conf_merge_value(conf->stapling, prev->stapling, 0); ngx_conf_merge_value(conf->stapling_verify, prev->stapling_verify, 0); - ngx_conf_merge_str_value(conf->stapling_file, prev->stapling_file, ""); - ngx_conf_merge_str_value(conf->stapling_responder, - prev->stapling_responder, ""); + ngx_conf_merge_ptr_value(conf->stapling_files, prev->stapling_files, + NULL); + ngx_conf_merge_ptr_value(conf->stapling_responders, + prev->stapling_responders, NULL); conf->ssl.log = cf->log; if (conf->enable) { - if (conf->certificate.len == 0) { + if ((!conf->certificates) + || (conf->certificates->nelts == 0)) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "no \"ssl_certificate\" is defined for " "the \"ssl\" directive in %s:%ui", @@ -606,7 +611,9 @@ return NGX_CONF_ERROR; } - if (conf->certificate_key.len == 0) { + if ((!conf->certificate_keys) + || (conf->certificate_keys->nelts == 0)) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "no \"ssl_certificate_key\" is defined for " "the \"ssl\" directive in %s:%ui", @@ -616,18 +623,39 @@ } else { - if (conf->certificate.len == 0) { + if ((!conf->certificates) + || (conf->certificates->nelts == 0)) { + return NGX_CONF_OK; } - if (conf->certificate_key.len == 0) { + if ((!conf->certificate_keys) + || (conf->certificate_keys->nelts < conf->certificates->nelts)) + { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, - "no \"ssl_certificate_key\" is defined " - "for certificate \"%V\"", &conf->certificate); + "no \"ssl_certificate_key\" is defined for " + "ssl_certificate \"%V\"", + &conf->certificates[(conf->certificate_keys) + ? 
conf->certificate_keys->nelts + : 0]); return NGX_CONF_ERROR; } } +#ifndef SSL_CTX_add0_chain_cert + if (conf->certificates->nelts > 1) { + /* + * no multiple certificates support for OpenSSL < 1.0.2, + * so we need to alarm user + */ + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "Multiple certificate configured in " + "\"ssl_certificate\", but OpenSSL < 1.0.2 used"); + return NGX_CONF_ERROR; + } +#endif + if (ngx_ssl_create(&conf->ssl, conf->protocols, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -663,8 +691,8 @@ cln->handler = ngx_ssl_cleanup_ctx; cln->data = &conf->ssl; - if (ngx_ssl_certificate(cf, &conf->ssl, &conf->certificate, - &conf->certificate_key, conf->passwords) + if (ngx_ssl_certificates(cf, &conf->ssl, conf->certificates, + conf->certificate_keys, conf->passwords) != NGX_OK) { return NGX_CONF_ERROR; @@ -760,8 +788,8 @@ if (conf->stapling) { - if (ngx_ssl_stapling(cf, &conf->ssl, &conf->stapling_file, - &conf->stapling_responder, conf->stapling_verify) + if (ngx_ssl_stapling(cf, &conf->ssl, conf->stapling_files, + conf->stapling_responders, conf->stapling_verify) != NGX_OK) { return NGX_CONF_ERROR; diff -r e370c5fdf4c8 -r 83b0f57fbcb5 src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c Tue Mar 17 00:26:27 2015 +0300 +++ b/src/http/modules/ngx_http_uwsgi_module.c Tue Mar 17 21:15:18 2015 +0300 @@ -54,8 +54,8 @@ ngx_uint_t ssl_verify_depth; ngx_str_t ssl_trusted_certificate; ngx_str_t ssl_crl; - ngx_str_t ssl_certificate; - ngx_str_t ssl_certificate_key; + ngx_array_t *ssl_certificates; + ngx_array_t *ssl_certificate_keys; ngx_array_t *ssl_passwords; #endif } ngx_http_uwsgi_loc_conf_t; @@ -510,16 +510,16 @@ { ngx_string("uwsgi_ssl_certificate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_LOC_CONF_OFFSET, - offsetof(ngx_http_uwsgi_loc_conf_t, ssl_certificate), + offsetof(ngx_http_uwsgi_loc_conf_t, ssl_certificates), NULL }, { ngx_string("uwsgi_ssl_certificate_key"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, - ngx_conf_set_str_slot, + ngx_conf_set_str_array_slot, NGX_HTTP_LOC_CONF_OFFSET, - offsetof(ngx_http_uwsgi_loc_conf_t, ssl_certificate_key), + offsetof(ngx_http_uwsgi_loc_conf_t, ssl_certificate_keys), NULL }, { ngx_string("uwsgi_ssl_password_file"), @@ -1412,6 +1412,8 @@ conf->upstream.ssl_verify = NGX_CONF_UNSET; conf->ssl_verify_depth = NGX_CONF_UNSET_UINT; conf->ssl_passwords = NGX_CONF_UNSET_PTR; + conf->ssl_certificates = NGX_CONF_UNSET_PTR; + conf->ssl_certificate_keys = NGX_CONF_UNSET_PTR; #endif /* "uwsgi_cyclic_temp_file" is disabled */ @@ -1723,11 +1725,10 @@ ngx_conf_merge_str_value(conf->ssl_trusted_certificate, prev->ssl_trusted_certificate, ""); ngx_conf_merge_str_value(conf->ssl_crl, prev->ssl_crl, ""); - - ngx_conf_merge_str_value(conf->ssl_certificate, - prev->ssl_certificate, ""); - ngx_conf_merge_str_value(conf->ssl_certificate_key, - prev->ssl_certificate_key, ""); + ngx_conf_merge_ptr_value(conf->ssl_certificates, + prev->ssl_certificates, NULL); + ngx_conf_merge_ptr_value(conf->ssl_certificate_keys, + prev->ssl_certificate_keys, NULL); ngx_conf_merge_ptr_value(conf->ssl_passwords, prev->ssl_passwords, NULL); if (conf->ssl && ngx_http_uwsgi_set_ssl(cf, conf) != NGX_OK) { @@ -2264,6 +2265,7 @@ ngx_http_uwsgi_set_ssl(ngx_conf_t *cf, ngx_http_uwsgi_loc_conf_t *uwcf) { ngx_pool_cleanup_t *cln; + ngx_str_t *oddkey; uwcf->upstream.ssl = ngx_pcalloc(cf->pool, sizeof(ngx_ssl_t)); if (uwcf->upstream.ssl 
== NULL) { @@ -2286,17 +2288,42 @@ cln->handler = ngx_ssl_cleanup_ctx; cln->data = uwcf->upstream.ssl; - if (uwcf->ssl_certificate.len) { - - if (uwcf->ssl_certificate_key.len == 0) { + if (uwcf->ssl_certificates && (uwcf->ssl_certificates->nelts > 0)) { + + if ((!uwcf->ssl_certificate_keys) + || (uwcf->ssl_certificate_keys->nelts + < uwcf->ssl_certificates->nelts)) + { + + oddkey = &uwcf->ssl_certificates->elts; + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, - "no \"uwsgi_ssl_certificate_key\" is defined " - "for certificate \"%V\"", &uwcf->ssl_certificate); + "no \"uwsgi_ssl_certificate_key\" is defined for " + "ssl certificate \"%V\"", + oddkey[(uwcf->ssl_certificate_keys) + ? uwcf->ssl_certificate_keys->nelts + : 0]); + return NGX_ERROR; } - if (ngx_ssl_certificate(cf, uwcf->upstream.ssl, &uwcf->ssl_certificate, - &uwcf->ssl_certificate_key, uwcf->ssl_passwords) +#ifndef SSL_CTX_add0_chain_cert + if (uwcf->ssl_certificates->nelts > 1) { + /* + * no multiple certificates support for OpenSSL < 1.0.2, + * so we need to alarm user + */ + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "Multiple certificate configured " + "in "\"uwsgi_ssl_certificate\", but " + "OpenSSL < 1.0.2 used"); + return NGX_ERROR; + } +#endif + + if (ngx_ssl_certificates(cf, uwcf->upstream.ssl, uwcf->ssl_certificates, + uwcf->ssl_certificate_keys, + uwcf->ssl_passwords) != NGX_OK) { return NGX_ERROR; On Tue, Mar 17, 2015 at 9:27 PM, Albert Casademont Filella < albertcasademont at gmail.com> wrote: > This would be a very nice addition indeed, thanks!! I guess it needs quite > a lot of testing though, ECC certs are still not really common these days. > > BTW and before some of the core devs says it patches should be sent in the > email body, not as an attachment. It is much more convenient for reviewing > it ;) > > On Tue, Mar 17, 2015 at 7:22 PM, kyprizel wrote: > >> Hi, >> Sorry for spamming - previous message was sent to wrong mailing list and >> possibly included broken patch. >> >> This patch is mostly finishing of Rob Stradlings patch discussed in thread >> http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004475.html >> >> Multi certificate support works only for OpenSSL >= 1.0.2. >> Only certificates with different crypto algorithms (ECC/RSA/DSA) can be >> used b/c of OpenSSL limitations, otherwise (RSA+SHA-256 / RSA-SHA-1 for >> example) only last specified in the config will be used. >> Can you please review it. >> >> Thank you. >> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 17 19:20:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Mar 2015 22:20:06 +0300 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 In-Reply-To: References: Message-ID: <20150317192006.GX88631@mdounin.ru> Hello! On Tue, Mar 17, 2015 at 09:38:42PM +0300, kyprizel wrote: > Sure it should be tested (there are can be some memory leaks). > Need to know if it's idologically acceptable. I've provided some comments in the reply to your off-list message. 
-- 
Maxim Dounin
http://nginx.org/

From ru at nginx.com  Wed Mar 18 05:08:48 2015
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 18 Mar 2015 05:08:48 +0000
Subject: [nginx] Configure: fixed type max value detection.
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/e11a8e7e8e0c
branches:  
changeset: 6015:e11a8e7e8e0c
user:      Ruslan Ermilov
date:      Wed Mar 18 02:04:39 2015 +0300
description:
Configure: fixed type max value detection.

The code tried to use suffixes for "long" and "long long" types, but it
never worked as intended due to the bug in the shell code.  Also, the
max value for any 64-bit type other than "long long" on platforms with
32-bit "long" would be incorrect if the bug was fixed.

So instead of fixing the bug in the shell code, always use the "int"
constant for 32-bit types, and "long long" constant for 64-bit types.

diffstat:

 auto/types/sizeof |  14 ++------------
 1 files changed, 2 insertions(+), 12 deletions(-)

diffs (28 lines):

diff -r e370c5fdf4c8 -r e11a8e7e8e0c auto/types/sizeof
--- a/auto/types/sizeof	Tue Mar 17 00:26:27 2015 +0300
+++ b/auto/types/sizeof	Wed Mar 18 02:04:39 2015 +0300
@@ -50,22 +50,12 @@ rm -rf $NGX_AUTOTEST*
 
 case $ngx_size in
     4)
-        if [ "$ngx_type"="long" ]; then
-            ngx_max_value=2147483647L
-        else
-            ngx_max_value=2147483647
-        fi
-
+        ngx_max_value=2147483647
         ngx_max_len='(sizeof("-2147483648") - 1)'
     ;;
 
     8)
-        if [ "$ngx_type"="long long" ]; then
-            ngx_max_value=9223372036854775807LL
-        else
-            ngx_max_value=9223372036854775807L
-        fi
-
+        ngx_max_value=9223372036854775807LL
         ngx_max_len='(sizeof("-9223372036854775808") - 1)'
     ;;
 

From vbart at nginx.com  Wed Mar 18 15:57:45 2015
From: vbart at nginx.com (Valentin Bartenev)
Date: Wed, 18 Mar 2015 15:57:45 +0000
Subject: [nginx] Renamed NGX_THREADS to NGX_OLD_THREADS because of deprec...
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/457ec43dd8d5
branches:  
changeset: 6016:457ec43dd8d5
user:      Ruslan Ermilov
date:      Wed Mar 04 18:26:25 2015 +0300
description:
Renamed NGX_THREADS to NGX_OLD_THREADS because of deprecation.

It's mostly dead code and the original idea of worker threads has been
rejected.
diffstat: src/core/nginx.c | 6 +++--- src/core/ngx_connection.c | 2 +- src/core/ngx_connection.h | 2 +- src/core/ngx_cycle.c | 2 +- src/core/ngx_cycle.h | 8 ++++++-- src/core/ngx_regex.c | 6 +++--- src/core/ngx_spinlock.c | 2 +- src/event/modules/ngx_kqueue_module.c | 6 +++--- src/event/modules/ngx_poll_module.c | 2 +- src/event/modules/ngx_select_module.c | 2 +- src/event/ngx_event.c | 4 ++-- src/event/ngx_event_busy_lock.h | 2 +- src/event/ngx_event_connect.h | 2 +- src/event/ngx_event_mutex.c | 2 +- src/http/ngx_http_upstream.c | 2 +- src/os/unix/ngx_process_cycle.c | 10 +++++----- src/os/unix/ngx_thread.h | 4 ++-- src/os/unix/ngx_user.c | 8 ++++---- src/os/win32/ngx_win32_config.h | 2 +- 19 files changed, 39 insertions(+), 35 deletions(-) diffs (truncated from 386 to 300 lines): diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/core/nginx.c --- a/src/core/nginx.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/core/nginx.c Wed Mar 04 18:26:25 2015 +0300 @@ -139,7 +139,7 @@ static ngx_command_t ngx_core_commands[ 0, NULL }, -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) { ngx_string("worker_threads"), NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, @@ -959,7 +959,7 @@ ngx_core_module_create_conf(ngx_cycle_t ccf->user = (ngx_uid_t) NGX_CONF_UNSET_UINT; ccf->group = (ngx_gid_t) NGX_CONF_UNSET_UINT; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ccf->worker_threads = NGX_CONF_UNSET; ccf->thread_stack_size = NGX_CONF_UNSET_SIZE; #endif @@ -1000,7 +1000,7 @@ ngx_core_module_init_conf(ngx_cycle_t *c #endif -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_conf_init_value(ccf->worker_threads, 0); ngx_threads_n = ccf->worker_threads; diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/core/ngx_connection.c Wed Mar 04 18:26:25 2015 +0300 @@ -943,7 +943,7 @@ ngx_close_connection(ngx_connection_t *c } } -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) /* * we have to clean the connection information before the closing diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Wed Mar 18 02:04:39 2015 +0300 +++ b/src/core/ngx_connection.h Wed Mar 04 18:26:25 2015 +0300 @@ -184,7 +184,7 @@ struct ngx_connection_s { unsigned busy_count:2; #endif -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_atomic_t lock; #endif }; diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/core/ngx_cycle.c --- a/src/core/ngx_cycle.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/core/ngx_cycle.c Wed Mar 04 18:26:25 2015 +0300 @@ -26,7 +26,7 @@ static ngx_event_t ngx_cleaner_event ngx_uint_t ngx_test_config; ngx_uint_t ngx_quiet_mode; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_tls_key_t ngx_core_tls_key; #endif diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/core/ngx_cycle.h --- a/src/core/ngx_cycle.h Wed Mar 18 02:04:39 2015 +0300 +++ b/src/core/ngx_cycle.h Wed Mar 04 18:26:25 2015 +0300 @@ -103,7 +103,7 @@ typedef struct { ngx_array_t env; char **environment; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_int_t worker_threads; size_t thread_stack_size; #endif @@ -111,10 +111,14 @@ typedef struct { } ngx_core_conf_t; +#if (NGX_OLD_THREADS) + typedef struct { ngx_pool_t *pool; /* pcre's malloc() pool */ } ngx_core_tls_t; +#endif + #define ngx_is_init_cycle(cycle) (cycle->conf_ctx == NULL) @@ -136,7 +140,7 @@ extern ngx_array_t ngx_old_cy extern ngx_module_t ngx_core_module; extern ngx_uint_t ngx_test_config; extern ngx_uint_t ngx_quiet_mode; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) extern ngx_tls_key_t ngx_core_tls_key; #endif diff -r e11a8e7e8e0c -r 457ec43dd8d5 
src/core/ngx_regex.c --- a/src/core/ngx_regex.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/core/ngx_regex.c Wed Mar 04 18:26:25 2015 +0300 @@ -80,7 +80,7 @@ ngx_regex_init(void) static ngx_inline void ngx_regex_malloc_init(ngx_pool_t *pool) { -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_core_tls_t *tls; if (ngx_threaded) { @@ -98,7 +98,7 @@ ngx_regex_malloc_init(ngx_pool_t *pool) static ngx_inline void ngx_regex_malloc_done(void) { -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_core_tls_t *tls; if (ngx_threaded) { @@ -253,7 +253,7 @@ static void * ngx_libc_cdecl ngx_regex_malloc(size_t size) { ngx_pool_t *pool; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_core_tls_t *tls; if (ngx_threaded) { diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/core/ngx_spinlock.c --- a/src/core/ngx_spinlock.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/core/ngx_spinlock.c Wed Mar 04 18:26:25 2015 +0300 @@ -42,7 +42,7 @@ ngx_spinlock(ngx_atomic_t *lock, ngx_ato #else -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) #error ngx_spinlock() or ngx_atomic_cmp_set() are not defined ! diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/event/modules/ngx_kqueue_module.c --- a/src/event/modules/ngx_kqueue_module.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/event/modules/ngx_kqueue_module.c Wed Mar 04 18:26:25 2015 +0300 @@ -48,7 +48,7 @@ static struct kevent *change_list, *cha static struct kevent *event_list; static ngx_uint_t max_changes, nchanges, nevents; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) static ngx_mutex_t *list_mutex; static ngx_mutex_t *kevent_mutex; #endif @@ -133,7 +133,7 @@ ngx_kqueue_init(ngx_cycle_t *cycle, ngx_ return NGX_ERROR; } -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) list_mutex = ngx_mutex_init(cycle->log, 0); if (list_mutex == NULL) { @@ -257,7 +257,7 @@ ngx_kqueue_done(ngx_cycle_t *cycle) ngx_kqueue = -1; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_mutex_destroy(kevent_mutex); ngx_mutex_destroy(list_mutex); #endif diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/event/modules/ngx_poll_module.c --- a/src/event/modules/ngx_poll_module.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/event/modules/ngx_poll_module.c Wed Mar 04 18:26:25 2015 +0300 @@ -413,7 +413,7 @@ ngx_poll_init_conf(ngx_cycle_t *cycle, v return NGX_CONF_OK; } -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "poll() is not supported in the threaded mode"); diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/event/modules/ngx_select_module.c --- a/src/event/modules/ngx_select_module.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/event/modules/ngx_select_module.c Wed Mar 04 18:26:25 2015 +0300 @@ -419,7 +419,7 @@ ngx_select_init_conf(ngx_cycle_t *cycle, return NGX_CONF_ERROR; } -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "select() is not supported in the threaded mode"); diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/event/ngx_event.c --- a/src/event/ngx_event.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/event/ngx_event.c Wed Mar 04 18:26:25 2015 +0300 @@ -212,7 +212,7 @@ ngx_process_events_and_timers(ngx_cycle_ timer = ngx_event_find_timer(); flags = NGX_UPDATE_TIME; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) if (timer == NGX_TIMER_INFINITE || timer > 500) { timer = 500; @@ -722,7 +722,7 @@ ngx_event_process_init(ngx_cycle_t *cycl next = &c[i]; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) c[i].lock = 0; #endif } while (i); diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/event/ngx_event_busy_lock.h --- a/src/event/ngx_event_busy_lock.h Wed Mar 18 02:04:39 2015 +0300 +++ b/src/event/ngx_event_busy_lock.h Wed Mar 04 18:26:25 2015 
+0300 @@ -46,7 +46,7 @@ typedef struct { ngx_event_busy_lock_ctx_t *events; ngx_event_busy_lock_ctx_t *last; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_mutex_t *mutex; #endif } ngx_event_busy_lock_t; diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/event/ngx_event_connect.h --- a/src/event/ngx_event_connect.h Wed Mar 18 02:04:39 2015 +0300 +++ b/src/event/ngx_event_connect.h Wed Mar 04 18:26:25 2015 +0300 @@ -53,7 +53,7 @@ struct ngx_peer_connection_s { ngx_event_save_peer_session_pt save_session; #endif -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) ngx_atomic_t *lock; #endif diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/event/ngx_event_mutex.c --- a/src/event/ngx_event_mutex.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/event/ngx_event_mutex.c Wed Mar 04 18:26:25 2015 +0300 @@ -28,7 +28,7 @@ ngx_int_t ngx_event_mutex_timedlock(ngx_ m->last = ev; ev->next = NULL; -#if (NGX_THREADS0) +#if (NGX_OLD_THREADS0) ev->light = 1; #endif diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/http/ngx_http_upstream.c Wed Mar 04 18:26:25 2015 +0300 @@ -446,7 +446,7 @@ ngx_http_upstream_create(ngx_http_reques u->peer.log = r->connection->log; u->peer.log_error = NGX_ERROR_ERR; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) u->peer.lock = &r->connection->lock; #endif diff -r e11a8e7e8e0c -r 457ec43dd8d5 src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c Wed Mar 18 02:04:39 2015 +0300 +++ b/src/os/unix/ngx_process_cycle.c Wed Mar 04 18:26:25 2015 +0300 @@ -23,7 +23,7 @@ static void ngx_worker_process_cycle(ngx static void ngx_worker_process_init(ngx_cycle_t *cycle, ngx_int_t worker); static void ngx_worker_process_exit(ngx_cycle_t *cycle); static void ngx_channel_handler(ngx_event_t *ev); -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) static void ngx_wakeup_worker_threads(ngx_cycle_t *cycle); static ngx_thread_value_t ngx_worker_thread_cycle(void *data); #endif @@ -56,7 +56,7 @@ ngx_uint_t ngx_noaccepting; ngx_uint_t ngx_restart; -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) volatile ngx_thread_t ngx_threads[NGX_MAX_THREADS]; ngx_int_t ngx_threads_n; #endif @@ -747,7 +747,7 @@ ngx_worker_process_cycle(ngx_cycle_t *cy ngx_setproctitle("worker process"); -#if (NGX_THREADS) +#if (NGX_OLD_THREADS) { ngx_int_t n; ngx_err_t err; @@ -1032,7 +1032,7 @@ ngx_worker_process_exit(ngx_cycle_t *cyc ngx_uint_t i; ngx_connection_t *c; From vbart at nginx.com Wed Mar 18 15:57:48 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 15:57:48 +0000 Subject: [nginx] Configure: removed obsolete threads bits. Message-ID: details: http://hg.nginx.org/nginx/rev/83d54192e97b branches: changeset: 6017:83d54192e97b user: Ruslan Ermilov date: Fri Mar 13 19:08:27 2015 +0300 description: Configure: removed obsolete threads bits. 
diffstat: auto/lib/openssl/make | 5 ----- auto/options | 3 --- auto/os/freebsd | 19 ------------------- auto/summary | 25 ------------------------- 4 files changed, 0 insertions(+), 52 deletions(-) diffs (96 lines): diff -r 457ec43dd8d5 -r 83d54192e97b auto/lib/openssl/make --- a/auto/lib/openssl/make Wed Mar 04 18:26:25 2015 +0300 +++ b/auto/lib/openssl/make Fri Mar 13 19:08:27 2015 +0300 @@ -41,11 +41,6 @@ END ;; *) - case $USE_THREADS in - NO) OPENSSL_OPT="$OPENSSL_OPT no-threads" ;; - *) OPENSSL_OPT="$OPENSSL_OPT threads" ;; - esac - case $OPENSSL in /*) ngx_prefix="$OPENSSL/.openssl" ;; *) ngx_prefix="$PWD/$OPENSSL/.openssl" ;; diff -r 457ec43dd8d5 -r 83d54192e97b auto/options --- a/auto/options Wed Mar 04 18:26:25 2015 +0300 +++ b/auto/options Fri Mar 13 19:08:27 2015 +0300 @@ -190,9 +190,6 @@ do --without-poll_module) EVENT_POLL=NONE ;; --with-aio_module) EVENT_AIO=YES ;; - #--with-threads=*) USE_THREADS="$value" ;; - #--with-threads) USE_THREADS="pthreads" ;; - --with-file-aio) NGX_FILE_AIO=YES ;; --with-ipv6) NGX_IPV6=YES ;; diff -r 457ec43dd8d5 -r 83d54192e97b auto/os/freebsd --- a/auto/os/freebsd Wed Mar 04 18:26:25 2015 +0300 +++ b/auto/os/freebsd Fri Mar 13 19:08:27 2015 +0300 @@ -99,25 +99,6 @@ then fi -if [ $USE_THREADS = "rfork" ]; then - - echo " + using rfork()" - -# # kqueue's EVFILT_SIGNAL is safe -# -# if [ $version -gt 460101 ]; then -# echo " + kqueue's EVFILT_SIGNAL is safe" -# have=NGX_HAVE_SAFE_EVFILT_SIGNAL . auto/have -# else -# echo "$0: error: the kqueue's EVFILT_SIGNAL is unsafe on this" -# echo "FreeBSD version, so --with-threads=rfork could not be used" -# echo -# -# exit 1 -# fi -fi - - if [ $EVENT_AIO = YES ]; then if [ \( $version -lt 500000 -a $version -ge 430000 \) \ -o $version -ge 500014 ] diff -r 457ec43dd8d5 -r 83d54192e97b auto/summary --- a/auto/summary Wed Mar 04 18:26:25 2015 +0300 +++ b/auto/summary Fri Mar 13 19:08:27 2015 +0300 @@ -3,35 +3,10 @@ # Copyright (C) Nginx, Inc. -### STUB - -if [ $USE_THREADS != NO ]; then - -cat << END - -$0: error: the threads support is broken now. - -END - exit 1 - fi - -### - - echo echo "Configuration summary" -#case $USE_THREADS in -# rfork) echo " + using rfork()ed threads" ;; -# pthreads) echo " + using libpthread threads library" ;; -# libthr) echo " + using FreeBSD libthr threads library" ;; -# libc_r) echo " + using FreeBSD libc_r threads library" ;; -# linuxthreads) echo " + using FreeBSD LinuxThreads port library" ;; -# NO) echo " + threads are not used" ;; -# *) echo " + using lib$USE_THREADS threads library" ;; -#esac - if [ $USE_PCRE = DISABLED ]; then echo " + PCRE library is disabled" From vbart at nginx.com Wed Mar 18 15:57:51 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 15:57:51 +0000 Subject: [nginx] Thread pools implementation. Message-ID: details: http://hg.nginx.org/nginx/rev/466bd63b63d1 branches: changeset: 6018:466bd63b63d1 user: Valentin Bartenev date: Sat Mar 14 17:37:07 2015 +0300 description: Thread pools implementation. 
diffstat: auto/configure | 1 + auto/modules | 6 + auto/options | 4 + auto/sources | 7 + auto/summary | 4 + auto/threads | 20 + src/core/ngx_core.h | 4 + src/core/ngx_thread_pool.c | 631 ++++++++++++++++++++++++++++ src/core/ngx_thread_pool.h | 36 + src/event/modules/ngx_aio_module.c | 1 + src/event/modules/ngx_devpoll_module.c | 1 + src/event/modules/ngx_epoll_module.c | 1 + src/event/modules/ngx_eventport_module.c | 1 + src/event/modules/ngx_iocp_module.c | 1 + src/event/modules/ngx_kqueue_module.c | 1 + src/event/modules/ngx_poll_module.c | 1 + src/event/modules/ngx_rtsig_module.c | 1 + src/event/modules/ngx_select_module.c | 1 + src/event/modules/ngx_win32_select_module.c | 1 + src/event/ngx_event.c | 2 +- src/event/ngx_event.h | 4 + src/os/unix/ngx_linux_config.h | 2 +- src/os/unix/ngx_thread.h | 52 ++ src/os/unix/ngx_thread_cond.c | 87 +++ src/os/unix/ngx_thread_id.c | 70 +++ src/os/unix/ngx_thread_mutex.c | 174 +++++++ 26 files changed, 1112 insertions(+), 2 deletions(-) diffs (truncated from 1359 to 300 lines): diff -r 83d54192e97b -r 466bd63b63d1 auto/configure --- a/auto/configure Fri Mar 13 19:08:27 2015 +0300 +++ b/auto/configure Sat Mar 14 17:37:07 2015 +0300 @@ -58,6 +58,7 @@ if [ "$NGX_PLATFORM" != win32 ]; then . auto/unix fi +. auto/threads . auto/modules . auto/lib/conf diff -r 83d54192e97b -r 466bd63b63d1 auto/modules --- a/auto/modules Fri Mar 13 19:08:27 2015 +0300 +++ b/auto/modules Sat Mar 14 17:37:07 2015 +0300 @@ -432,6 +432,12 @@ fi modules="$CORE_MODULES $EVENT_MODULES" +# thread pool module should be initialized after events +if [ $USE_THREADS = YES ]; then + modules="$modules $THREAD_POOL_MODULE" +fi + + if [ $USE_OPENSSL = YES ]; then modules="$modules $OPENSSL_MODULE" CORE_DEPS="$CORE_DEPS $OPENSSL_DEPS" diff -r 83d54192e97b -r 466bd63b63d1 auto/options --- a/auto/options Fri Mar 13 19:08:27 2015 +0300 +++ b/auto/options Sat Mar 14 17:37:07 2015 +0300 @@ -190,6 +190,8 @@ do --without-poll_module) EVENT_POLL=NONE ;; --with-aio_module) EVENT_AIO=YES ;; + --with-threads) USE_THREADS=YES ;; + --with-file-aio) NGX_FILE_AIO=YES ;; --with-ipv6) NGX_IPV6=YES ;; @@ -351,6 +353,8 @@ cat << END --with-poll_module enable poll module --without-poll_module disable poll module + --with-threads enable thread pool support + --with-file-aio enable file AIO support --with-ipv6 enable IPv6 support diff -r 83d54192e97b -r 466bd63b63d1 auto/sources --- a/auto/sources Fri Mar 13 19:08:27 2015 +0300 +++ b/auto/sources Sat Mar 14 17:37:07 2015 +0300 @@ -193,6 +193,13 @@ UNIX_SRCS="$CORE_SRCS $EVENT_SRCS \ POSIX_DEPS=src/os/unix/ngx_posix_config.h +THREAD_POOL_MODULE=ngx_thread_pool_module +THREAD_POOL_DEPS=src/core/ngx_thread_pool.h +THREAD_POOL_SRCS="src/core/ngx_thread_pool.c + src/os/unix/ngx_thread_cond.c + src/os/unix/ngx_thread_mutex.c + src/os/unix/ngx_thread_id.c" + FREEBSD_DEPS="src/os/unix/ngx_freebsd_config.h src/os/unix/ngx_freebsd.h" FREEBSD_SRCS=src/os/unix/ngx_freebsd_init.c FREEBSD_SENDFILE_SRCS=src/os/unix/ngx_freebsd_sendfile_chain.c diff -r 83d54192e97b -r 466bd63b63d1 auto/summary --- a/auto/summary Fri Mar 13 19:08:27 2015 +0300 +++ b/auto/summary Sat Mar 14 17:37:07 2015 +0300 @@ -7,6 +7,10 @@ echo echo "Configuration summary" +if [ $USE_THREADS = YES ]; then + echo " + using threads" +fi + if [ $USE_PCRE = DISABLED ]; then echo " + PCRE library is disabled" diff -r 83d54192e97b -r 466bd63b63d1 auto/threads --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/auto/threads Sat Mar 14 17:37:07 2015 +0300 @@ -0,0 +1,20 @@ + +# Copyright (C) Nginx, Inc. 
+ + +if [ $USE_THREADS = YES ]; then + + if [ "$NGX_PLATFORM" = win32 ]; then + cat << END + +$0: --with-threads is not supported on Windows + +END + exit 1 + fi + + have=NGX_THREADS . auto/have + CORE_DEPS="$CORE_DEPS $THREAD_POOL_DEPS" + CORE_SRCS="$CORE_SRCS $THREAD_POOL_SRCS" + CORE_LIBS="$CORE_LIBS -lpthread" +fi diff -r 83d54192e97b -r 466bd63b63d1 src/core/ngx_core.h --- a/src/core/ngx_core.h Fri Mar 13 19:08:27 2015 +0300 +++ b/src/core/ngx_core.h Sat Mar 14 17:37:07 2015 +0300 @@ -22,6 +22,10 @@ typedef struct ngx_event_s ngx_eve typedef struct ngx_event_aio_s ngx_event_aio_t; typedef struct ngx_connection_s ngx_connection_t; +#if (NGX_THREADS) +typedef struct ngx_thread_task_s ngx_thread_task_t; +#endif + typedef void (*ngx_event_handler_pt)(ngx_event_t *ev); typedef void (*ngx_connection_handler_pt)(ngx_connection_t *c); diff -r 83d54192e97b -r 466bd63b63d1 src/core/ngx_thread_pool.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/core/ngx_thread_pool.c Sat Mar 14 17:37:07 2015 +0300 @@ -0,0 +1,631 @@ + +/* + * Copyright (C) Nginx, Inc. + * Copyright (C) Valentin V. Bartenev + * Copyright (C) Ruslan Ermilov + */ + + +#include +#include +#include + + +typedef struct { + ngx_array_t pools; +} ngx_thread_pool_conf_t; + + +typedef struct { + ngx_thread_mutex_t mtx; + ngx_uint_t count; + ngx_thread_task_t *first; + ngx_thread_task_t **last; +} ngx_thread_pool_queue_t; + + +struct ngx_thread_pool_s { + ngx_thread_cond_t cond; + + ngx_thread_pool_queue_t queue; + + ngx_log_t *log; + ngx_pool_t *pool; + + ngx_str_t name; + ngx_uint_t threads; + ngx_uint_t max_queue; + + u_char *file; + ngx_uint_t line; +}; + + +static ngx_int_t ngx_thread_pool_init(ngx_thread_pool_t *tp, ngx_log_t *log, + ngx_pool_t *pool); +static ngx_int_t ngx_thread_pool_queue_init(ngx_thread_pool_queue_t *queue, + ngx_log_t *log); +static ngx_int_t ngx_thread_pool_queue_destroy(ngx_thread_pool_queue_t *queue, + ngx_log_t *log); +static void ngx_thread_pool_destroy(ngx_thread_pool_t *tp); + +static void *ngx_thread_pool_cycle(void *data); +static void ngx_thread_pool_handler(ngx_event_t *ev); + +static char *ngx_thread_pool(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); + +static void *ngx_thread_pool_create_conf(ngx_cycle_t *cycle); +static char *ngx_thread_pool_init_conf(ngx_cycle_t *cycle, void *conf); + +static ngx_int_t ngx_thread_pool_init_worker(ngx_cycle_t *cycle); +static void ngx_thread_pool_exit_worker(ngx_cycle_t *cycle); + + +static ngx_command_t ngx_thread_pool_commands[] = { + + { ngx_string("thread_pool"), + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE23, + ngx_thread_pool, + 0, + 0, + NULL }, + + ngx_null_command +}; + + +static ngx_core_module_t ngx_thread_pool_module_ctx = { + ngx_string("thread_pool"), + ngx_thread_pool_create_conf, + ngx_thread_pool_init_conf +}; + + +ngx_module_t ngx_thread_pool_module = { + NGX_MODULE_V1, + &ngx_thread_pool_module_ctx, /* module context */ + ngx_thread_pool_commands, /* module directives */ + NGX_CORE_MODULE, /* module type */ + NULL, /* init master */ + NULL, /* init module */ + ngx_thread_pool_init_worker, /* init process */ + NULL, /* init thread */ + NULL, /* exit thread */ + ngx_thread_pool_exit_worker, /* exit process */ + NULL, /* exit master */ + NGX_MODULE_V1_PADDING +}; + + +static ngx_str_t ngx_thread_pool_default = ngx_string("default"); + +static ngx_uint_t ngx_thread_pool_task_id; +static ngx_thread_pool_queue_t ngx_thread_pool_done; + + +static ngx_int_t +ngx_thread_pool_init(ngx_thread_pool_t *tp, ngx_log_t *log, ngx_pool_t *pool) +{ + 
int err; + pthread_t tid; + ngx_uint_t n; + pthread_attr_t attr; + + if (ngx_notify == NULL) { + ngx_log_error(NGX_LOG_ALERT, log, 0, + "the configured event method cannot be used with thread pools"); + return NGX_ERROR; + } + + if (ngx_thread_pool_queue_init(&tp->queue, log) != NGX_OK) { + return NGX_ERROR; + } + + if (ngx_thread_cond_create(&tp->cond, log) != NGX_OK) { + (void) ngx_thread_pool_queue_destroy(&tp->queue, log); + return NGX_ERROR; + } + + tp->log = log; + tp->pool = pool; + + err = pthread_attr_init(&attr); + if (err) { + ngx_log_error(NGX_LOG_ALERT, log, err, + "pthread_attr_init() failed"); + return NGX_ERROR; + } + +#if 0 + err = pthread_attr_setstacksize(&attr, PTHREAD_STACK_MIN); + if (err) { + ngx_log_error(NGX_LOG_ALERT, log, err, + "pthread_attr_setstacksize() failed"); + return NGX_ERROR; + } +#endif + + for (n = 0; n < tp->threads; n++) { + err = pthread_create(&tid, &attr, ngx_thread_pool_cycle, tp); + if (err) { + ngx_log_error(NGX_LOG_ALERT, log, err, + "pthread_create() failed"); + return NGX_ERROR; + } + } + + (void) pthread_attr_destroy(&attr); + + return NGX_OK; +} + + +static ngx_int_t +ngx_thread_pool_queue_init(ngx_thread_pool_queue_t *queue, ngx_log_t *log) +{ + queue->count = 0; + queue->first = NULL; + queue->last = &queue->first; + + return ngx_thread_mutex_create(&queue->mtx, log); +} + + +static ngx_int_t +ngx_thread_pool_queue_destroy(ngx_thread_pool_queue_t *queue, ngx_log_t *log) +{ + return ngx_thread_mutex_destroy(&queue->mtx, log); +} + From vbart at nginx.com Wed Mar 18 15:57:58 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 15:57:58 +0000 Subject: [nginx] Events: implemented epoll notification mechanism. Message-ID: details: http://hg.nginx.org/nginx/rev/40e244e042a7 branches: changeset: 6019:40e244e042a7 user: Valentin Bartenev date: Sat Mar 14 17:37:13 2015 +0300 description: Events: implemented epoll notification mechanism. diffstat: auto/unix | 23 +++++ src/event/modules/ngx_epoll_module.c | 141 ++++++++++++++++++++++++++++++++++- 2 files changed, 162 insertions(+), 2 deletions(-) diffs (253 lines): diff -r 466bd63b63d1 -r 40e244e042a7 auto/unix --- a/auto/unix Sat Mar 14 17:37:07 2015 +0300 +++ b/auto/unix Sat Mar 14 17:37:13 2015 +0300 @@ -450,6 +450,29 @@ Currently file AIO is supported on FreeB END exit 1 fi + +else + + ngx_feature="eventfd()" + ngx_feature_name="NGX_HAVE_EVENTFD" + ngx_feature_run=no + ngx_feature_incs="#include " + ngx_feature_path= + ngx_feature_libs= + ngx_feature_test="(void) eventfd(0, 0)" + . auto/feature + + if [ $ngx_found = yes ]; then + have=NGX_HAVE_SYS_EVENTFD_H . auto/have + fi + + if [ $ngx_found = no ]; then + + ngx_feature="eventfd() (SYS_eventfd)" + ngx_feature_incs="#include " + ngx_feature_test="int n = SYS_eventfd" + . 
auto/feature + fi fi diff -r 466bd63b63d1 -r 40e244e042a7 src/event/modules/ngx_epoll_module.c --- a/src/event/modules/ngx_epoll_module.c Sat Mar 14 17:37:07 2015 +0300 +++ b/src/event/modules/ngx_epoll_module.c Sat Mar 14 17:37:13 2015 +0300 @@ -70,12 +70,15 @@ int epoll_wait(int epfd, struct epoll_ev return -1; } +#if (NGX_HAVE_EVENTFD) +#define SYS_eventfd 323 +#endif + #if (NGX_HAVE_FILE_AIO) #define SYS_io_setup 245 #define SYS_io_destroy 246 #define SYS_io_getevents 247 -#define SYS_eventfd 323 typedef u_int aio_context_t; @@ -88,7 +91,7 @@ struct io_event { #endif -#endif +#endif /* NGX_TEST_BUILD_EPOLL */ typedef struct { @@ -98,6 +101,10 @@ typedef struct { static ngx_int_t ngx_epoll_init(ngx_cycle_t *cycle, ngx_msec_t timer); +#if (NGX_HAVE_EVENTFD) +static ngx_int_t ngx_epoll_notify_init(ngx_log_t *log); +static void ngx_epoll_notify_handler(ngx_event_t *ev); +#endif static void ngx_epoll_done(ngx_cycle_t *cycle); static ngx_int_t ngx_epoll_add_event(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags); @@ -106,6 +113,9 @@ static ngx_int_t ngx_epoll_del_event(ngx static ngx_int_t ngx_epoll_add_connection(ngx_connection_t *c); static ngx_int_t ngx_epoll_del_connection(ngx_connection_t *c, ngx_uint_t flags); +#if (NGX_HAVE_EVENTFD) +static ngx_int_t ngx_epoll_notify(ngx_event_handler_pt handler); +#endif static ngx_int_t ngx_epoll_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags); @@ -120,6 +130,12 @@ static int ep = -1; static struct epoll_event *event_list; static ngx_uint_t nevents; +#if (NGX_HAVE_EVENTFD) +static int notify_fd = -1; +static ngx_event_t notify_event; +static ngx_connection_t notify_conn; +#endif + #if (NGX_HAVE_FILE_AIO) int ngx_eventfd = -1; @@ -164,7 +180,11 @@ ngx_event_module_t ngx_epoll_module_ctx ngx_epoll_del_event, /* disable an event */ ngx_epoll_add_connection, /* add an connection */ ngx_epoll_del_connection, /* delete an connection */ +#if (NGX_HAVE_EVENTFD) + ngx_epoll_notify, /* trigger a notify */ +#else NULL, /* trigger a notify */ +#endif NULL, /* process the changes */ ngx_epoll_process_events, /* process the events */ ngx_epoll_init, /* init the events */ @@ -308,6 +328,12 @@ ngx_epoll_init(ngx_cycle_t *cycle, ngx_m return NGX_ERROR; } +#if (NGX_HAVE_EVENTFD) + if (ngx_epoll_notify_init(cycle->log) != NGX_OK) { + return NGX_ERROR; + } +#endif + #if (NGX_HAVE_FILE_AIO) ngx_epoll_aio_init(cycle, epcf); @@ -345,6 +371,85 @@ ngx_epoll_init(ngx_cycle_t *cycle, ngx_m } +#if (NGX_HAVE_EVENTFD) + +static ngx_int_t +ngx_epoll_notify_init(ngx_log_t *log) +{ + struct epoll_event ee; + +#if (NGX_HAVE_SYS_EVENTFD_H) + notify_fd = eventfd(0, 0); +#else + notify_fd = syscall(SYS_eventfd, 0); +#endif + + if (notify_fd == -1) { + ngx_log_error(NGX_LOG_EMERG, log, ngx_errno, "eventfd() failed"); + return NGX_ERROR; + } + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, log, 0, + "notify eventfd: %d", notify_fd); + + notify_event.handler = ngx_epoll_notify_handler; + notify_event.log = log; + notify_event.active = 1; + + notify_conn.fd = notify_fd; + notify_conn.read = ¬ify_event; + notify_conn.log = log; + + ee.events = EPOLLIN|EPOLLET; + ee.data.ptr = ¬ify_conn; + + if (epoll_ctl(ep, EPOLL_CTL_ADD, notify_fd, &ee) == -1) { + ngx_log_error(NGX_LOG_EMERG, log, ngx_errno, + "epoll_ctl(EPOLL_CTL_ADD, eventfd) failed"); + + if (close(notify_fd) == -1) { + ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, + "eventfd close() failed"); + } + + return NGX_ERROR; + } + + return NGX_OK; +} + + +static void +ngx_epoll_notify_handler(ngx_event_t *ev) +{ + ssize_t n; + 
uint64_t count; + ngx_err_t err; + ngx_event_handler_pt handler; + + if (++ev->index == NGX_MAX_UINT32_VALUE) { + ev->index = 0; + + n = read(notify_fd, &count, sizeof(uint64_t)); + + err = ngx_errno; + + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, + "read() eventfd %d: %z count:%uL", notify_fd, n, count); + + if ((size_t) n != sizeof(uint64_t)) { + ngx_log_error(NGX_LOG_ALERT, ev->log, err, + "read() eventfd %d failed", notify_fd); + } + } + + handler = ev->data; + handler(ev); +} + +#endif + + static void ngx_epoll_done(ngx_cycle_t *cycle) { @@ -355,6 +460,17 @@ ngx_epoll_done(ngx_cycle_t *cycle) ep = -1; +#if (NGX_HAVE_EVENTFD) + + if (close(notify_fd) == -1) { + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, + "eventfd close() failed"); + } + + notify_fd = -1; + +#endif + #if (NGX_HAVE_FILE_AIO) if (ngx_eventfd != -1) { @@ -561,6 +677,27 @@ ngx_epoll_del_connection(ngx_connection_ } +#if (NGX_HAVE_EVENTFD) + +static ngx_int_t +ngx_epoll_notify(ngx_event_handler_pt handler) +{ + static uint64_t inc = 1; + + if ((size_t) write(notify_fd, &inc, sizeof(uint64_t)) != sizeof(uint64_t)) { + ngx_log_error(NGX_LOG_ALERT, notify_event.log, ngx_errno, + "write() to eventfd %d failed", notify_fd); + return NGX_ERROR; + } + + notify_event.data = handler; + + return NGX_OK; +} + +#endif + + static ngx_int_t ngx_epoll_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags) { From vbart at nginx.com Wed Mar 18 15:58:00 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 15:58:00 +0000 Subject: [nginx] Events: implemented kqueue notification mechanism. Message-ID: details: http://hg.nginx.org/nginx/rev/e5f1d83360ef branches: changeset: 6020:e5f1d83360ef user: Valentin Bartenev date: Sat Mar 14 17:37:16 2015 +0300 description: Events: implemented kqueue notification mechanism. 
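For readers who have not met this kqueue filter: EVFILT_USER lets one thread post a user-defined event into a kqueue that another thread is sleeping on, which is the wait/trigger shape the diff below wires into ngx_kqueue_notify(). The following is a minimal standalone sketch of that pattern in plain kqueue and pthreads calls; it is an illustration only, not code from the patch, and error handling is omitted.

#include <sys/types.h>
#include <sys/event.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int  kq;

static void *worker(void *arg)
{
    struct kevent  kev;

    sleep(1);                                   /* pretend to do background work */

    /* trigger user event #1 registered by main() */
    EV_SET(&kev, 1, EVFILT_USER, 0, NOTE_TRIGGER, 0, NULL);
    (void) kevent(kq, &kev, 1, NULL, 0, NULL);

    return NULL;
}

int main(void)
{
    pthread_t      tid;
    struct kevent  kev, ev;

    kq = kqueue();

    /* register user event #1; EV_CLEAR re-arms it after each delivery */
    EV_SET(&kev, 1, EVFILT_USER, EV_ADD|EV_CLEAR, 0, 0, NULL);
    (void) kevent(kq, &kev, 1, NULL, 0, NULL);

    pthread_create(&tid, NULL, worker, NULL);

    /* the event loop sleeps here until the worker posts NOTE_TRIGGER */
    (void) kevent(kq, NULL, 0, &ev, 1, NULL);
    printf("woken by EVFILT_USER, ident=%lu\n", (unsigned long) ev.ident);

    pthread_join(tid, NULL);
    return 0;
}

In the patch below the trigger side of this pattern is ngx_kqueue_notify() and the registration happens once in ngx_kqueue_notify_init().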
diffstat: src/event/modules/ngx_kqueue_module.c | 76 +++++++++++++++++++++++++++++++++++ 1 files changed, 76 insertions(+), 0 deletions(-) diffs (136 lines): diff -r 40e244e042a7 -r e5f1d83360ef src/event/modules/ngx_kqueue_module.c --- a/src/event/modules/ngx_kqueue_module.c Sat Mar 14 17:37:13 2015 +0300 +++ b/src/event/modules/ngx_kqueue_module.c Sat Mar 14 17:37:16 2015 +0300 @@ -17,6 +17,9 @@ typedef struct { static ngx_int_t ngx_kqueue_init(ngx_cycle_t *cycle, ngx_msec_t timer); +#ifdef EVFILT_USER +static ngx_int_t ngx_kqueue_notify_init(ngx_log_t *log); +#endif static void ngx_kqueue_done(ngx_cycle_t *cycle); static ngx_int_t ngx_kqueue_add_event(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags); @@ -24,6 +27,9 @@ static ngx_int_t ngx_kqueue_del_event(ng ngx_uint_t flags); static ngx_int_t ngx_kqueue_set_event(ngx_event_t *ev, ngx_int_t filter, ngx_uint_t flags); +#ifdef EVFILT_USER +static ngx_int_t ngx_kqueue_notify(ngx_event_handler_pt handler); +#endif static ngx_int_t ngx_kqueue_process_changes(ngx_cycle_t *cycle, ngx_uint_t try); static ngx_int_t ngx_kqueue_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags); @@ -48,6 +54,11 @@ static struct kevent *change_list, *cha static struct kevent *event_list; static ngx_uint_t max_changes, nchanges, nevents; +#ifdef EVFILT_USER +static ngx_event_t notify_event; +static struct kevent notify_kev; +#endif + #if (NGX_OLD_THREADS) static ngx_mutex_t *list_mutex; static ngx_mutex_t *kevent_mutex; @@ -89,7 +100,11 @@ ngx_event_module_t ngx_kqueue_module_ct ngx_kqueue_del_event, /* disable an event */ NULL, /* add an connection */ NULL, /* delete an connection */ +#ifdef EVFILT_USER + ngx_kqueue_notify, /* trigger a notify */ +#else NULL, /* trigger a notify */ +#endif ngx_kqueue_process_changes, /* process the changes */ ngx_kqueue_process_events, /* process the events */ ngx_kqueue_init, /* init the events */ @@ -134,6 +149,12 @@ ngx_kqueue_init(ngx_cycle_t *cycle, ngx_ return NGX_ERROR; } +#ifdef EVFILT_USER + if (ngx_kqueue_notify_init(cycle->log) != NGX_OK) { + return NGX_ERROR; + } +#endif + #if (NGX_OLD_THREADS) list_mutex = ngx_mutex_init(cycle->log, 0); @@ -248,6 +269,37 @@ ngx_kqueue_init(ngx_cycle_t *cycle, ngx_ } +#ifdef EVFILT_USER + +static ngx_int_t +ngx_kqueue_notify_init(ngx_log_t *log) +{ + notify_kev.ident = 0; + notify_kev.filter = EVFILT_USER; + notify_kev.data = 0; + notify_kev.flags = EV_ADD|EV_CLEAR; + notify_kev.fflags = 0; + notify_kev.udata = 0; + + if (kevent(ngx_kqueue, ¬ify_kev, 1, NULL, 0, NULL) == -1) { + ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, + "kevent(EVFILT_USER, EV_ADD) failed"); + return NGX_ERROR; + } + + notify_event.active = 1; + notify_event.log = log; + + notify_kev.flags = 0; + notify_kev.fflags = NOTE_TRIGGER; + notify_kev.udata = NGX_KQUEUE_UDATA_T ((uintptr_t) ¬ify_event); + + return NGX_OK; +} + +#endif + + static void ngx_kqueue_done(ngx_cycle_t *cycle) { @@ -488,6 +540,25 @@ ngx_kqueue_set_event(ngx_event_t *ev, ng } +#ifdef EVFILT_USER + +static ngx_int_t +ngx_kqueue_notify(ngx_event_handler_pt handler) +{ + notify_event.handler = handler; + + if (kevent(ngx_kqueue, ¬ify_kev, 1, NULL, 0, NULL) == -1) { + ngx_log_error(NGX_LOG_ALERT, notify_event.log, ngx_errno, + "kevent(EVFILT_USER, NOTE_TRIGGER) failed"); + return NGX_ERROR; + } + + return NGX_OK; +} + +#endif + + static ngx_int_t ngx_kqueue_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags) @@ -648,6 +719,11 @@ ngx_kqueue_process_events(ngx_cycle_t *c break; +#ifdef EVFILT_USER + case 
EVFILT_USER: + break; +#endif + default: ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, "unexpected kevent() filter %d", From vbart at nginx.com Wed Mar 18 15:58:03 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 15:58:03 +0000 Subject: [nginx] Events: implemented eventport notification mechanism. Message-ID: details: http://hg.nginx.org/nginx/rev/117c77b22db1 branches: changeset: 6021:117c77b22db1 user: Ruslan Ermilov date: Sat Mar 14 17:37:21 2015 +0300 description: Events: implemented eventport notification mechanism. diffstat: src/event/modules/ngx_eventport_module.c | 35 +++++++++++++++++++++++++++++++- 1 files changed, 34 insertions(+), 1 deletions(-) diffs (87 lines): diff -r e5f1d83360ef -r 117c77b22db1 src/event/modules/ngx_eventport_module.c --- a/src/event/modules/ngx_eventport_module.c Sat Mar 14 17:37:16 2015 +0300 +++ b/src/event/modules/ngx_eventport_module.c Sat Mar 14 17:37:21 2015 +0300 @@ -93,6 +93,13 @@ int port_getn(int port, port_event_t lis return -1; } +int port_send(int port, int events, void *user); + +int port_send(int port, int events, void *user) +{ + return -1; +} + int timer_create(clockid_t clock_id, struct sigevent *evp, timer_t *timerid); @@ -133,6 +140,7 @@ static ngx_int_t ngx_eventport_add_event ngx_uint_t flags); static ngx_int_t ngx_eventport_del_event(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags); +static ngx_int_t ngx_eventport_notify(ngx_event_handler_pt handler); static ngx_int_t ngx_eventport_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags); @@ -143,6 +151,7 @@ static int ep = -1; static port_event_t *event_list; static ngx_uint_t nevents; static timer_t event_timer = (timer_t) -1; +static ngx_event_t notify_event; static ngx_str_t eventport_name = ngx_string("eventport"); @@ -172,7 +181,7 @@ ngx_event_module_t ngx_eventport_module ngx_eventport_del_event, /* disable an event */ NULL, /* add an connection */ NULL, /* delete an connection */ - NULL, /* trigger a notify */ + ngx_eventport_notify, /* trigger a notify */ NULL, /* process the changes */ ngx_eventport_process_events, /* process the events */ ngx_eventport_init, /* init the events */ @@ -215,6 +224,9 @@ ngx_eventport_init(ngx_cycle_t *cycle, n "port_create() failed"); return NGX_ERROR; } + + notify_event.active = 1; + notify_event.log = cycle->log; } if (nevents < epcf->events) { @@ -406,6 +418,21 @@ ngx_eventport_del_event(ngx_event_t *ev, } +static ngx_int_t +ngx_eventport_notify(ngx_event_handler_pt handler) +{ + notify_event.handler = handler; + + if (port_send(ep, 0, ¬ify_event) != 0) { + ngx_log_error(NGX_LOG_ALERT, notify_event.log, ngx_errno, + "port_send() failed"); + return NGX_ERROR; + } + + return NGX_OK; +} + + ngx_int_t ngx_eventport_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags) @@ -580,6 +607,12 @@ ngx_eventport_process_events(ngx_cycle_t continue; + case PORT_SOURCE_USER: + + ev->handler(ev); + + continue; + default: ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, "unexpected eventport object %d", From vbart at nginx.com Wed Mar 18 15:58:05 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 15:58:05 +0000 Subject: [nginx] Added support for offloading read() in thread pools. Message-ID: details: http://hg.nginx.org/nginx/rev/1fdba317ee6d branches: changeset: 6022:1fdba317ee6d user: Valentin Bartenev date: Sat Mar 14 17:37:25 2015 +0300 description: Added support for offloading read() in thread pools. 
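The shape added by this commit is: hand a blocking read() to a pool thread, keep the event loop running, and let the thread report completion through the notification channel introduced in the commits above. In nginx itself the feature is switched on with the thread_pool directive and the "aio threads" parameter that this patch adds to the core module. Below is a standalone sketch of the same shape using plain Linux primitives (eventfd, epoll, pthreads, pread); it is an assumed illustration rather than nginx code, the file name and buffer size are arbitrary, and error handling is omitted.

#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <pthread.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static int      efd;                 /* completion notification channel */
static char     buf[4096];
static ssize_t  nread;

static void *read_task(void *arg)
{
    int       fd = *(int *) arg;
    uint64_t  one = 1;

    nread = pread(fd, buf, sizeof(buf), 0);    /* may block on a slow disk */
    (void) write(efd, &one, sizeof(one));      /* wake the event loop */

    return NULL;
}

int main(void)
{
    int                 fd, ep;
    uint64_t            cnt;
    pthread_t           tid;
    struct epoll_event  ee, out;

    fd = open("/etc/hosts", O_RDONLY);
    efd = eventfd(0, 0);
    ep = epoll_create(16);

    ee.events = EPOLLIN;
    ee.data.fd = efd;
    epoll_ctl(ep, EPOLL_CTL_ADD, efd, &ee);

    pthread_create(&tid, NULL, read_task, &fd);

    /* the loop stays responsive; it learns about the read only when the
       worker writes to the eventfd */
    epoll_wait(ep, &out, 1, -1);
    (void) read(efd, &cnt, sizeof(cnt));
    printf("pread() of %zd bytes completed in a worker thread\n", nread);

    pthread_join(tid, NULL);
    return 0;
}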
diffstat: src/core/ngx_buf.h | 12 +++- src/core/ngx_file.h | 6 + src/core/ngx_output_chain.c | 32 ++++++-- src/http/ngx_http_copy_filter_module.c | 76 ++++++++++++++++++++++- src/http/ngx_http_core_module.c | 74 ++++++++++++++++++++++ src/http/ngx_http_core_module.h | 10 +++ src/http/ngx_http_file_cache.c | 2 +- src/os/unix/ngx_files.c | 109 +++++++++++++++++++++++++++++++++ src/os/unix/ngx_files.h | 5 + 9 files changed, 312 insertions(+), 14 deletions(-) diffs (truncated from 522 to 300 lines): diff -r 117c77b22db1 -r 1fdba317ee6d src/core/ngx_buf.h --- a/src/core/ngx_buf.h Sat Mar 14 17:37:21 2015 +0300 +++ b/src/core/ngx_buf.h Sat Mar 14 17:37:25 2015 +0300 @@ -90,15 +90,23 @@ struct ngx_output_chain_ctx_s { #endif unsigned need_in_memory:1; unsigned need_in_temp:1; +#if (NGX_HAVE_FILE_AIO || NGX_THREADS) + unsigned aio:1; +#endif + #if (NGX_HAVE_FILE_AIO) - unsigned aio:1; - ngx_output_chain_aio_pt aio_handler; #if (NGX_HAVE_AIO_SENDFILE) ssize_t (*aio_preload)(ngx_buf_t *file); #endif #endif +#if (NGX_THREADS) + ngx_int_t (*thread_handler)(ngx_thread_task_t *task, + ngx_file_t *file); + ngx_thread_task_t *thread_task; +#endif + off_t alignment; ngx_pool_t *pool; diff -r 117c77b22db1 -r 1fdba317ee6d src/core/ngx_file.h --- a/src/core/ngx_file.h Sat Mar 14 17:37:21 2015 +0300 +++ b/src/core/ngx_file.h Sat Mar 14 17:37:25 2015 +0300 @@ -23,6 +23,12 @@ struct ngx_file_s { ngx_log_t *log; +#if (NGX_THREADS) + ngx_int_t (*thread_handler)(ngx_thread_task_t *task, + ngx_file_t *file); + void *thread_ctx; +#endif + #if (NGX_HAVE_FILE_AIO) ngx_event_aio_t *aio; #endif diff -r 117c77b22db1 -r 1fdba317ee6d src/core/ngx_output_chain.c --- a/src/core/ngx_output_chain.c Sat Mar 14 17:37:21 2015 +0300 +++ b/src/core/ngx_output_chain.c Sat Mar 14 17:37:25 2015 +0300 @@ -50,7 +50,7 @@ ngx_output_chain(ngx_output_chain_ctx_t ngx_chain_t *cl, *out, **last_out; if (ctx->in == NULL && ctx->busy == NULL -#if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_AIO || NGX_THREADS) && !ctx->aio #endif ) @@ -89,7 +89,7 @@ ngx_output_chain(ngx_output_chain_ctx_t for ( ;; ) { -#if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_AIO || NGX_THREADS) if (ctx->aio) { return NGX_AGAIN; } @@ -233,6 +233,13 @@ ngx_output_chain_as_is(ngx_output_chain_ return 1; } +#if (NGX_THREADS) + if (buf->in_file) { + buf->file->thread_handler = ctx->thread_handler; + buf->file->thread_ctx = ctx->filter_ctx; + } +#endif + if (buf->in_file && buf->file->directio) { return 0; } @@ -559,7 +566,6 @@ ngx_output_chain_copy_buf(ngx_output_cha #endif #if (NGX_HAVE_FILE_AIO) - if (ctx->aio_handler) { n = ngx_file_aio_read(src->file, dst->pos, (size_t) size, src->file_pos, ctx->pool); @@ -568,15 +574,23 @@ ngx_output_chain_copy_buf(ngx_output_cha return NGX_AGAIN; } - } else { + } else +#endif +#if (NGX_THREADS) + if (src->file->thread_handler) { + n = ngx_thread_read(&ctx->thread_task, src->file, dst->pos, + (size_t) size, src->file_pos, ctx->pool); + if (n == NGX_AGAIN) { + ctx->aio = 1; + return NGX_AGAIN; + } + + } else +#endif + { n = ngx_read_file(src->file, dst->pos, (size_t) size, src->file_pos); } -#else - - n = ngx_read_file(src->file, dst->pos, (size_t) size, src->file_pos); - -#endif #if (NGX_HAVE_ALIGNED_DIRECTIO) diff -r 117c77b22db1 -r 1fdba317ee6d src/http/ngx_http_copy_filter_module.c --- a/src/http/ngx_http_copy_filter_module.c Sat Mar 14 17:37:21 2015 +0300 +++ b/src/http/ngx_http_copy_filter_module.c Sat Mar 14 17:37:25 2015 +0300 @@ -24,6 +24,11 @@ static ssize_t ngx_http_copy_aio_sendfil static void 
ngx_http_copy_aio_sendfile_event_handler(ngx_event_t *ev); #endif #endif +#if (NGX_THREADS) +static ngx_int_t ngx_http_copy_thread_handler(ngx_thread_task_t *task, + ngx_file_t *file); +static void ngx_http_copy_thread_event_handler(ngx_event_t *ev); +#endif static void *ngx_http_copy_filter_create_conf(ngx_conf_t *cf); static char *ngx_http_copy_filter_merge_conf(ngx_conf_t *cf, @@ -121,7 +126,7 @@ ngx_http_copy_filter(ngx_http_request_t ctx->filter_ctx = r; #if (NGX_HAVE_FILE_AIO) - if (ngx_file_aio && clcf->aio) { + if (ngx_file_aio && clcf->aio == NGX_HTTP_AIO_ON) { ctx->aio_handler = ngx_http_copy_aio_handler; #if (NGX_HAVE_AIO_SENDFILE) ctx->aio_preload = ngx_http_copy_aio_sendfile_preload; @@ -129,12 +134,18 @@ ngx_http_copy_filter(ngx_http_request_t } #endif +#if (NGX_THREADS) + if (clcf->aio == NGX_HTTP_AIO_THREADS) { + ctx->thread_handler = ngx_http_copy_thread_handler; + } +#endif + if (in && in->buf && ngx_buf_size(in->buf)) { r->request_output = 1; } } -#if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_AIO || NGX_THREADS) ctx->aio = r->aio; #endif @@ -233,6 +244,67 @@ ngx_http_copy_aio_sendfile_event_handler #endif +#if (NGX_THREADS) + +static ngx_int_t +ngx_http_copy_thread_handler(ngx_thread_task_t *task, ngx_file_t *file) +{ + ngx_str_t name; + ngx_thread_pool_t *tp; + ngx_http_request_t *r; + ngx_http_core_loc_conf_t *clcf; + + r = file->thread_ctx; + + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); + tp = clcf->thread_pool; + + if (tp == NULL) { + if (ngx_http_complex_value(r, clcf->thread_pool_value, &name) + != NGX_OK) + { + return NGX_ERROR; + } + + tp = ngx_thread_pool_get((ngx_cycle_t *) ngx_cycle, &name); + + if (tp == NULL) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "thread pool \"%V\" not found", &name); + return NGX_ERROR; + } + } + + task->event.data = r; + task->event.handler = ngx_http_copy_thread_event_handler; + + if (ngx_thread_task_post(tp, task) != NGX_OK) { + return NGX_ERROR; + } + + r->main->blocked++; + r->aio = 1; + + return NGX_OK; +} + + +static void +ngx_http_copy_thread_event_handler(ngx_event_t *ev) +{ + ngx_http_request_t *r; + + r = ev->data; + + r->main->blocked--; + r->aio = 0; + + r->connection->write->handler(r->connection->write); +} + +#endif + + static void * ngx_http_copy_filter_create_conf(ngx_conf_t *cf) { diff -r 117c77b22db1 -r 1fdba317ee6d src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Sat Mar 14 17:37:21 2015 +0300 +++ b/src/http/ngx_http_core_module.c Sat Mar 14 17:37:25 2015 +0300 @@ -3624,6 +3624,10 @@ ngx_http_core_create_loc_conf(ngx_conf_t clcf->sendfile = NGX_CONF_UNSET; clcf->sendfile_max_chunk = NGX_CONF_UNSET_SIZE; clcf->aio = NGX_CONF_UNSET; +#if (NGX_THREADS) + clcf->thread_pool = NGX_CONF_UNSET_PTR; + clcf->thread_pool_value = NGX_CONF_UNSET_PTR; +#endif clcf->read_ahead = NGX_CONF_UNSET_SIZE; clcf->directio = NGX_CONF_UNSET; clcf->directio_alignment = NGX_CONF_UNSET; @@ -3839,7 +3843,14 @@ ngx_http_core_merge_loc_conf(ngx_conf_t ngx_conf_merge_value(conf->sendfile, prev->sendfile, 0); ngx_conf_merge_size_value(conf->sendfile_max_chunk, prev->sendfile_max_chunk, 0); +#if (NGX_HAVE_FILE_AIO || NGX_THREADS) ngx_conf_merge_value(conf->aio, prev->aio, NGX_HTTP_AIO_OFF); +#endif +#if (NGX_THREADS) + ngx_conf_merge_ptr_value(conf->thread_pool, prev->thread_pool, NULL); + ngx_conf_merge_ptr_value(conf->thread_pool_value, prev->thread_pool_value, + NULL); +#endif ngx_conf_merge_size_value(conf->read_ahead, prev->read_ahead, 0); ngx_conf_merge_off_value(conf->directio, 
prev->directio, NGX_OPEN_FILE_DIRECTIO_OFF); @@ -4644,6 +4655,11 @@ ngx_http_core_set_aio(ngx_conf_t *cf, ng return "is duplicate"; } +#if (NGX_THREADS) + clcf->thread_pool = NULL; + clcf->thread_pool_value = NULL; +#endif + value = cf->args->elts; if (ngx_strcmp(value[1].data, "off") == 0) { @@ -4676,6 +4692,64 @@ ngx_http_core_set_aio(ngx_conf_t *cf, ng #endif + if (ngx_strncmp(value[1].data, "threads", 7) == 0 + && (value[1].len == 7 || value[1].data[7] == '=')) + { +#if (NGX_THREADS) + ngx_str_t name; + ngx_thread_pool_t *tp; + ngx_http_complex_value_t cv; + ngx_http_compile_complex_value_t ccv; + + clcf->aio = NGX_HTTP_AIO_THREADS; + + if (value[1].len >= 8) { + name.len = value[1].len - 8; + name.data = value[1].data + 8; + + ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t)); + + ccv.cf = cf; + ccv.value = &name; + ccv.complex_value = &cv; + + if (ngx_http_compile_complex_value(&ccv) != NGX_OK) { + return NGX_CONF_ERROR; + } + From vbart at nginx.com Wed Mar 18 15:58:13 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 15:58:13 +0000 Subject: [nginx] Added support for offloading Linux sendfile() in thread ... Message-ID: details: http://hg.nginx.org/nginx/rev/b550563ef96e branches: changeset: 6023:b550563ef96e user: Valentin Bartenev date: Sat Mar 14 17:37:30 2015 +0300 description: Added support for offloading Linux sendfile() in thread pools. diffstat: src/core/ngx_connection.h | 4 + src/os/unix/ngx_linux_sendfile_chain.c | 191 +++++++++++++++++++++++++++++++- 2 files changed, 187 insertions(+), 8 deletions(-) diffs (257 lines): diff -r 1fdba317ee6d -r b550563ef96e src/core/ngx_connection.h --- a/src/core/ngx_connection.h Sat Mar 14 17:37:25 2015 +0300 +++ b/src/core/ngx_connection.h Sat Mar 14 17:37:30 2015 +0300 @@ -184,6 +184,10 @@ struct ngx_connection_s { unsigned busy_count:2; #endif +#if (NGX_THREADS) + ngx_thread_task_t *sendfile_task; +#endif + #if (NGX_OLD_THREADS) ngx_atomic_t lock; #endif diff -r 1fdba317ee6d -r b550563ef96e src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Sat Mar 14 17:37:25 2015 +0300 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Sat Mar 14 17:37:30 2015 +0300 @@ -13,6 +13,18 @@ static ssize_t ngx_linux_sendfile(ngx_connection_t *c, ngx_buf_t *file, size_t size); +#if (NGX_THREADS) +#include + +#if !(NGX_HAVE_SENDFILE64) +#error sendfile64() is required! 
+#endif + +static ngx_int_t ngx_linux_sendfile_thread(ngx_connection_t *c, ngx_buf_t *file, + size_t size, size_t *sent); +static void ngx_linux_sendfile_thread_handler(void *data, ngx_log_t *log); +#endif + /* * On Linux up to 2.4.21 sendfile() (syscall #187) works with 32-bit @@ -35,8 +47,8 @@ ngx_chain_t * ngx_linux_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { int tcp_nodelay; - off_t send, prev_send, sent; - size_t file_size; + off_t send, prev_send; + size_t file_size, sent; ssize_t n; ngx_err_t err; ngx_buf_t *file; @@ -44,6 +56,10 @@ ngx_linux_sendfile_chain(ngx_connection_ ngx_chain_t *cl; ngx_iovec_t header; struct iovec headers[NGX_IOVS_PREALLOCATE]; +#if (NGX_THREADS) + ngx_int_t rc; + ngx_uint_t thread_handled, thread_complete; +#endif wev = c->write; @@ -66,6 +82,10 @@ ngx_linux_sendfile_chain(ngx_connection_ for ( ;; ) { prev_send = send; +#if (NGX_THREADS) + thread_handled = 0; + thread_complete = 0; +#endif /* create the iovec and coalesce the neighbouring bufs */ @@ -158,14 +178,39 @@ ngx_linux_sendfile_chain(ngx_connection_ return NGX_CHAIN_ERROR; } #endif - n = ngx_linux_sendfile(c, file, file_size); - if (n == NGX_ERROR) { - return NGX_CHAIN_ERROR; +#if (NGX_THREADS) + if (file->file->thread_handler) { + rc = ngx_linux_sendfile_thread(c, file, file_size, &sent); + + switch (rc) { + case NGX_OK: + thread_handled = 1; + break; + + case NGX_DONE: + thread_complete = 1; + break; + + case NGX_AGAIN: + break; + + default: /* NGX_ERROR */ + return NGX_CHAIN_ERROR; + } + + } else +#endif + { + n = ngx_linux_sendfile(c, file, file_size); + + if (n == NGX_ERROR) { + return NGX_CHAIN_ERROR; + } + + sent = (n == NGX_AGAIN) ? 0 : n; } - sent = (n == NGX_AGAIN) ? 0 : n; - } else { n = ngx_writev(c, &header); @@ -180,7 +225,17 @@ ngx_linux_sendfile_chain(ngx_connection_ in = ngx_chain_update_sent(in, sent); - if (send - prev_send != sent) { + if ((size_t) (send - prev_send) != sent) { +#if (NGX_THREADS) + if (thread_handled) { + return in; + } + + if (thread_complete) { + send = prev_send + sent; + continue; + } +#endif wev->ready = 0; return in; } @@ -242,3 +297,123 @@ eintr: return n; } + + +#if (NGX_THREADS) + +typedef struct { + ngx_buf_t *file; + ngx_socket_t socket; + size_t size; + + size_t sent; + ngx_err_t err; +} ngx_linux_sendfile_ctx_t; + + +static ngx_int_t +ngx_linux_sendfile_thread(ngx_connection_t *c, ngx_buf_t *file, size_t size, + size_t *sent) +{ + ngx_uint_t flags; + ngx_event_t *wev; + ngx_thread_task_t *task; + ngx_linux_sendfile_ctx_t *ctx; + + ngx_log_debug3(NGX_LOG_DEBUG_CORE, c->log, 0, + "linux sendfile thread: %d, %uz, %O", + file->file->fd, size, file->file_pos); + + task = c->sendfile_task; + + if (task == NULL) { + task = ngx_thread_task_alloc(c->pool, sizeof(ngx_linux_sendfile_ctx_t)); + if (task == NULL) { + return NGX_ERROR; + } + + task->handler = ngx_linux_sendfile_thread_handler; + + c->sendfile_task = task; + } + + ctx = task->ctx; + wev = c->write; + + if (task->event.complete) { + task->event.complete = 0; + + if (ctx->err && ctx->err != NGX_EAGAIN) { + wev->error = 1; + ngx_connection_error(c, ctx->err, "sendfile() failed"); + return NGX_ERROR; + } + + *sent = ctx->sent; + + return (ctx->sent == ctx->size) ? NGX_DONE : NGX_AGAIN; + } + + ctx->file = file; + ctx->socket = c->fd; + ctx->size = size; + + if (wev->active) { + flags = (ngx_event_flags & NGX_USE_CLEAR_EVENT) ? 
NGX_CLEAR_EVENT + : NGX_LEVEL_EVENT; + + if (ngx_del_event(wev, NGX_WRITE_EVENT, flags) == NGX_ERROR) { + return NGX_ERROR; + } + } + + if (file->file->thread_handler(task, file->file) != NGX_OK) { + return NGX_ERROR; + } + + *sent = 0; + + return NGX_OK; +} + + +static void +ngx_linux_sendfile_thread_handler(void *data, ngx_log_t *log) +{ + ngx_linux_sendfile_ctx_t *ctx = data; + + off_t offset; + ssize_t n; + ngx_buf_t *file; + + ngx_log_debug0(NGX_LOG_DEBUG_CORE, log, 0, "linux sendfile thread handler"); + + file = ctx->file; + offset = file->file_pos; + +again: + + n = sendfile(ctx->socket, file->file->fd, &offset, ctx->size); + + if (n == -1) { + ctx->err = ngx_errno; + + } else { + ctx->sent = n; + ctx->err = 0; + } + +#if 0 + ngx_time_update(); +#endif + + ngx_log_debug4(NGX_LOG_DEBUG_EVENT, log, 0, + "sendfile: %z (err: %i) of %uz @%O", + n, ctx->err, ctx->size, file->file_pos); + + if (ctx->err == NGX_EINTR) { + goto again; + } +} + +#endif /* NGX_THREADS */ From mdounin at mdounin.ru Wed Mar 18 16:22:02 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Mar 2015 19:22:02 +0300 Subject: [PATCH] Added support for client_scheme_in_redirect directive In-Reply-To: References: Message-ID: <20150318162202.GP88631@mdounin.ru> Hello! On Sun, Mar 15, 2015 at 04:07:11AM -0700, Kyle Ibrahim wrote: > Currently, there is no way way to control the scheme which will be used in > nginx-issued redirects. This is a problem when the client is potentially > using a different scheme than nginx due to a SSL terminating load balancer. > As some client requests may have started over http and some over https, > we'd like to way to dynamically set the proper client scheme. > > This is a patch which adds a directive `client_scheme_in_redirect` to > complement `server_name_in_redirect` and `port_in_redirect`. Have you considered doing something like "relative_redirects on|off" instead, as previously suggested[1]? I believe it will resolve the problem as well, and will address additional problems too. It is also expected to require simplier and more effective code. [1] http://mailman.nginx.org/pipermail/nginx-devel/2015-March/006608.html -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Mar 18 17:31:20 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 18 Mar 2015 17:31:20 +0000 Subject: [nginx] SPDY: fixed format specifier in logging. Message-ID: details: http://hg.nginx.org/nginx/rev/199c0dd313ea branches: changeset: 6024:199c0dd313ea user: Xiaochen Wang date: Sun Mar 15 21:46:21 2015 +0800 description: SPDY: fixed format specifier in logging. diffstat: src/http/ngx_http_spdy.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r b550563ef96e -r 199c0dd313ea src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Sat Mar 14 17:37:30 2015 +0300 +++ b/src/http/ngx_http_spdy.c Sun Mar 15 21:46:21 2015 +0800 @@ -1353,7 +1353,7 @@ ngx_http_spdy_state_window_update(ngx_ht pos += NGX_SPDY_DELTA_SIZE; ngx_log_debug2(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, - "spdy WINDOW_UPDATE sid:%ui delta:%ui", sid, delta); + "spdy WINDOW_UPDATE sid:%ui delta:%uz", sid, delta); if (sid) { stream = ngx_http_spdy_get_stream_by_id(sc, sid); From vbart at nginx.com Wed Mar 18 17:32:50 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Mar 2015 20:32:50 +0300 Subject: [PATCH] SPDY: fixed format specifiers in logging. 
In-Reply-To: <20150315135314.GB60773@gmail.com> References: <20150315135314.GB60773@gmail.com> Message-ID: <2425049.eHzbdM5TyE@vbart-workstation> On Sunday 15 March 2015 21:53:14 Xiaochen Wang wrote: > > # HG changeset patch > # User Xiaochen Wang > # Date 1426427181 -28800 > # Node ID ec3b9c4277e33bfc9b25bbee67b74d5ee528366a > # Parent 79b473d5381d85f79ab71b7aa85ecf9be1caf9fb > SPDY: fixed format specifiers in logging. > > diff -r 79b473d5381d -r ec3b9c4277e3 src/http/ngx_http_spdy.c > --- a/src/http/ngx_http_spdy.c Fri Mar 13 16:43:01 2015 +0300 > +++ b/src/http/ngx_http_spdy.c Sun Mar 15 21:46:21 2015 +0800 > @@ -1353,7 +1353,7 @@ ngx_http_spdy_state_window_update(ngx_ht > pos += NGX_SPDY_DELTA_SIZE; > > ngx_log_debug2(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, > - "spdy WINDOW_UPDATE sid:%ui delta:%ui", sid, delta); > + "spdy WINDOW_UPDATE sid:%ui delta:%uz", sid, delta); > > if (sid) { > stream = ngx_http_spdy_get_stream_by_id(sc, sid); > Committed. Thanks! http://hg.nginx.org/nginx/rev/199c0dd313ea wbr, Valentin V. Bartenev From kibrahim at getpantheon.com Wed Mar 18 18:06:46 2015 From: kibrahim at getpantheon.com (Kyle Ibrahim) Date: Wed, 18 Mar 2015 11:06:46 -0700 Subject: [PATCH] Added support for client_scheme_in_redirect directive In-Reply-To: <20150318162202.GP88631@mdounin.ru> References: <20150318162202.GP88631@mdounin.ru> Message-ID: Hi Maxim, I have considered something like `relative_redirects`. It would also be a good directive to have, but it wouldn't allow 301 redirects from nginx to always use the same hostname, e.g. www.example.com e.g. Currently, I need to set `server_name_in_redirect www.example.com` and MUST have nginx terminate SSL. `relative_redirects` wouldn't let me have nginx behind a SSL terminator AND use `server_name_in_redirect`, because the relative 301s wouldn't force people to move to www.example.com I have also considered the option of: `scheme_in_redirect http|https` and not letting it have a variable. This would simplify the implementation by a lot. It wouldn't solve my exact problem, but it would solve the other two problems I linked in the beginning of my last email. Thanks for getting back to me (and all your hard work)! Kyle P.S. Instead of `client_scheme_in_redirect` I should have just called it `scheme_in_redirect` On Wed, Mar 18, 2015 at 9:22 AM, Maxim Dounin wrote: > Hello! > > On Sun, Mar 15, 2015 at 04:07:11AM -0700, Kyle Ibrahim wrote: > > > Currently, there is no way way to control the scheme which will be used > in > > nginx-issued redirects. This is a problem when the client is potentially > > using a different scheme than nginx due to a SSL terminating load > balancer. > > As some client requests may have started over http and some over https, > > we'd like to way to dynamically set the proper client scheme. > > > > This is a patch which adds a directive `client_scheme_in_redirect` to > > complement `server_name_in_redirect` and `port_in_redirect`. > > Have you considered doing something like "relative_redirects on|off" > instead, as previously suggested[1]? > > I believe it will resolve the problem as well, and will address > additional problems too. It is also expected to require simplier > and more effective code. 
> > [1] http://mailman.nginx.org/pipermail/nginx-devel/2015-March/006608.html > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Mar 19 18:41:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Mar 2015 21:41:39 +0300 Subject: [PATCH] Added support for client_scheme_in_redirect directive In-Reply-To: References: <20150318162202.GP88631@mdounin.ru> Message-ID: <20150319184139.GG88631@mdounin.ru> Hello! On Wed, Mar 18, 2015 at 11:06:46AM -0700, Kyle Ibrahim wrote: > Hi Maxim, > > I have considered something like `relative_redirects`. It would also be a > good directive to have, but it wouldn't allow 301 redirects from nginx to > always use the same hostname, e.g. www.example.com > > e.g. > Currently, I need to set `server_name_in_redirect www.example.com` and MUST > have nginx terminate SSL. > `relative_redirects` wouldn't let me have nginx behind a SSL terminator AND > use `server_name_in_redirect`, because the relative 301s wouldn't force > people to move to www.example.com Well, the question here is - what are you trying to achieve by such a setup? It looks very strange, especially in a combination with SSL. If the goal is to use one canonical name for a site, it should be much better solution to explicitly do redirects to a canonical name with an additional server{} block. > I have also considered the option of: `scheme_in_redirect http|https` and > not letting it have a variable. This would simplify the implementation by a > lot. > > It wouldn't solve my exact problem, but it would solve the other two > problems I linked in the beginning of my last email. What email do you mean? I see no links in this thread. -- Maxim Dounin http://nginx.org/ From ru at nginx.com Thu Mar 19 20:22:00 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 19 Mar 2015 20:22:00 +0000 Subject: [nginx] Thread pools: keep waiting tasks counter in ngx_thread_p... Message-ID: details: http://hg.nginx.org/nginx/rev/32099b107191 branches: changeset: 6025:32099b107191 user: Ruslan Ermilov date: Thu Mar 19 23:19:35 2015 +0300 description: Thread pools: keep waiting tasks counter in ngx_thread_pool_t. It's not needed for completed tasks queue. No functional changes. 
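The task queue touched by this change stores tasks on a singly linked list addressed by a head pointer plus a pointer to the last "next" slot, so an append never has to special-case the empty list. A standalone sketch of that idiom with hypothetical type names (an illustration, not the nginx structures):

typedef struct task_s  task_t;

struct task_s {
    task_t  *next;
    int      id;
};

typedef struct {
    task_t   *first;
    task_t  **last;        /* address of the slot the next task goes into */
} task_queue_t;

static void queue_init(task_queue_t *q)
{
    q->first = NULL;
    q->last = &q->first;
}

static void queue_push(task_queue_t *q, task_t *t)
{
    t->next = NULL;
    *q->last = t;          /* fills either q->first or the previous task's next */
    q->last = &t->next;
}

static task_t *queue_pop(task_queue_t *q)
{
    task_t  *t = q->first;

    if (t == NULL) {
        return NULL;
    }

    q->first = t->next;

    if (q->first == NULL) {
        q->last = &q->first;   /* list is empty again: reset the tail slot */
    }

    return t;
}

int main(void)
{
    task_queue_t  q;
    task_t        a = { NULL, 1 }, b = { NULL, 2 };

    queue_init(&q);
    queue_push(&q, &a);
    queue_push(&q, &b);

    return queue_pop(&q)->id + queue_pop(&q)->id;    /* exits with status 3 */
}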
diffstat: src/core/ngx_thread_pool.c | 16 +++++++--------- 1 files changed, 7 insertions(+), 9 deletions(-) diffs (73 lines): diff -r 199c0dd313ea -r 32099b107191 src/core/ngx_thread_pool.c --- a/src/core/ngx_thread_pool.c Sun Mar 15 21:46:21 2015 +0800 +++ b/src/core/ngx_thread_pool.c Thu Mar 19 23:19:35 2015 +0300 @@ -18,17 +18,16 @@ typedef struct { typedef struct { ngx_thread_mutex_t mtx; - ngx_uint_t count; ngx_thread_task_t *first; ngx_thread_task_t **last; } ngx_thread_pool_queue_t; struct ngx_thread_pool_s { + ngx_thread_pool_queue_t queue; + ngx_uint_t waiting; ngx_thread_cond_t cond; - ngx_thread_pool_queue_t queue; - ngx_log_t *log; ngx_pool_t *pool; @@ -163,7 +162,6 @@ ngx_thread_pool_init(ngx_thread_pool_t * static ngx_int_t ngx_thread_pool_queue_init(ngx_thread_pool_queue_t *queue, ngx_log_t *log) { - queue->count = 0; queue->first = NULL; queue->last = &queue->first; @@ -217,12 +215,12 @@ ngx_thread_task_post(ngx_thread_pool_t * return NGX_ERROR; } - if (tp->queue.count >= tp->max_queue) { + if (tp->waiting >= tp->max_queue) { (void) ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log); ngx_log_error(NGX_LOG_ERR, tp->log, 0, "thread pool \"%V\" queue overflow: %ui tasks waiting", - &tp->name, tp->queue.count); + &tp->name, tp->waiting); return NGX_ERROR; } @@ -239,7 +237,7 @@ ngx_thread_task_post(ngx_thread_pool_t * *tp->queue.last = task; tp->queue.last = &task->next; - tp->queue.count++; + tp->waiting++; (void) ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log); @@ -285,7 +283,7 @@ ngx_thread_pool_cycle(void *data) return NULL; } - while (tp->queue.count == 0) { + while (tp->waiting == 0) { if (ngx_thread_cond_wait(&tp->cond, &tp->queue.mtx, tp->log) != NGX_OK) { @@ -294,7 +292,7 @@ ngx_thread_pool_cycle(void *data) } } - tp->queue.count--; + tp->waiting--; task = tp->queue.first; tp->queue.first = task->next; From ru at nginx.com Thu Mar 19 20:22:03 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 19 Mar 2015 20:22:03 +0000 Subject: [nginx] Thread pools: fixed the waiting tasks accounting. Message-ID: details: http://hg.nginx.org/nginx/rev/25fda43e3bcb branches: changeset: 6026:25fda43e3bcb user: Ruslan Ermilov date: Thu Mar 19 13:00:48 2015 +0300 description: Thread pools: fixed the waiting tasks accounting. Behave like POSIX semaphores. If N worker threads are waiting for tasks, at least that number of tasks should be allowed to be put into the queue. 
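To make the new accounting concrete, here is an illustrative walk-through (the numbers are chosen for the example, not taken from the commit): every worker decrements tp->waiting when it comes back for more work, so with three idle workers and an empty queue the counter sits at -3. With "max_queue=4", ngx_thread_task_post() only rejects a task once waiting has reached 4, so up to 4 + 3 = 7 tasks can be accepted before the overflow error is logged, assuming none of the workers dequeues a task in the meantime. That is the "at least N more tasks when N workers are waiting" behaviour described above, and it is why the counter becomes signed and is allowed to go negative.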
diffstat: src/core/ngx_thread_pool.c | 15 ++++++++------- 1 files changed, 8 insertions(+), 7 deletions(-) diffs (60 lines): diff -r 32099b107191 -r 25fda43e3bcb src/core/ngx_thread_pool.c --- a/src/core/ngx_thread_pool.c Thu Mar 19 23:19:35 2015 +0300 +++ b/src/core/ngx_thread_pool.c Thu Mar 19 13:00:48 2015 +0300 @@ -25,7 +25,7 @@ typedef struct { struct ngx_thread_pool_s { ngx_thread_pool_queue_t queue; - ngx_uint_t waiting; + ngx_int_t waiting; ngx_thread_cond_t cond; ngx_log_t *log; @@ -33,7 +33,7 @@ struct ngx_thread_pool_s { ngx_str_t name; ngx_uint_t threads; - ngx_uint_t max_queue; + ngx_int_t max_queue; u_char *file; ngx_uint_t line; @@ -219,7 +219,7 @@ ngx_thread_task_post(ngx_thread_pool_t * (void) ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log); ngx_log_error(NGX_LOG_ERR, tp->log, 0, - "thread pool \"%V\" queue overflow: %ui tasks waiting", + "thread pool \"%V\" queue overflow: %i tasks waiting", &tp->name, tp->waiting); return NGX_ERROR; } @@ -283,7 +283,10 @@ ngx_thread_pool_cycle(void *data) return NULL; } - while (tp->waiting == 0) { + /* the number may become negative */ + tp->waiting--; + + while (tp->queue.first == NULL) { if (ngx_thread_cond_wait(&tp->cond, &tp->queue.mtx, tp->log) != NGX_OK) { @@ -292,8 +295,6 @@ ngx_thread_pool_cycle(void *data) } } - tp->waiting--; - task = tp->queue.first; tp->queue.first = task->next; @@ -476,7 +477,7 @@ ngx_thread_pool(ngx_conf_t *cf, ngx_comm tp->max_queue = ngx_atoi(value[i].data + 10, value[i].len - 10); - if (tp->max_queue == (ngx_uint_t) NGX_ERROR) { + if (tp->max_queue == NGX_ERROR) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid max_queue value \"%V\"", &value[i]); return NGX_CONF_ERROR; From ru at nginx.com Thu Mar 19 20:22:06 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 19 Mar 2015 20:22:06 +0000 Subject: [nginx] Thread pools: silence warning on process exit. Message-ID: details: http://hg.nginx.org/nginx/rev/67717d4e4f47 branches: changeset: 6027:67717d4e4f47 user: Ruslan Ermilov date: Thu Mar 19 23:20:18 2015 +0300 description: Thread pools: silence warning on process exit. Work around pthread_cond_destroy() and pthread_mutex_destroy() returning EBUSY. A proper solution would be to ensure all threads are terminated. diffstat: src/core/ngx_thread_pool.c | 7 +++++++ 1 files changed, 7 insertions(+), 0 deletions(-) diffs (26 lines): diff -r 25fda43e3bcb -r 67717d4e4f47 src/core/ngx_thread_pool.c --- a/src/core/ngx_thread_pool.c Thu Mar 19 13:00:48 2015 +0300 +++ b/src/core/ngx_thread_pool.c Thu Mar 19 23:20:18 2015 +0300 @@ -172,7 +172,11 @@ ngx_thread_pool_queue_init(ngx_thread_po static ngx_int_t ngx_thread_pool_queue_destroy(ngx_thread_pool_queue_t *queue, ngx_log_t *log) { +#if 0 return ngx_thread_mutex_destroy(&queue->mtx, log); +#else + return NGX_OK; +#endif } @@ -181,7 +185,10 @@ ngx_thread_pool_destroy(ngx_thread_pool_ { /* TODO: exit threads */ +#if 0 (void) ngx_thread_cond_destroy(&tp->cond, tp->log); +#endif + (void) ngx_thread_pool_queue_destroy(&tp->queue, tp->log); } From ru at nginx.com Fri Mar 20 03:46:26 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 20 Mar 2015 03:46:26 +0000 Subject: [nginx] Removed old FreeBSD rfork() thread implementation. Message-ID: details: http://hg.nginx.org/nginx/rev/fa77496b1df2 branches: changeset: 6028:fa77496b1df2 user: Ruslan Ermilov date: Fri Mar 20 06:43:19 2015 +0300 description: Removed old FreeBSD rfork() thread implementation. 
diffstat: auto/sources | 3 - src/os/unix/ngx_freebsd_config.h | 6 - src/os/unix/ngx_freebsd_rfork_thread.c | 756 --------------------------------- src/os/unix/ngx_freebsd_rfork_thread.h | 122 ----- src/os/unix/ngx_thread.h | 8 - src/os/unix/rfork_thread.S | 73 --- 6 files changed, 0 insertions(+), 968 deletions(-) diffs (truncated from 1017 to 300 lines): diff -r 67717d4e4f47 -r fa77496b1df2 auto/sources --- a/auto/sources Thu Mar 19 23:20:18 2015 +0300 +++ b/auto/sources Fri Mar 20 06:43:19 2015 +0300 @@ -203,9 +203,6 @@ THREAD_POOL_SRCS="src/core/ngx_thread_po FREEBSD_DEPS="src/os/unix/ngx_freebsd_config.h src/os/unix/ngx_freebsd.h" FREEBSD_SRCS=src/os/unix/ngx_freebsd_init.c FREEBSD_SENDFILE_SRCS=src/os/unix/ngx_freebsd_sendfile_chain.c -FREEBSD_RFORK_DEPS="src/os/unix/ngx_freebsd_rfork_thread.h" -FREEBSD_RFORK_SRCS="src/os/unix/ngx_freebsd_rfork_thread.c" -FREEBSD_RFORK_THREAD_SRCS="src/os/unix/rfork_thread.S" PTHREAD_SRCS="src/os/unix/ngx_pthread_thread.c" diff -r 67717d4e4f47 -r fa77496b1df2 src/os/unix/ngx_freebsd_config.h --- a/src/os/unix/ngx_freebsd_config.h Thu Mar 19 23:20:18 2015 +0300 +++ b/src/os/unix/ngx_freebsd_config.h Fri Mar 20 06:43:19 2015 +0300 @@ -100,12 +100,6 @@ typedef struct aiocb ngx_aiocb_t; #endif -#if (__FreeBSD_version < 430000 || __FreeBSD_version < 500012) - -pid_t rfork_thread(int flags, void *stack, int (*func)(void *arg), void *arg); - -#endif - #ifndef IOV_MAX #define IOV_MAX 1024 #endif diff -r 67717d4e4f47 -r fa77496b1df2 src/os/unix/ngx_freebsd_rfork_thread.c --- a/src/os/unix/ngx_freebsd_rfork_thread.c Thu Mar 19 23:20:18 2015 +0300 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,756 +0,0 @@ - -/* - * Copyright (C) Igor Sysoev - * Copyright (C) Nginx, Inc. - */ - - -#include -#include - -/* - * The threads implementation uses the rfork(RFPROC|RFTHREAD|RFMEM) syscall - * to create threads. All threads use the stacks of the same size mmap()ed - * below the main stack. Thus the current thread id is determined via - * the stack pointer value. - * - * The mutex implementation uses the ngx_atomic_cmp_set() operation - * to acquire a mutex and the SysV semaphore to wait on a mutex and to wake up - * the waiting threads. The light mutex does not use semaphore, so after - * spinning in the lock the thread calls sched_yield(). However the light - * mutexes are intended to be used with the "trylock" operation only. - * The SysV semop() is a cheap syscall, particularly if it has little sembuf's - * and does not use SEM_UNDO. - * - * The condition variable implementation uses the signal #64. - * The signal handler is SIG_IGN so the kill() is a cheap syscall. - * The thread waits a signal in kevent(). The use of the EVFILT_SIGNAL - * is safe since FreeBSD 4.10-STABLE. - * - * This threads implementation currently works on i386 (486+) and amd64 - * platforms only. - */ - - -char *ngx_freebsd_kern_usrstack; -size_t ngx_thread_stack_size; - - -static size_t rz_size; -static size_t usable_stack_size; -static char *last_stack; - -static ngx_uint_t nthreads; -static ngx_uint_t max_threads; - -static ngx_uint_t nkeys; -static ngx_tid_t *tids; /* the threads tids array */ -void **ngx_tls; /* the threads tls's array */ - -/* the thread-safe libc errno */ - -static int errno0; /* the main thread's errno */ -static int *errnos; /* the threads errno's array */ - -int * -__error() -{ - int tid; - - tid = ngx_gettid(); - - return tid ? &errnos[tid - 1] : &errno0; -} - - -/* - * __isthreaded enables the spinlocks in some libc functions, i.e. 
in malloc() - * and some other places. Nevertheless we protect our malloc()/free() calls - * by own mutex that is more efficient than the spinlock. - * - * _spinlock() is a weak referenced stub in src/lib/libc/gen/_spinlock_stub.c - * that does nothing. - */ - -extern int __isthreaded; - -void -_spinlock(ngx_atomic_t *lock) -{ - ngx_int_t tries; - - tries = 0; - - for ( ;; ) { - - if (*lock) { - if (ngx_ncpu > 1 && tries++ < 1000) { - continue; - } - - sched_yield(); - tries = 0; - - } else { - if (ngx_atomic_cmp_set(lock, 0, 1)) { - return; - } - } - } -} - - -/* - * Before FreeBSD 5.1 _spinunlock() is a simple #define in - * src/lib/libc/include/spinlock.h that zeroes lock. - * - * Since FreeBSD 5.1 _spinunlock() is a weak referenced stub in - * src/lib/libc/gen/_spinlock_stub.c that does nothing. - */ - -#ifndef _spinunlock - -void -_spinunlock(ngx_atomic_t *lock) -{ - *lock = 0; -} - -#endif - - -ngx_err_t -ngx_create_thread(ngx_tid_t *tid, ngx_thread_value_t (*func)(void *arg), - void *arg, ngx_log_t *log) -{ - ngx_pid_t id; - ngx_err_t err; - char *stack, *stack_top; - - if (nthreads >= max_threads) { - ngx_log_error(NGX_LOG_CRIT, log, 0, - "no more than %ui threads can be created", max_threads); - return NGX_ERROR; - } - - last_stack -= ngx_thread_stack_size; - - stack = mmap(last_stack, usable_stack_size, PROT_READ|PROT_WRITE, - MAP_STACK, -1, 0); - - if (stack == MAP_FAILED) { - ngx_log_error(NGX_LOG_ALERT, log, ngx_errno, - "mmap(%p:%uz, MAP_STACK) thread stack failed", - last_stack, usable_stack_size); - return NGX_ERROR; - } - - if (stack != last_stack) { - ngx_log_error(NGX_LOG_ALERT, log, 0, - "stack %p address was changed to %p", last_stack, stack); - return NGX_ERROR; - } - - stack_top = stack + usable_stack_size; - - ngx_log_debug2(NGX_LOG_DEBUG_CORE, log, 0, - "thread stack: %p-%p", stack, stack_top); - - ngx_set_errno(0); - - id = rfork_thread(RFPROC|RFTHREAD|RFMEM, stack_top, - (ngx_rfork_thread_func_pt) func, arg); - - err = ngx_errno; - - if (id == -1) { - ngx_log_error(NGX_LOG_ALERT, log, err, "rfork() failed"); - - } else { - *tid = id; - nthreads = (ngx_freebsd_kern_usrstack - stack_top) - / ngx_thread_stack_size; - tids[nthreads] = id; - - ngx_log_debug1(NGX_LOG_DEBUG_CORE, log, 0, "rfork()ed thread: %P", id); - } - - return err; -} - - -ngx_int_t -ngx_init_threads(int n, size_t size, ngx_cycle_t *cycle) -{ - char *red_zone, *zone; - size_t len; - ngx_int_t i; - struct sigaction sa; - - max_threads = n + 1; - - for (i = 0; i < n; i++) { - ngx_memzero(&sa, sizeof(struct sigaction)); - sa.sa_handler = SIG_IGN; - sigemptyset(&sa.sa_mask); - if (sigaction(NGX_CV_SIGNAL, &sa, NULL) == -1) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "sigaction(%d, SIG_IGN) failed", NGX_CV_SIGNAL); - return NGX_ERROR; - } - } - - len = sizeof(ngx_freebsd_kern_usrstack); - if (sysctlbyname("kern.usrstack", &ngx_freebsd_kern_usrstack, &len, - NULL, 0) == -1) - { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "sysctlbyname(kern.usrstack) failed"); - return NGX_ERROR; - } - - /* the main thread stack red zone */ - rz_size = ngx_pagesize; - red_zone = ngx_freebsd_kern_usrstack - (size + rz_size); - - ngx_log_debug2(NGX_LOG_DEBUG_CORE, cycle->log, 0, - "usrstack: %p red zone: %p", - ngx_freebsd_kern_usrstack, red_zone); - - zone = mmap(red_zone, rz_size, PROT_NONE, MAP_ANON, -1, 0); - if (zone == MAP_FAILED) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "mmap(%p:%uz, PROT_NONE, MAP_ANON) red zone failed", - red_zone, rz_size); - return NGX_ERROR; - } - - if 
(zone != red_zone) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, - "red zone %p address was changed to %p", red_zone, zone); - return NGX_ERROR; - } - - /* create the thread errno' array */ - - errnos = ngx_calloc(n * sizeof(int), cycle->log); - if (errnos == NULL) { - return NGX_ERROR; - } - - /* create the thread tids array */ - - tids = ngx_calloc((n + 1) * sizeof(ngx_tid_t), cycle->log); - if (tids == NULL) { - return NGX_ERROR; - } - - tids[0] = ngx_pid; - - /* create the thread tls' array */ - - ngx_tls = ngx_calloc(NGX_THREAD_KEYS_MAX * (n + 1) * sizeof(void *), - cycle->log); - if (ngx_tls == NULL) { - return NGX_ERROR; - } - - nthreads = 1; - - last_stack = zone + rz_size; - usable_stack_size = size; - ngx_thread_stack_size = size + rz_size; - - /* allow the spinlock in libc malloc() */ - __isthreaded = 1; - - ngx_threaded = 1; From ru at nginx.com Fri Mar 20 03:46:29 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 20 Mar 2015 03:46:29 +0000 Subject: [nginx] Removed old pthread implementation. Message-ID: details: http://hg.nginx.org/nginx/rev/e284f3ff6831 branches: changeset: 6029:e284f3ff6831 user: Ruslan Ermilov date: Fri Mar 20 06:43:19 2015 +0300 description: Removed old pthread implementation. diffstat: auto/sources | 2 - src/event/modules/ngx_kqueue_module.c | 50 ------ src/os/unix/ngx_process_cycle.c | 188 ---------------------- src/os/unix/ngx_pthread_thread.c | 278 ---------------------------------- src/os/unix/ngx_thread.h | 98 ----------- src/os/unix/ngx_user.c | 20 -- 6 files changed, 0 insertions(+), 636 deletions(-) diffs (truncated from 835 to 300 lines): diff -r fa77496b1df2 -r e284f3ff6831 auto/sources --- a/auto/sources Fri Mar 20 06:43:19 2015 +0300 +++ b/auto/sources Fri Mar 20 06:43:19 2015 +0300 @@ -204,8 +204,6 @@ FREEBSD_DEPS="src/os/unix/ngx_freebsd_co FREEBSD_SRCS=src/os/unix/ngx_freebsd_init.c FREEBSD_SENDFILE_SRCS=src/os/unix/ngx_freebsd_sendfile_chain.c -PTHREAD_SRCS="src/os/unix/ngx_pthread_thread.c" - LINUX_DEPS="src/os/unix/ngx_linux_config.h src/os/unix/ngx_linux.h" LINUX_SRCS=src/os/unix/ngx_linux_init.c LINUX_SENDFILE_SRCS=src/os/unix/ngx_linux_sendfile_chain.c diff -r fa77496b1df2 -r e284f3ff6831 src/event/modules/ngx_kqueue_module.c --- a/src/event/modules/ngx_kqueue_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_kqueue_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -59,12 +59,6 @@ static ngx_event_t notify_event; static struct kevent notify_kev; #endif -#if (NGX_OLD_THREADS) -static ngx_mutex_t *list_mutex; -static ngx_mutex_t *kevent_mutex; -#endif - - static ngx_str_t kqueue_name = ngx_string("kqueue"); @@ -154,20 +148,6 @@ ngx_kqueue_init(ngx_cycle_t *cycle, ngx_ return NGX_ERROR; } #endif - -#if (NGX_OLD_THREADS) - - list_mutex = ngx_mutex_init(cycle->log, 0); - if (list_mutex == NULL) { - return NGX_ERROR; - } - - kevent_mutex = ngx_mutex_init(cycle->log, 0); - if (kevent_mutex == NULL) { - return NGX_ERROR; - } - -#endif } if (max_changes < kcf->changes) { @@ -310,11 +290,6 @@ ngx_kqueue_done(ngx_cycle_t *cycle) ngx_kqueue = -1; -#if (NGX_OLD_THREADS) - ngx_mutex_destroy(kevent_mutex); - ngx_mutex_destroy(list_mutex); -#endif - ngx_free(change_list1); ngx_free(change_list0); ngx_free(event_list); @@ -342,8 +317,6 @@ ngx_kqueue_add_event(ngx_event_t *ev, ng ev->disabled = 0; ev->oneshot = (flags & NGX_ONESHOT_EVENT) ? 
1 : 0; - ngx_mutex_lock(list_mutex); - #if 0 if (ev->index < nchanges @@ -368,8 +341,6 @@ ngx_kqueue_add_event(ngx_event_t *ev, ng e->index = ev->index; } - ngx_mutex_unlock(list_mutex); - return NGX_OK; } @@ -378,8 +349,6 @@ ngx_kqueue_add_event(ngx_event_t *ev, ng ngx_log_error(NGX_LOG_ALERT, ev->log, 0, "previous event on #%d were not passed in kernel", c->fd); - ngx_mutex_unlock(list_mutex); - return NGX_ERROR; } @@ -387,8 +356,6 @@ ngx_kqueue_add_event(ngx_event_t *ev, ng rc = ngx_kqueue_set_event(ev, event, EV_ADD|EV_ENABLE|flags); - ngx_mutex_unlock(list_mutex); - return rc; } @@ -402,8 +369,6 @@ ngx_kqueue_del_event(ngx_event_t *ev, ng ev->active = 0; ev->disabled = 0; - ngx_mutex_lock(list_mutex); - if (ev->index < nchanges && ((uintptr_t) change_list[ev->index].udata & (uintptr_t) ~1) == (uintptr_t) ev) @@ -423,8 +388,6 @@ ngx_kqueue_del_event(ngx_event_t *ev, ng e->index = ev->index; } - ngx_mutex_unlock(list_mutex); - return NGX_OK; } @@ -435,7 +398,6 @@ ngx_kqueue_del_event(ngx_event_t *ev, ng */ if (flags & NGX_CLOSE_EVENT) { - ngx_mutex_unlock(list_mutex); return NGX_OK; } @@ -448,8 +410,6 @@ ngx_kqueue_del_event(ngx_event_t *ev, ng rc = ngx_kqueue_set_event(ev, event, flags); - ngx_mutex_unlock(list_mutex); - return rc; } @@ -756,13 +716,7 @@ ngx_kqueue_process_changes(ngx_cycle_t * struct timespec ts; struct kevent *changes; - ngx_mutex_lock(kevent_mutex); - - ngx_mutex_lock(list_mutex); - if (nchanges == 0) { - ngx_mutex_unlock(list_mutex); - ngx_mutex_unlock(kevent_mutex); return NGX_OK; } @@ -776,8 +730,6 @@ ngx_kqueue_process_changes(ngx_cycle_t * n = (int) nchanges; nchanges = 0; - ngx_mutex_unlock(list_mutex); - ts.tv_sec = 0; ts.tv_nsec = 0; @@ -794,8 +746,6 @@ ngx_kqueue_process_changes(ngx_cycle_t * rc = NGX_OK; } - ngx_mutex_unlock(kevent_mutex); - return rc; } diff -r fa77496b1df2 -r e284f3ff6831 src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/os/unix/ngx_process_cycle.c Fri Mar 20 06:43:19 2015 +0300 @@ -23,10 +23,6 @@ static void ngx_worker_process_cycle(ngx static void ngx_worker_process_init(ngx_cycle_t *cycle, ngx_int_t worker); static void ngx_worker_process_exit(ngx_cycle_t *cycle); static void ngx_channel_handler(ngx_event_t *ev); -#if (NGX_OLD_THREADS) -static void ngx_wakeup_worker_threads(ngx_cycle_t *cycle); -static ngx_thread_value_t ngx_worker_thread_cycle(void *data); -#endif static void ngx_cache_manager_process_cycle(ngx_cycle_t *cycle, void *data); static void ngx_cache_manager_process_handler(ngx_event_t *ev); static void ngx_cache_loader_process_handler(ngx_event_t *ev); @@ -56,12 +52,6 @@ ngx_uint_t ngx_noaccepting; ngx_uint_t ngx_restart; -#if (NGX_OLD_THREADS) -volatile ngx_thread_t ngx_threads[NGX_MAX_THREADS]; -ngx_int_t ngx_threads_n; -#endif - - static u_char master_process[] = "master process"; @@ -747,52 +737,6 @@ ngx_worker_process_cycle(ngx_cycle_t *cy ngx_setproctitle("worker process"); -#if (NGX_OLD_THREADS) - { - ngx_int_t n; - ngx_err_t err; - ngx_core_conf_t *ccf; - - ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module); - - if (ngx_threads_n) { - if (ngx_init_threads(ngx_threads_n, ccf->thread_stack_size, cycle) - == NGX_ERROR) - { - /* fatal */ - exit(2); - } - - err = ngx_thread_key_create(&ngx_core_tls_key); - if (err != 0) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, err, - ngx_thread_key_create_n " failed"); - /* fatal */ - exit(2); - } - - for (n = 0; n < ngx_threads_n; n++) { - - ngx_threads[n].cv = ngx_cond_init(cycle->log); - - if 
(ngx_threads[n].cv == NULL) { - /* fatal */ - exit(2); - } - - if (ngx_create_thread((ngx_tid_t *) &ngx_threads[n].tid, - ngx_worker_thread_cycle, - (void *) &ngx_threads[n], cycle->log) - != 0) - { - /* fatal */ - exit(2); - } - } - } - } -#endif - for ( ;; ) { if (ngx_exiting) { @@ -1032,12 +976,6 @@ ngx_worker_process_exit(ngx_cycle_t *cyc ngx_uint_t i; ngx_connection_t *c; -#if (NGX_OLD_THREADS) - ngx_terminate = 1; - - ngx_wakeup_worker_threads(cycle); -#endif - for (i = 0; ngx_modules[i]; i++) { if (ngx_modules[i]->exit_process) { ngx_modules[i]->exit_process(cycle); @@ -1181,132 +1119,6 @@ ngx_channel_handler(ngx_event_t *ev) } -#if (NGX_OLD_THREADS) - -static void -ngx_wakeup_worker_threads(ngx_cycle_t *cycle) -{ - ngx_int_t i; - ngx_uint_t live; - - for ( ;; ) { - - live = 0; - - for (i = 0; i < ngx_threads_n; i++) { - if (ngx_threads[i].state < NGX_THREAD_EXIT) { - if (ngx_cond_signal(ngx_threads[i].cv) == NGX_ERROR) { - ngx_threads[i].state = NGX_THREAD_DONE; - - } else { - live = 1; - } - } - - if (ngx_threads[i].state == NGX_THREAD_EXIT) { - ngx_thread_join(ngx_threads[i].tid, NULL); - ngx_threads[i].state = NGX_THREAD_DONE; - } - } - - if (live == 0) { - ngx_log_debug0(NGX_LOG_DEBUG_CORE, cycle->log, 0, - "all worker threads are joined"); - - /* STUB */ - ngx_done_events(cycle); - - return; - } - - ngx_sched_yield(); From ru at nginx.com Fri Mar 20 03:46:37 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 20 Mar 2015 03:46:37 +0000 Subject: [nginx] Removed unix ngx_threaded and related ngx_process_changes. Message-ID: details: http://hg.nginx.org/nginx/rev/4652f8f26b12 branches: changeset: 6030:4652f8f26b12 user: Ruslan Ermilov date: Fri Mar 20 06:43:19 2015 +0300 description: Removed unix ngx_threaded and related ngx_process_changes. 
diffstat: src/event/modules/ngx_aio_module.c | 1 - src/event/modules/ngx_devpoll_module.c | 1 - src/event/modules/ngx_epoll_module.c | 1 - src/event/modules/ngx_eventport_module.c | 1 - src/event/modules/ngx_iocp_module.c | 1 - src/event/modules/ngx_kqueue_module.c | 95 ++-------------------------- src/event/modules/ngx_poll_module.c | 1 - src/event/modules/ngx_rtsig_module.c | 1 - src/event/modules/ngx_select_module.c | 1 - src/event/modules/ngx_win32_select_module.c | 1 - src/event/ngx_event.c | 2 +- src/event/ngx_event.h | 2 - src/os/unix/ngx_process_cycle.c | 1 - src/os/unix/ngx_process_cycle.h | 1 - 14 files changed, 10 insertions(+), 100 deletions(-) diffs (truncated from 305 to 300 lines): diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_aio_module.c --- a/src/event/modules/ngx_aio_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_aio_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -49,7 +49,6 @@ ngx_event_module_t ngx_aio_module_ctx = NULL, /* add an connection */ ngx_aio_del_connection, /* delete an connection */ NULL, /* trigger a notify */ - NULL, /* process the changes */ ngx_aio_process_events, /* process the events */ ngx_aio_init, /* init the events */ ngx_aio_done /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_devpoll_module.c --- a/src/event/modules/ngx_devpoll_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_devpoll_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -89,7 +89,6 @@ ngx_event_module_t ngx_devpoll_module_c NULL, /* add an connection */ NULL, /* delete an connection */ NULL, /* trigger a notify */ - NULL, /* process the changes */ ngx_devpoll_process_events, /* process the events */ ngx_devpoll_init, /* init the events */ ngx_devpoll_done, /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_epoll_module.c --- a/src/event/modules/ngx_epoll_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_epoll_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -185,7 +185,6 @@ ngx_event_module_t ngx_epoll_module_ctx #else NULL, /* trigger a notify */ #endif - NULL, /* process the changes */ ngx_epoll_process_events, /* process the events */ ngx_epoll_init, /* init the events */ ngx_epoll_done, /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_eventport_module.c --- a/src/event/modules/ngx_eventport_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_eventport_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -182,7 +182,6 @@ ngx_event_module_t ngx_eventport_module NULL, /* add an connection */ NULL, /* delete an connection */ ngx_eventport_notify, /* trigger a notify */ - NULL, /* process the changes */ ngx_eventport_process_events, /* process the events */ ngx_eventport_init, /* init the events */ ngx_eventport_done, /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_iocp_module.c --- a/src/event/modules/ngx_iocp_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_iocp_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -65,7 +65,6 @@ ngx_event_module_t ngx_iocp_module_ctx NULL, /* add an connection */ ngx_iocp_del_connection, /* delete an connection */ NULL, /* trigger a notify */ - NULL, /* process the changes */ ngx_iocp_process_events, /* process the events */ ngx_iocp_init, /* init the events */ ngx_iocp_done /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_kqueue_module.c --- a/src/event/modules/ngx_kqueue_module.c Fri Mar 20 06:43:19 2015 +0300 +++ 
b/src/event/modules/ngx_kqueue_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -30,7 +30,6 @@ static ngx_int_t ngx_kqueue_set_event(ng #ifdef EVFILT_USER static ngx_int_t ngx_kqueue_notify(ngx_event_handler_pt handler); #endif -static ngx_int_t ngx_kqueue_process_changes(ngx_cycle_t *cycle, ngx_uint_t try); static ngx_int_t ngx_kqueue_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags); static ngx_inline void ngx_kqueue_dump_event(ngx_log_t *log, @@ -42,15 +41,7 @@ static char *ngx_kqueue_init_conf(ngx_cy int ngx_kqueue = -1; -/* - * The "change_list" should be declared as ngx_thread_volatile. - * However, the use of the change_list is localized in kqueue functions and - * is protected by the mutex so even the "icc -ipo" should not build the code - * with the race condition. Thus we avoid the declaration to make a more - * readable code. - */ - -static struct kevent *change_list, *change_list0, *change_list1; +static struct kevent *change_list; static struct kevent *event_list; static ngx_uint_t max_changes, nchanges, nevents; @@ -99,7 +90,6 @@ ngx_event_module_t ngx_kqueue_module_ct #else NULL, /* trigger a notify */ #endif - ngx_kqueue_process_changes, /* process the changes */ ngx_kqueue_process_events, /* process the events */ ngx_kqueue_init, /* init the events */ ngx_kqueue_done /* done the events */ @@ -165,27 +155,15 @@ ngx_kqueue_init(ngx_cycle_t *cycle, ngx_ nchanges = 0; } - if (change_list0) { - ngx_free(change_list0); + if (change_list) { + ngx_free(change_list); } - change_list0 = ngx_alloc(kcf->changes * sizeof(struct kevent), - cycle->log); - if (change_list0 == NULL) { + change_list = ngx_alloc(kcf->changes * sizeof(struct kevent), + cycle->log); + if (change_list == NULL) { return NGX_ERROR; } - - if (change_list1) { - ngx_free(change_list1); - } - - change_list1 = ngx_alloc(kcf->changes * sizeof(struct kevent), - cycle->log); - if (change_list1 == NULL) { - return NGX_ERROR; - } - - change_list = change_list0; } max_changes = kcf->changes; @@ -290,12 +268,9 @@ ngx_kqueue_done(ngx_cycle_t *cycle) ngx_kqueue = -1; - ngx_free(change_list1); - ngx_free(change_list0); + ngx_free(change_list); ngx_free(event_list); - change_list1 = NULL; - change_list0 = NULL; change_list = NULL; event_list = NULL; max_changes = 0; @@ -531,17 +506,8 @@ ngx_kqueue_process_events(ngx_cycle_t *c ngx_queue_t *queue; struct timespec ts, *tp; - if (ngx_threaded) { - if (ngx_kqueue_process_changes(cycle, 0) == NGX_ERROR) { - return NGX_ERROR; - } - - n = 0; - - } else { - n = (int) nchanges; - nchanges = 0; - } + n = (int) nchanges; + nchanges = 0; if (timer == NGX_TIMER_INFINITE) { tp = NULL; @@ -707,49 +673,6 @@ ngx_kqueue_process_events(ngx_cycle_t *c } -static ngx_int_t -ngx_kqueue_process_changes(ngx_cycle_t *cycle, ngx_uint_t try) -{ - int n; - ngx_int_t rc; - ngx_err_t err; - struct timespec ts; - struct kevent *changes; - - if (nchanges == 0) { - return NGX_OK; - } - - changes = change_list; - if (change_list == change_list0) { - change_list = change_list1; - } else { - change_list = change_list0; - } - - n = (int) nchanges; - nchanges = 0; - - ts.tv_sec = 0; - ts.tv_nsec = 0; - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, - "kevent changes: %d", n); - - if (kevent(ngx_kqueue, changes, n, NULL, 0, &ts) == -1) { - err = ngx_errno; - ngx_log_error((err == NGX_EINTR) ? 
NGX_LOG_INFO : NGX_LOG_ALERT, - cycle->log, err, "kevent() failed"); - rc = NGX_ERROR; - - } else { - rc = NGX_OK; - } - - return rc; -} - - static ngx_inline void ngx_kqueue_dump_event(ngx_log_t *log, struct kevent *kev) { diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_poll_module.c --- a/src/event/modules/ngx_poll_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_poll_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -40,7 +40,6 @@ ngx_event_module_t ngx_poll_module_ctx NULL, /* add an connection */ NULL, /* delete an connection */ NULL, /* trigger a notify */ - NULL, /* process the changes */ ngx_poll_process_events, /* process the events */ ngx_poll_init, /* init the events */ ngx_poll_done /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_rtsig_module.c --- a/src/event/modules/ngx_rtsig_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_rtsig_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -131,7 +131,6 @@ ngx_event_module_t ngx_rtsig_module_ctx ngx_rtsig_add_connection, /* add an connection */ ngx_rtsig_del_connection, /* delete an connection */ NULL, /* trigger a notify */ - NULL, /* process the changes */ ngx_rtsig_process_events, /* process the events */ ngx_rtsig_init, /* init the events */ ngx_rtsig_done, /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_select_module.c --- a/src/event/modules/ngx_select_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_select_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -48,7 +48,6 @@ ngx_event_module_t ngx_select_module_ct NULL, /* add an connection */ NULL, /* delete an connection */ NULL, /* trigger a notify */ - NULL, /* process the changes */ ngx_select_process_events, /* process the events */ ngx_select_init, /* init the events */ ngx_select_done /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/modules/ngx_win32_select_module.c --- a/src/event/modules/ngx_win32_select_module.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/modules/ngx_win32_select_module.c Fri Mar 20 06:43:19 2015 +0300 @@ -49,7 +49,6 @@ ngx_event_module_t ngx_select_module_ct NULL, /* add an connection */ NULL, /* delete an connection */ NULL, /* trigger a notify */ - NULL, /* process the changes */ ngx_select_process_events, /* process the events */ ngx_select_init, /* init the events */ ngx_select_done /* done the events */ diff -r e284f3ff6831 -r 4652f8f26b12 src/event/ngx_event.c --- a/src/event/ngx_event.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/ngx_event.c Fri Mar 20 06:43:19 2015 +0300 @@ -178,7 +178,7 @@ ngx_event_module_t ngx_event_core_modul ngx_event_core_create_conf, /* create configuration */ ngx_event_core_init_conf, /* init configuration */ - { NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL } + { NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL } }; diff -r e284f3ff6831 -r 4652f8f26b12 src/event/ngx_event.h --- a/src/event/ngx_event.h Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/ngx_event.h Fri Mar 20 06:43:19 2015 +0300 @@ -202,7 +202,6 @@ typedef struct { ngx_int_t (*notify)(ngx_event_handler_pt handler); - ngx_int_t (*process_changes)(ngx_cycle_t *cycle, ngx_uint_t nowait); ngx_int_t (*process_events)(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags); @@ -415,7 +414,6 @@ extern ngx_event_actions_t ngx_event_a #endif -#define ngx_process_changes ngx_event_actions.process_changes #define ngx_process_events ngx_event_actions.process_events #define ngx_done_events 
ngx_event_actions.done diff -r e284f3ff6831 -r 4652f8f26b12 src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/os/unix/ngx_process_cycle.c Fri Mar 20 06:43:19 2015 +0300 @@ -30,7 +30,6 @@ static void ngx_cache_loader_process_han ngx_uint_t ngx_process; ngx_pid_t ngx_pid; -ngx_uint_t ngx_threaded; sig_atomic_t ngx_reap; sig_atomic_t ngx_sigio; diff -r e284f3ff6831 -r 4652f8f26b12 src/os/unix/ngx_process_cycle.h --- a/src/os/unix/ngx_process_cycle.h Fri Mar 20 06:43:19 2015 +0300 +++ b/src/os/unix/ngx_process_cycle.h Fri Mar 20 06:43:19 2015 +0300 @@ -43,7 +43,6 @@ extern ngx_pid_t ngx_pid; extern ngx_pid_t ngx_new_binary; extern ngx_uint_t ngx_inherited; From ru at nginx.com Fri Mar 20 03:46:44 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 20 Mar 2015 03:46:44 +0000 Subject: [nginx] Removed ngx_connection_t.lock. Message-ID: details: http://hg.nginx.org/nginx/rev/c8acea7c7041 branches: changeset: 6031:c8acea7c7041 user: Ruslan Ermilov date: Fri Mar 20 06:43:19 2015 +0300 description: Removed ngx_connection_t.lock. diffstat: src/core/ngx_connection.c | 12 ------------ src/core/ngx_connection.h | 4 ---- src/event/ngx_event.c | 4 ---- src/event/ngx_event_connect.h | 4 ---- src/http/ngx_http_upstream.c | 3 --- 5 files changed, 0 insertions(+), 27 deletions(-) diffs (77 lines): diff -r 4652f8f26b12 -r c8acea7c7041 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/core/ngx_connection.c Fri Mar 20 06:43:19 2015 +0300 @@ -943,18 +943,6 @@ ngx_close_connection(ngx_connection_t *c } } -#if (NGX_OLD_THREADS) - - /* - * we have to clean the connection information before the closing - * because another thread may reopen the same file descriptor - * before we clean the connection - */ - - ngx_unlock(&c->lock); - -#endif - if (c->read->posted) { ngx_delete_posted_event(c->read); } diff -r 4652f8f26b12 -r c8acea7c7041 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Fri Mar 20 06:43:19 2015 +0300 +++ b/src/core/ngx_connection.h Fri Mar 20 06:43:19 2015 +0300 @@ -187,10 +187,6 @@ struct ngx_connection_s { #if (NGX_THREADS) ngx_thread_task_t *sendfile_task; #endif - -#if (NGX_OLD_THREADS) - ngx_atomic_t lock; -#endif }; diff -r 4652f8f26b12 -r c8acea7c7041 src/event/ngx_event.c --- a/src/event/ngx_event.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/ngx_event.c Fri Mar 20 06:43:19 2015 +0300 @@ -721,10 +721,6 @@ ngx_event_process_init(ngx_cycle_t *cycl c[i].fd = (ngx_socket_t) -1; next = &c[i]; - -#if (NGX_OLD_THREADS) - c[i].lock = 0; -#endif } while (i); cycle->free_connections = next; diff -r 4652f8f26b12 -r c8acea7c7041 src/event/ngx_event_connect.h --- a/src/event/ngx_event_connect.h Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/ngx_event_connect.h Fri Mar 20 06:43:19 2015 +0300 @@ -53,10 +53,6 @@ struct ngx_peer_connection_s { ngx_event_save_peer_session_pt save_session; #endif -#if (NGX_OLD_THREADS) - ngx_atomic_t *lock; -#endif - ngx_addr_t *local; int rcvbuf; diff -r 4652f8f26b12 -r c8acea7c7041 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Mar 20 06:43:19 2015 +0300 +++ b/src/http/ngx_http_upstream.c Fri Mar 20 06:43:19 2015 +0300 @@ -446,9 +446,6 @@ ngx_http_upstream_create(ngx_http_reques u->peer.log = r->connection->log; u->peer.log_error = NGX_ERROR_ERR; -#if (NGX_OLD_THREADS) - u->peer.lock = &r->connection->lock; -#endif #if (NGX_HTTP_CACHE) r->cache = NULL; From ru at nginx.com Fri Mar 20 03:46:47 2015 From: ru at nginx.com (Ruslan Ermilov) 
Date: Fri, 20 Mar 2015 03:46:47 +0000 Subject: [nginx] Removed busy locks. Message-ID: details: http://hg.nginx.org/nginx/rev/ac7c7241ed8c branches: changeset: 6032:ac7c7241ed8c user: Ruslan Ermilov date: Fri Mar 20 06:45:32 2015 +0300 description: Removed busy locks. diffstat: auto/sources | 8 +- src/event/ngx_event.h | 9 - src/event/ngx_event_busy_lock.c | 286 ------------------------------------- src/event/ngx_event_busy_lock.h | 65 -------- src/event/ngx_event_mutex.c | 70 --------- src/http/ngx_http.h | 1 - src/http/ngx_http_busy_lock.c | 307 ---------------------------------------- src/http/ngx_http_busy_lock.h | 54 ------- src/os/unix/ngx_thread.h | 3 - 9 files changed, 1 insertions(+), 802 deletions(-) diffs (truncated from 889 to 300 lines): diff -r c8acea7c7041 -r ac7c7241ed8c auto/sources --- a/auto/sources Fri Mar 20 06:43:19 2015 +0300 +++ b/auto/sources Fri Mar 20 06:45:32 2015 +0300 @@ -92,14 +92,12 @@ EVENT_INCS="src/event src/event/modules" EVENT_DEPS="src/event/ngx_event.h \ src/event/ngx_event_timer.h \ src/event/ngx_event_posted.h \ - src/event/ngx_event_busy_lock.h \ src/event/ngx_event_connect.h \ src/event/ngx_event_pipe.h" EVENT_SRCS="src/event/ngx_event.c \ src/event/ngx_event_timer.c \ src/event/ngx_event_posted.c \ - src/event/ngx_event_busy_lock.c \ src/event/ngx_event_accept.c \ src/event/ngx_event_connect.c \ src/event/ngx_event_pipe.c" @@ -297,8 +295,7 @@ HTTP_DEPS="src/http/ngx_http.h \ src/http/ngx_http_variables.h \ src/http/ngx_http_script.h \ src/http/ngx_http_upstream.h \ - src/http/ngx_http_upstream_round_robin.h \ - src/http/ngx_http_busy_lock.h" + src/http/ngx_http_upstream_round_robin.h" HTTP_SRCS="src/http/ngx_http.c \ src/http/ngx_http_core_module.c \ @@ -322,9 +319,6 @@ HTTP_SRCS="src/http/ngx_http.c \ src/http/modules/ngx_http_headers_filter_module.c \ src/http/modules/ngx_http_not_modified_filter_module.c" -# STUB -HTTP_SRCS="$HTTP_SRCS src/http/ngx_http_busy_lock.c" - HTTP_POSTPONE_FILTER_SRCS=src/http/ngx_http_postpone_filter_module.c HTTP_FILE_CACHE_SRCS=src/http/ngx_http_file_cache.c diff -r c8acea7c7041 -r ac7c7241ed8c src/event/ngx_event.h --- a/src/event/ngx_event.h Fri Mar 20 06:43:19 2015 +0300 +++ b/src/event/ngx_event.h Fri Mar 20 06:45:32 2015 +0300 @@ -27,14 +27,6 @@ typedef struct { #endif -typedef struct { - ngx_uint_t lock; - - ngx_event_t *events; - ngx_event_t *last; -} ngx_event_mutex_t; - - struct ngx_event_s { void *data; @@ -533,7 +525,6 @@ ngx_int_t ngx_send_lowat(ngx_connection_ #include #include -#include #if (NGX_WIN32) #include diff -r c8acea7c7041 -r ac7c7241ed8c src/event/ngx_event_busy_lock.c --- a/src/event/ngx_event_busy_lock.c Fri Mar 20 06:43:19 2015 +0300 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,286 +0,0 @@ - -/* - * Copyright (C) Igor Sysoev - * Copyright (C) Nginx, Inc. 
- */ - - -#include -#include -#include - - -static ngx_int_t ngx_event_busy_lock_look_cacheable(ngx_event_busy_lock_t *bl, - ngx_event_busy_lock_ctx_t *ctx); -static void ngx_event_busy_lock_handler(ngx_event_t *ev); -static void ngx_event_busy_lock_posted_handler(ngx_event_t *ev); - - -/* - * NGX_OK: the busy lock is held - * NGX_AGAIN: the all busy locks are held but we will wait the specified time - * NGX_BUSY: ctx->timer == 0: there are many the busy locks - * ctx->timer != 0: there are many the waiting locks - */ - -ngx_int_t -ngx_event_busy_lock(ngx_event_busy_lock_t *bl, ngx_event_busy_lock_ctx_t *ctx) -{ - ngx_int_t rc; - - ngx_mutex_lock(bl->mutex); - - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ctx->event->log, 0, - "event busy lock: b:%d mb:%d", - bl->busy, bl->max_busy); - - if (bl->busy < bl->max_busy) { - bl->busy++; - - rc = NGX_OK; - - } else if (ctx->timer && bl->waiting < bl->max_waiting) { - bl->waiting++; - ngx_add_timer(ctx->event, ctx->timer); - ctx->event->handler = ngx_event_busy_lock_handler; - - if (bl->events) { - bl->last->next = ctx; - - } else { - bl->events = ctx; - } - - bl->last = ctx; - - rc = NGX_AGAIN; - - } else { - rc = NGX_BUSY; - } - - ngx_mutex_unlock(bl->mutex); - - return rc; -} - - -ngx_int_t -ngx_event_busy_lock_cacheable(ngx_event_busy_lock_t *bl, - ngx_event_busy_lock_ctx_t *ctx) -{ - ngx_int_t rc; - - ngx_mutex_lock(bl->mutex); - - rc = ngx_event_busy_lock_look_cacheable(bl, ctx); - - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ctx->event->log, 0, - "event busy lock: %d w:%d mw:%d", - rc, bl->waiting, bl->max_waiting); - - /* - * NGX_OK: no the same request, there is free slot and we locked it - * NGX_BUSY: no the same request and there is no free slot - * NGX_AGAIN: the same request is processing - */ - - if (rc == NGX_AGAIN) { - - if (ctx->timer && bl->waiting < bl->max_waiting) { - bl->waiting++; - ngx_add_timer(ctx->event, ctx->timer); - ctx->event->handler = ngx_event_busy_lock_handler; - - if (bl->events == NULL) { - bl->events = ctx; - } else { - bl->last->next = ctx; - } - bl->last = ctx; - - } else { - rc = NGX_BUSY; - } - } - - ngx_mutex_unlock(bl->mutex); - - return rc; -} - - -void -ngx_event_busy_unlock(ngx_event_busy_lock_t *bl, - ngx_event_busy_lock_ctx_t *ctx) -{ - ngx_event_t *ev; - ngx_event_busy_lock_ctx_t *wakeup; - - ngx_mutex_lock(bl->mutex); - - if (bl->events) { - wakeup = bl->events; - bl->events = bl->events->next; - - } else { - wakeup = NULL; - bl->busy--; - } - - /* - * MP: all ctx's and their queue must be in shared memory, - * each ctx has pid to wake up - */ - - if (wakeup == NULL) { - ngx_mutex_unlock(bl->mutex); - return; - } - - if (ctx->md5) { - for (wakeup = bl->events; wakeup; wakeup = wakeup->next) { - if (wakeup->md5 == NULL || wakeup->slot != ctx->slot) { - continue; - } - - wakeup->handler = ngx_event_busy_lock_posted_handler; - wakeup->cache_updated = 1; - - ev = wakeup->event; - - ngx_post_event(ev, &ngx_posted_events); - } - - ngx_mutex_unlock(bl->mutex); - - } else { - bl->waiting--; - - ngx_mutex_unlock(bl->mutex); - - wakeup->handler = ngx_event_busy_lock_posted_handler; - wakeup->locked = 1; - - ev = wakeup->event; - - if (ev->timer_set) { - ngx_del_timer(ev); - } - - ngx_post_event(ev, &ngx_posted_events); - } -} - - -void -ngx_event_busy_lock_cancel(ngx_event_busy_lock_t *bl, - ngx_event_busy_lock_ctx_t *ctx) -{ - ngx_event_busy_lock_ctx_t *c, *p; - - ngx_mutex_lock(bl->mutex); - - bl->waiting--; - - if (ctx == bl->events) { - bl->events = ctx->next; - - } else { - p = bl->events; - for (c = 
bl->events->next; c; c = c->next) { - if (c == ctx) { - p->next = ctx->next; - break; - } - p = c; - } - } - - ngx_mutex_unlock(bl->mutex); -} - - -static ngx_int_t -ngx_event_busy_lock_look_cacheable(ngx_event_busy_lock_t *bl, - ngx_event_busy_lock_ctx_t *ctx) -{ - ngx_int_t free; - ngx_uint_t i, bit, cacheable, mask; - - bit = 0; - cacheable = 0; - free = -1; - -#if (NGX_SUPPRESS_WARN) - mask = 0; -#endif - - for (i = 0; i < bl->max_busy; i++) { - - if ((bit & 7) == 0) { - mask = bl->md5_mask[i / 8]; - } - - if (mask & 1) { - if (ngx_memcmp(&bl->md5[i * 16], ctx->md5, 16) == 0) { - ctx->waiting = 1; - ctx->slot = i; - return NGX_AGAIN; - } - cacheable++; - From vbart at nginx.com Fri Mar 20 16:29:07 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 20 Mar 2015 16:29:07 +0000 Subject: [nginx] Core: added cyclic memory buffer support for error_log. Message-ID: details: http://hg.nginx.org/nginx/rev/8e66a83d16ae branches: changeset: 6033:8e66a83d16ae user: Valentin Bartenev date: Thu Mar 19 19:29:43 2015 +0300 description: Core: added cyclic memory buffer support for error_log. Example of usage: error_log memory:16m debug; This allows to configure debug logging with minimum impact on performance. It's especially useful when rare crashes are experienced under high load. The log can be extracted from a coredump using the following gdb script: set $log = ngx_cycle->log while $log->writer != ngx_log_memory_writer set $log = $log->next end set $buf = (ngx_log_memory_buf_t *) $log->wdata dump binary memory debug_log.txt $buf->start $buf->end diffstat: src/core/ngx_log.c | 120 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 120 insertions(+), 0 deletions(-) diffs (141 lines): diff -r ac7c7241ed8c -r 8e66a83d16ae src/core/ngx_log.c --- a/src/core/ngx_log.c Fri Mar 20 06:45:32 2015 +0300 +++ b/src/core/ngx_log.c Thu Mar 19 19:29:43 2015 +0300 @@ -14,6 +14,23 @@ static char *ngx_log_set_levels(ngx_conf static void ngx_log_insert(ngx_log_t *log, ngx_log_t *new_log); +#if (NGX_DEBUG) + +static void ngx_log_memory_writer(ngx_log_t *log, ngx_uint_t level, + u_char *buf, size_t len); +static void ngx_log_memory_cleanup(void *data); + + +typedef struct { + u_char *start; + u_char *end; + u_char *pos; + ngx_atomic_t written; +} ngx_log_memory_buf_t; + +#endif + + static ngx_command_t ngx_errlog_commands[] = { {ngx_string("error_log"), @@ -568,6 +585,64 @@ ngx_log_set_log(ngx_conf_t *cf, ngx_log_ return NGX_CONF_ERROR; } + } else if (ngx_strncmp(value[1].data, "memory:", 7) == 0) { + +#if (NGX_DEBUG) + size_t size, needed; + ngx_pool_cleanup_t *cln; + ngx_log_memory_buf_t *buf; + + value[1].len -= 7; + value[1].data += 7; + + needed = sizeof("MEMLOG :" NGX_LINEFEED) + + cf->conf_file->file.name.len + + NGX_SIZE_T_LEN + + NGX_INT_T_LEN + + NGX_MAX_ERROR_STR; + + size = ngx_parse_size(&value[1]); + + if (size == (size_t) NGX_ERROR || size < needed) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid buffer size \"%V\"", &value[1]); + return NGX_CONF_ERROR; + } + + buf = ngx_palloc(cf->pool, sizeof(ngx_log_memory_buf_t)); + if (buf == NULL) { + return NGX_CONF_ERROR; + } + + buf->start = ngx_pnalloc(cf->pool, size); + if (buf->start == NULL) { + return NGX_CONF_ERROR; + } + + buf->end = buf->start + size; + + buf->pos = ngx_slprintf(buf->start, buf->end, "MEMLOG %uz %V:%ui%N", + size, &cf->conf_file->file.name, + cf->conf_file->line); + + ngx_memset(buf->pos, ' ', buf->end - buf->pos); + + cln = ngx_pool_cleanup_add(cf->pool, 0); + if (cln == NULL) { + return 
NGX_CONF_ERROR; + } + + cln->data = new_log; + cln->handler = ngx_log_memory_cleanup; + + new_log->writer = ngx_log_memory_writer; + new_log->wdata = buf; + +#else + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "nginx was built without debug support"); + return NGX_CONF_ERROR; +#endif } else if (ngx_strncmp(value[1].data, "syslog:", 7) == 0) { peer = ngx_pcalloc(cf->pool, sizeof(ngx_syslog_peer_t)); @@ -633,3 +708,48 @@ ngx_log_insert(ngx_log_t *log, ngx_log_t log->next = new_log; } + + +#if (NGX_DEBUG) + +static void +ngx_log_memory_writer(ngx_log_t *log, ngx_uint_t level, u_char *buf, + size_t len) +{ + u_char *p; + size_t avail, written; + ngx_log_memory_buf_t *mem; + + mem = log->wdata; + + if (mem == NULL) { + return; + } + + written = ngx_atomic_fetch_add(&mem->written, len); + + p = mem->pos + written % (mem->end - mem->pos); + + avail = mem->end - p; + + if (avail >= len) { + ngx_memcpy(p, buf, len); + + } else { + ngx_memcpy(p, buf, avail); + ngx_memcpy(mem->pos, buf + avail, len - avail); + } +} + + +static void +ngx_log_memory_cleanup(void *data) +{ + ngx_log_t *log = data; + + ngx_log_debug0(NGX_LOG_DEBUG_CORE, log, 0, "destroy memory log buffer"); + + log->wdata = NULL; +} + +#endif From mdounin at mdounin.ru Sun Mar 22 23:59:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 23:59:49 +0000 Subject: [nginx] SSL: clear protocol options. Message-ID: details: http://hg.nginx.org/nginx/rev/3e847964ab55 branches: changeset: 6034:3e847964ab55 user: Maxim Dounin date: Mon Mar 23 02:42:32 2015 +0300 description: SSL: clear protocol options. LibreSSL 2.1.1+ started to set SSL_OP_NO_SSLv3 option by default on new contexts. This makes sure to clear it to make it possible to use SSLv3 with LibreSSL if enabled in nginx config. Prodded by Kuramoto Eiji. diffstat: src/event/ngx_event_openssl.c | 8 ++++++++ 1 files changed, 8 insertions(+), 0 deletions(-) diffs (30 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -249,6 +249,12 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ SSL_CTX_set_options(ssl->ctx, SSL_OP_SINGLE_DH_USE); +#ifdef SSL_CTRL_CLEAR_OPTIONS + /* only in 0.9.8m+ */ + SSL_CTX_clear_options(ssl->ctx, + SSL_OP_NO_SSLv2|SSL_OP_NO_SSLv3|SSL_OP_NO_TLSv1); +#endif + if (!(protocols & NGX_SSL_SSLv2)) { SSL_CTX_set_options(ssl->ctx, SSL_OP_NO_SSLv2); } @@ -259,11 +265,13 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ SSL_CTX_set_options(ssl->ctx, SSL_OP_NO_TLSv1); } #ifdef SSL_OP_NO_TLSv1_1 + SSL_CTX_clear_options(ssl->ctx, SSL_OP_NO_TLSv1_1); if (!(protocols & NGX_SSL_TLSv1_1)) { SSL_CTX_set_options(ssl->ctx, SSL_OP_NO_TLSv1_1); } #endif #ifdef SSL_OP_NO_TLSv1_2 + SSL_CTX_clear_options(ssl->ctx, SSL_OP_NO_TLSv1_2); if (!(protocols & NGX_SSL_TLSv1_2)) { SSL_CTX_set_options(ssl->ctx, SSL_OP_NO_TLSv1_2); } From mdounin at mdounin.ru Sun Mar 22 23:59:52 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 23:59:52 +0000 Subject: [nginx] SSL: avoid SSL_CTX_set_tmp_rsa_callback() call with Libr... Message-ID: details: http://hg.nginx.org/nginx/rev/a84267233877 branches: changeset: 6035:a84267233877 user: Maxim Dounin date: Mon Mar 23 02:42:34 2015 +0300 description: SSL: avoid SSL_CTX_set_tmp_rsa_callback() call with LibreSSL. LibreSSL removed support for export ciphers and a call to SSL_CTX_set_tmp_rsa_callback() results in an error left in the error queue. 
This caused alerts "ignoring stale global SSL error (...called a function you should not call) while SSL handshaking" on a first connection in each worker process. diffstat: src/http/modules/ngx_http_ssl_module.c | 2 ++ src/mail/ngx_mail_ssl_module.c | 2 ++ 2 files changed, 4 insertions(+), 0 deletions(-) diffs (27 lines): diff --git a/src/http/modules/ngx_http_ssl_module.c b/src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -715,8 +715,10 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); } +#ifndef LIBRESSL_VERSION_NUMBER /* a temporary 512-bit RSA key is required for export versions of MSIE */ SSL_CTX_set_tmp_rsa_callback(conf->ssl.ctx, ngx_ssl_rsa512_key_callback); +#endif if (ngx_ssl_dhparam(cf, &conf->ssl, &conf->dhparam) != NGX_OK) { return NGX_CONF_ERROR; diff --git a/src/mail/ngx_mail_ssl_module.c b/src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c +++ b/src/mail/ngx_mail_ssl_module.c @@ -421,7 +421,9 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, SSL_CTX_set_options(conf->ssl.ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); } +#ifndef LIBRESSL_VERSION_NUMBER SSL_CTX_set_tmp_rsa_callback(conf->ssl.ctx, ngx_ssl_rsa512_key_callback); +#endif if (ngx_ssl_dhparam(cf, &conf->ssl, &conf->dhparam) != NGX_OK) { return NGX_CONF_ERROR; From mdounin at mdounin.ru Sun Mar 22 23:59:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 22 Mar 2015 23:59:55 +0000 Subject: [nginx] SSL: use of SSL_MODE_NO_AUTO_CHAIN. Message-ID: details: http://hg.nginx.org/nginx/rev/4e3f87c02cb4 branches: changeset: 6036:4e3f87c02cb4 user: Maxim Dounin date: Mon Mar 23 02:42:35 2015 +0300 description: SSL: use of SSL_MODE_NO_AUTO_CHAIN. The SSL_MODE_NO_AUTO_CHAIN mode prevents OpenSSL from automatically building a certificate chain on the fly if there is no certificate chain explicitly provided. Before this change, certificates provided via the ssl_client_certificate and ssl_trusted_certificate directives were used by OpenSSL to automatically build certificate chains, resulting in unexpected (and in some cases unneeded) chains being sent to clients. diffstat: src/event/ngx_event_openssl.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -285,6 +285,10 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ SSL_CTX_set_mode(ssl->ctx, SSL_MODE_RELEASE_BUFFERS); #endif +#ifdef SSL_MODE_NO_AUTO_CHAIN + SSL_CTX_set_mode(ssl->ctx, SSL_MODE_NO_AUTO_CHAIN); +#endif + SSL_CTX_set_read_ahead(ssl->ctx, 1); SSL_CTX_set_info_callback(ssl->ctx, ngx_ssl_info_callback); From mdounin at mdounin.ru Mon Mar 23 00:00:02 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 00:00:02 +0000 Subject: [nginx] Updated OpenSSL used for win32 builds. Message-ID: details: http://hg.nginx.org/nginx/rev/1a9e25b3f8d1 branches: changeset: 6037:1a9e25b3f8d1 user: Maxim Dounin date: Mon Mar 23 02:44:41 2015 +0300 description: Updated OpenSSL used for win32 builds. 
diffstat: misc/GNUmakefile | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -5,7 +5,7 @@ NGINX = nginx-$(VER) TEMP = tmp OBJS = objs.msvc8 -OPENSSL = openssl-1.0.1l +OPENSSL = openssl-1.0.1m ZLIB = zlib-1.2.8 PCRE = pcre-8.35 From rian at thelig.ht Mon Mar 23 00:59:24 2015 From: rian at thelig.ht (rian at thelig.ht) Date: Sun, 22 Mar 2015 17:59:24 -0700 Subject: [PATCH] Use port in Host header for redirects Message-ID: <0ebbf4eb72bce4ba3e99336f9365d84e@thelig.ht> Hi there, Just to explain the motivation for this patch. I run nginx behind a port forwarding tcp proxy at my home under a different port from the one nginx is listening on. When nginx generates redirects for URIs without trailing slashes, the "Location" header is incorrect. For instance, if I navigate to "https://myhome.com:2763/dir" it would send me to "https://myhome.com/dir/". The documentation mentions a configuration option called "server_name_in_redirect" but that only applies to the using host name without the trailing port from the "Host" header to generate the "Location" header. This is due to the combination of ngx_http_validate_host() and logic in ngx_http_header_filter(). I'm not sure how intentional the preexisting logic was but it seemed incomplete. Going further, if nginx is serving http behind a transparent SSL proxy then it will generate incorrect "Location" header in that case as well. I recommend a general overhaul of "Location" header generation logic and/or the server_name_in_redirect / port_in_redirect options. Intuitively, it seems the least brittle option is to not modify the relative URIs in the "Location" header at all but there might be an important reason for it that I'm unaware of. Thanks for reading! Rian HG changeset patch # User Rian Hunter # Date 1427071183 25200 # Sun Mar 22 17:39:43 2015 -0700 # Node ID 389f04c80ffa30078529e73504dbea533d079894 # Parent 8e66a83d16ae07fa1f3a5e93e60749da63175653 Use port in Host header for redirects Before this change, if server_name_in_redirect was false, nginx would use the hostname from the Host header for constructing a URI for the Location header during a redirect. The problem with this was that it would ignore the trailing ":port" in the Host header. This change makes it so nginx respects the trailing ":port" from the Host header if it exists when constructing Location headers during a redirect. 
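For reference, a minimal configuration sketch of the setup described above (the port numbers, host name and paths are assumptions chosen to mirror the report, not part of the patch): a TCP forwarder exposes https://myhome.com:2763/ to clients and passes the connections to nginx listening on port 443.

    server {
        listen 443 ssl;
        server_name myhome.com;

        # placeholder certificate paths
        ssl_certificate     /etc/nginx/myhome.com.crt;
        ssl_certificate_key /etc/nginx/myhome.com.key;

        root /var/www;

        # Without the patch, a request for "https://myhome.com:2763/dir"
        # (an existing directory, no trailing slash) gets a redirect whose
        # Location header is built from the Host header with the ":2763"
        # part dropped:
        #
        #     Location: https://myhome.com/dir/
        #
        # so the client ends up on the wrong port.  With the patch applied,
        # the ":2763" from the Host header is preserved in the generated
        # Location header.
    }
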
diff -r 8e66a83d16ae -r 389f04c80ffa src/http/ngx_http_header_filter_module.c --- a/src/http/ngx_http_header_filter_module.c Thu Mar 19 19:29:43 2015 +0300 +++ b/src/http/ngx_http_header_filter_module.c Sun Mar 22 17:39:43 2015 -0700 @@ -320,10 +320,10 @@ if (clcf->server_name_in_redirect) { cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module); host = cscf->server_name; - + } else if (r->headers_in.host && r->headers_in.host->value.len) { + host = r->headers_in.host->value; } else if (r->headers_in.server.len) { host = r->headers_in.server; - } else { host.len = NGX_SOCKADDR_STRLEN; host.data = addr; @@ -333,44 +333,50 @@ } } - switch (c->local_sockaddr->sa_family) { + len += sizeof("Location: https://") - 1 + + host.len + + r->headers_out.location->value.len + 2; + + /* only add port if there isn't one in the host str */ + if (!ngx_strnstr(host.data, ":", host.len)) { + switch (c->local_sockaddr->sa_family) { #if (NGX_HAVE_INET6) - case AF_INET6: + case AF_INET6: sin6 = (struct sockaddr_in6 *) c->local_sockaddr; port = ntohs(sin6->sin6_port); break; #endif #if (NGX_HAVE_UNIX_DOMAIN) - case AF_UNIX: + case AF_UNIX: port = 0; break; #endif - default: /* AF_INET */ + default: /* AF_INET */ sin = (struct sockaddr_in *) c->local_sockaddr; port = ntohs(sin->sin_port); break; - } + } - len += sizeof("Location: https://") - 1 - + host.len - + r->headers_out.location->value.len + 2; - - if (clcf->port_in_redirect) { + if (clcf->port_in_redirect) { #if (NGX_HTTP_SSL) if (c->ssl) - port = (port == 443) ? 0 : port; + port = (port == 443) ? 0 : port; else #endif - port = (port == 80) ? 0 : port; + port = (port == 80) ? 0 : port; - } else { + } else { port = 0; + } + + if (port) { + len += sizeof(":65535") - 1; + } } - - if (port) { - len += sizeof(":65535") - 1; + else { + port = 0; } } else { From ru at nginx.com Mon Mar 23 10:53:46 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 23 Mar 2015 10:53:46 +0000 Subject: [nginx] Removed stub implementation of win32 mutexes. Message-ID: details: http://hg.nginx.org/nginx/rev/94ce52db3367 branches: changeset: 6038:94ce52db3367 user: Ruslan Ermilov date: Mon Mar 23 13:52:47 2015 +0300 description: Removed stub implementation of win32 mutexes. 
diffstat: src/core/ngx_connection.c | 10 ------- src/http/ngx_http_upstream_round_robin.h | 2 - src/os/win32/ngx_thread.c | 41 -------------------------------- src/os/win32/ngx_thread.h | 21 ---------------- 4 files changed, 0 insertions(+), 74 deletions(-) diffs (146 lines): diff -r 1a9e25b3f8d1 -r 94ce52db3367 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Mon Mar 23 02:44:41 2015 +0300 +++ b/src/core/ngx_connection.c Mon Mar 23 13:52:47 2015 +0300 @@ -835,8 +835,6 @@ ngx_get_connection(ngx_socket_t s, ngx_l return NULL; } - /* ngx_mutex_lock */ - c = ngx_cycle->free_connections; if (c == NULL) { @@ -849,16 +847,12 @@ ngx_get_connection(ngx_socket_t s, ngx_l "%ui worker_connections are not enough", ngx_cycle->connection_n); - /* ngx_mutex_unlock */ - return NULL; } ngx_cycle->free_connections = c->data; ngx_cycle->free_connection_n--; - /* ngx_mutex_unlock */ - if (ngx_cycle->files) { ngx_cycle->files[s] = c; } @@ -896,14 +890,10 @@ ngx_get_connection(ngx_socket_t s, ngx_l void ngx_free_connection(ngx_connection_t *c) { - /* ngx_mutex_lock */ - c->data = ngx_cycle->free_connections; ngx_cycle->free_connections = c; ngx_cycle->free_connection_n++; - /* ngx_mutex_unlock */ - if (ngx_cycle->files) { ngx_cycle->files[c->fd] = NULL; } diff -r 1a9e25b3f8d1 -r 94ce52db3367 src/http/ngx_http_upstream_round_robin.h --- a/src/http/ngx_http_upstream_round_robin.h Mon Mar 23 02:44:41 2015 +0300 +++ b/src/http/ngx_http_upstream_round_robin.h Mon Mar 23 13:52:47 2015 +0300 @@ -44,8 +44,6 @@ typedef struct ngx_http_upstream_rr_peer struct ngx_http_upstream_rr_peers_s { ngx_uint_t number; - /* ngx_mutex_t *mutex; */ - ngx_uint_t total_weight; unsigned single:1; diff -r 1a9e25b3f8d1 -r 94ce52db3367 src/os/win32/ngx_thread.c --- a/src/os/win32/ngx_thread.c Mon Mar 23 02:44:41 2015 +0300 +++ b/src/os/win32/ngx_thread.c Mon Mar 23 13:52:47 2015 +0300 @@ -67,44 +67,3 @@ ngx_thread_set_tls(ngx_tls_key_t *key, v return 0; } - - -ngx_mutex_t * -ngx_mutex_init(ngx_log_t *log, ngx_uint_t flags) -{ - ngx_mutex_t *m; - - m = ngx_alloc(sizeof(ngx_mutex_t), log); - if (m == NULL) { - return NULL; - } - - m->log = log; - - /* STUB */ - - return m; -} - - -/* STUB */ - -void -ngx_mutex_lock(ngx_mutex_t *m) { - return; -} - - - -ngx_int_t -ngx_mutex_trylock(ngx_mutex_t *m) { - return NGX_OK; -} - - -void -ngx_mutex_unlock(ngx_mutex_t *m) { - return; -} - -/**/ diff -r 1a9e25b3f8d1 -r 94ce52db3367 src/os/win32/ngx_thread.h --- a/src/os/win32/ngx_thread.h Mon Mar 23 02:44:41 2015 +0300 +++ b/src/os/win32/ngx_thread.h Mon Mar 23 13:52:47 2015 +0300 @@ -18,12 +18,6 @@ typedef DWORD ngx_tls_key_t; typedef DWORD ngx_thread_value_t; -typedef struct { - HANDLE mutex; - ngx_log_t *log; -} ngx_mutex_t; - - ngx_err_t ngx_create_thread(ngx_tid_t *tid, ngx_thread_value_t (__stdcall *func)(void *arg), void *arg, ngx_log_t *log); ngx_int_t ngx_init_threads(int n, size_t size, ngx_cycle_t *cycle); @@ -34,25 +28,10 @@ ngx_err_t ngx_thread_set_tls(ngx_tls_key #define ngx_thread_set_tls_n "TlsSetValue()" #define ngx_thread_get_tls TlsGetValue - -#define ngx_thread_volatile volatile - #define ngx_log_tid GetCurrentThreadId() #define NGX_TID_T_FMT "%ud" -ngx_mutex_t *ngx_mutex_init(ngx_log_t *log, ngx_uint_t flags); - -void ngx_mutex_lock(ngx_mutex_t *m); -ngx_int_t ngx_mutex_trylock(ngx_mutex_t *m); -void ngx_mutex_unlock(ngx_mutex_t *m); - - -/* STUB */ -#define NGX_MUTEX_LIGHT 0 -/**/ - - extern ngx_int_t ngx_threads_n; From vbart at nginx.com Mon Mar 23 14:54:25 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 23 
Mar 2015 14:54:25 +0000 Subject: [nginx] Thread pools: replaced completed tasks queue mutex with ... Message-ID: details: http://hg.nginx.org/nginx/rev/fc36690e7f44 branches: changeset: 6039:fc36690e7f44 user: Valentin Bartenev date: Mon Mar 23 17:51:21 2015 +0300 description: Thread pools: replaced completed tasks queue mutex with spinlock. diffstat: src/core/ngx_thread_pool.c | 21 +++++---------------- 1 files changed, 5 insertions(+), 16 deletions(-) diffs (54 lines): diff -r 94ce52db3367 -r fc36690e7f44 src/core/ngx_thread_pool.c --- a/src/core/ngx_thread_pool.c Mon Mar 23 13:52:47 2015 +0300 +++ b/src/core/ngx_thread_pool.c Mon Mar 23 17:51:21 2015 +0300 @@ -99,6 +99,7 @@ ngx_module_t ngx_thread_pool_module = { static ngx_str_t ngx_thread_pool_default = ngx_string("default"); static ngx_uint_t ngx_thread_pool_task_id; +static ngx_atomic_t ngx_thread_pool_done_lock; static ngx_thread_pool_queue_t ngx_thread_pool_done; @@ -329,20 +330,12 @@ ngx_thread_pool_cycle(void *data) task->next = NULL; - if (ngx_thread_mutex_lock(&ngx_thread_pool_done.mtx, tp->log) - != NGX_OK) - { - return NULL; - } + ngx_spinlock(&ngx_thread_pool_done_lock, 1, 2048); *ngx_thread_pool_done.last = task; ngx_thread_pool_done.last = &task->next; - if (ngx_thread_mutex_unlock(&ngx_thread_pool_done.mtx, tp->log) - != NGX_OK) - { - return NULL; - } + ngx_unlock(&ngx_thread_pool_done_lock); (void) ngx_notify(ngx_thread_pool_handler); } @@ -357,17 +350,13 @@ ngx_thread_pool_handler(ngx_event_t *ev) ngx_log_debug0(NGX_LOG_DEBUG_CORE, ev->log, 0, "thread pool handler"); - if (ngx_thread_mutex_lock(&ngx_thread_pool_done.mtx, ev->log) != NGX_OK) { - return; - } + ngx_spinlock(&ngx_thread_pool_done_lock, 1, 2048); task = ngx_thread_pool_done.first; ngx_thread_pool_done.first = NULL; ngx_thread_pool_done.last = &ngx_thread_pool_done.first; - if (ngx_thread_mutex_unlock(&ngx_thread_pool_done.mtx, ev->log) != NGX_OK) { - return; - } + ngx_unlock(&ngx_thread_pool_done_lock); while (task) { ngx_log_debug1(NGX_LOG_DEBUG_CORE, ev->log, 0, From vbart at nginx.com Mon Mar 23 14:54:29 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 23 Mar 2015 14:54:29 +0000 Subject: [nginx] Thread pools: keep waiting tasks mutex in ngx_thread_poo... Message-ID: details: http://hg.nginx.org/nginx/rev/adaedab1e662 branches: changeset: 6040:adaedab1e662 user: Valentin Bartenev date: Mon Mar 23 17:51:21 2015 +0300 description: Thread pools: keep waiting tasks mutex in ngx_thread_pool_t. It's not needed for completed tasks queue since the previous change. No functional changes. 
diffstat: src/core/ngx_thread_pool.c | 65 ++++++++++++++------------------------------- 1 files changed, 20 insertions(+), 45 deletions(-) diffs (174 lines): diff -r fc36690e7f44 -r adaedab1e662 src/core/ngx_thread_pool.c --- a/src/core/ngx_thread_pool.c Mon Mar 23 17:51:21 2015 +0300 +++ b/src/core/ngx_thread_pool.c Mon Mar 23 17:51:21 2015 +0300 @@ -17,13 +17,17 @@ typedef struct { typedef struct { - ngx_thread_mutex_t mtx; ngx_thread_task_t *first; ngx_thread_task_t **last; } ngx_thread_pool_queue_t; +#define ngx_thread_pool_queue_init(q) \ + (q)->first = NULL; \ + (q)->last = &(q)->first + struct ngx_thread_pool_s { + ngx_thread_mutex_t mtx; ngx_thread_pool_queue_t queue; ngx_int_t waiting; ngx_thread_cond_t cond; @@ -42,10 +46,6 @@ struct ngx_thread_pool_s { static ngx_int_t ngx_thread_pool_init(ngx_thread_pool_t *tp, ngx_log_t *log, ngx_pool_t *pool); -static ngx_int_t ngx_thread_pool_queue_init(ngx_thread_pool_queue_t *queue, - ngx_log_t *log); -static ngx_int_t ngx_thread_pool_queue_destroy(ngx_thread_pool_queue_t *queue, - ngx_log_t *log); static void ngx_thread_pool_destroy(ngx_thread_pool_t *tp); static void *ngx_thread_pool_cycle(void *data); @@ -117,12 +117,14 @@ ngx_thread_pool_init(ngx_thread_pool_t * return NGX_ERROR; } - if (ngx_thread_pool_queue_init(&tp->queue, log) != NGX_OK) { + ngx_thread_pool_queue_init(&tp->queue); + + if (ngx_thread_mutex_create(&tp->mtx, log) != NGX_OK) { return NGX_ERROR; } if (ngx_thread_cond_create(&tp->cond, log) != NGX_OK) { - (void) ngx_thread_pool_queue_destroy(&tp->queue, log); + (void) ngx_thread_mutex_destroy(&tp->mtx, log); return NGX_ERROR; } @@ -160,27 +162,6 @@ ngx_thread_pool_init(ngx_thread_pool_t * } -static ngx_int_t -ngx_thread_pool_queue_init(ngx_thread_pool_queue_t *queue, ngx_log_t *log) -{ - queue->first = NULL; - queue->last = &queue->first; - - return ngx_thread_mutex_create(&queue->mtx, log); -} - - -static ngx_int_t -ngx_thread_pool_queue_destroy(ngx_thread_pool_queue_t *queue, ngx_log_t *log) -{ -#if 0 - return ngx_thread_mutex_destroy(&queue->mtx, log); -#else - return NGX_OK; -#endif -} - - static void ngx_thread_pool_destroy(ngx_thread_pool_t *tp) { @@ -188,9 +169,9 @@ ngx_thread_pool_destroy(ngx_thread_pool_ #if 0 (void) ngx_thread_cond_destroy(&tp->cond, tp->log); -#endif - (void) ngx_thread_pool_queue_destroy(&tp->queue, tp->log); + (void) ngx_thread_mutex_destroy(&tp->mtx, tp->log); + #endif } @@ -219,12 +200,12 @@ ngx_thread_task_post(ngx_thread_pool_t * return NGX_ERROR; } - if (ngx_thread_mutex_lock(&tp->queue.mtx, tp->log) != NGX_OK) { + if (ngx_thread_mutex_lock(&tp->mtx, tp->log) != NGX_OK) { return NGX_ERROR; } if (tp->waiting >= tp->max_queue) { - (void) ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log); + (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log); ngx_log_error(NGX_LOG_ERR, tp->log, 0, "thread pool \"%V\" queue overflow: %i tasks waiting", @@ -238,7 +219,7 @@ ngx_thread_task_post(ngx_thread_pool_t * task->next = NULL; if (ngx_thread_cond_signal(&tp->cond, tp->log) != NGX_OK) { - (void) ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log); + (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log); return NGX_ERROR; } @@ -247,7 +228,7 @@ ngx_thread_task_post(ngx_thread_pool_t * tp->waiting++; - (void) ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log); + (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log); ngx_log_debug2(NGX_LOG_DEBUG_CORE, tp->log, 0, "task #%ui added to thread pool \"%V\"", @@ -287,7 +268,7 @@ ngx_thread_pool_cycle(void *data) } for ( ;; ) { - if (ngx_thread_mutex_lock(&tp->queue.mtx, tp->log) 
!= NGX_OK) { + if (ngx_thread_mutex_lock(&tp->mtx, tp->log) != NGX_OK) { return NULL; } @@ -295,10 +276,10 @@ ngx_thread_pool_cycle(void *data) tp->waiting--; while (tp->queue.first == NULL) { - if (ngx_thread_cond_wait(&tp->cond, &tp->queue.mtx, tp->log) + if (ngx_thread_cond_wait(&tp->cond, &tp->mtx, tp->log) != NGX_OK) { - (void) ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log); + (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log); return NULL; } } @@ -310,7 +291,7 @@ ngx_thread_pool_cycle(void *data) tp->queue.last = &tp->queue.first; } - if (ngx_thread_mutex_unlock(&tp->queue.mtx, tp->log) != NGX_OK) { + if (ngx_thread_mutex_unlock(&tp->mtx, tp->log) != NGX_OK) { return NULL; } @@ -578,11 +559,7 @@ ngx_thread_pool_init_worker(ngx_cycle_t return NGX_OK; } - if (ngx_thread_pool_queue_init(&ngx_thread_pool_done, cycle->log) - != NGX_OK) - { - return NGX_ERROR; - } + ngx_thread_pool_queue_init(&ngx_thread_pool_done); tpp = tcf->pools.elts; @@ -621,6 +598,4 @@ ngx_thread_pool_exit_worker(ngx_cycle_t for (i = 0; i < tcf->pools.nelts; i++) { ngx_thread_pool_destroy(tpp[i]); } - - (void) ngx_thread_pool_queue_destroy(&ngx_thread_pool_done, cycle->log); } From vbart at nginx.com Mon Mar 23 14:54:31 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 23 Mar 2015 14:54:31 +0000 Subject: [nginx] Thread pools: removed unused pointer to memory pool. Message-ID: details: http://hg.nginx.org/nginx/rev/2097cd49a158 branches: changeset: 6041:2097cd49a158 user: Valentin Bartenev date: Mon Mar 23 17:51:21 2015 +0300 description: Thread pools: removed unused pointer to memory pool. No functional changes. diffstat: src/core/ngx_thread_pool.c | 2 -- 1 files changed, 0 insertions(+), 2 deletions(-) diffs (19 lines): diff -r adaedab1e662 -r 2097cd49a158 src/core/ngx_thread_pool.c --- a/src/core/ngx_thread_pool.c Mon Mar 23 17:51:21 2015 +0300 +++ b/src/core/ngx_thread_pool.c Mon Mar 23 17:51:21 2015 +0300 @@ -33,7 +33,6 @@ struct ngx_thread_pool_s { ngx_thread_cond_t cond; ngx_log_t *log; - ngx_pool_t *pool; ngx_str_t name; ngx_uint_t threads; @@ -129,7 +128,6 @@ ngx_thread_pool_init(ngx_thread_pool_t * } tp->log = log; - tp->pool = pool; err = pthread_attr_init(&attr); if (err) { From vbart at nginx.com Mon Mar 23 14:54:34 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 23 Mar 2015 14:54:34 +0000 Subject: [nginx] Thread pools: implemented graceful exiting of threads. Message-ID: details: http://hg.nginx.org/nginx/rev/abde398f34a7 branches: changeset: 6042:abde398f34a7 user: Valentin Bartenev date: Mon Mar 23 17:51:21 2015 +0300 description: Thread pools: implemented graceful exiting of threads. 
diffstat: src/core/ngx_thread_pool.c | 37 ++++++++++++++++++++++++++++++++++--- 1 files changed, 34 insertions(+), 3 deletions(-) diffs (58 lines): diff -r 2097cd49a158 -r abde398f34a7 src/core/ngx_thread_pool.c --- a/src/core/ngx_thread_pool.c Mon Mar 23 17:51:21 2015 +0300 +++ b/src/core/ngx_thread_pool.c Mon Mar 23 17:51:21 2015 +0300 @@ -46,6 +46,7 @@ struct ngx_thread_pool_s { static ngx_int_t ngx_thread_pool_init(ngx_thread_pool_t *tp, ngx_log_t *log, ngx_pool_t *pool); static void ngx_thread_pool_destroy(ngx_thread_pool_t *tp); +static void ngx_thread_pool_exit_handler(void *data, ngx_log_t *log); static void *ngx_thread_pool_cycle(void *data); static void ngx_thread_pool_handler(ngx_event_t *ev); @@ -163,13 +164,43 @@ ngx_thread_pool_init(ngx_thread_pool_t * static void ngx_thread_pool_destroy(ngx_thread_pool_t *tp) { - /* TODO: exit threads */ + ngx_uint_t n; + ngx_thread_task_t task; + volatile ngx_uint_t lock; -#if 0 + ngx_memzero(&task, sizeof(ngx_thread_task_t)); + + task.handler = ngx_thread_pool_exit_handler; + task.ctx = (void *) &lock; + + for (n = 0; n < tp->threads; n++) { + lock = 1; + + if (ngx_thread_task_post(tp, &task) != NGX_OK) { + return; + } + + while (lock) { + ngx_sched_yield(); + } + + task.event.active = 0; + } + (void) ngx_thread_cond_destroy(&tp->cond, tp->log); (void) ngx_thread_mutex_destroy(&tp->mtx, tp->log); - #endif +} + + +static void +ngx_thread_pool_exit_handler(void *data, ngx_log_t *log) +{ + ngx_uint_t *lock = data; + + *lock = 0; + + pthread_exit(0); } From mdounin at mdounin.ru Mon Mar 23 18:10:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:10:47 +0000 Subject: [nginx] Proxy: fixed proxy_set_body with proxy_cache. Message-ID: details: http://hg.nginx.org/nginx/rev/613b14b305c7 branches: changeset: 6043:613b14b305c7 user: Maxim Dounin date: Mon Mar 23 19:28:54 2015 +0300 description: Proxy: fixed proxy_set_body with proxy_cache. If the last header evaluation resulted in an empty header, the e.skip flag was set and was not reset when we've switched to evaluation of body_values. This incorrectly resulted in body values being skipped instead of producing some correct body as set by proxy_set_body. Fix is to properly reset the e.skip flag. As the problem only appeared if the last potentially non-empty header happened to be empty, it only manifested itself if proxy_set_body was used with proxy_cache. diffstat: src/http/modules/ngx_http_proxy_module.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -1379,6 +1379,7 @@ ngx_http_proxy_create_request(ngx_http_r if (plcf->body_values) { e.ip = plcf->body_values->elts; e.pos = b->last; + e.skip = 0; while (*(uintptr_t *) e.ip) { code = *(ngx_http_script_code_pt *) e.ip; From mdounin at mdounin.ru Mon Mar 23 18:10:50 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:10:50 +0000 Subject: [nginx] Format specifier fixed for file size of buffers. Message-ID: details: http://hg.nginx.org/nginx/rev/b8926ba4d087 branches: changeset: 6044:b8926ba4d087 user: Maxim Dounin date: Mon Mar 23 19:28:54 2015 +0300 description: Format specifier fixed for file size of buffers. 
diffstat: src/event/ngx_event_pipe.c | 8 ++++---- src/http/ngx_http_request_body.c | 6 +++--- src/http/ngx_http_write_filter_module.c | 4 ++-- 3 files changed, 9 insertions(+), 9 deletions(-) diffs (90 lines): diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c --- a/src/event/ngx_event_pipe.c +++ b/src/event/ngx_event_pipe.c @@ -376,7 +376,7 @@ ngx_event_pipe_read_upstream(ngx_event_p ngx_log_debug8(NGX_LOG_DEBUG_EVENT, p->log, 0, "pipe buf busy s:%d t:%d f:%d " "%p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", (cl->buf->shadow ? 1 : 0), cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, @@ -389,7 +389,7 @@ ngx_event_pipe_read_upstream(ngx_event_p ngx_log_debug8(NGX_LOG_DEBUG_EVENT, p->log, 0, "pipe buf out s:%d t:%d f:%d " "%p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", (cl->buf->shadow ? 1 : 0), cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, @@ -402,7 +402,7 @@ ngx_event_pipe_read_upstream(ngx_event_p ngx_log_debug8(NGX_LOG_DEBUG_EVENT, p->log, 0, "pipe buf in s:%d t:%d f:%d " "%p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", (cl->buf->shadow ? 1 : 0), cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, @@ -415,7 +415,7 @@ ngx_event_pipe_read_upstream(ngx_event_p ngx_log_debug8(NGX_LOG_DEBUG_EVENT, p->log, 0, "pipe buf free s:%d t:%d f:%d " "%p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", (cl->buf->shadow ? 1 : 0), cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -936,7 +936,7 @@ ngx_http_request_body_chunked_filter(ngx ngx_log_debug7(NGX_LOG_DEBUG_EVENT, r->connection->log, 0, "http body chunked buf " - "t:%d f:%d %p, pos %p, size: %z file: %O, size: %z", + "t:%d f:%d %p, pos %p, size: %z file: %O, size: %O", cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, cl->buf->last - cl->buf->pos, @@ -1068,7 +1068,7 @@ ngx_http_request_body_save_filter(ngx_ht for (cl = rb->bufs; cl; cl = cl->next) { ngx_log_debug7(NGX_LOG_DEBUG_EVENT, r->connection->log, 0, "http body old buf t:%d f:%d %p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, cl->buf->last - cl->buf->pos, @@ -1079,7 +1079,7 @@ ngx_http_request_body_save_filter(ngx_ht for (cl = in; cl; cl = cl->next) { ngx_log_debug7(NGX_LOG_DEBUG_EVENT, r->connection->log, 0, "http body new buf t:%d f:%d %p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, cl->buf->last - cl->buf->pos, diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c +++ b/src/http/ngx_http_write_filter_module.c @@ -73,7 +73,7 @@ ngx_http_write_filter(ngx_http_request_t ngx_log_debug7(NGX_LOG_DEBUG_EVENT, c->log, 0, "write old buf t:%d f:%d %p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", cl->buf->temporary, cl->buf->in_file, cl->buf->start, cl->buf->pos, cl->buf->last - cl->buf->pos, @@ -129,7 +129,7 @@ ngx_http_write_filter(ngx_http_request_t ngx_log_debug7(NGX_LOG_DEBUG_EVENT, c->log, 0, "write new buf t:%d f:%d %p, pos %p, size: %z " - "file: %O, size: %z", + "file: %O, size: %O", cl->buf->temporary, cl->buf->in_file, cl->buf->start, 
cl->buf->pos, cl->buf->last - cl->buf->pos, From mdounin at mdounin.ru Mon Mar 23 18:10:53 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:10:53 +0000 Subject: [nginx] Output chain: zero size buf alerts in ngx_chain_writer(). Message-ID: details: http://hg.nginx.org/nginx/rev/6ab301ddf469 branches: changeset: 6045:6ab301ddf469 user: Maxim Dounin date: Mon Mar 23 20:56:58 2015 +0300 description: Output chain: zero size buf alerts in ngx_chain_writer(). Now we log a "zero size buf in chain writer" alert if we encounter a zero sized buffer in ngx_chain_writer(), and skip the buffer. diffstat: src/core/ngx_output_chain.c | 33 ++++++++++++++++++++++++++++++++- 1 files changed, 32 insertions(+), 1 deletions(-) diffs (53 lines): diff --git a/src/core/ngx_output_chain.c b/src/core/ngx_output_chain.c --- a/src/core/ngx_output_chain.c +++ b/src/core/ngx_output_chain.c @@ -663,7 +663,23 @@ ngx_chain_writer(void *data, ngx_chain_t #if 1 if (ngx_buf_size(in->buf) == 0 && !ngx_buf_special(in->buf)) { + + ngx_log_error(NGX_LOG_ALERT, ctx->pool->log, 0, + "zero size buf in chain writer " + "t:%d r:%d f:%d %p %p-%p %p %O-%O", + in->buf->temporary, + in->buf->recycled, + in->buf->in_file, + in->buf->start, + in->buf->pos, + in->buf->last, + in->buf->file, + in->buf->file_pos, + in->buf->file_last); + ngx_debug_point(); + + continue; } #endif @@ -691,9 +707,24 @@ ngx_chain_writer(void *data, ngx_chain_t #if 1 if (ngx_buf_size(cl->buf) == 0 && !ngx_buf_special(cl->buf)) { + + ngx_log_error(NGX_LOG_ALERT, ctx->pool->log, 0, + "zero size buf in chain writer " + "t:%d r:%d f:%d %p %p-%p %p %O-%O", + cl->buf->temporary, + cl->buf->recycled, + cl->buf->in_file, + cl->buf->start, + cl->buf->pos, + cl->buf->last, + cl->buf->file, + cl->buf->file_pos, + cl->buf->file_last); + ngx_debug_point(); + + continue; } - #endif size += ngx_buf_size(cl->buf); From mdounin at mdounin.ru Mon Mar 23 18:11:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:06 +0000 Subject: [nginx] Output chain: free chain links in ngx_chain_writer(). Message-ID: details: http://hg.nginx.org/nginx/rev/66176dfea01e branches: changeset: 6046:66176dfea01e user: Maxim Dounin date: Mon Mar 23 21:09:05 2015 +0300 description: Output chain: free chain links in ngx_chain_writer(). 
diffstat: src/core/ngx_output_chain.c | 16 ++++++++++++---- 1 files changed, 12 insertions(+), 4 deletions(-) diffs (39 lines): diff --git a/src/core/ngx_output_chain.c b/src/core/ngx_output_chain.c --- a/src/core/ngx_output_chain.c +++ b/src/core/ngx_output_chain.c @@ -654,7 +654,7 @@ ngx_chain_writer(void *data, ngx_chain_t ngx_chain_writer_ctx_t *ctx = data; off_t size; - ngx_chain_t *cl; + ngx_chain_t *cl, *ln, *chain; ngx_connection_t *c; c = ctx->connection; @@ -734,15 +734,23 @@ ngx_chain_writer(void *data, ngx_chain_t return NGX_OK; } - ctx->out = c->send_chain(c, ctx->out, ctx->limit); + chain = c->send_chain(c, ctx->out, ctx->limit); ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, - "chain writer out: %p", ctx->out); + "chain writer out: %p", chain); - if (ctx->out == NGX_CHAIN_ERROR) { + if (chain == NGX_CHAIN_ERROR) { return NGX_ERROR; } + for (cl = ctx->out; cl && cl != chain; /* void */) { + ln = cl; + cl = cl->next; + ngx_free_chain(ctx->pool, ln); + } + + ctx->out = chain; + if (ctx->out == NULL) { ctx->last = &ctx->out; From mdounin at mdounin.ru Mon Mar 23 18:11:18 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:18 +0000 Subject: [nginx] Request body: free chain links in ngx_http_write_request... Message-ID: details: http://hg.nginx.org/nginx/rev/e2e609f59094 branches: changeset: 6047:e2e609f59094 user: Maxim Dounin date: Mon Mar 23 21:09:12 2015 +0300 description: Request body: free chain links in ngx_http_write_request_body(). diffstat: src/http/ngx_http_request_body.c | 9 +++++++-- 1 files changed, 7 insertions(+), 2 deletions(-) diffs (27 lines): diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -415,7 +415,7 @@ static ngx_int_t ngx_http_write_request_body(ngx_http_request_t *r) { ssize_t n; - ngx_chain_t *cl; + ngx_chain_t *cl, *ln; ngx_temp_file_t *tf; ngx_http_request_body_t *rb; ngx_http_core_loc_conf_t *clcf; @@ -478,8 +478,13 @@ ngx_http_write_request_body(ngx_http_req /* mark all buffers as written */ - for (cl = rb->bufs; cl; cl = cl->next) { + for (cl = rb->bufs; cl; /* void */) { + cl->buf->pos = cl->buf->last; + + ln = cl; + cl = cl->next; + ngx_free_chain(r->pool, ln); } rb->bufs = NULL; From mdounin at mdounin.ru Mon Mar 23 18:11:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:21 +0000 Subject: [nginx] Request body: moved request body writing to save filter. Message-ID: details: http://hg.nginx.org/nginx/rev/9e231d4cecca branches: changeset: 6048:9e231d4cecca user: Maxim Dounin date: Mon Mar 23 21:09:19 2015 +0300 description: Request body: moved request body writing to save filter. 
diffstat: src/http/ngx_http_request_body.c | 22 ++++++++-------------- 1 files changed, 8 insertions(+), 14 deletions(-) diffs (38 lines): diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -277,20 +277,6 @@ ngx_http_do_read_client_request_body(ngx return rc; } - /* write to file */ - - if (ngx_http_write_request_body(r) != NGX_OK) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; - } - - /* update chains */ - - rc = ngx_http_request_body_filter(r, NULL); - - if (rc != NGX_OK) { - return rc; - } - if (rb->busy != NULL) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -1100,5 +1086,13 @@ ngx_http_request_body_save_filter(ngx_ht return NGX_HTTP_INTERNAL_SERVER_ERROR; } + if (rb->rest > 0 + && rb->buf && rb->buf->last == rb->buf->end) + { + if (ngx_http_write_request_body(r) != NGX_OK) { + return NGX_HTTP_INTERNAL_SERVER_ERROR; + } + } + return NGX_OK; } From mdounin at mdounin.ru Mon Mar 23 18:11:23 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:23 +0000 Subject: [nginx] Request body: filters support. Message-ID: details: http://hg.nginx.org/nginx/rev/42d9beeb22db branches: changeset: 6049:42d9beeb22db user: Maxim Dounin date: Mon Mar 23 21:09:19 2015 +0300 description: Request body: filters support. diffstat: src/http/ngx_http.c | 5 +++-- src/http/ngx_http.h | 1 + src/http/ngx_http_core_module.c | 12 +++++++++++- src/http/ngx_http_core_module.h | 4 ++++ src/http/ngx_http_request_body.c | 8 +++----- 5 files changed, 22 insertions(+), 8 deletions(-) diffs (119 lines): diff --git a/src/http/ngx_http.c b/src/http/ngx_http.c --- a/src/http/ngx_http.c +++ b/src/http/ngx_http.c @@ -69,8 +69,9 @@ static ngx_int_t ngx_http_add_addrs6(ngx ngx_uint_t ngx_http_max_module; -ngx_int_t (*ngx_http_top_header_filter) (ngx_http_request_t *r); -ngx_int_t (*ngx_http_top_body_filter) (ngx_http_request_t *r, ngx_chain_t *ch); +ngx_http_output_header_filter_pt ngx_http_top_header_filter; +ngx_http_output_body_filter_pt ngx_http_top_body_filter; +ngx_http_request_body_filter_pt ngx_http_top_request_body_filter; ngx_str_t ngx_http_html_default_types[] = { diff --git a/src/http/ngx_http.h b/src/http/ngx_http.h --- a/src/http/ngx_http.h +++ b/src/http/ngx_http.h @@ -177,6 +177,7 @@ extern ngx_str_t ngx_http_html_default_ extern ngx_http_output_header_filter_pt ngx_http_top_header_filter; extern ngx_http_output_body_filter_pt ngx_http_top_body_filter; +extern ngx_http_request_body_filter_pt ngx_http_top_request_body_filter; #endif /* _NGX_HTTP_H_INCLUDED_ */ diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -26,6 +26,7 @@ static ngx_int_t ngx_http_core_find_stat ngx_http_location_tree_node_t *node); static ngx_int_t ngx_http_core_preconfiguration(ngx_conf_t *cf); +static ngx_int_t ngx_http_core_postconfiguration(ngx_conf_t *cf); static void *ngx_http_core_create_main_conf(ngx_conf_t *cf); static char *ngx_http_core_init_main_conf(ngx_conf_t *cf, void *conf); static void *ngx_http_core_create_srv_conf(ngx_conf_t *cf); @@ -779,7 +780,7 @@ static ngx_command_t ngx_http_core_comm static ngx_http_module_t ngx_http_core_module_ctx = { ngx_http_core_preconfiguration, /* preconfiguration */ - NULL, /* postconfiguration */ + ngx_http_core_postconfiguration, /* postconfiguration */ ngx_http_core_create_main_conf, /* create main configuration */ ngx_http_core_init_main_conf, /* init main 
configuration */ @@ -3420,6 +3421,15 @@ ngx_http_core_preconfiguration(ngx_conf_ } +static ngx_int_t +ngx_http_core_postconfiguration(ngx_conf_t *cf) +{ + ngx_http_top_request_body_filter = ngx_http_request_body_save_filter; + + return NGX_OK; +} + + static void * ngx_http_core_create_main_conf(ngx_conf_t *cf) { diff --git a/src/http/ngx_http_core_module.h b/src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h +++ b/src/http/ngx_http_core_module.h @@ -533,10 +533,14 @@ ngx_http_cleanup_t *ngx_http_cleanup_add typedef ngx_int_t (*ngx_http_output_header_filter_pt)(ngx_http_request_t *r); typedef ngx_int_t (*ngx_http_output_body_filter_pt) (ngx_http_request_t *r, ngx_chain_t *chain); +typedef ngx_int_t (*ngx_http_request_body_filter_pt) + (ngx_http_request_t *r, ngx_chain_t *chain); ngx_int_t ngx_http_output_filter(ngx_http_request_t *r, ngx_chain_t *chain); ngx_int_t ngx_http_write_filter(ngx_http_request_t *r, ngx_chain_t *chain); +ngx_int_t ngx_http_request_body_save_filter(ngx_http_request_t *r, + ngx_chain_t *chain); ngx_int_t ngx_http_set_disable_symlinks(ngx_http_request_t *r, diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -24,8 +24,6 @@ static ngx_int_t ngx_http_request_body_l ngx_chain_t *in); static ngx_int_t ngx_http_request_body_chunked_filter(ngx_http_request_t *r, ngx_chain_t *in); -static ngx_int_t ngx_http_request_body_save_filter(ngx_http_request_t *r, - ngx_chain_t *in); ngx_int_t @@ -883,7 +881,7 @@ ngx_http_request_body_length_filter(ngx_ ll = &tl->next; } - rc = ngx_http_request_body_save_filter(r, out); + rc = ngx_http_top_request_body_filter(r, out); ngx_chain_update_chains(r->pool, &rb->free, &rb->busy, &out, (ngx_buf_tag_t) &ngx_http_read_client_request_body); @@ -1035,7 +1033,7 @@ ngx_http_request_body_chunked_filter(ngx } } - rc = ngx_http_request_body_save_filter(r, out); + rc = ngx_http_top_request_body_filter(r, out); ngx_chain_update_chains(r->pool, &rb->free, &rb->busy, &out, (ngx_buf_tag_t) &ngx_http_read_client_request_body); @@ -1044,7 +1042,7 @@ ngx_http_request_body_chunked_filter(ngx } -static ngx_int_t +ngx_int_t ngx_http_request_body_save_filter(ngx_http_request_t *r, ngx_chain_t *in) { #if (NGX_DEBUG) From mdounin at mdounin.ru Mon Mar 23 18:11:26 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:26 +0000 Subject: [nginx] Request body: unbuffered reading. Message-ID: details: http://hg.nginx.org/nginx/rev/a08fad30aeac branches: changeset: 6050:a08fad30aeac user: Maxim Dounin date: Mon Mar 23 21:09:19 2015 +0300 description: Request body: unbuffered reading. The r->request_body_no_buffering flag was introduced. It instructs client request body reading code to avoid reading the whole body, and to call post_handler early instead. The caller should use the ngx_http_read_unbuffered_request_body() function to read remaining parts of the body. Upstream module is now able to use this mode, if configured with the proxy_request_buffering directive. 
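A rough sketch of how a content handler could opt in to this mode follows; the handler names are hypothetical, error handling is trimmed, and the proxy module changes later in this series remain the authoritative example:

    static void
    ngx_hypothetical_body_handler(ngx_http_request_t *r)
    {
        /* called early: r->request_body->bufs holds what has been read so far;
         * while r->reading_body is set, the remaining data has to be fetched
         * with ngx_http_read_unbuffered_request_body(r) as read events arrive */
    }

    static ngx_int_t
    ngx_hypothetical_content_handler(ngx_http_request_t *r)
    {
        ngx_int_t  rc;

        /* request early post_handler invocation instead of full buffering */
        r->request_body_no_buffering = 1;

        rc = ngx_http_read_client_request_body(r, ngx_hypothetical_body_handler);

        if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
            return rc;
        }

        return NGX_DONE;
    }
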
diffstat: src/http/modules/ngx_http_proxy_module.c | 26 ++++- src/http/ngx_http.h | 1 + src/http/ngx_http_request.c | 5 + src/http/ngx_http_request.h | 2 + src/http/ngx_http_request_body.c | 107 ++++++++++++++++- src/http/ngx_http_upstream.c | 185 ++++++++++++++++++++++++++++-- src/http/ngx_http_upstream.h | 1 + src/http/ngx_http_variables.c | 4 + 8 files changed, 306 insertions(+), 25 deletions(-) diffs (truncated from 602 to 300 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -292,6 +292,13 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, upstream.buffering), NULL }, + { ngx_string("proxy_request_buffering"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.request_buffering), + NULL }, + { ngx_string("proxy_ignore_client_abort"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -876,6 +883,15 @@ ngx_http_proxy_handler(ngx_http_request_ u->accel = 1; + if (!plcf->upstream.request_buffering + && plcf->body_values == NULL && plcf->upstream.pass_request_body + && !r->headers_in.chunked) + { + /* TODO: support chunked when using HTTP/1.1 */ + + r->request_body_no_buffering = 1; + } + rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -1393,7 +1409,11 @@ ngx_http_proxy_create_request(ngx_http_r "http proxy header:%N\"%*s\"", (size_t) (b->last - b->pos), b->pos); - if (plcf->body_values == NULL && plcf->upstream.pass_request_body) { + if (r->request_body_no_buffering) { + + u->request_bufs = cl; + + } else if (plcf->body_values == NULL && plcf->upstream.pass_request_body) { body = u->request_bufs; u->request_bufs = cl; @@ -2582,6 +2602,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.store_access = NGX_CONF_UNSET_UINT; conf->upstream.next_upstream_tries = NGX_CONF_UNSET_UINT; conf->upstream.buffering = NGX_CONF_UNSET; + conf->upstream.request_buffering = NGX_CONF_UNSET; conf->upstream.ignore_client_abort = NGX_CONF_UNSET; conf->upstream.force_ranges = NGX_CONF_UNSET; @@ -2691,6 +2712,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t ngx_conf_merge_value(conf->upstream.buffering, prev->upstream.buffering, 1); + ngx_conf_merge_value(conf->upstream.request_buffering, + prev->upstream.request_buffering, 1); + ngx_conf_merge_value(conf->upstream.ignore_client_abort, prev->upstream.ignore_client_abort, 0); diff --git a/src/http/ngx_http.h b/src/http/ngx_http.h --- a/src/http/ngx_http.h +++ b/src/http/ngx_http.h @@ -138,6 +138,7 @@ ngx_int_t ngx_http_send_special(ngx_http ngx_int_t ngx_http_read_client_request_body(ngx_http_request_t *r, ngx_http_client_body_handler_pt post_handler); +ngx_int_t ngx_http_read_unbuffered_request_body(ngx_http_request_t *r); ngx_int_t ngx_http_send_header(ngx_http_request_t *r); ngx_int_t ngx_http_special_response_handler(ngx_http_request_t *r, diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2525,6 +2525,11 @@ ngx_http_finalize_connection(ngx_http_re return; } + if (r->reading_body) { + r->keepalive = 0; + r->lingering_close = 1; + } + if (!ngx_terminate && !ngx_exiting && r->keepalive diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h --- 
a/src/http/ngx_http_request.h +++ b/src/http/ngx_http_request.h @@ -473,6 +473,7 @@ struct ngx_http_request_s { unsigned request_body_in_clean_file:1; unsigned request_body_file_group_access:1; unsigned request_body_file_log_level:3; + unsigned request_body_no_buffering:1; unsigned subrequest_in_memory:1; unsigned waited:1; @@ -509,6 +510,7 @@ struct ngx_http_request_s { unsigned keepalive:1; unsigned lingering_close:1; unsigned discard_body:1; + unsigned reading_body:1; unsigned internal:1; unsigned error_page:1; unsigned filter_finalize:1; diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -42,12 +42,14 @@ ngx_http_read_client_request_body(ngx_ht #if (NGX_HTTP_SPDY) if (r->spdy_stream && r == r->main) { + r->request_body_no_buffering = 0; rc = ngx_http_spdy_read_request_body(r, post_handler); goto done; } #endif if (r != r->main || r->request_body || r->discard_body) { + r->request_body_no_buffering = 0; post_handler(r); return NGX_OK; } @@ -57,6 +59,10 @@ ngx_http_read_client_request_body(ngx_ht goto done; } + if (r->request_body_no_buffering) { + r->request_body_in_file_only = 0; + } + rb = ngx_pcalloc(r->pool, sizeof(ngx_http_request_body_t)); if (rb == NULL) { rc = NGX_HTTP_INTERNAL_SERVER_ERROR; @@ -79,6 +85,7 @@ ngx_http_read_client_request_body(ngx_ht r->request_body = rb; if (r->headers_in.content_length_n < 0 && !r->headers_in.chunked) { + r->request_body_no_buffering = 0; post_handler(r); return NGX_OK; } @@ -171,6 +178,8 @@ ngx_http_read_client_request_body(ngx_ht } } + r->request_body_no_buffering = 0; + post_handler(r); return NGX_OK; @@ -214,6 +223,21 @@ ngx_http_read_client_request_body(ngx_ht done: + if (r->request_body_no_buffering + && (rc == NGX_OK || rc == NGX_AGAIN)) + { + if (rc == NGX_OK) { + r->request_body_no_buffering = 0; + + } else { + /* rc == NGX_AGAIN */ + r->reading_body = 1; + } + + r->read_event_handler = ngx_http_block_reading; + post_handler(r); + } + if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { r->main->count--; } @@ -222,6 +246,26 @@ done: } +ngx_int_t +ngx_http_read_unbuffered_request_body(ngx_http_request_t *r) +{ + ngx_int_t rc; + + if (r->connection->read->timedout) { + r->connection->timedout = 1; + return NGX_HTTP_REQUEST_TIME_OUT; + } + + rc = ngx_http_do_read_client_request_body(r); + + if (rc == NGX_OK) { + r->reading_body = 0; + } + + return rc; +} + + static void ngx_http_read_client_request_body_handler(ngx_http_request_t *r) { @@ -264,18 +308,43 @@ ngx_http_do_read_client_request_body(ngx for ( ;; ) { if (rb->buf->last == rb->buf->end) { - /* pass buffer to request body filter chain */ + if (rb->buf->pos != rb->buf->last) { - out.buf = rb->buf; - out.next = NULL; + /* pass buffer to request body filter chain */ - rc = ngx_http_request_body_filter(r, &out); + out.buf = rb->buf; + out.next = NULL; - if (rc != NGX_OK) { - return rc; + rc = ngx_http_request_body_filter(r, &out); + + if (rc != NGX_OK) { + return rc; + } + + } else { + + /* update chains */ + + rc = ngx_http_request_body_filter(r, NULL); + + if (rc != NGX_OK) { + return rc; + } } if (rb->busy != NULL) { + if (r->request_body_no_buffering) { + if (c->read->timer_set) { + ngx_del_timer(c->read); + } + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + return NGX_HTTP_INTERNAL_SERVER_ERROR; + } + + return NGX_AGAIN; + } + return NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -342,6 +411,22 @@ ngx_http_do_read_client_request_body(ngx } if (!c->read->ready) { + + if 
(r->request_body_no_buffering + && rb->buf->pos != rb->buf->last) + { + /* pass buffer to request body filter chain */ + + out.buf = rb->buf; + out.next = NULL; + + rc = ngx_http_request_body_filter(r, &out); + + if (rc != NGX_OK) { + return rc; + } + } + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); ngx_add_timer(c->read, clcf->client_body_timeout); @@ -387,9 +472,10 @@ ngx_http_do_read_client_request_body(ngx } } - r->read_event_handler = ngx_http_block_reading; - - rb->post_handler(r); + if (!r->request_body_no_buffering) { + r->read_event_handler = ngx_http_block_reading; + rb->post_handler(r); + } return NGX_OK; } @@ -1085,7 +1171,8 @@ ngx_http_request_body_save_filter(ngx_ht } if (rb->rest > 0 - && rb->buf && rb->buf->last == rb->buf->end) + && rb->buf && rb->buf->last == rb->buf->end + && !r->request_body_no_buffering) { if (ngx_http_write_request_body(r) != NGX_OK) { From mdounin at mdounin.ru Mon Mar 23 18:11:33 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:33 +0000 Subject: [nginx] Proxy: proxy_request_buffering chunked support. Message-ID: details: http://hg.nginx.org/nginx/rev/d97e6be2d292 branches: changeset: 6051:d97e6be2d292 user: Maxim Dounin date: Mon Mar 23 21:09:19 2015 +0300 description: Proxy: proxy_request_buffering chunked support. diffstat: src/http/modules/ngx_http_proxy_module.c | 222 ++++++++++++++++++++++++++++++- 1 files changed, 216 insertions(+), 6 deletions(-) diffs (truncated from 305 to 300 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -110,7 +110,12 @@ typedef struct { ngx_http_proxy_vars_t vars; off_t internal_body_length; - ngx_uint_t head; /* unsigned head:1 */ + ngx_chain_t *free; + ngx_chain_t *busy; + + unsigned head:1; + unsigned internal_chunked:1; + unsigned header_sent:1; } ngx_http_proxy_ctx_t; @@ -121,6 +126,7 @@ static ngx_int_t ngx_http_proxy_create_k #endif static ngx_int_t ngx_http_proxy_create_request(ngx_http_request_t *r); static ngx_int_t ngx_http_proxy_reinit_request(ngx_http_request_t *r); +static ngx_int_t ngx_http_proxy_body_output_filter(void *data, ngx_chain_t *in); static ngx_int_t ngx_http_proxy_process_status_line(ngx_http_request_t *r); static ngx_int_t ngx_http_proxy_process_header(ngx_http_request_t *r); static ngx_int_t ngx_http_proxy_input_filter_init(void *data); @@ -146,6 +152,8 @@ static ngx_int_t static ngx_int_t ngx_http_proxy_internal_body_length_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_proxy_internal_chunked_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_proxy_rewrite_redirect(ngx_http_request_t *r, ngx_table_elt_t *h, size_t prefix); static ngx_int_t ngx_http_proxy_rewrite_cookie(ngx_http_request_t *r, @@ -728,8 +736,8 @@ static ngx_keyval_t ngx_http_proxy_head { ngx_string("Host"), ngx_string("$proxy_host") }, { ngx_string("Connection"), ngx_string("close") }, { ngx_string("Content-Length"), ngx_string("$proxy_internal_body_length") }, + { ngx_string("Transfer-Encoding"), ngx_string("$proxy_internal_chunked") }, { ngx_string("TE"), ngx_string("") }, - { ngx_string("Transfer-Encoding"), ngx_string("") }, { ngx_string("Keep-Alive"), ngx_string("") }, { ngx_string("Expect"), ngx_string("") }, { ngx_string("Upgrade"), ngx_string("") }, @@ -756,8 +764,8 @@ static ngx_keyval_t ngx_http_proxy_cach 
{ ngx_string("Host"), ngx_string("$proxy_host") }, { ngx_string("Connection"), ngx_string("close") }, { ngx_string("Content-Length"), ngx_string("$proxy_internal_body_length") }, + { ngx_string("Transfer-Encoding"), ngx_string("$proxy_internal_chunked") }, { ngx_string("TE"), ngx_string("") }, - { ngx_string("Transfer-Encoding"), ngx_string("") }, { ngx_string("Keep-Alive"), ngx_string("") }, { ngx_string("Expect"), ngx_string("") }, { ngx_string("Upgrade"), ngx_string("") }, @@ -793,6 +801,10 @@ static ngx_http_variable_t ngx_http_pro ngx_http_proxy_internal_body_length_variable, 0, NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, + { ngx_string("proxy_internal_chunked"), NULL, + ngx_http_proxy_internal_chunked_variable, 0, + NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, + { ngx_null_string, NULL, NULL, 0, 0, 0 } }; @@ -885,10 +897,9 @@ ngx_http_proxy_handler(ngx_http_request_ if (!plcf->upstream.request_buffering && plcf->body_values == NULL && plcf->upstream.pass_request_body - && !r->headers_in.chunked) + && (!r->headers_in.chunked + || plcf->http_version == NGX_HTTP_VERSION_11)) { - /* TODO: support chunked when using HTTP/1.1 */ - r->request_body_no_buffering = 1; } @@ -1210,6 +1221,10 @@ ngx_http_proxy_create_request(ngx_http_r ctx->internal_body_length = body_len; len += body_len; + } else if (r->headers_in.chunked && r->reading_body) { + ctx->internal_body_length = -1; + ctx->internal_chunked = 1; + } else { ctx->internal_body_length = r->headers_in.content_length_n; } @@ -1413,6 +1428,11 @@ ngx_http_proxy_create_request(ngx_http_r u->request_bufs = cl; + if (ctx->internal_chunked) { + u->output.output_filter = ngx_http_proxy_body_output_filter; + u->output.filter_ctx = r; + } + } else if (plcf->body_values == NULL && plcf->upstream.pass_request_body) { body = u->request_bufs; @@ -1475,6 +1495,172 @@ ngx_http_proxy_reinit_request(ngx_http_r static ngx_int_t +ngx_http_proxy_body_output_filter(void *data, ngx_chain_t *in) +{ + ngx_http_request_t *r = data; + + off_t size; + u_char *chunk; + ngx_int_t rc; + ngx_buf_t *b; + ngx_chain_t *out, *cl, *tl, **ll; + ngx_http_proxy_ctx_t *ctx; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "proxy output filter"); + + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + + if (in == NULL) { + out = in; + goto out; + } + + out = NULL; + ll = &out; + + if (!ctx->header_sent) { + /* first buffer contains headers, pass it unmodified */ + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "proxy output header"); + + ctx->header_sent = 1; + + tl = ngx_alloc_chain_link(r->pool); + if (tl == NULL) { + return NGX_ERROR; + } + + tl->buf = in->buf; + *ll = tl; + ll = &tl->next; + + in = in->next; + + if (in == NULL) { + tl->next = NULL; + goto out; + } + } + + size = 0; + cl = in; + + for ( ;; ) { + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "proxy output chunk: %d", ngx_buf_size(cl->buf)); + + size += ngx_buf_size(cl->buf); + + if (cl->buf->flush + || cl->buf->sync + || ngx_buf_in_memory(cl->buf) + || cl->buf->in_file) + { + tl = ngx_alloc_chain_link(r->pool); + if (tl == NULL) { + return NGX_ERROR; + } + + tl->buf = cl->buf; + *ll = tl; + ll = &tl->next; + } + + if (cl->next == NULL) { + break; + } + + cl = cl->next; + } + + if (size) { + tl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (tl == NULL) { + return NGX_ERROR; + } + + b = tl->buf; + chunk = b->start; + + if (chunk == NULL) { + /* the "0000000000000000" is 64-bit hexadecimal string */ + + chunk = ngx_palloc(r->pool, 
sizeof("0000000000000000" CRLF) - 1); + if (chunk == NULL) { + return NGX_ERROR; + } + + b->start = chunk; + b->end = chunk + sizeof("0000000000000000" CRLF) - 1; + } + + b->tag = (ngx_buf_tag_t) &ngx_http_proxy_body_output_filter; + b->memory = 0; + b->temporary = 1; + b->pos = chunk; + b->last = ngx_sprintf(chunk, "%xO" CRLF, size); + + tl->next = out; + out = tl; + } + + if (cl->buf->last_buf) { + tl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (tl == NULL) { + return NGX_ERROR; + } + + b = tl->buf; + + b->tag = (ngx_buf_tag_t) &ngx_http_proxy_body_output_filter; + b->temporary = 0; + b->memory = 1; + b->last_buf = 1; + b->pos = (u_char *) CRLF "0" CRLF CRLF; + b->last = b->pos + 7; + + cl->buf->last_buf = 0; + + *ll = tl; + + if (size == 0) { + b->pos += 2; + } + + } else if (size > 0) { + tl = ngx_chain_get_free_buf(r->pool, &ctx->free); + if (tl == NULL) { + return NGX_ERROR; + } + + b = tl->buf; + + b->tag = (ngx_buf_tag_t) &ngx_http_proxy_body_output_filter; + b->temporary = 0; + b->memory = 1; + b->pos = (u_char *) CRLF; + b->last = b->pos + 2; + + *ll = tl; + + } else { + *ll = NULL; + } + +out: + + rc = ngx_chain_writer(&r->upstream->writer, out); + + ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out, + (ngx_buf_tag_t) &ngx_http_proxy_body_output_filter); + + return rc; +} + + +static ngx_int_t ngx_http_proxy_process_status_line(ngx_http_request_t *r) { size_t len; @@ -2268,6 +2454,30 @@ ngx_http_proxy_internal_body_length_vari static ngx_int_t +ngx_http_proxy_internal_chunked_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_http_proxy_ctx_t *ctx; + + ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); + + if (ctx == NULL || !ctx->internal_chunked) { + v->not_found = 1; + return NGX_OK; + } + + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + + v->data = (u_char *) "chunked"; + v->len = sizeof("chunked") - 1; + + return NGX_OK; +} + From mdounin at mdounin.ru Mon Mar 23 18:11:36 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:36 +0000 Subject: [nginx] FastCGI: fastcgi_request_buffering. Message-ID: details: http://hg.nginx.org/nginx/rev/8ad78808a612 branches: changeset: 6052:8ad78808a612 user: Maxim Dounin date: Mon Mar 23 21:09:19 2015 +0300 description: FastCGI: fastcgi_request_buffering. 
diffstat: src/http/modules/ngx_http_fastcgi_module.c | 362 +++++++++++++++++++++++++++- 1 files changed, 343 insertions(+), 19 deletions(-) diffs (truncated from 463 to 300 lines): diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -81,8 +81,12 @@ typedef struct { size_t length; size_t padding; + ngx_chain_t *free; + ngx_chain_t *busy; + unsigned fastcgi_stdout:1; unsigned large_stderr:1; + unsigned header_sent:1; ngx_array_t *split_parts; @@ -147,6 +151,8 @@ static ngx_int_t ngx_http_fastcgi_create #endif static ngx_int_t ngx_http_fastcgi_create_request(ngx_http_request_t *r); static ngx_int_t ngx_http_fastcgi_reinit_request(ngx_http_request_t *r); +static ngx_int_t ngx_http_fastcgi_body_output_filter(void *data, + ngx_chain_t *in); static ngx_int_t ngx_http_fastcgi_process_header(ngx_http_request_t *r); static ngx_int_t ngx_http_fastcgi_input_filter_init(void *data); static ngx_int_t ngx_http_fastcgi_input_filter(ngx_event_pipe_t *p, @@ -257,6 +263,13 @@ static ngx_command_t ngx_http_fastcgi_c offsetof(ngx_http_fastcgi_loc_conf_t, upstream.buffering), NULL }, + { ngx_string("fastcgi_request_buffering"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.request_buffering), + NULL }, + { ngx_string("fastcgi_ignore_client_abort"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -703,6 +716,12 @@ ngx_http_fastcgi_handler(ngx_http_reques u->input_filter = ngx_http_fastcgi_non_buffered_filter; u->input_filter_ctx = r; + if (!flcf->upstream.request_buffering + && flcf->upstream.pass_request_body) + { + r->request_body_no_buffering = 1; + } + rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -799,6 +818,7 @@ ngx_http_fastcgi_create_request(ngx_http ngx_chain_t *cl, *body; ngx_list_part_t *part; ngx_table_elt_t *header, **ignored; + ngx_http_upstream_t *u; ngx_http_script_code_pt code; ngx_http_script_engine_t e, le; ngx_http_fastcgi_header_t *h; @@ -810,10 +830,12 @@ ngx_http_fastcgi_create_request(ngx_http header_params = 0; ignored = NULL; + u = r->upstream; + flcf = ngx_http_get_module_loc_conf(r, ngx_http_fastcgi_module); #if (NGX_HTTP_CACHE) - params = r->upstream->cacheable ? &flcf->params_cache : &flcf->params; + params = u->cacheable ? &flcf->params_cache : &flcf->params; #else params = &flcf->params; #endif @@ -1134,12 +1156,17 @@ ngx_http_fastcgi_create_request(ngx_http h->padding_length = 0; h->reserved = 0; - h = (ngx_http_fastcgi_header_t *) b->last; - b->last += sizeof(ngx_http_fastcgi_header_t); - - if (flcf->upstream.pass_request_body) { - body = r->upstream->request_bufs; - r->upstream->request_bufs = cl; + if (r->request_body_no_buffering) { + + u->request_bufs = cl; + + u->output.output_filter = ngx_http_fastcgi_body_output_filter; + u->output.filter_ctx = r; + + } else if (flcf->upstream.pass_request_body) { + + body = u->request_bufs; + u->request_bufs = cl; #if (NGX_SUPPRESS_WARN) file_pos = 0; @@ -1194,6 +1221,9 @@ ngx_http_fastcgi_create_request(ngx_http padding = 8 - len % 8; padding = (padding == 8) ? 
0 : padding; + h = (ngx_http_fastcgi_header_t *) cl->buf->last; + cl->buf->last += sizeof(ngx_http_fastcgi_header_t); + h->version = 1; h->type = NGX_HTTP_FASTCGI_STDIN; h->request_id_hi = 0; @@ -1223,9 +1253,6 @@ ngx_http_fastcgi_create_request(ngx_http b->last += padding; } - h = (ngx_http_fastcgi_header_t *) b->last; - b->last += sizeof(ngx_http_fastcgi_header_t); - cl->next = ngx_alloc_chain_link(r->pool); if (cl->next == NULL) { return NGX_ERROR; @@ -1240,17 +1267,22 @@ ngx_http_fastcgi_create_request(ngx_http } } else { - r->upstream->request_bufs = cl; + u->request_bufs = cl; } - h->version = 1; - h->type = NGX_HTTP_FASTCGI_STDIN; - h->request_id_hi = 0; - h->request_id_lo = 1; - h->content_length_hi = 0; - h->content_length_lo = 0; - h->padding_length = 0; - h->reserved = 0; + if (!r->request_body_no_buffering) { + h = (ngx_http_fastcgi_header_t *) cl->buf->last; + cl->buf->last += sizeof(ngx_http_fastcgi_header_t); + + h->version = 1; + h->type = NGX_HTTP_FASTCGI_STDIN; + h->request_id_hi = 0; + h->request_id_lo = 1; + h->content_length_hi = 0; + h->content_length_lo = 0; + h->padding_length = 0; + h->reserved = 0; + } cl->next = NULL; @@ -1284,6 +1316,294 @@ ngx_http_fastcgi_reinit_request(ngx_http static ngx_int_t +ngx_http_fastcgi_body_output_filter(void *data, ngx_chain_t *in) +{ + ngx_http_request_t *r = data; + + off_t file_pos; + u_char *pos, *start; + size_t len, padding; + ngx_buf_t *b; + ngx_int_t rc; + ngx_uint_t next, last; + ngx_chain_t *cl, *tl, *out, **ll; + ngx_http_fastcgi_ctx_t *f; + ngx_http_fastcgi_header_t *h; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "fastcgi output filter"); + + f = ngx_http_get_module_ctx(r, ngx_http_fastcgi_module); + + if (in == NULL) { + out = in; + goto out; + } + + out = NULL; + ll = &out; + + if (!f->header_sent) { + /* first buffer contains headers, pass it unmodified */ + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "fastcgi output header"); + + f->header_sent = 1; + + tl = ngx_alloc_chain_link(r->pool); + if (tl == NULL) { + return NGX_ERROR; + } + + tl->buf = in->buf; + *ll = tl; + ll = &tl->next; + + in = in->next; + + if (in == NULL) { + tl->next = NULL; + goto out; + } + } + + cl = ngx_chain_get_free_buf(r->pool, &f->free); + if (cl == NULL) { + return NGX_ERROR; + } + + b = cl->buf; + + b->tag = (ngx_buf_tag_t) &ngx_http_fastcgi_body_output_filter; + b->temporary = 1; + + if (b->start == NULL) { + /* reserve space for maximum possible padding, 7 bytes */ + + b->start = ngx_palloc(r->pool, + sizeof(ngx_http_fastcgi_header_t) + 7); + if (b->start == NULL) { + return NGX_ERROR; + } + + b->pos = b->start; + b->last = b->start; + + b->end = b->start + sizeof(ngx_http_fastcgi_header_t) + 7; + } + + *ll = cl; + + last = 0; + padding = 0; + +#if (NGX_SUPPRESS_WARN) + file_pos = 0; + pos = NULL; +#endif + + while (in) { + + ngx_log_debug7(NGX_LOG_DEBUG_EVENT, r->connection->log, 0, + "fastcgi output in l:%d f:%d %p, pos %p, size: %z " + "file: %O, size: %O", + in->buf->last_buf, + in->buf->in_file, + in->buf->start, in->buf->pos, + in->buf->last - in->buf->pos, + in->buf->file_pos, + in->buf->file_last - in->buf->file_pos); + + if (in->buf->last_buf) { + last = 1; + } + + if (ngx_buf_special(in->buf)) { + in = in->next; + continue; + } + + if (in->buf->in_file) { + file_pos = in->buf->file_pos; + + } else { + pos = in->buf->pos; + } + + next = 0; + + do { + tl = ngx_chain_get_free_buf(r->pool, &f->free); + if (tl == NULL) { + return NGX_ERROR; + } + + b = tl->buf; + start = b->start; + + 
ngx_memcpy(b, in->buf, sizeof(ngx_buf_t)); + + /* + * restore b->start to preserve memory allocated in the buffer, + * to reuse it later for headers and padding + */ + + b->start = start; + + if (in->buf->in_file) { + b->file_pos = file_pos; + file_pos += 32 * 1024; + + if (file_pos >= in->buf->file_last) { + file_pos = in->buf->file_last; + next = 1; + } + + b->file_last = file_pos; + len = (ngx_uint_t) (file_pos - b->file_pos); From mdounin at mdounin.ru Mon Mar 23 18:11:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2015 18:11:38 +0000 Subject: [nginx] Upstream: uwsgi_request_buffering, scgi_request_buffering. Message-ID: details: http://hg.nginx.org/nginx/rev/b6eb6ec4fbd9 branches: changeset: 6053:b6eb6ec4fbd9 user: Maxim Dounin date: Mon Mar 23 21:09:19 2015 +0300 description: Upstream: uwsgi_request_buffering, scgi_request_buffering. diffstat: src/http/modules/ngx_http_scgi_module.c | 23 ++++++++++++++++++++++- src/http/modules/ngx_http_uwsgi_module.c | 23 ++++++++++++++++++++++- 2 files changed, 44 insertions(+), 2 deletions(-) diffs (122 lines): diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -120,6 +120,13 @@ static ngx_command_t ngx_http_scgi_comma offsetof(ngx_http_scgi_loc_conf_t, upstream.buffering), NULL }, + { ngx_string("scgi_request_buffering"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_scgi_loc_conf_t, upstream.request_buffering), + NULL }, + { ngx_string("scgi_ignore_client_abort"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -504,6 +511,13 @@ ngx_http_scgi_handler(ngx_http_request_t u->pipe->input_filter = ngx_event_pipe_copy_input_filter; u->pipe->input_ctx = r; + if (!scf->upstream.request_buffering + && scf->upstream.pass_request_body + && !r->headers_in.chunked) + { + r->request_body_no_buffering = 1; + } + rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -865,7 +879,10 @@ ngx_http_scgi_create_request(ngx_http_re *b->last++ = (u_char) ','; - if (scf->upstream.pass_request_body) { + if (r->request_body_no_buffering) { + r->upstream->request_bufs = cl; + + } else if (scf->upstream.pass_request_body) { body = r->upstream->request_bufs; r->upstream->request_bufs = cl; @@ -1162,6 +1179,7 @@ ngx_http_scgi_create_loc_conf(ngx_conf_t conf->upstream.store_access = NGX_CONF_UNSET_UINT; conf->upstream.next_upstream_tries = NGX_CONF_UNSET_UINT; conf->upstream.buffering = NGX_CONF_UNSET; + conf->upstream.request_buffering = NGX_CONF_UNSET; conf->upstream.ignore_client_abort = NGX_CONF_UNSET; conf->upstream.force_ranges = NGX_CONF_UNSET; @@ -1250,6 +1268,9 @@ ngx_http_scgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_value(conf->upstream.buffering, prev->upstream.buffering, 1); + ngx_conf_merge_value(conf->upstream.request_buffering, + prev->upstream.request_buffering, 1); + ngx_conf_merge_value(conf->upstream.ignore_client_abort, prev->upstream.ignore_client_abort, 0); diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -180,6 +180,13 @@ static ngx_command_t ngx_http_uwsgi_comm offsetof(ngx_http_uwsgi_loc_conf_t, upstream.buffering), NULL }, + { 
ngx_string("uwsgi_request_buffering"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, upstream.request_buffering), + NULL }, + { ngx_string("uwsgi_ignore_client_abort"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -672,6 +679,13 @@ ngx_http_uwsgi_handler(ngx_http_request_ u->pipe->input_filter = ngx_event_pipe_copy_input_filter; u->pipe->input_ctx = r; + if (!uwcf->upstream.request_buffering + && uwcf->upstream.pass_request_body + && !r->headers_in.chunked) + { + r->request_body_no_buffering = 1; + } + rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { @@ -1068,7 +1082,10 @@ ngx_http_uwsgi_create_request(ngx_http_r b->last = ngx_copy(b->last, uwcf->uwsgi_string.data, uwcf->uwsgi_string.len); - if (uwcf->upstream.pass_request_body) { + if (r->request_body_no_buffering) { + r->upstream->request_bufs = cl; + + } else if (uwcf->upstream.pass_request_body) { body = r->upstream->request_bufs; r->upstream->request_bufs = cl; @@ -1368,6 +1385,7 @@ ngx_http_uwsgi_create_loc_conf(ngx_conf_ conf->upstream.store_access = NGX_CONF_UNSET_UINT; conf->upstream.next_upstream_tries = NGX_CONF_UNSET_UINT; conf->upstream.buffering = NGX_CONF_UNSET; + conf->upstream.request_buffering = NGX_CONF_UNSET; conf->upstream.ignore_client_abort = NGX_CONF_UNSET; conf->upstream.force_ranges = NGX_CONF_UNSET; @@ -1464,6 +1482,9 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_value(conf->upstream.buffering, prev->upstream.buffering, 1); + ngx_conf_merge_value(conf->upstream.request_buffering, + prev->upstream.request_buffering, 1); + ngx_conf_merge_value(conf->upstream.ignore_client_abort, prev->upstream.ignore_client_abort, 0); From oschaaf at we-amp.com Mon Mar 23 19:39:46 2015 From: oschaaf at we-amp.com (Otto van der Schaaf) Date: Mon, 23 Mar 2015 20:39:46 +0100 Subject: [PATCH] Add configuration option for the timeout for childs handling SIGTERM. Message-ID: Hi, For testing quick termination during high loads, while running with valgrind, it might be useful to be able to extend the amount of time nginx allows child processes to wrap up before sending SIGKILL. For ngx_pagespeed, the current hard-coded default of 1 second seems to be just short of what we need to be able to reliably test just this scenario, so I've made a patch so we can run with different values for with and without valgrind. Would the following patch be acceptable? Kind regards, Otto # HG changeset patch # User Otto van der Schaaf # Date 1427138606 -3600 # Mon Mar 23 20:23:26 2015 +0100 # Node ID 92c9d38d7677b5f646112cde94dc40d834d5ef74 # Parent b6eb6ec4fbd9807d75de071fffb000c4f3a5c57d Add configuration option for the timeout for childs handling SIGTERM. Adds a configuration option (child_terminate_timeout) to allow tweaking the amount of time nginx allows child processes before sending SIGKILL. This is helpful for testing termination with higher loads under valgrind, as the hard-coded default (1000 ms) might not always be enough. 
diff -r b6eb6ec4fbd9 -r 92c9d38d7677 src/core/nginx.c --- a/src/core/nginx.c Mon Mar 23 21:09:19 2015 +0300 +++ b/src/core/nginx.c Mon Mar 23 20:23:26 2015 +0100 @@ -125,6 +125,13 @@ offsetof(ngx_core_conf_t, rlimit_sigpending), NULL }, + { ngx_string("child_terminate_timeout"), + NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + 0, + offsetof(ngx_core_conf_t, child_terminate_timeout), + NULL }, + { ngx_string("working_directory"), NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_TAKE1, ngx_conf_set_str_slot, @@ -955,6 +962,7 @@ ccf->rlimit_nofile = NGX_CONF_UNSET; ccf->rlimit_core = NGX_CONF_UNSET; ccf->rlimit_sigpending = NGX_CONF_UNSET; + ccf->child_terminate_timeout = NGX_CONF_UNSET_MSEC; ccf->user = (ngx_uid_t) NGX_CONF_UNSET_UINT; ccf->group = (ngx_gid_t) NGX_CONF_UNSET_UINT; @@ -985,6 +993,7 @@ ngx_conf_init_value(ccf->worker_processes, 1); ngx_conf_init_value(ccf->debug_points, 0); + ngx_conf_init_value(ccf->child_terminate_timeout, 1000); #if (NGX_HAVE_CPU_AFFINITY) diff -r b6eb6ec4fbd9 -r 92c9d38d7677 src/core/ngx_cycle.h --- a/src/core/ngx_cycle.h Mon Mar 23 21:09:19 2015 +0300 +++ b/src/core/ngx_cycle.h Mon Mar 23 20:23:26 2015 +0100 @@ -85,6 +85,7 @@ ngx_int_t rlimit_sigpending; off_t rlimit_core; + ngx_msec_t child_terminate_timeout; int priority; ngx_uint_t cpu_affinity_n; diff -r b6eb6ec4fbd9 -r 92c9d38d7677 src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c Mon Mar 23 21:09:19 2015 +0300 +++ b/src/os/unix/ngx_process_cycle.c Mon Mar 23 20:23:26 2015 +0100 @@ -189,7 +189,7 @@ sigio = ccf->worker_processes + 2 /* cache processes */; - if (delay > 1000) { + if (delay > ccf->child_terminate_timeout) { ngx_signal_worker_processes(cycle, SIGKILL); } else { ngx_signal_worker_processes(cycle, -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 23 22:18:27 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 01:18:27 +0300 Subject: [PATCH] Add configuration option for the timeout for childs handling SIGTERM. In-Reply-To: References: Message-ID: <20150323221827.GI88631@mdounin.ru> Hello! On Mon, Mar 23, 2015 at 08:39:46PM +0100, Otto van der Schaaf wrote: > Hi, > > For testing quick termination during high loads, while running with > valgrind, it might be useful to be able to extend the amount of time nginx > allows child processes to wrap up before sending SIGKILL. > For ngx_pagespeed, the current hard-coded default of 1 second seems to be > just short of what we need to be able to reliably test just this scenario, > so I've made a patch so we can run with different values for with and > without valgrind. > Would the following patch be acceptable? No. Note well that current limit isn't 1 second, you are misunderstanding the code. It'll signal workers multiple times, doubling wait time on each iteration, from 50ms to 1000ms including. -- Maxim Dounin http://nginx.org/ From oschaaf at we-amp.com Mon Mar 23 22:34:46 2015 From: oschaaf at we-amp.com (Otto van der Schaaf) Date: Mon, 23 Mar 2015 23:34:46 +0100 Subject: [PATCH] Add configuration option for the timeout for childs handling SIGTERM. In-Reply-To: <20150323221827.GI88631@mdounin.ru> References: <20150323221827.GI88631@mdounin.ru> Message-ID: Thanks for pointing out that the workers get signalled multiple times, I missed that indeed. In that case, termination of the module under valgrind takes a little longer then I thought it did, yet the problem remains the same. 
So the upper boundary of 1000 ms for the iteration has to remain fixed ? In that case, we'll have a patch to maintain (or see if we can round up in less time). Thanks! On Mon, Mar 23, 2015 at 11:18 PM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 23, 2015 at 08:39:46PM +0100, Otto van der Schaaf wrote: > > > Hi, > > > > For testing quick termination during high loads, while running with > > valgrind, it might be useful to be able to extend the amount of time > nginx > > allows child processes to wrap up before sending SIGKILL. > > For ngx_pagespeed, the current hard-coded default of 1 second seems to be > > just short of what we need to be able to reliably test just this > scenario, > > so I've made a patch so we can run with different values for with and > > without valgrind. > > Would the following patch be acceptable? > > No. Note well that current limit isn't 1 second, you are > misunderstanding the code. It'll signal workers multiple times, > doubling wait time on each iteration, from 50ms to 1000ms > including. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 24 16:06:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 16:06:20 +0000 Subject: [nginx] nginx-1.7.11-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/2b3b737b5456 branches: changeset: 6054:2b3b737b5456 user: Maxim Dounin date: Tue Mar 24 18:45:34 2015 +0300 description: nginx-1.7.11-RELEASE diffstat: docs/xml/nginx/changes.xml | 168 +++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 168 insertions(+), 0 deletions(-) diffs (178 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,174 @@ + + + + +???????? sendfile ????????? aio ????? ?? ?????; +?????? nginx ????????????? ?????????? AIO ??? ????????? ?????? ??? sendfile, +???? ???????????? ???????????? ????????? aio ? sendfile. + + +the "sendfile" parameter of the "aio" directive is deprecated; +now nginx automatically uses AIO to pre-load data for sendfile +if both "aio" and "sendfile" directives are used. + + + + + +????????????????? ????????? ???????. + + +experimental thread pools support. + + + + + +????????? proxy_request_buffering, fastcgi_request_buffering, +scgi_request_buffering ? uwsgi_request_buffering. + + +the "proxy_request_buffering", "fastcgi_request_buffering", +"scgi_request_buffering", and "uwsgi_request_buffering" directives. + + + + + +????????????????? API ??? ????????? ???? ???????. + + +request body filters experimental API. + + + + + +???????? ?????????? SSL-???????????? ? ???????? ??????-???????.
+??????? Sven Peter, Franck Levionnois ? Filipe Da Silva. +
+ +client SSL certificates support in mail proxy.
+Thanks to Sven Peter, Franck Levionnois, and Filipe Da Silva. +
+
+ + + +?????????? ??????? ??????? +??? ????????????? ???????? "hash ... consistent" ? ????? upstream.
+??????? Wai Keen Woon. +
+ +startup speedup +when using the "hash ... consistent" directive in the upstream block.
+Thanks to Wai Keen Woon. +
+
+ + + +?????????? ???????????? ? ????????? ????? ? ??????. + + +debug logging into a cyclic memory buffer. + + + + + +? ????????? ???-??????.
+??????? Chris West. +
+ +in hash table handling.
+Thanks to Chris West. +
+
+ + + +? ????????? proxy_cache_revalidate. + + +in the "proxy_cache_revalidate" directive. + + + + + +SSL-?????????? ????? ????????, ???? ????????????? ?????????? accept +??? ???????? proxy_protocol ????????? listen.
+??????? James Hamlin. +
+ +SSL connections might hang if deferred accept +or the "proxy_protocol" parameter of the "listen" directive were used.
+Thanks to James Hamlin. +
+
+ + + +?????????? $upstream_response_time ????? ????????? ???????? ???????? +??? ????????????? ????????? image_filter. + + +the $upstream_response_time variable might contain a wrong value +if the "image_filter" directive was used. + + + + + +? ????????? ????????????? ????????????.
+??????? Régis Leroy. +
+ +in integer overflow handling.
+Thanks to Régis Leroy. +
+
+ + + +??? ????????????? LibreSSL ???? ?????????? ???????? ????????? SSLv3. + + +it was not possible to enable SSLv3 with LibreSSL. + + + + + +??? ????????????? LibreSSL ? ????? ?????????? ????????? +"ignoring stale global SSL error ... called a function you should not call". + + +the "ignoring stale global SSL error ... called a function you should not call" +alerts appeared in logs when using LibreSSL. + + + + + +???????????, ????????? ? ?????????? ssl_client_certificate ? +ssl_trusted_certificate, ?????????????? +??? ??????????????? ?????????? ??????? ????????????. + + +certificates specified by the "ssl_client_certificate" and +"ssl_trusted_certificate" directives were inadvertently used +to automatically construct certificate chains. + + + +
+ + From mdounin at mdounin.ru Tue Mar 24 16:06:23 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 16:06:23 +0000 Subject: [nginx] release-1.7.11 tag Message-ID: details: http://hg.nginx.org/nginx/rev/166c2c19c522 branches: changeset: 6055:166c2c19c522 user: Maxim Dounin date: Tue Mar 24 18:45:34 2015 +0300 description: release-1.7.11 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -380,3 +380,4 @@ 6d2fbc30f8a7f70136cf08f32d5ff3179d524873 d5ea659b8bab2d6402a2266efa691f705e84001e release-1.7.8 34b201c1abd1e2d4faeae4650a21574771a03c0e release-1.7.9 860cfbcc4606ee36d898a9cd0c5ae8858db984d6 release-1.7.10 +2b3b737b5456c05cd63d3d834f4fb4d3776953d0 release-1.7.11 From albertcasademont at gmail.com Tue Mar 24 16:33:20 2015 From: albertcasademont at gmail.com (Albert Casademont) Date: Tue, 24 Mar 2015 17:33:20 +0100 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 In-Reply-To: <20150317192006.GX88631@mdounin.ru> References: <20150317192006.GX88631@mdounin.ru> Message-ID: Hi all, how did this end? Being considered for 1.9? Thanks! On Tue, Mar 17, 2015 at 8:20 PM, Maxim Dounin wrote: > Hello! > > On Tue, Mar 17, 2015 at 09:38:42PM +0300, kyprizel wrote: > > > Sure it should be tested (there are can be some memory leaks). > > Need to know if it's idologically acceptable. > > I've provided some comments in the reply to your off-list message. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Mar 24 17:01:54 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Mar 2015 20:01:54 +0300 Subject: [PATCH] Add configuration option for the timeout for childs handling SIGTERM. In-Reply-To: References: <20150323221827.GI88631@mdounin.ru> Message-ID: <20150324170154.GX88631@mdounin.ru> Hello! On Mon, Mar 23, 2015 at 11:34:46PM +0100, Otto van der Schaaf wrote: > Thanks for pointing out that the workers get signalled multiple times, I > missed that indeed. In that case, termination of the module under valgrind > takes a little longer then I thought it did, yet the problem remains the > same. > So the upper boundary of 1000 ms for the iteration has to remain fixed ? In > that case, we'll have a patch to maintain (or see if we can round up in > less time). If there are good reasons why the termination takes so long - we may consider adding another iteration. Otherwise - yes, it'll remain fixed. -- Maxim Dounin http://nginx.org/ From kyprizel at gmail.com Tue Mar 24 21:20:15 2015 From: kyprizel at gmail.com (kyprizel) Date: Wed, 25 Mar 2015 00:20:15 +0300 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 In-Reply-To: References: <20150317192006.GX88631@mdounin.ru> Message-ID: Albert, this patch will not be accepted. Need to write another one, may be I'll try to do itlater. On Tue, Mar 24, 2015 at 7:33 PM, Albert Casademont < albertcasademont at gmail.com> wrote: > Hi all, how did this end? Being considered for 1.9? > > Thanks! > > > On Tue, Mar 17, 2015 at 8:20 PM, Maxim Dounin wrote: > >> Hello! >> >> On Tue, Mar 17, 2015 at 09:38:42PM +0300, kyprizel wrote: >> >> > Sure it should be tested (there are can be some memory leaks). >> > Need to know if it's idologically acceptable. 
>> >> I've provided some comments in the reply to your off-list message. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdasilvayy at gmail.com Wed Mar 25 09:08:08 2015 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Wed, 25 Mar 2015 10:08:08 +0100 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 In-Reply-To: References: <20150317192006.GX88631@mdounin.ru> Message-ID: Hi. As I'm having some spare time ahead me, i can work on this subject, if you're agree. I will send you a PM, about the details. Regards, Filipe 2015-03-24 22:20 GMT+01:00 kyprizel : > Albert, this patch will not be accepted. Need to write another one, may be > I'll try to do itlater. > > On Tue, Mar 24, 2015 at 7:33 PM, Albert Casademont > wrote: >> >> Hi all, how did this end? Being considered for 1.9? >> >> Thanks! >> >> >> On Tue, Mar 17, 2015 at 8:20 PM, Maxim Dounin wrote: >>> >>> Hello! >>> >>> On Tue, Mar 17, 2015 at 09:38:42PM +0300, kyprizel wrote: >>> >>> > Sure it should be tested (there are can be some memory leaks). >>> > Need to know if it's idologically acceptable. >>> >>> I've provided some comments in the reply to your off-list message. >>> >>> -- >>> Maxim Dounin >>> http://nginx.org/ >>> >>> _______________________________________________ >>> nginx-devel mailing list >>> nginx-devel at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> >> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From schoepke at shortcutmedia.com Wed Mar 25 10:20:37 2015 From: schoepke at shortcutmedia.com (Severin Schoepke) Date: Wed, 25 Mar 2015 11:20:37 +0100 Subject: module development: sharing data between different request phases Message-ID: <55128BF5.1090207@shortcutmedia.com> Hello there, I'm in the process of writing a custom nginx module that should do some header manipulation and some logging of strings created in that manipulation process. The module/nginx then proxies the requests to another backend... So far I have implemented a handler function that does the header manipulation and added it to the ACCESS phase. Now I'm implementing a handler function that should log some temporary data from the header manipulation to a file and added it to the LOG phase. Now I'm wondering how I can share/transfer some data/strings from the ACCESS phase handler to the LOG phase handler? - Is there some arbitrary user storage associated with a ngx_http_request_t that I could use? - Could I use variables for that? If so: how do I set a per-request variable in one handler and read it in another? - Or do I need to use a hack like storing the data in a X-Log-Data in either headers_in or headers_out and reading it from this header in the log handler? This feels very hacky and would expose the log data to the either the client or the backend... - Is there another hacky solution that I could use? 
Thanks a lot! cheers, Severin From oschaaf at we-amp.com Wed Mar 25 13:42:07 2015 From: oschaaf at we-amp.com (Otto van der Schaaf) Date: Wed, 25 Mar 2015 14:42:07 +0100 Subject: [PATCH] Add configuration option for the timeout for childs handling SIGTERM. In-Reply-To: <20150324170154.GX88631@mdounin.ru> References: <20150323221827.GI88631@mdounin.ru> <20150324170154.GX88631@mdounin.ru> Message-ID: On Tue, Mar 24, 2015 at 6:01 PM, Maxim Dounin wrote: > If there are good reasons why the termination takes so long - we > may consider adding another iteration. Otherwise - yes, it'll > remain fixed. > I have a test to see if the module shuts down properly upon receiving SIGTERM. This test starts up nginx plus a lot of synthetic load in parallel. The SIGTERM signal is sent a few seconds after both are up and running, at which point lots of work will have queued up in the module's worker threads. Without valgrind, the current timespan nginx allows for wrapping up is (more then) enough, but with valgrind, unwinding takes a longer time. Sometimes less, sometimes more than the current limit nginx imposes, which makes the test unreliable. Does that count as a good reason? -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Thu Mar 26 13:01:56 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 26 Mar 2015 13:01:56 +0000 Subject: [nginx] Proxy: fixed proxy_request_buffering and chunked with pr... Message-ID: details: http://hg.nginx.org/nginx/rev/24ccec3c4a87 branches: changeset: 6056:24ccec3c4a87 user: Maxim Dounin date: Thu Mar 26 02:31:30 2015 +0300 description: Proxy: fixed proxy_request_buffering and chunked with preread body. If any preread body bytes were sent in the first chain, chunk size was incorrectly added before the whole chain, including header, resulting in an invalid request sent to upstream. Fixed to properly add chunk size after the header. diffstat: src/http/modules/ngx_http_proxy_module.c | 7 ++++--- 1 files changed, 4 insertions(+), 3 deletions(-) diffs (31 lines): diff -r 166c2c19c522 -r 24ccec3c4a87 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Tue Mar 24 18:45:34 2015 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Thu Mar 26 02:31:30 2015 +0300 @@ -1503,7 +1503,7 @@ ngx_http_proxy_body_output_filter(void * u_char *chunk; ngx_int_t rc; ngx_buf_t *b; - ngx_chain_t *out, *cl, *tl, **ll; + ngx_chain_t *out, *cl, *tl, **ll, **fl; ngx_http_proxy_ctx_t *ctx; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -1546,6 +1546,7 @@ ngx_http_proxy_body_output_filter(void * size = 0; cl = in; + fl = ll; for ( ;; ) { ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -1602,8 +1603,8 @@ ngx_http_proxy_body_output_filter(void * b->pos = chunk; b->last = ngx_sprintf(chunk, "%xO" CRLF, size); - tl->next = out; - out = tl; + tl->next = *fl; + *fl = tl; } if (cl->buf->last_buf) { From mdounin at mdounin.ru Thu Mar 26 14:37:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Mar 2015 14:37:39 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/5c1b480ddcab branches: changeset: 6057:5c1b480ddcab user: Maxim Dounin date: Thu Mar 26 17:36:39 2015 +0300 description: Version bump. 
diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1007011 -#define NGINX_VERSION "1.7.11" +#define nginx_version 1007012 +#define NGINX_VERSION "1.7.12" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From vbart at nginx.com Thu Mar 26 23:10:38 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 26 Mar 2015 23:10:38 +0000 Subject: [nginx] SPDY: fixed error handling in ngx_http_spdy_send_output_... Message-ID: details: http://hg.nginx.org/nginx/rev/7ba52c995325 branches: changeset: 6058:7ba52c995325 user: Valentin Bartenev date: Mon Mar 23 20:47:46 2015 +0300 description: SPDY: fixed error handling in ngx_http_spdy_send_output_queue(). diffstat: src/http/ngx_http_spdy.c | 20 ++++++++++++-------- 1 files changed, 12 insertions(+), 8 deletions(-) diffs (43 lines): diff -r 5c1b480ddcab -r 7ba52c995325 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Thu Mar 26 17:36:39 2015 +0300 +++ b/src/http/ngx_http_spdy.c Mon Mar 23 20:47:46 2015 +0300 @@ -700,20 +700,14 @@ ngx_http_spdy_send_output_queue(ngx_http cl = c->send_chain(c, cl, 0); if (cl == NGX_CHAIN_ERROR) { - c->error = 1; - - if (!sc->blocked) { - ngx_post_event(wev, &ngx_posted_events); - } - - return NGX_ERROR; + goto error; } clcf = ngx_http_get_module_loc_conf(sc->http_connection->conf_ctx, ngx_http_core_module); if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { - return NGX_ERROR; /* FIXME */ + goto error; } if (cl) { @@ -751,6 +745,16 @@ ngx_http_spdy_send_output_queue(ngx_http sc->last_out = frame; return NGX_OK; + +error: + + c->error = 1; + + if (!sc->blocked) { + ngx_post_event(wev, &ngx_posted_events); + } + + return NGX_ERROR; } From vbart at nginx.com Thu Mar 26 23:10:42 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 26 Mar 2015 23:10:42 +0000 Subject: [nginx] SPDY: always push pending data. Message-ID: details: http://hg.nginx.org/nginx/rev/c81d79a7befd branches: changeset: 6059:c81d79a7befd user: Valentin Bartenev date: Mon Mar 23 21:04:13 2015 +0300 description: SPDY: always push pending data. This helps to avoid suboptimal behavior when a client waits for a control frame or more data to increase window size, but the frames have been delayed in the socket buffer. The delays can be caused by bad interaction between Nagle's algorithm on nginx side and delayed ACK on the client side or by TCP_CORK/TCP_NOPUSH if SPDY was working without SSL and sendfile() was used. The pushing code is now very similar to ngx_http_set_keepalive(). 
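For readers not familiar with the socket options mentioned above, a generic illustration of what "pushing pending data" means at the socket level (plain POSIX sockets, not the patch itself; the patch below does the equivalent through the ngx_tcp_push() wrapper and the TCP_NODELAY setsockopt call): un-cork a socket that was corked with TCP_CORK/TCP_NOPUSH so buffered frames are flushed, and disable Nagle's algorithm so the next small control frame is not held back waiting for an ACK.

/* Generic illustration only; TCP_CORK is Linux-specific (FreeBSD uses
 * TCP_NOPUSH) and error handling is reduced to the bare minimum. */

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int
push_pending_frames(int fd)
{
    int  off = 0, on = 1;

    /* un-cork: flush whatever was held back by TCP_CORK */
    if (setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(int)) == -1) {
        return -1;
    }

    /* disable Nagle so small frames are sent immediately */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(int)) == -1) {
        return -1;
    }

    return 0;
}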
diffstat: src/http/ngx_http_spdy.c | 91 +++++++++++++++++++++-------------------------- 1 files changed, 40 insertions(+), 51 deletions(-) diffs (123 lines): diff -r 7ba52c995325 -r c81d79a7befd src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Mon Mar 23 20:47:46 2015 +0300 +++ b/src/http/ngx_http_spdy.c Mon Mar 23 21:04:13 2015 +0300 @@ -662,6 +662,7 @@ ngx_http_spdy_write_handler(ngx_event_t ngx_int_t ngx_http_spdy_send_output_queue(ngx_http_spdy_connection_t *sc) { + int tcp_nodelay; ngx_chain_t *cl; ngx_event_t *wev; ngx_connection_t *c; @@ -710,6 +711,44 @@ ngx_http_spdy_send_output_queue(ngx_http goto error; } + if (c->tcp_nopush == NGX_TCP_NOPUSH_SET) { + if (ngx_tcp_push(c->fd) == -1) { + ngx_connection_error(c, ngx_socket_errno, ngx_tcp_push_n " failed"); + goto error; + } + + c->tcp_nopush = NGX_TCP_NOPUSH_UNSET; + tcp_nodelay = ngx_tcp_nodelay_and_tcp_nopush ? 1 : 0; + + } else { + tcp_nodelay = 1; + } + + if (tcp_nodelay + && clcf->tcp_nodelay + && c->tcp_nodelay == NGX_TCP_NODELAY_UNSET) + { + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "tcp_nodelay"); + + if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY, + (const void *) &tcp_nodelay, sizeof(int)) + == -1) + { +#if (NGX_SOLARIS) + /* Solaris returns EINVAL if a socket has been shut down */ + c->log_error = NGX_ERROR_IGNORE_EINVAL; +#endif + + ngx_connection_error(c, ngx_socket_errno, + "setsockopt(TCP_NODELAY) failed"); + + c->log_error = NGX_ERROR_INFO; + goto error; + } + + c->tcp_nodelay = NGX_TCP_NODELAY_SET; + } + if (cl) { ngx_add_timer(wev, clcf->send_timeout); @@ -3321,10 +3360,8 @@ ngx_http_spdy_close_stream_handler(ngx_e void ngx_http_spdy_close_stream(ngx_http_spdy_stream_t *stream, ngx_int_t rc) { - int tcp_nodelay; ngx_event_t *ev; - ngx_connection_t *c, *fc; - ngx_http_core_loc_conf_t *clcf; + ngx_connection_t *fc; ngx_http_spdy_stream_t **index, *s; ngx_http_spdy_srv_conf_t *sscf; ngx_http_spdy_connection_t *sc; @@ -3350,54 +3387,6 @@ ngx_http_spdy_close_stream(ngx_http_spdy { sc->connection->error = 1; } - - } else { - c = sc->connection; - - if (c->tcp_nopush == NGX_TCP_NOPUSH_SET) { - if (ngx_tcp_push(c->fd) == -1) { - ngx_connection_error(c, ngx_socket_errno, - ngx_tcp_push_n " failed"); - c->error = 1; - tcp_nodelay = 0; - - } else { - c->tcp_nopush = NGX_TCP_NOPUSH_UNSET; - tcp_nodelay = ngx_tcp_nodelay_and_tcp_nopush ? 1 : 0; - } - - } else { - tcp_nodelay = 1; - } - - clcf = ngx_http_get_module_loc_conf(stream->request, - ngx_http_core_module); - - if (tcp_nodelay - && clcf->tcp_nodelay - && c->tcp_nodelay == NGX_TCP_NODELAY_UNSET) - { - ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "tcp_nodelay"); - - if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY, - (const void *) &tcp_nodelay, sizeof(int)) - == -1) - { -#if (NGX_SOLARIS) - /* Solaris returns EINVAL if a socket has been shut down */ - c->log_error = NGX_ERROR_IGNORE_EINVAL; -#endif - - ngx_connection_error(c, ngx_socket_errno, - "setsockopt(TCP_NODELAY) failed"); - - c->log_error = NGX_ERROR_INFO; - c->error = 1; - - } else { - c->tcp_nodelay = NGX_TCP_NODELAY_SET; - } - } } if (sc->stream == stream) { From vbart at nginx.com Fri Mar 27 16:59:58 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 27 Mar 2015 16:59:58 +0000 Subject: [nginx] Events: made posted events macros safe. Message-ID: details: http://hg.nginx.org/nginx/rev/3d4730eada9c branches: changeset: 6060:3d4730eada9c user: Valentin Bartenev date: Fri Mar 27 19:57:15 2015 +0300 description: Events: made posted events macros safe. 
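To see why the parentheses added by the patch below matter, consider a generic example (made-up names, not nginx code): a macro that uses its parameter without parentheses misexpands as soon as the argument is anything more complex than a plain identifier, for instance a cast expression.

#define MARK_POSTED_UNSAFE(ev)   ev->posted = 1
#define MARK_POSTED_SAFE(ev)     (ev)->posted = 1

/*
 * MARK_POSTED_UNSAFE((my_event_t *) ptr) expands to
 *
 *     (my_event_t *) ptr->posted = 1
 *
 * where the cast now applies to ptr->posted rather than to ptr, so the
 * statement no longer compiles (and with other argument expressions it can
 * silently do the wrong thing), while
 *
 *     MARK_POSTED_SAFE((my_event_t *) ptr)
 *
 * expands to
 *
 *     ((my_event_t *) ptr)->posted = 1
 *
 * which is what the caller meant.  Wrapping (ev) throughout
 * ngx_post_event() and ngx_delete_posted_event() removes the same class
 * of problem from those macros.
 */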
diffstat: src/event/ngx_event_posted.h | 16 ++++++++-------- 1 files changed, 8 insertions(+), 8 deletions(-) diffs (36 lines): diff -r c81d79a7befd -r 3d4730eada9c src/event/ngx_event_posted.h --- a/src/event/ngx_event_posted.h Mon Mar 23 21:04:13 2015 +0300 +++ b/src/event/ngx_event_posted.h Fri Mar 27 19:57:15 2015 +0300 @@ -16,24 +16,24 @@ #define ngx_post_event(ev, q) \ \ - if (!ev->posted) { \ - ev->posted = 1; \ - ngx_queue_insert_tail(q, &ev->queue); \ + if (!(ev)->posted) { \ + (ev)->posted = 1; \ + ngx_queue_insert_tail(q, &(ev)->queue); \ \ - ngx_log_debug1(NGX_LOG_DEBUG_CORE, ev->log, 0, "post event %p", ev); \ + ngx_log_debug1(NGX_LOG_DEBUG_CORE, (ev)->log, 0, "post event %p", ev);\ \ } else { \ - ngx_log_debug1(NGX_LOG_DEBUG_CORE, ev->log, 0, \ + ngx_log_debug1(NGX_LOG_DEBUG_CORE, (ev)->log, 0, \ "update posted event %p", ev); \ } #define ngx_delete_posted_event(ev) \ \ - ev->posted = 0; \ - ngx_queue_remove(&ev->queue); \ + (ev)->posted = 0; \ + ngx_queue_remove(&(ev)->queue); \ \ - ngx_log_debug1(NGX_LOG_DEBUG_CORE, ev->log, 0, \ + ngx_log_debug1(NGX_LOG_DEBUG_CORE, (ev)->log, 0, \ "delete posted event %p", ev); From vbart at nginx.com Fri Mar 27 18:21:12 2015 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 27 Mar 2015 18:21:12 +0000 Subject: [nginx] Events: fixed possible crash on start or reload. Message-ID: details: http://hg.nginx.org/nginx/rev/953ef81705e1 branches: changeset: 6061:953ef81705e1 user: Valentin Bartenev date: Fri Mar 27 21:19:20 2015 +0300 description: Events: fixed possible crash on start or reload. The main thread could wake up and start processing the notify event before the handler was set. diffstat: src/event/modules/ngx_epoll_module.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (20 lines): diff -r 3d4730eada9c -r 953ef81705e1 src/event/modules/ngx_epoll_module.c --- a/src/event/modules/ngx_epoll_module.c Fri Mar 27 19:57:15 2015 +0300 +++ b/src/event/modules/ngx_epoll_module.c Fri Mar 27 21:19:20 2015 +0300 @@ -683,14 +683,14 @@ ngx_epoll_notify(ngx_event_handler_pt ha { static uint64_t inc = 1; + notify_event.data = handler; + if ((size_t) write(notify_fd, &inc, sizeof(uint64_t)) != sizeof(uint64_t)) { ngx_log_error(NGX_LOG_ALERT, notify_event.log, ngx_errno, "write() to eventfd %d failed", notify_fd); return NGX_ERROR; } - notify_event.data = handler; - return NGX_OK; } From ian.labbe at gmail.com Sun Mar 29 18:57:21 2015 From: ian.labbe at gmail.com (Ian Labbé) Date: Sun, 29 Mar 2015 14:57:21 -0400 Subject: cycle conf versus on command conf Message-ID: I would like to do a load-balancing redis module. I'll have groups of nodes defined as in the example below. Do I have to do that with the create_conf member of the core_module struct, or in the block_conf command "redis"? Here is an example of the conf in nginx.conf: redis { redis_group groupName1 { redis_node addr port weight; redis_node addr port weight; redis_node addr port weight; } redis_group groupName2 { redis_node addr port weight; } } So, is it better to do the config when the "redis" command is called, or in the create_conf core_module member when it is called in ngx_cycle.c? Thank you for your help. Ian -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fdasilva at ingima.com Mon Mar 30 14:38:59 2015 From: fdasilva at ingima.com (Filipe DA SILVA) Date: Mon, 30 Mar 2015 14:38:59 +0000 Subject: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 Message-ID: Hi, Thanks to my company, which let me work on this subject, it's done. An additional limitation: only the OCSP URL of the first certificate is used to make the OCSP request. Regards, Filipe www.ingima.com ________________________________________ From: Filipe da Silva Date: 2015-03-25 10:08 GMT+01:00 Subject: Re: [PATCH] Multiple certificate support with OpenSSL >= 1.0.2 To: "nginx-devel at nginx.org" Hi. As I have some spare time ahead of me, I can work on this subject, if you agree. I will send you a PM about the details. Regards, Filipe 2015-03-24 22:20 GMT+01:00 kyprizel : > Albert, this patch will not be accepted. Need to write another one, maybe > I'll try to do it later. > > On Tue, Mar 24, 2015 at 7:33 PM, Albert Casademont > wrote: >> >> Hi all, how did this end? Being considered for 1.9? >> >> Thanks! >> >> >> On Tue, Mar 17, 2015 at 8:20 PM, Maxim Dounin wrote: >>> >>> Hello! >>> >>> On Tue, Mar 17, 2015 at 09:38:42PM +0300, kyprizel wrote: >>> >>> > Sure it should be tested (there can be some memory leaks). >>> > Need to know if it's ideologically acceptable. >>> >>> I've provided some comments in the reply to your off-list message. >>> >>> -- >>> Maxim Dounin >>> http://nginx.org/ >>> -------------- next part -------------- A non-text attachment was scrubbed... Name: 00-SplitMethod.diff Type: application/octet-stream Size: 1936 bytes Desc: 00-SplitMethod.diff URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 01-AddCertList.diff Type: application/octet-stream Size: 7104 bytes Desc: 01-AddCertList.diff URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 02-AddStaplingCertIssuerList.diff Type: application/octet-stream Size: 7676 bytes Desc: 02-AddStaplingCertIssuerList.diff URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 03-FixIndentation.diff Type: application/octet-stream Size: 4440 bytes Desc: 03-FixIndentation.diff URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 04-MultiCertSupport.diff Type: application/octet-stream Size: 10253 bytes Desc: 04-MultiCertSupport.diff URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 05-MultiCertSupport2.patch Type: application/octet-stream Size: 15113 bytes Desc: 05-MultiCertSupport2.patch URL: From pluknet at nginx.com Tue Mar 31 16:23:26 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 31 Mar 2015 16:23:26 +0000 Subject: [nginx] Fixed invalid access to complex value defined as an empt... Message-ID: details: http://hg.nginx.org/nginx/rev/173561dfd567 branches: changeset: 6062:173561dfd567 user: Sergey Kandaurov date: Tue Mar 31 17:45:50 2015 +0300 description: Fixed invalid access to complex value defined as an empty string. Found by Valgrind.
diffstat: src/http/modules/ngx_http_headers_filter_module.c | 6 +++--- src/http/ngx_http_special_response.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diffs (48 lines): diff -r 953ef81705e1 -r 173561dfd567 src/http/modules/ngx_http_headers_filter_module.c --- a/src/http/modules/ngx_http_headers_filter_module.c Fri Mar 27 21:19:20 2015 +0300 +++ b/src/http/modules/ngx_http_headers_filter_module.c Tue Mar 31 17:45:50 2015 +0300 @@ -378,7 +378,7 @@ ngx_http_parse_expires(ngx_str_t *value, } } - if (value->data[0] == '@') { + if (value->len && value->data[0] == '@') { value->data++; value->len--; minus = 0; @@ -390,12 +390,12 @@ ngx_http_parse_expires(ngx_str_t *value, *expires = NGX_HTTP_EXPIRES_DAILY; - } else if (value->data[0] == '+') { + } else if (value->len && value->data[0] == '+') { value->data++; value->len--; minus = 0; - } else if (value->data[0] == '-') { + } else if (value->len && value->data[0] == '-') { value->data++; value->len--; minus = 1; diff -r 953ef81705e1 -r 173561dfd567 src/http/ngx_http_special_response.c --- a/src/http/ngx_http_special_response.c Fri Mar 27 21:19:20 2015 +0300 +++ b/src/http/ngx_http_special_response.c Tue Mar 31 17:45:50 2015 +0300 @@ -553,7 +553,7 @@ ngx_http_send_error_page(ngx_http_reques return NGX_ERROR; } - if (uri.data[0] == '/') { + if (uri.len && uri.data[0] == '/') { if (err_page->value.lengths) { ngx_http_split_args(r, &uri, &args); @@ -570,7 +570,7 @@ ngx_http_send_error_page(ngx_http_reques return ngx_http_internal_redirect(r, &uri, &args); } - if (uri.data[0] == '@') { + if (uri.len && uri.data[0] == '@') { return ngx_http_named_location(r, &uri); }
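For context, the values guarded above are complex values, i.e. they may contain variables that are only evaluated at run time. A hypothetical configuration along the following lines (illustrative only, not taken from the commit; the map and the variable name are made up) can yield an empty string, and before this fix the data[0] checks would read a byte that does not belong to the value:

# inside the http {} block
map $uri $expiry {
    default   "";        # empty value at run time
    ~\.css$   7d;
}

server {
    listen   8080;
    expires  $expiry;    # ngx_http_parse_expires() now checks value->len first
}

The error_page changes cover the same situation when the error_page URI is given through a variable that evaluates to an empty string.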