From aviram at adallom.com Sun Sep 1 08:19:06 2013 From: aviram at adallom.com (Aviram Cohen) Date: Sun, 1 Sep 2013 11:19:06 +0300 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: <20130828004143.GE2748@mdounin.ru> References: <20130820140912.GF19334@mdounin.ru> <20130821143033.GP19334@mdounin.ru> <20130828004143.GE2748@mdounin.ru> Message-ID: Hello! On Wed, Aug 28, 2013 at 3:41 AM, Maxim Dounin wrote: > Hello! > [...] > > if (conf->upstream.ssl > && ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, > &conf->upstream.ssl_certificate > conf->upstream.ssl_verify_depth) > != NGX_OK) > { > ... > } > > Additional question is what happens in a configuration like > > location / { > proxy_pass https://example.com; > proxy_ssl_verify on; > proxy_ssl_trusted_ceritifcate example.crt; > > if ($foo) { > # do nothing > } > } > > or the same with a nested location instead of "if". Quick look > suggest it will result in trusted certs loaded twice (and stale > alerts later due to how OpenSSL handles this). > I have tried this configuration (and also a nested location), and didn't see that Nginx loaded the same certificate twice (I've actually put a breakpoint on the if clause in which ngx_ssl_trusted_certificate is called, and it was called only once for the location. Can you specify exactly how to reproduce this case? Regards, Aviram From ranier at cultura.com.br Mon Sep 2 01:51:43 2013 From: ranier at cultura.com.br (ranier at cultura.com.br) Date: Sun, 1 Sep 2013 22:51:43 -0300 Subject: ngx_array.c small changes Message-ID: Hi, I?ve made small changes in \core\ngx_array.c, I think is a little better. I hope it will be accepted. Thanks. Ranier Vilela -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ngx_array.c.dif Type: application/octet-stream Size: 1398 bytes Desc: ngx_array.c.dif URL: From mdounin at mdounin.ru Mon Sep 2 01:59:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Sep 2013 05:59:17 +0400 Subject: ngx_array.c small changes In-Reply-To: References: Message-ID: <20130902015917.GO29448@mdounin.ru> Hello! On Sun, Sep 01, 2013 at 10:51:43PM -0300, ranier at cultura.com.br wrote: -- Maxim Dounin http://nginx.org/en/donation.html -------------- next part -------------- A non-text attachment was scrubbed... Name: ngx_array.c.dif Type: application/octet-stream Size: 1398 bytes Desc: ngx_array.c.dif URL: From mdounin at mdounin.ru Mon Sep 2 02:05:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Sep 2013 06:05:32 +0400 Subject: ngx_array.c small changes In-Reply-To: References: Message-ID: <20130902020532.GP29448@mdounin.ru> Hello! On Sun, Sep 01, 2013 at 10:51:43PM -0300, ranier at cultura.com.br wrote: > Hi, > I?ve made small changes in \core\ngx_array.c, I think is a little better. > I hope it will be accepted. No, sorry, your changes break style. We generally follow C89 rules and define all variables at function scope. See also http://nginx.org/en/docs/contributing_changes.html for basic tips. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Mon Sep 2 03:57:18 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 02 Sep 2013 03:57:18 +0000 Subject: [nginx] Assume the HTTP/1.0 version by default. Message-ID: details: http://hg.nginx.org/nginx/rev/62be77b0608f branches: changeset: 5354:62be77b0608f user: Valentin Bartenev date: Mon Sep 02 03:45:14 2013 +0400 description: Assume the HTTP/1.0 version by default. It is believed to be better than fallback to HTTP/0.9, because most of the clients at present time support HTTP/1.0. It allows nginx to return error response code for them in cases when it fail to parse request line, and therefore fail to detect client protocol version. 
Even if the client does not support HTTP/1.0, this assumption should not cause any harm, since from the HTTP/0.9 point of view it is still a valid response. diffstat: src/http/ngx_http_request.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 1608b1135a1d -r 62be77b0608f src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Fri Aug 30 21:44:16 2013 +0400 +++ b/src/http/ngx_http_request.c Mon Sep 02 03:45:14 2013 +0400 @@ -571,6 +571,7 @@ ngx_http_create_request(ngx_connection_t r->start_msec = tp->msec; r->method = NGX_HTTP_UNKNOWN; + r->http_version = NGX_HTTP_VERSION_10; r->headers_in.content_length_n = -1; r->headers_in.keep_alive_n = -1; From vbart at nginx.com Mon Sep 2 04:10:40 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 02 Sep 2013 04:10:40 +0000 Subject: [nginx] Added the NGX_EBADF define. Message-ID: details: http://hg.nginx.org/nginx/rev/32847478c2c1 branches: changeset: 5355:32847478c2c1 user: Valentin Bartenev date: Mon Sep 02 08:07:44 2013 +0400 description: Added the NGX_EBADF define.
diffstat: src/event/modules/ngx_select_module.c | 2 +- src/os/unix/ngx_errno.h | 1 + src/os/win32/ngx_errno.h | 1 + 3 files changed, 3 insertions(+), 1 deletions(-) diffs (34 lines): diff -r 62be77b0608f -r 32847478c2c1 src/event/modules/ngx_select_module.c --- a/src/event/modules/ngx_select_module.c Mon Sep 02 03:45:14 2013 +0400 +++ b/src/event/modules/ngx_select_module.c Mon Sep 02 08:07:44 2013 +0400 @@ -288,7 +288,7 @@ ngx_select_process_events(ngx_cycle_t *c ngx_log_error(level, cycle->log, err, "select() failed"); - if (err == EBADF) { + if (err == NGX_EBADF) { ngx_select_repair_fd_sets(cycle); } diff -r 62be77b0608f -r 32847478c2c1 src/os/unix/ngx_errno.h --- a/src/os/unix/ngx_errno.h Mon Sep 02 03:45:14 2013 +0400 +++ b/src/os/unix/ngx_errno.h Mon Sep 02 08:07:44 2013 +0400 @@ -50,6 +50,7 @@ typedef int ngx_err_t; #define NGX_EILSEQ EILSEQ #define NGX_ENOMOREFILES 0 #define NGX_ELOOP ELOOP +#define NGX_EBADF EBADF #if (NGX_HAVE_OPENAT) #define NGX_EMLINK EMLINK diff -r 62be77b0608f -r 32847478c2c1 src/os/win32/ngx_errno.h --- a/src/os/win32/ngx_errno.h Mon Sep 02 03:45:14 2013 +0400 +++ b/src/os/win32/ngx_errno.h Mon Sep 02 08:07:44 2013 +0400 @@ -52,6 +52,7 @@ typedef DWORD ngx_e #define NGX_ENOMOREFILES ERROR_NO_MORE_FILES #define NGX_EILSEQ ERROR_NO_UNICODE_TRANSLATION #define NGX_ELOOP 0 +#define NGX_EBADF WSAEBADF #define NGX_EALREADY WSAEALREADY #define NGX_EINVAL WSAEINVAL From vbart at nginx.com Mon Sep 2 04:10:41 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 02 Sep 2013 04:10:41 +0000 Subject: [nginx] Disable symlinks: use O_PATH to open path components. Message-ID: details: http://hg.nginx.org/nginx/rev/acd51b0f6fd4 branches: changeset: 5356:acd51b0f6fd4 user: Valentin Bartenev date: Mon Sep 02 08:07:59 2013 +0400 description: Disable symlinks: use O_PATH to open path components. It was introduced in Linux 2.6.39, glibc 2.14 and allows to obtain file descriptors without actually opening files. 
Thus made it possible to traverse path with openat() syscalls without the need to have read permissions for path components. It is effectively emulates O_SEARCH which is missing on Linux. O_PATH is used in combination with O_RDONLY. The last one is ignored if O_PATH is used, but it allows nginx to not fail when it was built on modern system (i.e. glibc 2.14+) and run with a kernel older than 2.6.39. Then O_PATH is unknown to the kernel and ignored, while O_RDONLY is used. Sadly, fstat() is not working with O_PATH descriptors till Linux 3.6. As a workaround we fallback to fstatat() with the AT_EMPTY_PATH flag that was introduced at the same time as O_PATH. diffstat: auto/os/linux | 16 ++++++++++ src/core/ngx_open_file_cache.c | 67 ++++++++++++++++++++++++++++++++++++++++++ src/os/unix/ngx_files.h | 3 + 3 files changed, 86 insertions(+), 0 deletions(-) diffs (136 lines): diff -r 32847478c2c1 -r acd51b0f6fd4 auto/os/linux --- a/auto/os/linux Mon Sep 02 08:07:44 2013 +0400 +++ b/auto/os/linux Mon Sep 02 08:07:59 2013 +0400 @@ -68,6 +68,22 @@ if [ $ngx_found = yes ]; then fi +# O_PATH and AT_EMPTY_PATH were introduced in 2.6.39, glibc 2.14 + +ngx_feature="O_PATH" +ngx_feature_name="NGX_HAVE_O_PATH" +ngx_feature_run=no +ngx_feature_incs="#include + #include + #include " +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="int fd; struct stat sb; + fd = openat(AT_FDCWD, \".\", O_PATH|O_DIRECTORY|O_NOFOLLOW); + if (fstatat(fd, \"\", &sb, AT_EMPTY_PATH) != 0) return 1" +. 
auto/feature + + # sendfile() CC_AUX_FLAGS="$cc_aux_flags -D_GNU_SOURCE" diff -r 32847478c2c1 -r acd51b0f6fd4 src/core/ngx_open_file_cache.c --- a/src/core/ngx_open_file_cache.c Mon Sep 02 08:07:44 2013 +0400 +++ b/src/core/ngx_open_file_cache.c Mon Sep 02 08:07:59 2013 +0400 @@ -25,6 +25,10 @@ static void ngx_open_file_cache_cleanup( #if (NGX_HAVE_OPENAT) static ngx_fd_t ngx_openat_file_owner(ngx_fd_t at_fd, const u_char *name, ngx_int_t mode, ngx_int_t create, ngx_int_t access, ngx_log_t *log); +#if (NGX_HAVE_O_PATH) +static ngx_int_t ngx_file_o_path_info(ngx_fd_t fd, ngx_file_info_t *fi, + ngx_log_t *log); +#endif #endif static ngx_fd_t ngx_open_file_wrapper(ngx_str_t *name, ngx_open_file_info_t *of, ngx_int_t mode, ngx_int_t create, @@ -517,10 +521,17 @@ ngx_openat_file_owner(ngx_fd_t at_fd, co goto failed; } +#if (NGX_HAVE_O_PATH) + if (ngx_file_o_path_info(fd, &fi, log) == NGX_ERROR) { + err = ngx_errno; + goto failed; + } +#else if (ngx_fd_info(fd, &fi) == NGX_FILE_ERROR) { err = ngx_errno; goto failed; } +#endif if (fi.st_uid != atfi.st_uid) { err = NGX_ELOOP; @@ -541,8 +552,64 @@ failed: return NGX_INVALID_FILE; } + +#if (NGX_HAVE_O_PATH) + +static ngx_int_t +ngx_file_o_path_info(ngx_fd_t fd, ngx_file_info_t *fi, ngx_log_t *log) +{ + static ngx_uint_t use_fstat = 1; + + /* + * In Linux 2.6.39 the O_PATH flag was introduced that allows to obtain + * a descriptor without actually opening file or directory. It requires + * less permissions for path components, but till Linux 3.6 fstat() returns + * EBADF on such descriptors, and fstatat() with the AT_EMPTY_PATH flag + * should be used instead. + * + * Three scenarios are handled in this function: + * + * 1) The kernel is newer than 3.6 or fstat() with O_PATH support was + * backported by vendor. Then fstat() is used. + * + * 2) The kernel is newer than 2.6.39 but older than 3.6. 
In this case + * the first call of fstat() returns EBADF and we fallback to fstatat() + * with AT_EMPTY_PATH which was introduced at the same time as O_PATH. + * + * 3) The kernel is older than 2.6.39 but nginx was build with O_PATH + * support. Since descriptors are opened with O_PATH|O_RDONLY flags + * and O_PATH is ignored by the kernel then the O_RDONLY flag is + * actually used. In this case fstat() just works. + */ + + if (use_fstat) { + if (ngx_fd_info(fd, fi) != NGX_FILE_ERROR) { + return NGX_OK; + } + + if (ngx_errno != NGX_EBADF) { + return NGX_ERROR; + } + + ngx_log_error(NGX_LOG_NOTICE, log, 0, + "fstat(O_PATH) failed with EBADF, " + "switching to fstatat(AT_EMPTY_PATH)"); + + use_fstat = 0; + return ngx_file_o_path_info(fd, fi, log); + } + + if (ngx_file_at_info(fd, "", fi, AT_EMPTY_PATH) != NGX_FILE_ERROR) { + return NGX_OK; + } + + return NGX_ERROR; +} + #endif +#endif /* NGX_HAVE_OPENAT */ + static ngx_fd_t ngx_open_file_wrapper(ngx_str_t *name, ngx_open_file_info_t *of, diff -r 32847478c2c1 -r acd51b0f6fd4 src/os/unix/ngx_files.h --- a/src/os/unix/ngx_files.h Mon Sep 02 08:07:44 2013 +0400 +++ b/src/os/unix/ngx_files.h Mon Sep 02 08:07:59 2013 +0400 @@ -91,6 +91,9 @@ typedef struct { #elif defined(O_EXEC) #define NGX_FILE_SEARCH (O_EXEC|NGX_FILE_DIRECTORY) +#elif (NGX_HAVE_O_PATH) +#define NGX_FILE_SEARCH (O_PATH|O_RDONLY|NGX_FILE_DIRECTORY) + #else #define NGX_FILE_SEARCH (O_RDONLY|NGX_FILE_DIRECTORY) #endif From a.marinov at ucdn.com Mon Sep 2 06:01:51 2013 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Mon, 2 Sep 2013 09:01:51 +0300 Subject: Sharing data when download the same object from upstream In-Reply-To: References: Message-ID: The patch in not in the milling list. We just spoke about the same problem before in the list with other developers. Unfortunately I cannot share the patch because it has been made for commercial project. However I am going to ask for permition to share it. 
On Fri, Aug 30, 2013 at 12:04 PM, SplitIce wrote: > Is the patch on this mailing list (forgive me I cant see it)? > > Ill happily test it for you, although for me to get any personal benefit > there would need to be a size restriction since 99.9% of requests are just > small HTML documents and would not benifit. Also the standard caching > (headers that result in a cache miss e.g cookies, cache-control) would have > to be correct. > > At the very least Ill read over it and see if I spot anything / have > recommendations. > > Regards, > Mathew > > > On Fri, Aug 30, 2013 at 6:25 PM, Anatoli Marinov wrote: > >> I discussed the idea years ago here in the mailing list but nobody from >> the main developers liked it. However I developed a patch and we have this >> in production more than 1 year and it works fine. >> >> Just think for the following case: >> You have a new file which is 1 GB and it is located far from the cache. >> Even so you can download it with 5 MBps through cache upstream so you need >> 200 seconds to get it. This file is a video file and because it is a new is >> placed on the first page. For first 30 seconds your caching server may >> receive 1000 requests (or even more) for this file and you cannot block >> all new requests for 170 seconds ?!?! to wait for file to be downloaded. >> Also all requests will be send to the origin and your proxy will generate 1 >> TB traffic instead of 1 GB. >> >> It will be amazing if this feature will be implemented as a part of the >> common caching mechanism. >> >> >> >> On Fri, Aug 30, 2013 at 11:42 AM, SplitIce wrote: >> >>> This is an interesting idea, while I don't see it being all that useful >>> for most applications there are some that could really benefit (large file >>> proxying first comes to mind). If it could be achieved without introducing >>> too much of a CPU overhead in keeping track of the requests & available >>> parts it would be quite interesting. 
>>> >>> I would like to see an option to supply a minimum size to restrict this >>> feature too (either by after x bytes are passed add to map/rbtree whatever >>> or based off content-length). >>> >>> Regards, >>> Mathew >>> >>> >>> On Fri, Aug 30, 2013 at 6:01 PM, Anatoli Marinov wrote: >>> >>>> Hello, >>>> >>>> >>>> On Wed, Aug 28, 2013 at 7:56 PM, Alex Garz?o wrote: >>>> >>>>> Hello Anatoli, >>>>> >>>>> Thanks for your reply. I will appreciate (a lot) your help :-) >>>>> >>>>> I'm trying to fix the code with the following requirements in mind: >>>>> >>>>> 1) We were upstreams/downstreams with good (and bad) links; in >>>>> general, upstream speed is more than downstream speed but, in some >>>>> situations, the downstream speed is a lot more quickly than the >>>>> upstream speed; >>>>> >>>> I think this is asynchronous and if the upstream is faster than the >>>> downstream it save the data to cached file faster and the downstream gets >>>> the data from the file instead of the mem buffers. >>>> >>>> >>>>> 2) I'm trying to disassociate the upstream speed from the downstream >>>>> speed. The first request (request that already will connect in the >>>>> upstream) download data to temp file, but no longer sends data to >>>>> downstream. I disabled this because, in my understand, if the first >>>>> request has a slow downstream, all others downstreams will wait data >>>>> to be sent to this slow downstream. >>>>> >>>> I think this is not necessary. >>>> >>>> >>>>> >>>>> My first doubt is: Need I worry about downstream/upstream speed? >>>>> >>>>> No >>>> >>>> >>>>> Well, I will try to explain what I did in the code: >>>>> >>>>> 1) I created a rbtree (currrent_downloads) that keeps the current >>>>> downloads (one rbtree per upstream). 
Each node keeps the first request >>>>> (request that already will connect into upstream) and a list >>>>> (download_info_list) that will keep two fields: (a) request waiting >>>>> data from the temp file and (b) file offset already sent from the temp >>>>> file (last_offset); >>>>> >>>>> >>>> I have the same but in ordered array (simple implementation). Anyway >>>> the rbtree will do the same. But this structure should be in shared memory >>>> because all workers should know which files are currently in downloading >>>> from upstream state. The should exist in tmp directory. >>>> >>>> >>>>> 2) In ngx_http_upstream_init_request(), when the object isn't in the >>>>> cache, before connect into upstream, I check if the object is in >>>>> rbtree (current_downloads); >>>>> >>>>> 3) When the object isn't in current_downloads, I add a node that >>>>> contains the first request (equal to current request) and I add the >>>>> current request into the download_info_list. Beyond that, I create a >>>>> timer event (polling) that will check all requests in >>>>> download_info_list and verify if there are data in temp file that >>>>> already not sent to the downstream. I create one timer event per >>>>> object [1]. >>>>> >>>>> 4) When the object is in current_downloads, I add the request into >>>>> download_info_list and finalize ngx_http_upstream_init_request() (I >>>>> just return without execute ngx_http_upstream_finalize_request()); >>>>> >>>>> 5) I have disabled (in ngx_event_pipe) the code that sends data to >>>>> downstream (requirement 2); >>>>> >>>>> 6) In the polling event, I get the current temp file offset >>>>> (first_request->upstream->pipe->temp_file->offset) and I check in the >>>>> download_info_list if this is > than last_offset. 
If true, I send more >>>>> data to downstream with the ngx_http_upstream_cache_send_partial (code >>>>> bellow); >>>>> >>>>> 7) In the polling event, when pipe->upstream_done || >>>>> pipe->upstream_eof || pipe->upstream_error, and all data were sent to >>>>> downstream, I execute ngx_http_upstream_finalize_request for all >>>>> requests; >>>>> >>>>> 8) I added a bit flag (first_download_request) in ngx_http_request_t >>>>> struct to avoid request to be finished before all requests were >>>>> completed. In ngx_http_upstream_finalize_request() I check this flag. >>>>> But, in really, I don't have sure if is necessary avoid this >>>>> situation... >>>>> >>>>> >>>>> Bellow you can see the ngx_http_upstream_cache_send_partial code: >>>>> >>>>> >>>>> ///////////// >>>>> static ngx_int_t >>>>> ngx_http_upstream_cache_send_partial(ngx_http_request_t *r, >>>>> ngx_temp_file_t *file, off_t offset, off_t bytes, unsigned last_buf) >>>>> { >>>>> ngx_buf_t *b; >>>>> ngx_chain_t out; >>>>> ngx_http_cache_t *c; >>>>> >>>>> c = r->cache; >>>>> >>>>> /* we need to allocate all before the header would be sent */ >>>>> >>>>> b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); >>>>> if (b == NULL) { >>>>> return NGX_HTTP_INTERNAL_SERVER_ERROR; >>>>> } >>>>> >>>>> b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t)); >>>>> if (b->file == NULL) { >>>>> return NGX_HTTP_INTERNAL_SERVER_ERROR; >>>>> } >>>>> >>>>> /* FIX: need to run ngx_http_send_header(r) once... */ >>>>> >>>>> b->file_pos = offset; >>>>> b->file_last = bytes; >>>>> >>>>> b->in_file = 1; >>>>> b->last_buf = last_buf; >>>>> b->last_in_chain = 1; >>>>> >>>>> b->file->fd = file->file.fd; >>>>> b->file->name = file->file.name; >>>>> b->file->log = r->connection->log; >>>>> >>>>> out.buf = b; >>>>> out.next = NULL; >>>>> >>>>> return ngx_http_output_filter(r, &out); >>>>> } >>>>> //////////// >>>>> >>>>> My second doubt is: Could I just fix ngx_event_pipe to send to all >>>>> requests (instead of to send to one request)? 
And, if true, >>>>> ngx_http_output_filter can be used to send a big chunk at first time >>>>> (300 MB or more) and little chunks after that? >>>>> >>>>> >>>> Use smaller chunks. >>>> >>>> Thanks in advance for your attention :-) >>>>> >>>>> [1] I know that "polling event" is a bad approach with NGINX, but I >>>>> don't know how to fix this. For example, the upstream download can be >>>>> very quickly, and is possible that I need send data to downstream in >>>>> little chunks. Upstream (in NGINX) is socket event based, but, when >>>>> download from upstream finished, which event can I expect? >>>>> >>>>> Regards. >>>>> -- >>>>> Alex Garz?o >>>>> Projetista de Software >>>>> Azion Technologies >>>>> alex.garzao (at) azion.com >>>>> >>>>> _______________________________________________ >>>>> nginx-devel mailing list >>>>> nginx-devel at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >>>>> >>>> >>>> You are on a right way. Just keep digging. Do not forget to turn off >>>> this features when you have flv or mp4 seek, partial requests and >>>> content-ecoding different than identity because you will send broken files >>>> to the browsers. >>>> >>>> _______________________________________________ >>>> nginx-devel mailing list >>>> nginx-devel at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >>>> >>> >>> >>> _______________________________________________ >>> nginx-devel mailing list >>> nginx-devel at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >>> >> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Sep 2 12:09:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Sep 2013 16:09:59 +0400 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: References: <20130820140912.GF19334@mdounin.ru> <20130821143033.GP19334@mdounin.ru> <20130828004143.GE2748@mdounin.ru> Message-ID: <20130902120959.GB65634@mdounin.ru> Hello! On Sun, Sep 01, 2013 at 11:19:06AM +0300, Aviram Cohen wrote: > Hello! > > On Wed, Aug 28, 2013 at 3:41 AM, Maxim Dounin wrote: > > Hello! > > > [...] > > > > if (conf->upstream.ssl > > && ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, > > &conf->upstream.ssl_certificate > > conf->upstream.ssl_verify_depth) > > != NGX_OK) > > { > > ... > > } > > > > Additional question is what happens in a configuration like > > > > location / { > > proxy_pass https://example.com; > > proxy_ssl_verify on; > > proxy_ssl_trusted_ceritifcate example.crt; > > > > if ($foo) { > > # do nothing > > } > > } > > > > or the same with a nested location instead of "if". Quick look > > suggest it will result in trusted certs loaded twice (and stale > > alerts later due to how OpenSSL handles this). > > > > I have tried this configuration (and also a nested location), and didn't > see that Nginx loaded the same certificate twice (I've actually put > a breakpoint on the if clause in which ngx_ssl_trusted_certificate > is called, and it was called only once for the location. > > Can you specify exactly how to reproduce this case? I was probably wrong here, as the code you added is before the conf->upstream.ssl is inherited. 
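[Editor's note] For context, the directives discussed in this thread would be used roughly as below. At the time of this exchange they are part of Aviram's proposed patch, not stock nginx, so names and semantics follow the thread rather than released documentation; the certificate path is an illustrative placeholder.

```nginx
location / {
    proxy_pass                    https://example.com;
    proxy_ssl_verify              on;
    proxy_ssl_verify_depth        2;
    proxy_ssl_trusted_certificate /etc/nginx/certs/example.crt;
}
```

Maxim's concern above is about what happens when such a location also contains an `if` block or a nested location, since each implicit location would run the configuration merge again.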
-- Maxim Dounin http://nginx.org/en/donation.html From wandenberg at gmail.com Mon Sep 2 14:24:33 2013 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Mon, 2 Sep 2013 11:24:33 -0300 Subject: Help with shared memory usage In-Reply-To: References: <20130701113629.GO20717@mdounin.ru> <20130729171109.GA2130@mdounin.ru> <20130730100931.GD2130@mdounin.ru> Message-ID: Hi Maxim, did you have opportunity to take a look on this patch? Regards, Wandenberg On Wed, Jul 31, 2013 at 12:28 AM, Wandenberg Peixoto wrote: > Hello! > > Thanks for your help. I hope that the patch be OK now. > I don't know if the function and variable names are on nginx pattern. > Feel free to change the patch. > If you have any other point before accept it, will be a pleasure to fix it. > > > --- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300 > +++ src/core/ngx_slab.c 2013-07-31 00:21:08.043034442 -0300 > @@ -615,6 +615,26 @@ fail: > > > static ngx_slab_page_t * > +ngx_slab_merge_with_neighbour(ngx_slab_pool_t *pool, ngx_slab_page_t > *page) > +{ > > + ngx_slab_page_t *neighbour = &page[page->slab]; > + if (((ngx_slab_page_t *) neighbour->prev != NULL) && (neighbour->next > != NULL) && ((neighbour->prev & NGX_SLAB_PAGE_MASK) == NGX_SLAB_PAGE)) { > + page->slab += neighbour->slab; > > + > + ((ngx_slab_page_t *) neighbour->prev)->next = neighbour->next; > + neighbour->next->prev = neighbour->prev; > + > + neighbour->slab = NGX_SLAB_PAGE_FREE; > + neighbour->prev = (uintptr_t) &pool->free; > + neighbour->next = &pool->free; > + > + return page; > + } > + return NULL; > +} > + > + > +static ngx_slab_page_t * > ngx_slab_alloc_pages(ngx_slab_pool_t *pool, ngx_uint_t pages) > { > ngx_slab_page_t *page, *p; > @@ -657,6 +677,19 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po > } > } > > + ngx_flag_t retry = 0; > + for (page = pool->free.next; page != &pool->free;) { > + if (ngx_slab_merge_with_neighbour(pool, page)) { > + retry = 1; > + } else { > + page = page->next; > + } > + } > + > + if 
(retry) { > + return ngx_slab_alloc_pages(pool, pages); > + } > + > ngx_slab_error(pool, NGX_LOG_CRIT, "ngx_slab_alloc() failed: no > memory"); > > return NULL; > @@ -687,6 +720,8 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo > > page->next->prev = (uintptr_t) page; > > pool->free.next = page; > + > + ngx_slab_merge_with_neighbour(pool, page); > } > > > > > > > On Tue, Jul 30, 2013 at 7:09 AM, Maxim Dounin wrote: > >> Hello! >> >> On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote: >> >> [...] >> >> > What would be an alternative to not loop on pool->pages? >> >> Free memory blocks are linked in pool->free list, it should be >> enough to look there. >> >> [...] >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 2 14:49:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Sep 2013 18:49:27 +0400 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: References: Message-ID: <20130902144927.GD65634@mdounin.ru> Hello! (Sorry again for late reply. See below for comments.) On Fri, Aug 02, 2013 at 01:16:53PM +0800, Sepherosa Ziehau wrote: > Here is another round of SO_REUSEPORT support. The plot is changed a > little bit to allow smooth configure reloading and binary upgrading. 
> Here is what happens when so_reuseport is enable (this does not affect > single process model): > - Master creates the listen sockets w/ SO_REUSEPORT, but does not configure them > - The first worker process will inherit the listen sockets created by > master and configure them > - After master forked the first worker process all listen sockets are closed > - The rest of the workers will create their own listen sockets w/ SO_REUSEPORT > - During binary upgrade, listen sockets are no longer passed through > environment variables, since new master will create its own listen > sockets. Well, the old master actually does not have any listen > sockets opened :). > > The idea behind this plot is that at any given time, there is always > one listen socket left, which could inherit the syncaches and pending > sockets on the to-be-closed listen sockets. The inheritance itself is > handled by the kernel; I implemented this inheritance for DragonFlyBSD > recently (http://gitweb.dragonflybsd.org/dragonfly.git/commit/02ad2f0b874fb0a45eb69750219f79f5e8982272). > I am not tracking Linux's code, but I think Linux side will > eventually get (or already got) the proper fix. > > The patch itself: > http://leaf.dragonflybsd.org/~sephe/ngx_soreuseport3.diff > > Configuration reloading and binary upgrading will not be interfered as > w/ the first 2 patches. > > Binary upgrading reverting method 1 ("Send the HUP signal to the old > master process. ...") will not be interfered as w/ the first 2 > patches. There still could be some glitch (but not that worse as w/ > the first 2 patches) if binary upgrading reverting method 2 ("Send the > TERM signal to the new master process. ...") is used. I think we > probably just need to mention that in the document. While this look like better that what was with previous patches (mostly due to inheritance handled by kernel), it still looks very fragile for me. In particular, I really dislike the trick with making first worker process special. 
It's probably should either left in the state "nothing is guaranteed" (with some understanding of what will happen in various common situations like reconfiguration, upgrade, switching so_reuseport on/off) or some way should be found to make things less tricky. Additional question to consider is what happens with security checks? Linux seems to require processs user id match on SO_REUSEPORT sockets, and I would expect this to fail if there are sockets opened both in master and in worker processes; and privileged port checks might cause problems as well. (We've also discussed this here in office serveral times, and it seems that general consensus is that SO_REUSEPORT for TCP balancing isn't really good interface. It would be much easier for everyone if normal workflow with inherited listen socket descriptors just worked. Especially given the fact that in nginx case it's mostly about benchmarking, since in real life load distribution between worker processes is good enough.) -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Sep 2 14:53:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Sep 2013 18:53:34 +0400 Subject: Help with shared memory usage In-Reply-To: References: <20130701113629.GO20717@mdounin.ru> <20130729171109.GA2130@mdounin.ru> <20130730100931.GD2130@mdounin.ru> Message-ID: <20130902145334.GE65634@mdounin.ru> Hello! On Mon, Sep 02, 2013 at 11:24:33AM -0300, Wandenberg Peixoto wrote: > did you have opportunity to take a look on this patch? Not yet, sorry. It's in my TODO and I'll try to look at it this week. Overall it seems good enough, but it certainly needs style/cosmetic cleanup before it can be committed. [...] -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Mon Sep 2 16:54:09 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 02 Sep 2013 16:54:09 +0000 Subject: [nginx] Disable symlinks: removed recursive call of ngx_file_o_p... 
Message-ID: details: http://hg.nginx.org/nginx/rev/659464c695b7 branches: changeset: 5357:659464c695b7 user: Valentin Bartenev date: Mon Sep 02 20:06:03 2013 +0400 description: Disable symlinks: removed recursive call of ngx_file_o_path_info(). It is surplus. diffstat: src/core/ngx_open_file_cache.c | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diffs (11 lines): diff -r acd51b0f6fd4 -r 659464c695b7 src/core/ngx_open_file_cache.c --- a/src/core/ngx_open_file_cache.c Mon Sep 02 08:07:59 2013 +0400 +++ b/src/core/ngx_open_file_cache.c Mon Sep 02 20:06:03 2013 +0400 @@ -596,7 +596,6 @@ ngx_file_o_path_info(ngx_fd_t fd, ngx_fi "switching to fstatat(AT_EMPTY_PATH)"); use_fstat = 0; - return ngx_file_o_path_info(fd, fi, log); } if (ngx_file_at_info(fd, "", fi, AT_EMPTY_PATH) != NGX_FILE_ERROR) { From sepherosa at gmail.com Tue Sep 3 02:31:55 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Tue, 3 Sep 2013 10:31:55 +0800 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: <20130902144927.GD65634@mdounin.ru> References: <20130902144927.GD65634@mdounin.ru> Message-ID: On Mon, Sep 2, 2013 at 10:49 PM, Maxim Dounin wrote: > Hello! > > (Sorry again for late reply. See below for comments.) > > Thank you for the reply. > On Fri, Aug 02, 2013 at 01:16:53PM +0800, Sepherosa Ziehau wrote: > > > Here is another round of SO_REUSEPORT support. The plot is changed a > > little bit to allow smooth configure reloading and binary upgrading. 
> > Here is what happens when so_reuseport is enabled (this does not affect > single process model): > - Master creates the listen sockets w/ SO_REUSEPORT, but does not > configure them > - The first worker process will inherit the listen sockets created by > > master and configure them > - After the master forked the first worker process, all listen sockets are > closed > - The rest of the workers will create their own listen sockets w/ > SO_REUSEPORT > - During binary upgrade, listen sockets are no longer passed through > > environment variables, since the new master will create its own listen > > sockets. Well, the old master actually does not have any listen > > sockets opened :). > > > > The idea behind this plot is that at any given time, there is always > > one listen socket left, which could inherit the syncaches and pending > > sockets on the to-be-closed listen sockets. The inheritance itself is > > handled by the kernel; I implemented this inheritance for DragonFlyBSD > > recently ( > http://gitweb.dragonflybsd.org/dragonfly.git/commit/02ad2f0b874fb0a45eb69750219f79f5e8982272 > ). > > I am not tracking Linux's code, but I think the Linux side will > > eventually get (or already got) the proper fix. > > > > The patch itself: > > http://leaf.dragonflybsd.org/~sephe/ngx_soreuseport3.diff > > > > Configuration reloading and binary upgrading will not be interfered with, as they were > > w/ the first 2 patches. > > > > Binary upgrading reverting method 1 ("Send the HUP signal to the old > > master process. ...") will not be interfered with, as it was w/ the first 2 > > patches. There still could be some glitch (but not as bad as w/ > > the first 2 patches) if binary upgrading reverting method 2 ("Send the > > TERM signal to the new master process. ...") is used. I think we > > probably just need to mention that in the document. > > While this looks better than what was there with the previous patches > (mostly due to inheritance handled by the kernel), it still looks very > fragile to me.
In particular, I really dislike the trick with > making the first worker process special. > > Well, the idea is to keep at least one listen socket opened. Maybe I could find another way in the kernel to make it less tricky. However, that may add an extra syscall or socket option. > It probably should either be left in the state "nothing is > guaranteed" (with some understanding of what will happen in > various common situations like reconfiguration, upgrade, switching > so_reuseport on/off) or some way should be found to make things > less tricky. > To be frank, at least interfering with the reconfigure probably is not wanted. And I don't want "nothing is guaranteed" (which probably is the first 2 patches). > > An additional question to consider is what happens with security > checks? Linux seems to require process user ids to match on > SO_REUSEPORT sockets, and I would expect this to fail if there are > BSD's SO_REUSEPORT doesn't check the uid. However, as far as I understand the code, when the nginx worker creates the SO_REUSEPORT listen socket, the uid is not changed yet. > sockets opened both in master and in worker processes; and > privileged port checks might cause problems as well. > See the above comment. > > (We've also discussed this here in the office several times, and it > seems that the general consensus is that SO_REUSEPORT for TCP balancing > isn't really a good interface. It would be much easier for everyone > if the normal workflow with inherited listen socket descriptors just > worked. Especially given the fact that in the nginx case it's mostly > about benchmarking, since in real life load distribution between > worker processes is good enough.) In DragonFly, SO_REUSEPORT is about more than load balancing: it makes the accepted sockets' network processing completely CPU localized (from user land to kernel land on both the RX and TX paths).
This level of network processing CPU localization could not be achieved by the old listen socket inheritance usage model (even if I could divide the listen socket's completion queue per CPU based on the RX hash, the level of CPU localization achieved by SO_REUSEPORT still could not be achieved easily). In addition to the CPU localization, it also avoids nginx's accept mutex contention (I have not measured the contention rate though, but no contention should be better, imho). Best Regards, sephe -------------- next part -------------- An HTML attachment was scrubbed... URL: From parker.p.dev at gmail.com Tue Sep 3 11:22:07 2013 From: parker.p.dev at gmail.com (Phil Parker) Date: Tue, 3 Sep 2013 12:22:07 +0100 Subject: [PATCH] Proxy remote server SSL certificate verification Message-ID: I've done some basic testing on this patch on: -------------- next part -------------- An HTML attachment was scrubbed... URL: From parker.p.dev at gmail.com Tue Sep 3 11:24:28 2013 From: parker.p.dev at gmail.com (Phil Parker) Date: Tue, 3 Sep 2013 12:24:28 +0100 Subject: [PATCH] Proxy remote server SSL certificate verification Message-ID: > On Tue, Sep 3, 2013 at 12:22 PM, Phil Parker wrote: > > I've done some basic testing on this patch on: > Ignore previous - accidental send! I've done some basic testing on this patch on: nginx version: nginx/1.4.2 Linux 3.8.0-25-generic #37-Ubuntu SMP Thu Jun 6 20:47:07 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux And it satisfies our functional tests and works as expected. Is there any indication of how long it normally takes a patch like this to make it into a stable version? Thanks, P.
From aviram at adallom.com Tue Sep 3 12:32:58 2013 From: aviram at adallom.com (Aviram Cohen) Date: Tue, 3 Sep 2013 15:32:58 +0300 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: <20130902120959.GB65634@mdounin.ru> References: <20130820140912.GF19334@mdounin.ru> <20130821143033.GP19334@mdounin.ru> <20130828004143.GE2748@mdounin.ru> <20130902120959.GB65634@mdounin.ru> Message-ID: Hello! Thanks for the comments. The new version with all the fixes is attached (and also pasted in this mail). Best regards, Aviram diff -Npru nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c nginx-1.4.1-proxy-ssl-verification/src/http/modules/ngx_http_proxy_module.c --- nginx-1.4.1/src/http/modules/ngx_http_proxy_module.c 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verification/src/http/modules/ngx_http_proxy_module.c 2013-09-03 15:23:15.607874155 +0300 @@ -74,6 +74,11 @@ typedef struct { ngx_uint_t http_version; +#if (NGX_HTTP_SSL) + ngx_uint_t ssl_verify_depth; + ngx_str_t ssl_trusted_certificate; +#endif + ngx_uint_t headers_hash_max_size; ngx_uint_t headers_hash_bucket_size; } ngx_http_proxy_loc_conf_t; @@ -510,6 +515,27 @@ static ngx_command_t ngx_http_proxy_com NGX_HTTP_LOC_CONF_OFFSET, offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), NULL }, + + { ngx_string("proxy_ssl_verify"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_verify), + NULL }, + + { ngx_string("proxy_ssl_verify_depth"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_num_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, ssl_verify_depth), + NULL }, + + { ngx_string("proxy_ssl_trusted_certificate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, 
ssl_trusted_certificate), + NULL }, #endif @@ -2382,6 +2408,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->body_set = NULL; * conf->body_source = { 0, NULL }; * conf->redirects = NULL; + * conf->ssl_trusted_certificate = NULL; */ conf->upstream.store = NGX_CONF_UNSET; @@ -2419,8 +2446,11 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.pass_headers = NGX_CONF_UNSET_PTR; conf->upstream.intercept_errors = NGX_CONF_UNSET; + #if (NGX_HTTP_SSL) conf->upstream.ssl_session_reuse = NGX_CONF_UNSET; + conf->upstream.ssl_verify = NGX_CONF_UNSET; + conf->ssl_verify_depth = NGX_CONF_UNSET_UINT; #endif /* "proxy_cyclic_temp_file" is disabled */ @@ -2695,8 +2725,36 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t prev->upstream.intercept_errors, 0); #if (NGX_HTTP_SSL) + ngx_conf_merge_value(conf->upstream.ssl_session_reuse, prev->upstream.ssl_session_reuse, 1); + ngx_conf_merge_value(conf->upstream.ssl_verify, + prev->upstream.ssl_verify, 0); + ngx_conf_merge_uint_value(conf->ssl_verify_depth, + prev->ssl_verify_depth, 1); + ngx_conf_merge_str_value(conf->ssl_trusted_certificate, + prev->ssl_trusted_certificate, ""); + + if (conf->upstream.ssl && conf->upstream.ssl_verify) { + if (conf->ssl_trusted_certificate.len == 0) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "no \"proxy_ssl_trusted_certificate\" is " + " defined for the \"proxy_ssl_verify\" " + "directive"); + + return NGX_CONF_ERROR; + } + } + + if (conf->upstream.ssl + && ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, + &conf->ssl_trusted_certificate, + conf->ssl_verify_depth) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + #endif ngx_conf_merge_value(conf->redirect, prev->redirect, 1); diff -Npru nginx-1.4.1/src/http/ngx_http_upstream.c nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.c --- nginx-1.4.1/src/http/ngx_http_upstream.c 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.c 2013-09-03 15:23:15.611874377 +0300 @@ -1319,12 +1319,30 
@@ ngx_http_upstream_ssl_handshake(ngx_conn { ngx_http_request_t *r; ngx_http_upstream_t *u; - + X509 *cert; + r = c->data; u = r->upstream; if (c->ssl->handshaked) { - + if (u->conf->ssl_verify) { + if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) { + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "upstream ssl certificate validation failed"); + goto fail; + } + + cert = SSL_get_peer_certificate(c->ssl->connection); + + if (cert == NULL) { + ngx_log_error(NGX_LOG_INFO, c->log, 0, + "upstream sent no required SSL certificate"); + goto fail; + } + + X509_free(cert); + } + if (u->conf->ssl_session_reuse) { u->peer.save_session(&u->peer, u->peer.data); } @@ -1332,13 +1350,21 @@ ngx_http_upstream_ssl_handshake(ngx_conn c->write->handler = ngx_http_upstream_handler; c->read->handler = ngx_http_upstream_handler; + c = r->connection; + ngx_http_upstream_send_request(r, u); + ngx_http_run_posted_requests(c); + return; } +fail: + c = r->connection; + ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); + ngx_http_run_posted_requests(c); } #endif diff -Npru nginx-1.4.1/src/http/ngx_http_upstream.h nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.h --- nginx-1.4.1/src/http/ngx_http_upstream.h 2013-05-06 13:26:50.000000000 +0300 +++ nginx-1.4.1-proxy-ssl-verification/src/http/ngx_http_upstream.h 2013-09-03 15:23:15.611874377 +0300 @@ -191,6 +191,7 @@ typedef struct { #if (NGX_HTTP_SSL) ngx_ssl_t *ssl; ngx_flag_t ssl_session_reuse; + ngx_flag_t ssl_verify; #endif ngx_str_t module; On Mon, Sep 2, 2013 at 3:09 PM, Maxim Dounin wrote: > Hello! > > On Sun, Sep 01, 2013 at 11:19:06AM +0300, Aviram Cohen wrote: > >> Hello! >> >> On Wed, Aug 28, 2013 at 3:41 AM, Maxim Dounin wrote: >> > Hello! >> > >> [...] >> > >> > if (conf->upstream.ssl >> > && ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, >> > &conf->upstream.ssl_certificate >> > conf->upstream.ssl_verify_depth) >> > != NGX_OK) >> > { >> > ... 
>> > } >> > >> > Additional question is what happens in a configuration like >> > >> > location / { >> > proxy_pass https://example.com; >> > proxy_ssl_verify on; >> > proxy_ssl_trusted_ceritifcate example.crt; >> > >> > if ($foo) { >> > # do nothing >> > } >> > } >> > >> > or the same with a nested location instead of "if". Quick look >> > suggest it will result in trusted certs loaded twice (and stale >> > alerts later due to how OpenSSL handles this). >> > >> >> I have tried this configuration (and also a nested location), and didn't >> see that Nginx loaded the same certificate twice (I've actually put >> a breakpoint on the if clause in which ngx_ssl_trusted_certificate >> is called, and it was called only once for the location. >> >> Can you specify exactly how to reproduce this case? > > I was probably wrong here, as the code you added is before the > conf->upstream.ssl is inherited. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Aviram Cohen, R&D Adallom, 1 Ha'Barzel st., Tel-Aviv, Israel Mobile: +972 (54) 5833508 aviram at adallom.com, www.adallom.com -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-1.4.1-proxy-ssl-verification.patch Type: application/octet-stream Size: 6235 bytes Desc: not available URL: From mdounin at mdounin.ru Tue Sep 3 13:21:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Sep 2013 17:21:39 +0400 Subject: [PATCH] Proxy remote server SSL certificate verification In-Reply-To: References: <20130820140912.GF19334@mdounin.ru> <20130821143033.GP19334@mdounin.ru> <20130828004143.GE2748@mdounin.ru> <20130902120959.GB65634@mdounin.ru> Message-ID: <20130903132139.GK65634@mdounin.ru> Hello! On Tue, Sep 03, 2013 at 03:32:58PM +0300, Aviram Cohen wrote: > Thanks for the comments. 
The new version with all the fixes is > attached (and also pasted in this mail). See below for comments. [...] > @@ -2382,6 +2408,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ > * conf->body_set = NULL; > * conf->body_source = { 0, NULL }; > * conf->redirects = NULL; > + * conf->ssl_trusted_certificate = NULL; > */ > > conf->upstream.store = NGX_CONF_UNSET; Nitpicking: ssl_trusted_certificate is ngx_str_t, so it's set to { 0, NULL } (much like body_source above). [...] > + if (conf->upstream.ssl && conf->upstream.ssl_verify) { > + if (conf->ssl_trusted_certificate.len == 0) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "no \"proxy_ssl_trusted_certificate\" is " > + " defined for the \"proxy_ssl_verify\" " > + "directive"); > + > + return NGX_CONF_ERROR; > + } > + } > + > + if (conf->upstream.ssl > + && ngx_ssl_trusted_certificate(cf, conf->upstream.ssl, > + &conf->ssl_trusted_certificate, > + conf->ssl_verify_depth) > + != NGX_OK) > + { > + return NGX_CONF_ERROR; > + } Nitpicking: if (conf->upstream.ssl) seems to be a common condition, and probably conf->upstream.ssl_verify too. Merging the two under something like if (conf->upstream.ssl && conf->upstream.ssl_verify) { if (conf->ssl_trusted_certificate.len == 0) { ... } if (ngx_ssl_trusted_certificate(...) != NGX_OK) { ... } } should be better. [...] > @@ -1319,12 +1319,30 @@ ngx_http_upstream_ssl_handshake(ngx_conn > { > ngx_http_request_t *r; > ngx_http_upstream_t *u; > - > + X509 *cert; > + > r = c->data; > u = r->upstream; > > if (c->ssl->handshaked) { > - > + if (u->conf->ssl_verify) { > + if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) { > + ngx_log_error(NGX_LOG_ERR, c->log, 0, > + "upstream ssl certificate validation failed"); This seems to lack detailed error reporting. Logging at least the error returned and X509_verify_cert_error_string(), as it's done in ngx_http_process_request(), is a really good idea.
>> + goto fail; > + } > + > + cert = SSL_get_peer_certificate(c->ssl->connection); > + > + if (cert == NULL) { > + ngx_log_error(NGX_LOG_INFO, c->log, 0, > + "upstream sent no required SSL certificate"); The "required" is probably not needed. And the NGX_LOG_INFO logging level looks very wrong - while it's used by ngx_http_process_request(), it's used there in a situation which may be easily triggered by broken/malicious clients, so the "info" level is appropriate. No certificate from an upstream server deserves at least the "error" level (which is already used in your patch by the previous error message). > + goto fail; > + } > + > + X509_free(cert); > + } > + > if (u->conf->ssl_session_reuse) { > u->peer.save_session(&u->peer, u->peer.data); > } One more relatively major point: certificate checking seems to lack any peer name validation. Without it, any certificate issued by a trusted certificate authority can be used, making it impossible to use certificate verification to prevent MITM if you don't control the trusted CAs. I tend to think it's required for initial proxy certificate verification. Though probably there should be a directive to switch this off, like in Apache, see http://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslproxycheckpeername. Additionally, it might be a good idea to introduce an ssl_crl counterpart for proxy. > @@ -1332,13 +1350,21 @@ ngx_http_upstream_ssl_handshake(ngx_conn > c->write->handler = ngx_http_upstream_handler; > c->read->handler = ngx_http_upstream_handler; > > + c = r->connection; > + > ngx_http_upstream_send_request(r, u); > > + ngx_http_run_posted_requests(c); > + > return; > } > > +fail: > + c = r->connection; > + > ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); > > + ngx_http_run_posted_requests(c); > } Nitpicking: I think there should be an empty line after "fail:". BTW, you may want to build patches against the mercurial repo, it already has the posted requests code here. See http://nginx.org/en/docs/contributing_changes.html for basic tips.
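Taken together, the directives under review would be configured along these lines (a sketch only; the upstream host, CA bundle path, and depth value here are illustrative, not taken from the patch):

```nginx
location / {
    proxy_pass https://backend.example.com;

    # verify the upstream server's certificate against a trusted CA bundle
    proxy_ssl_verify              on;
    proxy_ssl_verify_depth        2;
    proxy_ssl_trusted_certificate /etc/nginx/trusted-ca.crt;
}
```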
[...] -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Sep 3 14:36:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Sep 2013 18:36:44 +0400 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: References: <20130902144927.GD65634@mdounin.ru> Message-ID: <20130903143643.GM65634@mdounin.ru> Hello! On Tue, Sep 03, 2013 at 10:31:55AM +0800, Sepherosa Ziehau wrote: [...] > > While this looks better than what was there with the previous patches > > (mostly due to inheritance handled by the kernel), it still looks very > > fragile to me. In particular, I really dislike the trick with > > making the first worker process special. > > > > > Well, the idea is to keep at least one listen socket opened. Maybe I could > find another way in the kernel to make it less tricky. However, that may add > an extra syscall or socket option. I think an extra syscall/socket option will be ok as long as it'll save us from the hassle of opening sockets. Not sure what to do with Linux compatibility though. Another approach, which may be slightly better than the code in your last patch, is to reopen sockets before spawning each worker process: this way, the master may keep listen sockets open (the listen queue is shared with the same socket as inherited by a worker process then, right?) and worker processes are equal and don't need to open sockets themselves. It needs careful handling on the dead process respawn codepath though. > > It probably should either be left in the state "nothing is > > guaranteed" (with some understanding of what will happen in > > various common situations like reconfiguration, upgrade, switching > > so_reuseport on/off) or some way should be found to make things > > less tricky. > > > > To be frank, at least interfering with the reconfigure probably is not wanted. > And I don't want "nothing is guaranteed" (which probably is the first 2 > patches).
As far as I can tell, reconfiguration should just work with the inheritance in the kernel you've implemented - as new worker processes are spawned before the old worker processes are terminated. There may be races though. [...] > > (We've also discussed this here in the office several times, and it > > seems that the general consensus is that SO_REUSEPORT for TCP balancing > > isn't really a good interface. It would be much easier for everyone > > if the normal workflow with inherited listen socket descriptors just > > worked. Especially given the fact that in the nginx case it's mostly > > about benchmarking, since in real life load distribution between > > worker processes is good enough.) > > > In DragonFly, SO_REUSEPORT is about more than load balancing: it makes the accepted > sockets' network processing completely CPU localized (from user land to > kernel land on both the RX and TX paths). This level of network processing CPU > localization could not be achieved by the old listen socket inheritance > usage model (even if I could divide the listen socket's completion queue per > CPU based on the RX hash, the level of CPU localization achieved by > SO_REUSEPORT still could not be achieved easily). Could you please point out how it's achieved? We here tend to think that the proper interface from an application point of view would be to implement a socket option which basically creates separate listen queues for inherited sockets. But if this isn't going to work, it's probably better to focus on SO_REUSEPORT. BTW, are you going to be at the upcoming EuroBSDcon? I'm not, but Igor and Gleb Smirnoff (glebius at freebsd.org) will be there, and it would be cool if you could meet and discuss the SO_REUSEPORT usage for balancing.
-- Maxim Dounin http://nginx.org/en/donation.html From ywu at about.com Tue Sep 3 19:28:30 2013 From: ywu at about.com (Yongfeng Wu) Date: Tue, 3 Sep 2013 19:28:30 +0000 Subject: timeout for very slow client Message-ID: <0B4D8CAE1C77DF4F871890F33A3A837B18AD45AB@S059EXCHMB01.staff.iaccap.com> Hi, I have a question about the slow clients. If a client (for example, a dial up client) is very slow and nginx needs a long time to send a big response to the client, does nginx have a timeout mechanism for that? Or nginx will keep sending the response until all sent out? I know we have a directive "send_timeout", but that's for " only between two successive write operations, not for the transmission of the whole response". Thanks a lot, Yong -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 4 17:15:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:15:35 +0000 Subject: [nginx] Win32: Open Watcom C compatibility fixes. Message-ID: details: http://hg.nginx.org/nginx/rev/670ceaba03d8 branches: changeset: 5358:670ceaba03d8 user: Maxim Dounin date: Wed Sep 04 20:48:22 2013 +0400 description: Win32: Open Watcom C compatibility fixes. Precompiled headers are disabled as they lead to internal compiler errors with long configure lines. Couple of false positive warnings silenced. Various win32 typedefs are adjusted to work with Open Watcom C 1.9 headers. With this patch, it's now again possible to compile nginx using owc386, with options we normally compile on win32 minus ipv6 and ssl. diffstat: auto/cc/owc | 8 ++++---- auto/lib/pcre/makefile.owc | 2 +- src/core/ngx_string.c | 2 +- src/http/modules/ngx_http_mp4_module.c | 2 +- src/os/win32/ngx_win32_config.h | 14 ++++++++++++++ 5 files changed, 21 insertions(+), 7 deletions(-) diffs (85 lines): diff --git a/auto/cc/owc b/auto/cc/owc --- a/auto/cc/owc +++ b/auto/cc/owc @@ -65,10 +65,10 @@ have=NGX_HAVE_C99_VARIADIC_MACROS . 
auto # the precompiled headers -CORE_DEPS="$CORE_DEPS $NGX_OBJS/ngx_config.pch" -NGX_PCH="$NGX_OBJS/ngx_config.pch" -NGX_BUILD_PCH="-fhq=$NGX_OBJS/ngx_config.pch" -NGX_USE_PCH="-fh=$NGX_OBJS/ngx_config.pch" +#CORE_DEPS="$CORE_DEPS $NGX_OBJS/ngx_config.pch" +#NGX_PCH="$NGX_OBJS/ngx_config.pch" +#NGX_BUILD_PCH="-fhq=$NGX_OBJS/ngx_config.pch" +#NGX_USE_PCH="-fh=$NGX_OBJS/ngx_config.pch" # the link flags, built target is NT GUI mode application diff --git a/auto/lib/pcre/makefile.owc b/auto/lib/pcre/makefile.owc --- a/auto/lib/pcre/makefile.owc +++ b/auto/lib/pcre/makefile.owc @@ -4,7 +4,7 @@ CFLAGS = -c -zq -bt=nt -ot -op -oi -oe -s -bm $(CPU_OPT) -PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 \ +PCREFLAGS = -DHAVE_CONFIG_H -DPCRE_STATIC -DPOSIX_MALLOC_THRESHOLD=10 & -DSUPPORT_PCRE8 -DHAVE_MEMMOVE diff --git a/src/core/ngx_string.c b/src/core/ngx_string.c --- a/src/core/ngx_string.c +++ b/src/core/ngx_string.c @@ -486,7 +486,7 @@ ngx_sprintf_num(u_char *buf, u_char *las if (hexadecimal == 0) { - if (ui64 <= NGX_MAX_UINT32_VALUE) { + if (ui64 <= (uint64_t) NGX_MAX_UINT32_VALUE) { /* * To divide 64-bit numbers and to find remainders diff --git a/src/http/modules/ngx_http_mp4_module.c b/src/http/modules/ngx_http_mp4_module.c --- a/src/http/modules/ngx_http_mp4_module.c +++ b/src/http/modules/ngx_http_mp4_module.c @@ -1129,7 +1129,7 @@ ngx_http_mp4_update_mdat_atom(ngx_http_m atom_header = mp4->mdat_atom_header; - if ((uint64_t) atom_data_size > 0xffffffff) { + if ((uint64_t) atom_data_size > (uint64_t) 0xffffffff) { atom_size = 1; atom_header_size = sizeof(ngx_mp4_atom_header64_t); ngx_mp4_set_64value(atom_header + sizeof(ngx_mp4_atom_header_t), diff --git a/src/os/win32/ngx_win32_config.h b/src/os/win32/ngx_win32_config.h --- a/src/os/win32/ngx_win32_config.h +++ b/src/os/win32/ngx_win32_config.h @@ -128,13 +128,27 @@ typedef unsigned short int uint16_t; typedef __int64 int64_t; typedef unsigned __int64 uint64_t; + +#ifndef __WATCOMC__ typedef 
int intptr_t; typedef u_int uintptr_t; +#endif + /* Windows defines off_t as long, which is 32-bit */ typedef __int64 off_t; #define _OFF_T_DEFINED +#ifdef __WATCOMC__ + +/* off_t is redefined by sys/types.h used by zlib.h */ +#define __TYPES_H_INCLUDED +typedef int dev_t; +typedef unsigned int ino_t; + +#endif + + typedef int ssize_t; typedef uint32_t in_addr_t; typedef u_short in_port_t; From mdounin at mdounin.ru Wed Sep 4 17:15:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:15:36 +0000 Subject: [nginx] Win32: Borland C compatibility fixes. Message-ID: details: http://hg.nginx.org/nginx/rev/2fda9065d0f4 branches: changeset: 5359:2fda9065d0f4 user: Maxim Dounin date: Wed Sep 04 20:48:23 2013 +0400 description: Win32: Borland C compatibility fixes. Several false positive warnings silenced, notably W8012 "Comparing signed and unsigned" (due to u_short values promoted to int), and W8072 "Suspicious pointer arithmetic" (due to large type values added to pointers). With this patch, it's now again possible to compile nginx using bcc32, with options we normally compile on win32 minus ipv6 and ssl. diffstat: auto/lib/pcre/makefile.bcc | 4 ++-- src/event/ngx_event_accept.c | 2 +- src/http/modules/ngx_http_memcached_module.c | 2 +- src/http/modules/ngx_http_mp4_module.c | 10 +++++++--- src/http/modules/ngx_http_proxy_module.c | 4 ++-- src/http/modules/ngx_http_upstream_ip_hash_module.c | 2 +- src/http/ngx_http_file_cache.c | 2 +- src/http/ngx_http_request_body.c | 8 ++++---- src/os/win32/ngx_win32_config.h | 8 ++++++++ 9 files changed, 27 insertions(+), 15 deletions(-) diffs (174 lines): diff --git a/auto/lib/pcre/makefile.bcc b/auto/lib/pcre/makefile.bcc --- a/auto/lib/pcre/makefile.bcc +++ b/auto/lib/pcre/makefile.bcc @@ -13,8 +13,8 @@ pcre.lib: bcc32 -c $(CFLAGS) -I. 
$(PCREFLAGS) pcre_*.c - > pcre.lst - for %n in (*.obj) do @echo +%n & >> pcre.lst + copy /y nul pcre.lst + for %n in (*.obj) do @echo +%n ^^& >> pcre.lst echo + >> pcre.lst tlib pcre.lib @pcre.lst diff --git a/src/event/ngx_event_accept.c b/src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c +++ b/src/event/ngx_event_accept.c @@ -297,7 +297,7 @@ ngx_event_accept(ngx_event_t *ev) cidr = ecf->debug_connection.elts; for (i = 0; i < ecf->debug_connection.nelts; i++) { - if (cidr[i].family != c->sockaddr->sa_family) { + if (cidr[i].family != (ngx_uint_t) c->sockaddr->sa_family) { goto next; } diff --git a/src/http/modules/ngx_http_memcached_module.c b/src/http/modules/ngx_http_memcached_module.c --- a/src/http/modules/ngx_http_memcached_module.c +++ b/src/http/modules/ngx_http_memcached_module.c @@ -520,7 +520,7 @@ ngx_http_memcached_filter(void *data, ss return NGX_OK; } - last += u->length - NGX_HTTP_MEMCACHED_END; + last += (size_t) (u->length - NGX_HTTP_MEMCACHED_END); if (ngx_strncmp(last, ngx_http_memcached_end, b->last - last) != 0) { ngx_log_error(NGX_LOG_ERR, ctx->request->connection->log, 0, diff --git a/src/http/modules/ngx_http_mp4_module.c b/src/http/modules/ngx_http_mp4_module.c --- a/src/http/modules/ngx_http_mp4_module.c +++ b/src/http/modules/ngx_http_mp4_module.c @@ -157,7 +157,11 @@ typedef struct { #define ngx_mp4_atom_header(mp4) (mp4->buffer_pos - 8) #define ngx_mp4_atom_data(mp4) mp4->buffer_pos #define ngx_mp4_atom_data_size(t) (uint64_t) (sizeof(t) - 8) -#define ngx_mp4_atom_next(mp4, n) mp4->buffer_pos += n; mp4->offset += n + + +#define ngx_mp4_atom_next(mp4, n) \ + mp4->buffer_pos += (size_t) n; \ + mp4->offset += n #define ngx_mp4_set_atom_name(p, n1, n2, n3, n4) \ @@ -956,7 +960,7 @@ ngx_http_mp4_read_ftyp_atom(ngx_http_mp4 ngx_log_debug0(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, "mp4 ftyp atom"); if (atom_data_size > 1024 - || ngx_mp4_atom_data(mp4) + atom_data_size > mp4->buffer_end) + || ngx_mp4_atom_data(mp4) + (size_t) 
atom_data_size > mp4->buffer_end) { ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0, "\"%s\" mp4 ftyp atom is too large:%uL", @@ -1304,7 +1308,7 @@ ngx_http_mp4_read_trak_atom(ngx_http_mp4 trak->out[NGX_HTTP_MP4_TRAK_ATOM].buf = atom; - atom_end = mp4->buffer_pos + atom_data_size; + atom_end = mp4->buffer_pos + (size_t) atom_data_size; atom_file_end = mp4->offset + atom_data_size; rc = ngx_http_mp4_read_atom(mp4, ngx_http_mp4_trak_atoms, atom_data_size); diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -1712,7 +1712,7 @@ ngx_http_proxy_chunked_filter(ngx_event_ if (buf->last - buf->pos >= ctx->chunked.size) { - buf->pos += ctx->chunked.size; + buf->pos += (size_t) ctx->chunked.size; b->last = buf->pos; ctx->chunked.size = 0; @@ -1875,7 +1875,7 @@ ngx_http_proxy_non_buffered_chunked_filt b->tag = u->output.tag; if (buf->last - buf->pos >= ctx->chunked.size) { - buf->pos += ctx->chunked.size; + buf->pos += (size_t) ctx->chunked.size; b->last = buf->pos; ctx->chunked.size = 0; diff --git a/src/http/modules/ngx_http_upstream_ip_hash_module.c b/src/http/modules/ngx_http_upstream_ip_hash_module.c --- a/src/http/modules/ngx_http_upstream_ip_hash_module.c +++ b/src/http/modules/ngx_http_upstream_ip_hash_module.c @@ -174,7 +174,7 @@ ngx_http_upstream_get_ip_hash_peer(ngx_p for ( ;; ) { - for (i = 0; i < iphp->addrlen; i++) { + for (i = 0; i < (ngx_uint_t) iphp->addrlen; i++) { hash = (hash * 113 + iphp->addr[i]) % 6271; } diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -503,7 +503,7 @@ ngx_http_file_cache_read(ngx_http_reques return NGX_DECLINED; } - if (h->body_start > c->body_start) { + if ((size_t) h->body_start > c->body_start) { ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, "cache file \"%s\" has too long header", 
c->file.name.data); diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -714,7 +714,7 @@ ngx_http_discard_request_body_filter(ngx size = b->last - b->pos; if ((off_t) size > rb->chunked->size) { - b->pos += rb->chunked->size; + b->pos += (size_t) rb->chunked->size; rb->chunked->size = 0; } else { @@ -753,7 +753,7 @@ ngx_http_discard_request_body_filter(ngx size = b->last - b->pos; if ((off_t) size > r->headers_in.content_length_n) { - b->pos += r->headers_in.content_length_n; + b->pos += (size_t) r->headers_in.content_length_n; r->headers_in.content_length_n = 0; } else { @@ -866,7 +866,7 @@ ngx_http_request_body_length_filter(ngx_ rb->rest -= size; } else { - cl->buf->pos += rb->rest; + cl->buf->pos += (size_t) rb->rest; rb->rest = 0; b->last = cl->buf->pos; b->last_buf = 1; @@ -972,7 +972,7 @@ ngx_http_request_body_chunked_filter(ngx size = cl->buf->last - cl->buf->pos; if ((off_t) size > rb->chunked->size) { - cl->buf->pos += rb->chunked->size; + cl->buf->pos += (size_t) rb->chunked->size; r->headers_in.content_length_n += rb->chunked->size; rb->chunked->size = 0; diff --git a/src/os/win32/ngx_win32_config.h b/src/os/win32/ngx_win32_config.h --- a/src/os/win32/ngx_win32_config.h +++ b/src/os/win32/ngx_win32_config.h @@ -146,6 +146,14 @@ typedef __int64 off_t; typedef int dev_t; typedef unsigned int ino_t; +#elif __BORLANDC__ + +/* off_t is redefined by sys/types.h used by zlib.h */ +#define __TYPES_H + +typedef int dev_t; +typedef unsigned int ino_t; + #endif From mdounin at mdounin.ru Wed Sep 4 17:15:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:15:38 +0000 Subject: [nginx] Win32: MinGW GCC compatibility. Message-ID: details: http://hg.nginx.org/nginx/rev/3d2d3e1cf427 branches: changeset: 5360:3d2d3e1cf427 user: Maxim Dounin date: Wed Sep 04 20:48:28 2013 +0400 description: Win32: MinGW GCC compatibility. 
Several warnings silenced, notably (ngx_socket_t) -1 is now checked on socket operations instead of -1, as ngx_socket_t is unsigned on win32 and gcc complains on comparison. With this patch, it's now possible to compile nginx using mingw gcc, with options we normally compile on win32. diffstat: auto/lib/openssl/conf | 4 ++++ auto/lib/pcre/conf | 5 +++++ auto/lib/pcre/make | 19 +++++++++---------- auto/lib/zlib/make | 23 ++++++++++++++++++++++- auto/os/win32 | 13 ++++++++++++- src/core/ngx_connection.c | 6 +++--- src/core/ngx_cycle.c | 6 +++--- src/core/ngx_resolver.c | 2 +- src/event/modules/ngx_iocp_module.c | 2 +- src/event/modules/ngx_win32_select_module.c | 4 ++-- src/event/ngx_event_accept.c | 2 +- src/event/ngx_event_acceptex.c | 2 +- src/event/ngx_event_connect.c | 2 +- src/event/ngx_event_pipe.c | 6 ++++-- src/os/win32/ngx_atomic.h | 3 ++- src/os/win32/ngx_process_cycle.c | 4 ++-- src/os/win32/ngx_win32_config.h | 7 +++++++ src/os/win32/ngx_win32_init.c | 2 +- src/os/win32/ngx_wsarecv.c | 4 ++-- 19 files changed, 83 insertions(+), 33 deletions(-) diffs (truncated from 394 to 300 lines): diff --git a/auto/lib/openssl/conf b/auto/lib/openssl/conf --- a/auto/lib/openssl/conf +++ b/auto/lib/openssl/conf @@ -33,6 +33,10 @@ if [ $OPENSSL != NONE ]; then CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libssl.a" CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libcrypto.a" CORE_LIBS="$CORE_LIBS $NGX_LIBDL" + + if [ "$NGX_PLATFORM" = win32 ]; then + CORE_LIBS="$CORE_LIBS -lgdi32 -lcrypt32 -lws2_32" + fi ;; esac diff --git a/auto/lib/pcre/conf b/auto/lib/pcre/conf --- a/auto/lib/pcre/conf +++ b/auto/lib/pcre/conf @@ -73,6 +73,11 @@ if [ $PCRE != NONE ]; then *) have=NGX_PCRE . auto/have + + if [ "$NGX_PLATFORM" = win32 ]; then + have=PCRE_STATIC . 
auto/have + fi + CORE_DEPS="$CORE_DEPS $PCRE/pcre.h" LINK_DEPS="$LINK_DEPS $PCRE/.libs/libpcre.a" CORE_LIBS="$CORE_LIBS $PCRE/.libs/libpcre.a" diff --git a/auto/lib/pcre/make b/auto/lib/pcre/make --- a/auto/lib/pcre/make +++ b/auto/lib/pcre/make @@ -23,14 +23,16 @@ case "$NGX_CC_NAME" in ngx_pcre=`echo \-DPCRE=\"$PCRE\" | sed -e "s/\//$ngx_regex_dirsep/g"` ;; + *) + ngx_makefile= + ;; + esac -case "$NGX_PLATFORM" in +if [ -n "$ngx_makefile" ]; then - win32) - - cat << END >> $NGX_MAKEFILE + cat << END >> $NGX_MAKEFILE `echo "$PCRE/pcre.lib: $PCRE/pcre.h $NGX_MAKEFILE" \ | sed -e "s/\//$ngx_regex_dirsep/g"` @@ -41,10 +43,9 @@ case "$NGX_PLATFORM" in END - ;; +else - *) - cat << END >> $NGX_MAKEFILE + cat << END >> $NGX_MAKEFILE $PCRE/pcre.h: $PCRE/Makefile @@ -60,6 +61,4 @@ END END - ;; - -esac +fi diff --git a/auto/lib/zlib/make b/auto/lib/zlib/make --- a/auto/lib/zlib/make +++ b/auto/lib/zlib/make @@ -24,6 +24,10 @@ case "$NGX_CC_NAME" in ngx_zlib=`echo \-DZLIB=\"$ZLIB\" | sed -e "s/\//$ngx_regex_dirsep/g"` ;; + *) + ngx_makefile= + ;; + esac @@ -33,13 +37,30 @@ done=NO case "$NGX_PLATFORM" in win32) - cat << END >> $NGX_MAKEFILE + + if [ -n "$ngx_makefile" ]; then + cat << END >> $NGX_MAKEFILE `echo "$ZLIB/zlib.lib: $NGX_MAKEFILE" | sed -e "s/\//$ngx_regex_dirsep/g"` \$(MAKE) -f auto/lib/zlib/$ngx_makefile $ngx_opt $ngx_zlib END + else + + cat << END >> $NGX_MAKEFILE + +$ZLIB/libz.a: $NGX_MAKEFILE + cd $ZLIB \\ + && \$(MAKE) distclean \\ + && \$(MAKE) -f win32/Makefile.gcc \\ + CFLAGS="$ZLIB_OPT" CC="\$(CC)" \\ + libz.a + +END + + fi + done=YES ;; diff --git a/auto/os/win32 b/auto/os/win32 --- a/auto/os/win32 +++ b/auto/os/win32 @@ -9,10 +9,21 @@ CORE_INCS="$WIN32_INCS" CORE_DEPS="$WIN32_DEPS" CORE_SRCS="$WIN32_SRCS $IOCP_SRCS" OS_CONFIG="$WIN32_CONFIG" -CORE_LIBS="$CORE_LIBS advapi32.lib ws2_32.lib" NGX_ICONS="$NGX_WIN32_ICONS" SELECT_SRCS=$WIN32_SELECT_SRCS +case "$NGX_CC_NAME" in + + gcc) + CORE_LIBS="$CORE_LIBS -ladvapi32 -lws2_32" + ;; + + *) + 
CORE_LIBS="$CORE_LIBS advapi32.lib ws2_32.lib" + ;; + +esac + EVENT_MODULES="$EVENT_MODULES $IOCP_MODULE" EVENT_FOUND=YES diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -297,7 +297,7 @@ ngx_open_listening_sockets(ngx_cycle_t * continue; } - if (ls[i].fd != -1) { + if (ls[i].fd != (ngx_socket_t) -1) { continue; } @@ -312,7 +312,7 @@ ngx_open_listening_sockets(ngx_cycle_t * s = ngx_socket(ls[i].sockaddr->sa_family, ls[i].type, 0); - if (s == -1) { + if (s == (ngx_socket_t) -1) { ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno, ngx_socket_n " %V failed", &ls[i].addr_text); return NGX_ERROR; @@ -863,7 +863,7 @@ ngx_close_connection(ngx_connection_t *c ngx_uint_t log_error, level; ngx_socket_t fd; - if (c->fd == -1) { + if (c->fd == (ngx_socket_t) -1) { ngx_log_error(NGX_LOG_ALERT, c->log, 0, "connection already closed"); return; } diff --git a/src/core/ngx_cycle.c b/src/core/ngx_cycle.c --- a/src/core/ngx_cycle.c +++ b/src/core/ngx_cycle.c @@ -543,7 +543,7 @@ ngx_init_cycle(ngx_cycle_t *old_cycle) } } - if (nls[n].fd == -1) { + if (nls[n].fd == (ngx_socket_t) -1) { nls[n].open = 1; } } @@ -649,7 +649,7 @@ old_shm_zone_done: ls = old_cycle->listening.elts; for (i = 0; i < old_cycle->listening.nelts; i++) { - if (ls[i].remain || ls[i].fd == -1) { + if (ls[i].remain || ls[i].fd == (ngx_socket_t) -1) { continue; } @@ -813,7 +813,7 @@ failed: ls = cycle->listening.elts; for (i = 0; i < cycle->listening.nelts; i++) { - if (ls[i].fd == -1 || !ls[i].open) { + if (ls[i].fd == (ngx_socket_t) -1 || !ls[i].open) { continue; } diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c +++ b/src/core/ngx_resolver.c @@ -2221,7 +2221,7 @@ ngx_udp_connect(ngx_udp_connection_t *uc ngx_log_debug1(NGX_LOG_DEBUG_EVENT, &uc->log, 0, "UDP socket %d", s); - if (s == -1) { + if (s == (ngx_socket_t) -1) { ngx_log_error(NGX_LOG_ALERT, &uc->log, ngx_socket_errno, 
ngx_socket_n " failed"); return NGX_ERROR; diff --git a/src/event/modules/ngx_iocp_module.c b/src/event/modules/ngx_iocp_module.c --- a/src/event/modules/ngx_iocp_module.c +++ b/src/event/modules/ngx_iocp_module.c @@ -170,7 +170,7 @@ ngx_iocp_timer(void *data) #endif } -#ifdef __WATCOMC__ +#if defined(__WATCOMC__) || defined(__GNUC__) return 0; #endif } diff --git a/src/event/modules/ngx_win32_select_module.c b/src/event/modules/ngx_win32_select_module.c --- a/src/event/modules/ngx_win32_select_module.c +++ b/src/event/modules/ngx_win32_select_module.c @@ -148,8 +148,8 @@ ngx_select_add_event(ngx_event_t *ev, ng return NGX_ERROR; } - if ((event == NGX_READ_EVENT) && (max_read >= FD_SETSIZE) - || (event == NGX_WRITE_EVENT) && (max_write >= FD_SETSIZE)) + if ((event == NGX_READ_EVENT && max_read >= FD_SETSIZE) + || (event == NGX_WRITE_EVENT && max_write >= FD_SETSIZE)) { ngx_log_error(NGX_LOG_ERR, ev->log, 0, "maximum number of descriptors " diff --git a/src/event/ngx_event_accept.c b/src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c +++ b/src/event/ngx_event_accept.c @@ -70,7 +70,7 @@ ngx_event_accept(ngx_event_t *ev) s = accept(lc->fd, (struct sockaddr *) sa, &socklen); #endif - if (s == -1) { + if (s == (ngx_socket_t) -1) { err = ngx_socket_errno; if (err == NGX_EAGAIN) { diff --git a/src/event/ngx_event_acceptex.c b/src/event/ngx_event_acceptex.c --- a/src/event/ngx_event_acceptex.c +++ b/src/event/ngx_event_acceptex.c @@ -108,7 +108,7 @@ ngx_event_post_acceptex(ngx_listening_t ngx_log_debug1(NGX_LOG_DEBUG_EVENT, &ls->log, 0, ngx_socket_n " s:%d", s); - if (s == -1) { + if (s == (ngx_socket_t) -1) { ngx_log_error(NGX_LOG_ALERT, &ls->log, ngx_socket_errno, ngx_socket_n " failed"); diff --git a/src/event/ngx_event_connect.c b/src/event/ngx_event_connect.c --- a/src/event/ngx_event_connect.c +++ b/src/event/ngx_event_connect.c @@ -31,7 +31,7 @@ ngx_event_connect_peer(ngx_peer_connecti ngx_log_debug1(NGX_LOG_DEBUG_EVENT, pc->log, 0, "socket %d", s); - 
if (s == -1) { + if (s == (ngx_socket_t) -1) { ngx_log_error(NGX_LOG_ALERT, pc->log, ngx_socket_errno, ngx_socket_n " failed"); return NGX_ERROR; diff --git a/src/event/ngx_event_pipe.c b/src/event/ngx_event_pipe.c --- a/src/event/ngx_event_pipe.c +++ b/src/event/ngx_event_pipe.c @@ -57,7 +57,7 @@ ngx_event_pipe(ngx_event_pipe_t *p, ngx_ do_write = 1; } - if (p->upstream->fd != -1) { + if (p->upstream->fd != (ngx_socket_t) -1) { rev = p->upstream->read; flags = (rev->eof || rev->error) ? NGX_CLOSE_EVENT : 0; @@ -74,7 +74,9 @@ ngx_event_pipe(ngx_event_pipe_t *p, ngx_ } } - if (p->downstream->fd != -1 && p->downstream->data == p->output_ctx) { + if (p->downstream->fd != (ngx_socket_t) -1 + && p->downstream->data == p->output_ctx) + { From mdounin at mdounin.ru Wed Sep 4 17:15:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:15:39 +0000 Subject: [nginx] Win32: $request_time fixed. Message-ID: details: http://hg.nginx.org/nginx/rev/7094d6da2806 branches: changeset: 5361:7094d6da2806 user: Maxim Dounin date: Wed Sep 04 20:48:30 2013 +0400 description: Win32: $request_time fixed. On win32, time_t is 64 bits wide by default, and passing an ngx_msec_int_t argument for %T format specifier doesn't work. This doesn't manifest itself on other platforms as time_t and ngx_msec_int_t are usually of the same size. 
diffstat: src/http/modules/ngx_http_log_module.c | 2 +- src/http/ngx_http_variables.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diffs (24 lines): diff --git a/src/http/modules/ngx_http_log_module.c b/src/http/modules/ngx_http_log_module.c --- a/src/http/modules/ngx_http_log_module.c +++ b/src/http/modules/ngx_http_log_module.c @@ -780,7 +780,7 @@ ngx_http_log_request_time(ngx_http_reque ((tp->sec - r->start_sec) * 1000 + (tp->msec - r->start_msec)); ms = ngx_max(ms, 0); - return ngx_sprintf(buf, "%T.%03M", ms / 1000, ms % 1000); + return ngx_sprintf(buf, "%T.%03M", (time_t) ms / 1000, ms % 1000); } diff --git a/src/http/ngx_http_variables.c b/src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c +++ b/src/http/ngx_http_variables.c @@ -1992,7 +1992,7 @@ ngx_http_variable_request_time(ngx_http_ ((tp->sec - r->start_sec) * 1000 + (tp->msec - r->start_msec)); ms = ngx_max(ms, 0); - v->len = ngx_sprintf(p, "%T.%03M", ms / 1000, ms % 1000) - p; + v->len = ngx_sprintf(p, "%T.%03M", (time_t) ms / 1000, ms % 1000) - p; v->valid = 1; v->no_cacheable = 0; v->not_found = 0; From mdounin at mdounin.ru Wed Sep 4 17:37:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:37:03 +0000 Subject: [nginx] Handling of ngx_int_t != intptr_t case. Message-ID: details: http://hg.nginx.org/nginx/rev/79b9101cecf4 branches: changeset: 5362:79b9101cecf4 user: Maxim Dounin date: Wed Sep 04 21:16:59 2013 +0400 description: Handling of ngx_int_t != intptr_t case. Casts between pointers and integers produce warnings on size mismatch. To silence them, cast to (u)intptr_t should be used. Previously, casts to ngx_(u)int_t were used in some cases, and several ngx_int_t expressions had no casts. As of now it's mostly style as ngx_int_t is defined as intptr_t.
diffstat: src/core/ngx_slab.c | 3 ++- src/http/modules/ngx_http_map_module.c | 6 +++--- src/http/modules/perl/ngx_http_perl_module.c | 4 ++-- 3 files changed, 7 insertions(+), 6 deletions(-) diffs (64 lines): diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c --- a/src/core/ngx_slab.c +++ b/src/core/ngx_slab.c @@ -440,7 +440,8 @@ ngx_slab_free_locked(ngx_slab_pool_t *po n = ((uintptr_t) p & (ngx_pagesize - 1)) >> shift; m = (uintptr_t) 1 << (n & (sizeof(uintptr_t) * 8 - 1)); n /= (sizeof(uintptr_t) * 8); - bitmap = (uintptr_t *) ((uintptr_t) p & ~(ngx_pagesize - 1)); + bitmap = (uintptr_t *) + ((uintptr_t) p & ~((uintptr_t) ngx_pagesize - 1)); if (bitmap[n] & m) { diff --git a/src/http/modules/ngx_http_map_module.c b/src/http/modules/ngx_http_map_module.c --- a/src/http/modules/ngx_http_map_module.c +++ b/src/http/modules/ngx_http_map_module.c @@ -131,7 +131,7 @@ ngx_http_map_variable(ngx_http_request_t } if (!value->valid) { - value = ngx_http_get_flushed_variable(r, (ngx_uint_t) value->data); + value = ngx_http_get_flushed_variable(r, (uintptr_t) value->data); if (value == NULL || value->not_found) { value = &ngx_http_variable_null_value; @@ -414,7 +414,7 @@ ngx_http_map(ngx_conf_t *cf, ngx_command var = ctx->var_values.elts; for (i = 0; i < ctx->var_values.nelts; i++) { - if (index == (ngx_int_t) var[i].data) { + if (index == (intptr_t) var[i].data) { var = &var[i]; goto found; } @@ -429,7 +429,7 @@ ngx_http_map(ngx_conf_t *cf, ngx_command var->no_cacheable = 0; var->not_found = 0; var->len = 0; - var->data = (u_char *) index; + var->data = (u_char *) (intptr_t) index; goto found; } diff --git a/src/http/modules/perl/ngx_http_perl_module.c b/src/http/modules/perl/ngx_http_perl_module.c --- a/src/http/modules/perl/ngx_http_perl_module.c +++ b/src/http/modules/perl/ngx_http_perl_module.c @@ -421,7 +421,7 @@ ngx_http_perl_ssi(ngx_http_request_t *r, return NGX_ERROR; } - asv[0] = (SV *) i; + asv[0] = (SV *) (uintptr_t) i; for (i = 0; args[i]; i++) { asv[i + 1] = 
newSVpvn((char *) args[i]->data, args[i]->len); @@ -692,7 +692,7 @@ ngx_http_perl_call_handler(pTHX_ ngx_htt if (args) { EXTEND(sp, (intptr_t) args[0]); - for (i = 1; i <= (ngx_uint_t) args[0]; i++) { + for (i = 1; i <= (uintptr_t) args[0]; i++) { PUSHs(sv_2mortal(args[i])); } } From mdounin at mdounin.ru Wed Sep 4 17:37:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:37:04 +0000 Subject: [nginx] Request cleanup code unified, no functional changes. Message-ID: details: http://hg.nginx.org/nginx/rev/31af4ae8ad9c branches: changeset: 5363:31af4ae8ad9c user: Maxim Dounin date: Wed Sep 04 21:17:00 2013 +0400 description: Request cleanup code unified, no functional changes. Additionally, detaching a cleanup chain from a request is a bit more resilent to various bugs if any. diffstat: src/http/ngx_http_request.c | 7 ++++++- 1 files changed, 6 insertions(+), 1 deletions(-) diffs (20 lines): diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -3343,10 +3343,15 @@ ngx_http_free_request(ngx_http_request_t return; } - for (cln = r->cleanup; cln; cln = cln->next) { + cln = r->cleanup; + r->cleanup = NULL; + + while (cln) { if (cln->handler) { cln->handler(cln->data); } + + cln = cln->next; } #if (NGX_STAT_STUB) From mdounin at mdounin.ru Wed Sep 4 17:37:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:37:05 +0000 Subject: [nginx] Fixed incorrect response line on "return 203". Message-ID: details: http://hg.nginx.org/nginx/rev/941c5e3561ed branches: changeset: 5364:941c5e3561ed user: Maxim Dounin date: Wed Sep 04 21:17:01 2013 +0400 description: Fixed incorrect response line on "return 203". Reported by Weibin Yao, http://mailman.nginx.org/pipermail/nginx-devel/2013-April/003607.html. 
diffstat: src/http/ngx_http_header_filter_module.c | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diffs (16 lines): diff --git a/src/http/ngx_http_header_filter_module.c b/src/http/ngx_http_header_filter_module.c --- a/src/http/ngx_http_header_filter_module.c +++ b/src/http/ngx_http_header_filter_module.c @@ -270,6 +270,12 @@ ngx_http_header_filter(ngx_http_request_ len += NGX_INT_T_LEN; status_line = NULL; } + + if (status_line && status_line->len == 0) { + status = r->headers_out.status; + len += NGX_INT_T_LEN; + status_line = NULL; + } } clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); From mdounin at mdounin.ru Wed Sep 4 17:37:07 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:37:07 +0000 Subject: [nginx] SSL: clear error queue after SSL_CTX_load_verify_locatio... Message-ID: details: http://hg.nginx.org/nginx/rev/6c35a1f428f2 branches: changeset: 5365:6c35a1f428f2 user: Maxim Dounin date: Wed Sep 04 21:17:02 2013 +0400 description: SSL: clear error queue after SSL_CTX_load_verify_locations(). The SSL_CTX_load_verify_locations() may leave errors in the error queue while returning success (e.g. if there are duplicate certificates in the file specified), resulting in "ignoring stale global SSL error" alerts later at runtime. 
diffstat: src/event/ngx_event_openssl.c | 14 ++++++++++++++ 1 files changed, 14 insertions(+), 0 deletions(-) diffs (31 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -363,6 +363,13 @@ ngx_ssl_client_certificate(ngx_conf_t *c return NGX_ERROR; } + /* + * SSL_CTX_load_verify_locations() may leave errors in the error queue + * while returning success + */ + + ERR_clear_error(); + list = SSL_load_client_CA_file((char *) cert->data); if (list == NULL) { @@ -407,6 +414,13 @@ ngx_ssl_trusted_certificate(ngx_conf_t * return NGX_ERROR; } + /* + * SSL_CTX_load_verify_locations() may leave errors in the error queue + * while returning success + */ + + ERR_clear_error(); + return NGX_OK; } From mdounin at mdounin.ru Wed Sep 4 17:37:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:37:08 +0000 Subject: [nginx] Configure: fixed building with Sun C if CFLAGS set (tick... Message-ID: details: http://hg.nginx.org/nginx/rev/945aa9c7f282 branches: changeset: 5366:945aa9c7f282 user: Maxim Dounin date: Wed Sep 04 21:17:03 2013 +0400 description: Configure: fixed building with Sun C if CFLAGS set (ticket #65). diffstat: auto/cc/conf | 23 +++++++++++++++++++++++ 1 files changed, 23 insertions(+), 0 deletions(-) diffs (33 lines): diff --git a/auto/cc/conf b/auto/cc/conf --- a/auto/cc/conf +++ b/auto/cc/conf @@ -43,6 +43,29 @@ if test -n "$CFLAGS"; then ngx_include_opt="-I" ;; + sunc) + + case "$NGX_MACHINE" in + + i86pc) + NGX_AUX=" src/os/unix/ngx_sunpro_x86.il" + ;; + + sun4u | sun4v) + NGX_AUX=" src/os/unix/ngx_sunpro_sparc64.il" + ;; + + esac + + case $CPU in + + amd64) + NGX_AUX=" src/os/unix/ngx_sunpro_amd64.il" + ;; + + esac + ;; + esac else From mdounin at mdounin.ru Wed Sep 4 17:37:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:37:09 +0000 Subject: [nginx] Configure: TCP_KEEPIDLE test name simplified. 
Message-ID: details: http://hg.nginx.org/nginx/rev/a15abc456bb5 branches: changeset: 5367:a15abc456bb5 user: Maxim Dounin date: Wed Sep 04 21:17:05 2013 +0400 description: Configure: TCP_KEEPIDLE test name simplified. diffstat: auto/unix | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/auto/unix b/auto/unix --- a/auto/unix +++ b/auto/unix @@ -330,7 +330,7 @@ ngx_feature_test="setsockopt(0, IPPROTO_ . auto/feature -ngx_feature="TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT" +ngx_feature="TCP_KEEPIDLE" ngx_feature_name="NGX_HAVE_KEEPALIVE_TUNABLE" ngx_feature_run=no ngx_feature_incs="#include From mdounin at mdounin.ru Wed Sep 4 17:37:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 04 Sep 2013 17:37:11 +0000 Subject: [nginx] Upstream: fixed $upstream_response_time format specifiers. Message-ID: details: http://hg.nginx.org/nginx/rev/cd46297325bd branches: changeset: 5368:cd46297325bd user: Maxim Dounin date: Wed Sep 04 21:30:09 2013 +0400 description: Upstream: fixed $upstream_response_time format specifiers. diffstat: src/http/ngx_http_upstream.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -4309,7 +4309,7 @@ ngx_http_upstream_response_time_variable ms = (ngx_msec_int_t) (state[i].response_sec * 1000 + state[i].response_msec); ms = ngx_max(ms, 0); - p = ngx_sprintf(p, "%d.%03d", ms / 1000, ms % 1000); + p = ngx_sprintf(p, "%T.%03M", (time_t) ms / 1000, ms % 1000); } else { *p++ = '-'; From vbart at nginx.com Wed Sep 4 18:35:09 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 04 Sep 2013 18:35:09 +0000 Subject: [nginx] Return reason phrase for 414. 
Message-ID: details: http://hg.nginx.org/nginx/rev/907f01a2a7c0 branches: changeset: 5369:907f01a2a7c0 user: Valentin Bartenev date: Tue Sep 03 21:07:19 2013 +0400 description: Return reason phrase for 414. After 62be77b0608f nginx can return this code. diffstat: src/http/ngx_http_header_filter_module.c | 5 +---- 1 files changed, 1 insertions(+), 4 deletions(-) diffs (15 lines): diff -r cd46297325bd -r 907f01a2a7c0 src/http/ngx_http_header_filter_module.c --- a/src/http/ngx_http_header_filter_module.c Wed Sep 04 21:30:09 2013 +0400 +++ b/src/http/ngx_http_header_filter_module.c Tue Sep 03 21:07:19 2013 +0400 @@ -92,10 +92,7 @@ static ngx_str_t ngx_http_status_lines[] ngx_string("411 Length Required"), ngx_string("412 Precondition Failed"), ngx_string("413 Request Entity Too Large"), - ngx_null_string, /* "414 Request-URI Too Large", but we never send it * because we treat such requests as the HTTP/0.9 * requests and send only a body without a header */ + ngx_string("414 Request-URI Too Large"), ngx_string("415 Unsupported Media Type"), ngx_string("416 Requested Range Not Satisfiable"), From agentzh at gmail.com Wed Sep 4 21:38:34 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 4 Sep 2013 14:38:34 -0700 Subject: Only fire a handler once In-Reply-To: References: Message-ID: Hello! On Fri, Aug 16, 2013 at 10:28 PM, Aaron Bedra wrote: > I'm looking for a way to make sure a handler only fires once. If it is possible for your handler to run multiple times for the same request (like the post_subrequest handlers) and you want to avoid that, you can just use a module ctx field to serve as a flag for that.
For example, ctx = ngx_http_get_module_ctx(r, ngx_http_foo_module); if (ctx == NULL) { /* create ctx here and ctx->already_run should be initialized to 0 */ } if (ctx->already_run) { return NGX_DONE; } /* first time */ ctx->already_run = 1; /* process normally */ Regards, -agentzh From sepherosa at gmail.com Thu Sep 5 06:47:34 2013 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Thu, 5 Sep 2013 14:47:34 +0800 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: <20130903143643.GM65634@mdounin.ru> References: <20130902144927.GD65634@mdounin.ru> <20130903143643.GM65634@mdounin.ru> Message-ID: On Tue, Sep 3, 2013 at 10:36 PM, Maxim Dounin wrote: > Hello! > Hi, > > Well, the idea is to keep at least one listen socket opened. Maybe I > could > > find other way in kernel to make it less tricky. However, that may add > > extra syscall or socket option. > > I think extra syscall/socket option will be ok as long as it'll > save us from the hassle of opening sockets. Not sure what to do > with Linux compatibility though. > Yeah, this is also my concern. > > Another aproach which may be slightly better than the code is your > last patch is to reopen sockets before spawning each worker > process: this way, master may keep listen sockets open (listen > queue is shared with the same socket as inherited by a worker > process then, right?) and worker processes are equal and don't > need to open sockets themself. It needs careful handling on dead > process respawn codepath though. > This may be doable and could better than my approach. I will take a look at the code and try implementing it. > > > (We've also discussed this here in office serveral times, and it > > > seems that general consensus is that SO_REUSEPORT for TCP balancing > > > isn't really good interface. It would be much easier for everyone > > > if normal workflow with inherited listen socket descriptors just > > > worked. 
Especially given the fact that in nginx case it's mostly > > > about benchmarking, since in real life load distribution between > > > worker processes is good enough.) > > > > > > In DragonFly, SO_REUSEPORT is more than load balance: it makes the > accepted > > sockets network processing completely CPU localized (from user land to > > kernel land on both RX and TX path). This level of network processing > CPU > > localization could not be achieved by the old listen socket inheritance > > usage model (even if I could divide listen socket's completion queue to > > each CPU base on RX hash, the level of CPU localization achieved by > > SO_REUSEPORT still could not be achieved easily). > > Could you please point out how it's achieved? > > I have just put something up, which may help understanding what I have described above. Here it is: http://leaf.dragonflybsd.org/~sephe/netisr_so_reuseport.txt > We here tend to think that proper interface from an application > point of view would be to implement a socket option which > basically creates separate listen queues for inherited sockets. > But if this isn't going to work, it's probably better to focus on > SO_REUSEPORT. > Well, I think I am going to stick w/ SO_REUSEPORT, mainly because the implementation is simple, straightforward, less invasive and the result is good. Besides, user space applications only need small changes to the listen socket related code (most of the time, it is quite simple), which means easy adoption. And in addition to TCP listen socket, SO_REUSEPORT also helps UDP socket reception load distribution and processing CPU localization. > > BTW, are you going to be on the upcoming EuroBSDcon? I'm not, but > Igor and Gleb Smirnoff (glebius at freebsd.org) will be there, and it > will be cool if you'll meet and discuss the SO_REUSEPORT usage for > balancing. > > Sorry, I am not going to attend EuroBSDcon. 
However, it will be cool if we could discuss (through email) about SO_REUSEPORT or something that you folks are planning. Best Regards, sephe -- Tomorrow Will Never Die -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Sep 5 13:07:57 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 05 Sep 2013 13:07:57 +0000 Subject: [nginx] Fixed handling of the ready flag with kqueue. Message-ID: details: http://hg.nginx.org/nginx/rev/ee78c7705a8e branches: changeset: 5370:ee78c7705a8e user: Valentin Bartenev date: Thu Sep 05 16:53:02 2013 +0400 description: Fixed handling of the ready flag with kqueue. There is nothing to do more when recv() has returned 0, so we should drop the flag. diffstat: src/os/unix/ngx_readv_chain.c | 1 + src/os/unix/ngx_recv.c | 1 + 2 files changed, 2 insertions(+), 0 deletions(-) diffs (22 lines): diff -r 907f01a2a7c0 -r ee78c7705a8e src/os/unix/ngx_readv_chain.c --- a/src/os/unix/ngx_readv_chain.c Tue Sep 03 21:07:19 2013 +0400 +++ b/src/os/unix/ngx_readv_chain.c Thu Sep 05 16:53:02 2013 +0400 @@ -129,6 +129,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx "%d available bytes", rev->available); #endif + rev->ready = 0; rev->eof = 1; rev->available = 0; } diff -r 907f01a2a7c0 -r ee78c7705a8e src/os/unix/ngx_recv.c --- a/src/os/unix/ngx_recv.c Tue Sep 03 21:07:19 2013 +0400 +++ b/src/os/unix/ngx_recv.c Thu Sep 05 16:53:02 2013 +0400 @@ -80,6 +80,7 @@ ngx_unix_recv(ngx_connection_t *c, u_cha * even if kqueue reported about available data */ + rev->ready = 0; rev->eof = 1; rev->available = 0; } From vbart at nginx.com Thu Sep 5 13:07:58 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 05 Sep 2013 13:07:58 +0000 Subject: [nginx] Events: removed unused flags from the ngx_event_s struct... 
Message-ID: details: http://hg.nginx.org/nginx/rev/b95e70ae6bcd branches: changeset: 5371:b95e70ae6bcd user: Valentin Bartenev date: Thu Sep 05 16:53:02 2013 +0400 description: Events: removed unused flags from the ngx_event_s structure. They are not used since 708f8bb772ec (pre 0.0.1). diffstat: src/event/ngx_event.h | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diffs (14 lines): diff -r ee78c7705a8e -r b95e70ae6bcd src/event/ngx_event.h --- a/src/event/ngx_event.h Thu Sep 05 16:53:02 2013 +0400 +++ b/src/event/ngx_event.h Thu Sep 05 16:53:02 2013 +0400 @@ -69,10 +69,6 @@ struct ngx_event_s { unsigned delayed:1; - unsigned read_discarded:1; - - unsigned unexpected_eof:1; - unsigned deferred_accept:1; /* the pending eof reported by kqueue or in aio chain operation */ From ywu at about.com Thu Sep 5 14:16:45 2013 From: ywu at about.com (Yongfeng Wu) Date: Thu, 5 Sep 2013 14:16:45 +0000 Subject: timeout for very slow client Message-ID: <0B4D8CAE1C77DF4F871890F33A3A837B18AD4935@S059EXCHMB01.staff.iaccap.com> Any ideas please? Thanks a lot. From: Yongfeng Wu Sent: Tuesday, September 03, 2013 3:28 PM To: 'nginx-devel at nginx.org' Subject: timeout for very slow client Hi, I have a question about the slow clients. If a client (for example, a dial up client) is very slow and nginx needs a long time to send a big response to the client, does nginx have a timeout mechanism for that? Or nginx will keep sending the response until all sent out? I know we have a directive "send_timeout", but that's for " only between two successive write operations, not for the transmission of the whole response". Thanks a lot, Yong -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.bedra at gmail.com Thu Sep 5 14:22:20 2013 From: aaron.bedra at gmail.com (Aaron Bedra) Date: Thu, 5 Sep 2013 09:22:20 -0500 Subject: Only fire a handler once In-Reply-To: References: Message-ID: Thanks! 
On Wed, Sep 4, 2013 at 4:38 PM, Yichun Zhang (agentzh) wrote: > Hello! > > On Fri, Aug 16, 2013 at 10:28 PM, Aaron Bedra wrote: > > I'm looking for a way to make sure a handler only fires once. > > If your handler is possible to run multiple times for the same request > (like the post_subrequest handlers) and you want to avoid that, you > can just use a module ctx field to serve as a flag for that. For > example, > > ctx = ngx_http_get_module_ctx(r, ngx_http_foo_module); > if (ctx == NULL) { > /* create ctx here and ctx->already_run should be initialized to 0 > */ > } > > if (ctx->already_run) { > return NGX_DONE; > } > /* first time */ > ctx->already_run = 1; > > /* process normally */ > > Regards, > -agentzh > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 5 15:16:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Sep 2013 19:16:28 +0400 Subject: timeout for very slow client In-Reply-To: <0B4D8CAE1C77DF4F871890F33A3A837B18AD4935@S059EXCHMB01.staff.iaccap.com> References: <0B4D8CAE1C77DF4F871890F33A3A837B18AD4935@S059EXCHMB01.staff.iaccap.com> Message-ID: <20130905151628.GO65634@mdounin.ru> Hello! On Thu, Sep 05, 2013 at 02:16:45PM +0000, Yongfeng Wu wrote: > Any ideas please? [...] > I have a question about the slow clients. > > If a client (for example, a dial up client) is very slow and > nginx needs a long time to send a big response to the client, > does nginx have a timeout mechanism for that? Or nginx will keep > sending the response until all sent out? > > I know we have a directive "send_timeout", but that's for " only > between two successive write operations, not for the > transmission of the whole response". What makes you think that the question is relevant for the nginx-devel@ list? 
It might be a good idea to ask it in nginx@ instead. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Sep 5 17:28:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Sep 2013 21:28:19 +0400 Subject: [PATCH] SO_REUSEPORT support for listen sockets (round 3) In-Reply-To: References: <20130902144927.GD65634@mdounin.ru> <20130903143643.GM65634@mdounin.ru> Message-ID: <20130905172819.GP65634@mdounin.ru> Hello! On Thu, Sep 05, 2013 at 02:47:34PM +0800, Sepherosa Ziehau wrote: [...] > > Another aproach which may be slightly better than the code is your > > last patch is to reopen sockets before spawning each worker > > process: this way, master may keep listen sockets open (listen > > queue is shared with the same socket as inherited by a worker > > process then, right?) and worker processes are equal and don't > > need to open sockets themself. It needs careful handling on dead > > process respawn codepath though. > > > > > This may be doable and could better than my approach. I will take a look > at the code and try implementing it. Please note that "before" above isn't something well-thought, "after" might be better. > > > In DragonFly, SO_REUSEPORT is more than load balance: it makes the accepted > > > sockets network processing completely CPU localized (from user land to > > > kernel land on both RX and TX path). This level of network processing CPU > > > localization could not be achieved by the old listen socket inheritance > > > usage model (even if I could divide listen socket's completion queue to > > > each CPU base on RX hash, the level of CPU localization achieved by > > > SO_REUSEPORT still could not be achieved easily). > > > > Could you please point out how it's achieved? > > > > > > I have just put something up, which may help understanding what I have > described above. Here it is: > http://leaf.dragonflybsd.org/~sephe/netisr_so_reuseport.txt Thanks a lot. 
> > We here tend to think that the proper interface from an application > > point of view would be to implement a socket option which > > basically creates separate listen queues for inherited sockets. > > But if this isn't going to work, it's probably better to focus on > > SO_REUSEPORT. > > > > > Well, I think I am going to stick w/ SO_REUSEPORT, mainly because the > implementation is simple, straightforward, less invasive, and the result is > good. Besides, user space applications only need small changes to the > listen socket related code (most of the time, it is quite simple), which > means easy adoption. And in addition to TCP listen sockets, SO_REUSEPORT > also helps UDP socket reception load distribution and processing CPU > localization. Thanks, your position is clear enough and I understand your points - SO_REUSEPORT indeed looks like a simple and effective approach from the kernel point of view, and probably we can live with it from the nginx point of view too. We were thinking about some way to implement per-process listen queues for sockets, probably explicitly created with some setsockopt to avoid the need for looking into a shared queue. I think it should still be possible to achieve a similar level of CPU locality this way, and it should require fewer changes than SO_REUSEPORT. On the other hand, it's likely more intrusive from the kernel point of view (and it's another interface). > > BTW, are you going to be at the upcoming EuroBSDcon? I'm not, but > > Igor and Gleb Smirnoff (glebius at freebsd.org) will be there, and it > > will be cool if you meet and discuss the SO_REUSEPORT usage for > > balancing. > > > > > > Sorry, I am not going to attend EuroBSDcon. However, it will be cool if we > could discuss (through email) SO_REUSEPORT or something that you > folks are planning. One of the questions we are trying to solve is whether we are going to work on SO_REUSEPORT balancing support in FreeBSD.
Gleb (who is the primary person here to do the actual work) is very busy right now due to upcoming FreeBSD 10 code freeze, but he promised to look into details and discuss this with other network stack developers on EuroBSDcon. -- Maxim Dounin http://nginx.org/en/donation.html From mellery451 at gmail.com Thu Sep 5 19:00:39 2013 From: mellery451 at gmail.com (Michael Ellery) Date: Thu, 05 Sep 2013 12:00:39 -0700 Subject: ngx_hash_t - basic usage Message-ID: <5228D4D7.2070701@gmail.com> devs, I'm working on a module and I'd like to have a hash table to use across requests. I do, however, need to be able to both add and remove elements - does this implementation support removal? I'd appreciate any suggested examples to look at in the existing source to help get started with basic usage. Regards, Mike Ellery From mdounin at mdounin.ru Fri Sep 6 12:05:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Sep 2013 16:05:29 +0400 Subject: ngx_hash_t - basic usage In-Reply-To: <5228D4D7.2070701@gmail.com> References: <5228D4D7.2070701@gmail.com> Message-ID: <20130906120529.GA20921@mdounin.ru> Hello! On Thu, Sep 05, 2013 at 12:00:39PM -0700, Michael Ellery wrote: > devs, > > I'm working on a module and I'd like to have a hash table to use across > requests. I do, however, need to be able to both add and remove elements > - does this implementation support removal? It doesn't support removals (as well as additions after ngx_hash_init() is called). It is meant to be created during configuration parsing, and used without any further modifications. Note well that ngx_hash_init() might be costly as it tries to find an optimal hash table size. > I'd appreciate any suggested examples to look at in the existing source > to help get started with basic usage. For very basic usage take a look at ngx_http_ssi_filter_module.c, it uses hash to speed up lookups of SSI command names in a static list. 
The hash is created in ngx_http_ssi_init_main_conf() and then used by ngx_http_ssi_body_filter() to find a command. A more complete example can be found in ngx_http_map_module.c, as the map module is essentially an interface to create hashes using configuration directives. -- Maxim Dounin http://nginx.org/en/donation.html From neil.mckee.ca at gmail.com Fri Sep 6 19:14:34 2013 From: neil.mckee.ca at gmail.com (Neil Mckee) Date: Fri, 6 Sep 2013 12:14:34 -0700 Subject: best way to get a 1-second timer-tick? In-Reply-To: References: <0637C50E-EAEF-4365-819A-70D36A65C783@gmail.com> <20110423090402.GK56867@mdounin.ru> Message-ID: <2CF0E458-0632-4058-A800-B8A72BC21FA0@gmail.com> I finally got back to looking at this again. Turns out I had been calling ngx_add_timer() too soon -- before the event rbtree was up and running. I moved it to the callback that comes when an nginx process starts, and now it works. I now get my "tick" even if nothing else is happening. https://code.google.com/p/nginx-sflow-module/source/browse/trunk/ngx_http_sflow_module.c#172 Just one question: I had to declare my ngx_event_t as a static var so I could access it here, because I couldn't see how to follow the ngx_cycle_t pointer back to the module's heap-allocated data. Is there a good way, or should I just be content to use a static var? Neil On Apr 23, 2011, at 3:39 AM, Arnaud GRANAL wrote: > 2011/4/23 Maxim Dounin : >> Hello! >> > > Hi! > >> On Fri, Apr 22, 2011 at 09:09:15PM +0300, Arnaud GRANAL wrote: >> >>> On Fri, Apr 22, 2011 at 9:02 PM, Neil Mckee wrote: >>>> Hello, >>>> >>> >>> Hi Neil, >>> >>>> I have written a module to implement sFlow in nginx (nginx-sflow-module.googlecode.com). I'm simulating a 1-second timer-tick by assuming that the request handler will be called at least once per second. That's probably a safe assumption for any server that would care about sFlow monitoring, but I expect there's a better way...
>>>> >>>> I tried asking for a timer callback like this: >>>> >>>> ngx_event_t *ev = ngx_pcalloc(pool, sizeof(ngx_event_t)); >>>> ev->handler = ngx_http_sflow_tick_event_handler; >>>> ngx_add_timer(ev, 1000); >>>> >>>> but the event never called me back. It looks like I might have to hang this on a file-descriptor somehow, but that's where I'm getting lost. Any pointers would be most appreciated. >>>> >>> >>> The main thing is that you should use ngx_add_event() here instead of >>> callocing the event struct directly and be careful of what you do with >>> ngx_http_finalize_request. >> >> No, you are wrong. >> > > Then I'll know for the next guy :o) > > A. > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://nginx.org/mailman/listinfo/nginx-devel From ddiaz at cenditel.gob.ve Sat Sep 7 23:29:30 2013 From: ddiaz at cenditel.gob.ve (=?iso-8859-1?q?Dhionel_D=EDaz?=) Date: Sat, 07 Sep 2013 18:59:30 -0430 Subject: [PATCH 0 of 1] Unescape URI in Destination header while handling COPY or MOVE methods. Message-ID: Dear nginx developers, While testing ngx_http_dav_module, it was found that in MOVE operations the destination path contains percent-encoded characters unchanged from the Destination header. A small patch that fixes this issue follows. Thanks for all the good work with this important Free Software Project.
Best regards, -- Dhionel Díaz Centro Nacional de Desarrollo e Investigación en Tecnologías Libres Ministerio del Poder Popular para Ciencia, Tecnología e Innovación República Bolivariana de Venezuela From ddiaz at cenditel.gob.ve Sat Sep 7 23:29:31 2013 From: ddiaz at cenditel.gob.ve (=?iso-8859-1?q?Dhionel_D=EDaz?=) Date: Sat, 07 Sep 2013 18:59:31 -0430 Subject: [PATCH 1 of 1] Unescape URI in Destination header while handling COPY or MOVE methods In-Reply-To: References: Message-ID: <1c6337457a63fb4b1c17.1378596571@cenditel1053.cenditel> # HG changeset patch # User Dhionel Díaz # Date 1378590256 16200 # Sat Sep 07 17:14:16 2013 -0430 # Node ID 1c6337457a63fb4b1c17b416403ffdd3aa6599e7 # Parent b95e70ae6bcdbae99a967df01e1011839f19ee0e Unescape URI in Destination header while handling COPY or MOVE methods. diff -r b95e70ae6bcd -r 1c6337457a63 src/http/modules/ngx_http_dav_module.c --- a/src/http/modules/ngx_http_dav_module.c Thu Sep 05 16:53:02 2013 +0400 +++ b/src/http/modules/ngx_http_dav_module.c Sat Sep 07 17:14:16 2013 -0430 @@ -515,7 +515,7 @@ static ngx_int_t ngx_http_dav_copy_move_handler(ngx_http_request_t *r) { - u_char *p, *host, *last, ch; + u_char *p, *src, *host, *last, ch; size_t len, root; ngx_err_t err; ngx_int_t rc, depth; @@ -608,6 +608,10 @@ duri.data = p; flags = 0; + src = p; + ngx_unescape_uri(&p, &src, duri.len, NGX_UNESCAPE_URI); + duri.len = p - duri.data; + if (ngx_http_parse_unsafe_uri(r, &duri, &args, &flags) != NGX_OK) { goto invalid_destination; } From mdounin at mdounin.ru Mon Sep 9 11:34:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Sep 2013 15:34:14 +0400 Subject: best way to get a 1-second timer-tick? In-Reply-To: <2CF0E458-0632-4058-A800-B8A72BC21FA0@gmail.com> References: <0637C50E-EAEF-4365-819A-70D36A65C783@gmail.com> <20110423090402.GK56867@mdounin.ru> <2CF0E458-0632-4058-A800-B8A72BC21FA0@gmail.com> Message-ID: <20130909113414.GC20921@mdounin.ru> Hello!
On Fri, Sep 06, 2013 at 12:14:34PM -0700, Neil Mckee wrote: > I finally got back to looking at this again. Turns out I had > been calling ngx_add_timer() too soon -- before the event rbtree > was up and running. I moved it to the callback that comes when > an nginx process starts, and now it works. I now get my "tick" > even if nothing else is happening. > > https://code.google.com/p/nginx-sflow-module/source/browse/trunk/ngx_http_sflow_module.c#172 > > Just one question: I had to declare my ngx_event_t as a static > var so I could access it here, because I couldn't see how to > follow the ngx_cycle_t pointer back to the module's > heap-allocated data. Is there a good way, or should I just be > content to use a static var? The ngx_http_cycle_get_module_main_conf() macro should do the trick. On the other hand, static var should work too. -- Maxim Dounin http://nginx.org/en/donation.html From alan.hamlett at gmail.com Mon Sep 9 22:43:10 2013 From: alan.hamlett at gmail.com (Alan Hamlett) Date: Mon, 9 Sep 2013 15:43:10 -0700 Subject: limit_conn before SSL handshake Message-ID: Currently the limit_conn and limit_conn_zone config options have this context (can only be used inside these config scopes). context: http,server,location http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn Those 2 configs have no way to prevent nginx from negotiating the SSL handshake, since they only apply after nginx has a HTTP request. This means the nginx server can become CPU bound by spending all it's time in SSL only to have the request dropped by limit_conn. How about making limit_conn and limit_conn_zone be applied before the SSL handshake so precious CPU isn't spent negotiating an SSL session when the connection limit will end up blocking the request anyway? -- Alan http://ahamlett.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lior.k at zend.com Mon Sep 9 23:40:25 2013 From: lior.k at zend.com (Lior Kaplan) Date: Tue, 10 Sep 2013 01:40:25 +0200 Subject: Installation script from nginx Linux repositories In-Reply-To: References: Message-ID: Ping? Feedback? Kaplan On Tue, Aug 27, 2013 at 3:16 PM, Lior Kaplan wrote: > Hi, > > Continuing my tweet question [1], Zend would like to contribute this > simple script to help automate the installation from the nginx.org Linux > repositories [2]. > > We've built the script as part of our ZendServer on Nginx installation > script. > > Let me know if you have any specific license requirements. > > Kaplan > > [1] https://twitter.com/KaplanZend/status/362497285189414913 > [2] http://nginx.org/en/linux_packages.html#stable > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From salvadorgirones at gmail.com Tue Sep 10 11:00:30 2013 From: salvadorgirones at gmail.com (=?ISO-8859-1?Q?Salvador_Giron=E8s_Gil?=) Date: Tue, 10 Sep 2013 13:00:30 +0200 Subject: Fwd: Problems with SSL + SPDY cause CLOSE_WAIT connections In-Reply-To: References: Message-ID: Hi, I know this appeared some time ago with the first versions of SPDY, but we have been using 1.4.1 SPDY in production and it is generating lots of CLOSE_WAIT connections. Doing some testing, I found reproducible steps and a possible patch. Our env (listing what I think are relevant points, please ask for more if you need it): 1. Nginx 1.4.1 2. SSL + SPDY 3. Proxy-pass from port 443 to a Rails app 4. Sendfile on Steps to reproduce: 1. Perform a POST with a big body (I'm uploading a file) 2. Refresh the browser before the upload finishes 3. Close the browser When the connection is closed due to the browser close, the error.log (with debug enabled) says: *2013/09/10 10:02:53 [info] 4372#0:* *12 client closed prematurely connection while processing SPDY, client: 1.2.3.4, server: 0.0.0.0:443 *2013/09/10 10:02:53 [debug] 4372#0:* *12 http reading blocked And nothing else happens.
I tried to add a connection check inside ngx_http_block_reading based on ngx_http_test_reading: > #if (NGX_HTTP_SPDY) if (r->spdy_stream) { if (c->error) { err = 0; goto closed; } return; } #endif I'm checking if the spdy_stream is open before trying to read from it. It converts the CLOSE_WAIT connections to TIME_WAIT and they do finalize after some seconds. I only dug into the nginx code yesterday, so I'm far from correctly understanding where the real problem is, but this "patch" seems reasonable to me. Can someone with better nginx source code understanding provide some feedback? And thanks for all your work! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Sep 10 12:29:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Sep 2013 16:29:32 +0400 Subject: limit_conn before SSL handshake In-Reply-To: References: Message-ID: <20130910122931.GH20921@mdounin.ru> Hello! On Mon, Sep 09, 2013 at 03:43:10PM -0700, Alan Hamlett wrote: > Currently the limit_conn and limit_conn_zone config options have this > context (can only be used inside these config scopes). > context: http,server,location > http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn > > Those 2 configs have no way to prevent nginx from negotiating the SSL > handshake, since they only apply after nginx has an HTTP request. > This means the nginx server can become CPU bound by spending all its time > in SSL only to have the request dropped by limit_conn. > > How about making limit_conn and limit_conn_zone be applied before the SSL > handshake so precious CPU isn't spent negotiating an SSL session when the > connection limit will end up blocking the request anyway? If you want to limit the total number of TCP connections from a given IP address, it's usually more effective to limit them at the network layer. Most firewalls can do it for you.
That's basically why limit_conn/limit_req don't do it - instead, they are designed to limit things at the HTTP level. Adding an option to limit TCP connections (and, likely, connection rate) in nginx itself might be interesting from a configuration simplicity point of view - but it's mostly unrelated to limit_conn/limit_req. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Tue Sep 10 14:29:03 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 10 Sep 2013 18:29:03 +0400 Subject: Fwd: Problems with SSL + SPDY cause CLOSE_WAIT connections In-Reply-To: References: Message-ID: <201309101829.03755.vbart@nginx.com> On Tuesday 10 September 2013 15:00:30 Salvador Gironès Gil wrote: > Hi, > > I know this appeared some time ago with first versions of SPDY, but we have > been using 1.4.1 SPDY in production and it is generating lots of CLOSE_WAIT > connections. > > Doing some testing, I found reproducible steps and a possible patch. > > Our env (listing what I think are relevant points, please ask for more if > you need it): > 1. Nginx 1.4.1 > 2. SSL + SPDY > 3. Proxy-pass 443 port to a Rails app > 3. Sendfile on > > Steps to reproduce: > 1. Perform a POST with a big body (I'm uploading a file) > 2. Refresh the browser before the upload finishes > 3. Close the browser > > When the connection is closed due the browser close, the error.log (with > debug enabled) says: > *2013/09/10 10:02:53 [info] 4372#0:* *12 client closed prematurely > connection while processing SPDY, client: 1.2.3.4, server: 0.0.0.0:443 > *2013/09/10 10:02:53 [debug] 4372#0:* *12 http reading blocked > > And nothing else happens. > > I tried to add a connection check inside *ngx_http_block_reading *based on > * ngx_http_test_reading:* > > > #if (NGX_HTTP_SPDY) > > if (r->spdy_stream) { > > if (c->error) { > err = 0; > > goto closed; > } > > return; > } > #endif [..] No, it is just the wrong place to fix it.
The right patch looks like this: diff -r 50f065641b4c src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Wed Jul 17 16:51:21 2013 +0400 +++ b/src/http/ngx_http_spdy.c Tue Sep 10 18:24:41 2013 +0400 @@ -1204,6 +1204,7 @@ ngx_http_spdy_state_data(ngx_http_spdy_c } if (rb->post_handler) { + r->read_event_handler = ngx_http_block_reading; rb->post_handler(r); } } @@ -2604,6 +2605,9 @@ ngx_http_spdy_read_request_body(ngx_http r->request_body->post_handler = post_handler; + r->read_event_handler = ngx_http_test_reading; + r->write_event_handler = ngx_http_request_empty_handler; + return NGX_AGAIN; } > I'm checking if the spdy_stream is open before trying to read from it. It > converts the CLOSE_WAIT connections to TIME_WAIT and they do finalize after > some seconds. > > I dig into nginx code yesterday, so I'm far from correctly understand where > the real problem is, but this "patch" seems reasonable to me. > > Can someone with better nginx source code understanding, provide some > feedback? > > And thanks for all your work! Thank you for the report. Please, try the patch above. wbr, Valentin V. Bartenev From mellery451 at gmail.com Tue Sep 10 21:33:31 2013 From: mellery451 at gmail.com (Michael Ellery) Date: Tue, 10 Sep 2013 14:33:31 -0700 Subject: returning a 302 from ACCESS_PHASE handler Message-ID: <522F902B.9050508@gmail.com> Hello, I have an ACCESS_PHASE handler and, in some cases, I want to return a 302 to a different domain. I'm currently doing the following: ngx_table_elt_t *set_location = ngx_list_push(&r->headers_out.headers); if (set_location == NULL) { SXEL2("ERROR: failed to add location header"); return -1; } r->err_status = NGX_HTTP_MOVED_TEMPORARILY; set_location->hash = 1; ngx_str_set(&set_location->key, "Location"); set_location->value.len = len; set_location->value.data = location; ngx_http_clear_location(r); ngx_http_finalize_request(r, NGX_HTTP_MOVED_TEMPORARILY); and then returning NGX_OK from my handler. 
This seems to work in the sense that I do get the expected 302 at my client - HOWEVER, I also get a segfault: 2013/09/10 13:26:22 [alert] 28910#0: worker process 29362 exited on signal 11 ...which seems to be happening at ngx_http_proxy_module.c:645 ngx_http_set_ctx(r, ctx, ngx_http_proxy_module); because r->ctx is NULL at this point. I suspect that calling ngx_http_finalize_request from my handler is causing this, although I have not conclusively proven this. Does someone know if it's possible to return a redirect from ACCESS handlers and, if so, what is the proper way to accomplish it? TIA, Mike Ellery From flygoast at 126.com Wed Sep 11 01:50:30 2013 From: flygoast at 126.com (flygoast) Date: Wed, 11 Sep 2013 09:50:30 +0800 (CST) Subject: returning a 302 from ACCESS_PHASE handler In-Reply-To: <522F902B.9050508@gmail.com> References: <522F902B.9050508@gmail.com> Message-ID: <1d1a8f48.3853.1410ab6713b.Coremail.flygoast@126.com> I once wrote a similar module, just directly in the handler, return NGX_HTTP_MOVED_TEMPORARILY; see: https://github.com/flygoast/ngx_http_url_hash_module At 2013-09-11 05:33:31,"Michael Ellery" wrote: >Hello, > >I have an ACCESS_PHASE handler and, in some cases, I want to return a 302 to a different domain. I'm currently doing >the following: > > > ngx_table_elt_t *set_location = ngx_list_push(&r->headers_out.headers); > if (set_location == NULL) { > SXEL2("ERROR: failed to add location header"); > return -1; > } > > r->err_status = NGX_HTTP_MOVED_TEMPORARILY; > > set_location->hash = 1; > ngx_str_set(&set_location->key, "Location"); > set_location->value.len = len; > set_location->value.data = location; > > ngx_http_clear_location(r); > > ngx_http_finalize_request(r, NGX_HTTP_MOVED_TEMPORARILY); > > > >and then returning NGX_OK from my handler. 
> >This seems to work in the sense that I do get the expected 302 at my client - HOWEVER, I also get a segfault: > >2013/09/10 13:26:22 [alert] 28910#0: worker process 29362 exited on signal 11 > >...which seems to be happening at ngx_http_proxy_module.c:645 > > ngx_http_set_ctx(r, ctx, ngx_http_proxy_module); > >because r->ctx is NULL at this point. > >I suspect that calling ngx_http_finalize_request from my handler is causing this, although I have not conclusively >proven this. > >Does someone know if it's possible to return a redirect from ACCESS handlers and, if so, what is the proper way to >accomplish it? > >TIA, >Mike Ellery > >_______________________________________________ >nginx-devel mailing list >nginx-devel at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at adallom.com Wed Sep 11 07:27:20 2013 From: greg at adallom.com (Greg Vishnepolsky) Date: Wed, 11 Sep 2013 10:27:20 +0300 Subject: Fwd: Automatic pooling of upstream keepalive connections (patch proposal) In-Reply-To: References: Message-ID: Hello, Please consider the following problem and solution proposal. I'm using nginx in a transparent proxy configuration, sort of like this: ... location / { ... proxy_set_header Host $host; proxy_pass https://$host; } This works OK, except for a pretty bad performance issue, where (in this case) nginx doesn't do the following: 1) SSL session reuse 2) Caching of upstream keepalive connections If an explicit upstream configuration is defined per proxied host, like this: upstream some.proxied.host.com { server some.domain.com:443; keepalive 10; } Only then will SSL session reuse and upstream keepalive work as expected. Unfortunately, this "trick" can't be used for the aforementioned "transparent proxy" configuration, since I don't know in advance which hosts are going to be proxied. 
In order to allow for these very important optimizations when proxy_pass is used with "unknown" hosts, I believe a patch is needed. An additional configuration such as this should be considered: server { ... proxy_upstream_default_keepalive on; proxy_upstream_default_keepalive_max_hosts 10; proxy_upstream_default_keepalive_max_connections 100; .... } These are to be configurations of the ngx_http_proxy_module. In the proposed patch, upstreams are created dynamically for new hosts (up to "max_hosts" at the same time). For each upstream the number of simultaneous keepalive'd connections would be "max_connections". The following patch does not currently solve the SSL session reuse problem, but it does handle the keepalive pooling problem. Here it is on github: https://gist.github.com/gregvish/6511822/raw/1f7de28c8ac7bf133376eccee9bb4fc65a8a2917/default_keepalive_patch It was taken against the 1.4.2 version source code. Most of the new code was added to the "ngx_http_upstream_keepalive_module". This new code is called by the function "ngx_http_upstream_resolve_handler", after the call to "ngx_http_upstream_create_round_robin_peer". The new function I added, "ngx_http_upstream_default_keepalive_adapt_peer" converts the peer that was created by "create_round_robin_peer" into a "keepalive'd" peer (if the "conversion" fails, the peer remains a regular round robin one, and continues to work). Inside the new code in the "keepalive_module", new hosts are added to a data structure, where uniqueness is established by a tuple of (host, port). If all the allotted entries are taken, a "garbage collection" occurs. During the collection, cache entries are removed where no keepalive'd connections are currently established. I did some testing, and it works as expected. Didn't see memory leaks or unexpected behavior. 
As for the SSL session reuse in this case, "ngx_http_upstream_create_round_robin_peer" allocates the "ngx_http_upstream_rr_peers_t" struct (which holds the SSL session) in the request pool (r->pool), and not somewhere "persistent". A similar patch can be added there, so there is a cache of these structures as well. This should enable the session reuse in this case too. I didn't write this patch yet, since I believe the keepalive'd connections have a greater performance impact. P.S. I'd love to receive any feedback that you are willing to give me. Perhaps you can think of a far better way to do this, or improve the code in some way. Also I'd like to know if you'd consider this patch to be added to an official release, and what needs to be changed so this can happen. Thanks, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 11 11:35:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Sep 2013 15:35:46 +0400 Subject: returning a 302 from ACCESS_PHASE handler In-Reply-To: <522F902B.9050508@gmail.com> References: <522F902B.9050508@gmail.com> Message-ID: <20130911113546.GK20921@mdounin.ru> Hello! On Tue, Sep 10, 2013 at 02:33:31PM -0700, Michael Ellery wrote: > I have an ACCESS_PHASE handler and, in some cases, I want to > return a 302 to a different domain. I'm currently doing > the following: [...] > I suspect that calling ngx_http_finalize_request from my handler > is causing this, although I have not conclusively > proven this. > > Does someone know if it's possible to return a redirect from > ACCESS handlers and, if so, what is the proper way to > accomplish it? You shouldn't call finalize request yourself, just return NGX_HTTP_MOVED_TEMPORARILY instead. Take a look at ngx_http_core_access_phase() in src/http/ngx_http_core_module.c to see how it's handled. 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Sep 11 11:53:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Sep 2013 15:53:38 +0400 Subject: Fwd: Automatic pooling of upstream keepalive connections (patch proposal) In-Reply-To: References: Message-ID: <20130911115338.GL20921@mdounin.ru> Hello! On Wed, Sep 11, 2013 at 10:27:20AM +0300, Greg Vishnepolsky wrote: [...] > These are to be configurations of the ngx_http_proxy_module. In the > proposed patch, upstreams are created dynamically for new hosts (up to > "max_hosts" at the same time). For each upstream the number of simultaneous > keepalive'd connections would be "max_connections". > The following patch does not currently solve the SSL session reuse problem, > but it does handle the keepalive pooling problem. Here it is on github: > https://gist.github.com/gregvish/6511822/raw/1f7de28c8ac7bf133376eccee9bb4fc65a8a2917/default_keepalive_patch > It was taken against the 1.4.2 version source code. > > Most of the new code was added to the "ngx_http_upstream_keepalive_module". > This new code is called by the function > "ngx_http_upstream_resolve_handler", after the call to > "ngx_http_upstream_create_round_robin_peer". The new function I added, > "ngx_http_upstream_default_keepalive_adapt_peer" converts the peer that was > created by "create_round_robin_peer" into a "keepalive'd" peer (if the > "conversion" fails, the peer remains a regular round robin one, and > continues to work). While the patch may work, it looks bad from architectural point of view. It essentially makes upstream keepalive module an integral part of the upstream module, which isn't a good thing (and also will break --without-http_upstream_keepalive_module). The upstream module should provide an interface to do things instead. Also, it looks like the patch adds lots of code duplication. 
The code to check peer address and lookup a connection in the cache is already present in the upstream keepalive module, and it should be used instead of adding another structures/code to do the same task. -- Maxim Dounin http://nginx.org/en/donation.html From greg at adallom.com Wed Sep 11 12:46:51 2013 From: greg at adallom.com (Greg Vishnepolsky) Date: Wed, 11 Sep 2013 15:46:51 +0300 Subject: Fwd: Automatic pooling of upstream keepalive connections (patch proposal) Message-ID: Hi Maxim, thanks for the prompt reply! > While the patch may work, it looks bad from architectural point of > view. It essentially makes upstream keepalive module an integral > part of the upstream module, which isn't a good thing (and also > will break --without-http_upstream_ > keepalive_module). The > upstream module should provide an interface to do things instead. You're definitely right about this, I haven't thought about that configure option. How do you suggest to decouple the code? Perhaps add some kind of callback to the proxy configuration and expose a setter interface? > Also, it looks like the patch adds lots of code duplication. > The code to check peer address and lookup a connection in the > cache is already present in the upstream keepalive module, and it > should be used instead of adding another structures/code to do the > same task. When you're saying "is already present", are you referring to the code in "ngx_http_upstream_get_keepalive_peer", where "item->sockaddr" is being compared, as the key to the connection cache? If so, I'll try to see if it works in the described case. Perhaps a hostname should be added as another "uniqueness" identifier to this cache in addition to "sockaddr"? Then a single "ngx_http_upstream_keepalive_srv_conf_t" can be used for many hosts? If you believe that this should work, I agree that this is a better way to do the patch. Thanks, Greg -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Sep 11 13:30:58 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Sep 2013 17:30:58 +0400 Subject: Fwd: Automatic pooling of upstream keepalive connections (patch proposal) In-Reply-To: References: Message-ID: <20130911133058.GN20921@mdounin.ru> Hello! On Wed, Sep 11, 2013 at 03:46:51PM +0300, Greg Vishnepolsky wrote: > Hi Maxim, thanks for the prompt reply! > > > While the patch may work, it looks bad from architectural point of > > view. It essentially makes upstream keepalive module an integral > > part of the upstream module, which isn't a good thing (and also > > will break --without-http_upstream_ > > keepalive_module). The > > upstream module should provide an interface to do things instead. > > You're definitely right about this, I haven't thought about that configure > option. How do you suggest to decouple the code? Perhaps add some kind of > callback to the proxy configuration and expose a setter interface? I think right aproach would be to expose some kind of "default" upstream which can be used by modules / configured by users. Not sure how exactly this should be done from user point of view though. > > Also, it looks like the patch adds lots of code duplication. > > The code to check peer address and lookup a connection in the > > cache is already present in the upstream keepalive module, and it > > should be used instead of adding another structures/code to do the > > same task. > > When you're saying "is already present", are you referring to the code in > "ngx_http_upstream_get_keepalive_peer", where "item->sockaddr" is being > compared, as the key to the connection cache? > If so, I'll try to see if it works in the described case. Perhaps a > hostname should be added as another "uniqueness" identifier to this cache > in addition to "sockaddr"? Then a single > "ngx_http_upstream_keepalive_srv_conf_t" can be used for many hosts? 
> If you believe that this should work, I agree that this is a better way to > do the patch. Yes. The sockaddr contains the information needed to identify a peer, and it's already used in multi-server upstream blocks for this. -- Maxim Dounin http://nginx.org/en/donation.html From greg at adallom.com Wed Sep 11 15:54:00 2013 From: greg at adallom.com (Greg Vishnepolsky) Date: Wed, 11 Sep 2013 18:54:00 +0300 Subject: Fwd: Automatic pooling of upstream keepalive connections (patch proposal) In-Reply-To: <20130911133058.GN20921@mdounin.ru> References: <20130911133058.GN20921@mdounin.ru> Message-ID: Hi Maxim, OK, I've implemented your advice about the cache, and some initial testing shows that it works. I have removed all the code that manages the "kcf cache", and now there is only one default "ngx_http_upstream_keepalive_srv_conf_t". I have not yet implemented the decoupling from the upstream module, but I'll get to it soon. Here is the improved patch: https://gist.github.com/gregvish/6525382/raw/8e0d71a69319d3a9628c903e0112a275b3aff9c7/v2_default_keepalive_patch You've said the following: > Yes. The sockaddr contains the information needed to identify a peer, > and it's already used in multi-server upstream blocks for this. However, in the case of SSL connections, it is insufficient to identify a peer by sockaddr alone. The hostname is important. For example: https://a.host.com resolves to 1.1.1.1:443 https://b.host.com also resolves to 1.1.1.1:443 If the server at 1.1.1.1 holds an SSL cert _only_ for a.host.com, it would be wrong to use keepalive connections that were opened to this sockaddr for requests for b.host.com. If a connection is not reused, the host cert can be properly verified for each new host during the SSL handshake. The solution that I implemented for this is to add a "host" field to "ngx_http_upstream_keepalive_cache_t" and "ngx_http_upstream_keepalive_peer_data_t".
The function "ngx_http_upstream_get_keepalive_peer" now also checks that the host matches, as well as the sockaddr to reuse a keepalive connection. Please tell me what you think so far. Thanks, Greg On Wed, Sep 11, 2013 at 4:30 PM, Maxim Dounin wrote: > Hello! > > On Wed, Sep 11, 2013 at 03:46:51PM +0300, Greg Vishnepolsky wrote: > > > Hi Maxim, thanks for the prompt reply! > > > > > While the patch may work, it looks bad from architectural point of > > > view. It essentially makes upstream keepalive module an integral > > > part of the upstream module, which isn't a good thing (and also > > > will break --without-http_upstream_ > > > keepalive_module). The > > > upstream module should provide an interface to do things instead. > > > > You're definitely right about this, I haven't thought about that > configure > > option. How do you suggest to decouple the code? Perhaps add some kind of > > callback to the proxy configuration and expose a setter interface? > > I think right aproach would be to expose some kind of "default" > upstream which can be used by modules / configured by users. Not > sure how exactly this should be done from user point of view > though. > > > > Also, it looks like the patch adds lots of code duplication. > > > The code to check peer address and lookup a connection in the > > > cache is already present in the upstream keepalive module, and it > > > should be used instead of adding another structures/code to do the > > > same task. > > > > When you're saying "is already present", are you referring to the code in > > "ngx_http_upstream_get_keepalive_peer", where "item->sockaddr" is being > > compared, as the key to the connection cache? > > If so, I'll try to see if it works in the described case. Perhaps a > > hostname should be added as another "uniqueness" identifier to this cache > > in addition to "sockaddr"? Then a single > > "ngx_http_upstream_keepalive_srv_conf_t" can be used for many hosts? 
> > If you believe that this should work, I agree that this is a better way > to > > do the patch. > > Yes. The sockaddr contains information needed to identify a peer, > and it's already used in multi-server upstream blocks for this. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 11 16:32:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Sep 2013 20:32:59 +0400 Subject: Fwd: Automatic pooling of upstream keepalive connections (patch proposal) In-Reply-To: References: <20130911133058.GN20921@mdounin.ru> Message-ID: <20130911163259.GP20921@mdounin.ru> Hello! On Wed, Sep 11, 2013 at 06:54:00PM +0300, Greg Vishnepolsky wrote: [...] > However, in case of SSL connections, it is insufficient to identify a peer > according to the sockaddr. The hostname is important. For examlple: > https://a.host.com resolves to 1.1.1.1:443 > https://b.host.com also resoves to 1.1.1.1:443 > If the server at 1.1.1.1 holds an SSL cert _only_ for a.host.com, it would > be wrong to use keepalive connections that were opened to this sockaddr for > requests for b.host.com. If a connection will not be reused, during SSL > handshake the host cert can be properly verified for each new host. > The solution that I implemented for this is to add a "host" field to > "ngx_http_upstream_keepalive_cache_t" and > "ngx_http_upstream_keepalive_peer_data_t". The function > "ngx_http_upstream_get_keepalive_peer" now also checks that the host > matches, as well as the sockaddr to reuse a keepalive connection. As of now, there is no SSL certificate verification in proxy, and hence there is no need for a check here. 
With certificate verification introduced, some check will be needed, but a plain host equality check might be suboptimal - e.g., a certificate might be for *.example.com, and both a.example.com and b.example.com are valid hostnames for a connection, but a host check won't allow connection reuse. A possible solution would be to check the SSL peer name on an already established connection. SNI will also complicate things once introduced. But much like the certificate verification, it's a separate problem. -- Maxim Dounin http://nginx.org/en/donation.html From kyprizel at gmail.com Sat Sep 14 10:49:49 2013 From: kyprizel at gmail.com (kyprizel) Date: Sat, 14 Sep 2013 14:49:49 +0400 Subject: Distributed SSL session cache Message-ID: Hi, I'm thinking about the design of a patch adding a distributed SSL session cache and have a question - is it possible and OK to create a keepalive upstream to some storage (memcached/redis/etc), then use it from ngx_ssl_new_session/ngx_ssl_get_cached_session? -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Sat Sep 14 15:08:23 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Sat, 14 Sep 2013 19:08:23 +0400 Subject: Distributed SSL session cache In-Reply-To: References: Message-ID: <58391E4C-521A-41AC-8F68-770EE57D445F@nginx.com> On Sep 14, 2013, at 14:49, kyprizel wrote: > Hi, > I'm thinking about the design of a patch adding a distributed SSL session cache and have a question - > is it possible and OK to create a keepalive upstream to some storage (memcached/redis/etc), then use it from ngx_ssl_new_session/ngx_ssl_get_cached_session? I'm hoping someone from our dev team will follow up with more details, but I was going to ask whether you ever checked Matt Palmer's work (he did that for GitHub, apparently): http://hezmatt.org/~mpalmer/blog/2011/06/28/ssl-session-caching-in-nginx.html -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Sat Sep 14 18:53:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 14 Sep 2013 22:53:39 +0400 Subject: Distributed SSL session cache In-Reply-To: <58391E4C-521A-41AC-8F68-770EE57D445F@nginx.com> References: <58391E4C-521A-41AC-8F68-770EE57D445F@nginx.com> Message-ID: <20130914185339.GD29076@mdounin.ru> Hello! On Sat, Sep 14, 2013 at 07:08:23PM +0400, Andrew Alexeev wrote: > On Sep 14, 2013, at 14:49, kyprizel wrote: > > > Hi, > > I'm thinking about the design of a patch adding a distributed SSL > > session cache and have a question - > > is it possible and OK to create a keepalive upstream to some > > storage (memcached/redis/etc), then use it from > > ngx_ssl_new_session/ngx_ssl_get_cached_session? > > I'm hoping someone from our dev team will follow up with more > details, but I was going to ask whether you ever checked Matt > Palmer's work (he did that for GitHub, apparently): > > http://hezmatt.org/~mpalmer/blog/2011/06/28/ssl-session-caching-in-nginx.html It's blocking, and can hardly be called "work". A dirty hack at most. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Sep 14 19:06:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 14 Sep 2013 23:06:40 +0400 Subject: Distributed SSL session cache In-Reply-To: References: Message-ID: <20130914190640.GE29076@mdounin.ru> Hello! On Sat, Sep 14, 2013 at 02:49:49PM +0400, kyprizel wrote: > Hi, > I'm thinking about the design of a patch adding a distributed SSL session cache > and have a question - > is it possible and OK to create a keepalive upstream to some storage > (memcached/redis/etc), then use it from > ngx_ssl_new_session/ngx_ssl_get_cached_session? As far as I remember, OpenSSL doesn't provide a non-blocking interface to session lookup (I just did a quick look through the code, and it seems I remember it right). This basically ruins the idea, unless you are brave enough to implement the needed interfaces in OpenSSL.
I would rather focus on support for SSL session tickets shared between multiple servers. -- Maxim Dounin http://nginx.org/en/donation.html From greg at adallom.com Sun Sep 15 16:30:00 2013 From: greg at adallom.com (Greg Vishnepolsky) Date: Sun, 15 Sep 2013 19:30:00 +0300 Subject: Fwd: Automatic pooling of upstream keepalive connections (patch proposal) In-Reply-To: <20130911163259.GP20921@mdounin.ru> References: <20130911133058.GN20921@mdounin.ru> <20130911163259.GP20921@mdounin.ru> Message-ID: Hi there, While I agree that precise host comparison is suboptimal, I think that in this case it's better to choose the strictest (and simplest) approach for the sake of security. Here is a slightly revised version of the patch: https://gist.github.com/gregvish/6572002/raw/4664ac0d3a81473086f185075a1f67c1e02b5877/v3_default_keepalive_patch I've attempted to think of a nice way to decouple the code, but I couldn't come up with anything pretty. At this point I've put some ifdefs around the code that references the keepalive module from the proxy and upstream modules. This coupling is similar to the existing coupling of the "ngx_http_upstream_round_robin" module with the upstream module. In that case, the round-robin balancer is used as a default for "un-resolved" upstreams. I guess the right solution for the problem at hand should involve configuring both the balancer and keepalive (and other upstream options) for the default case. Thanks, Greg On Wed, Sep 11, 2013 at 7:32 PM, Maxim Dounin wrote: > Hello! > > On Wed, Sep 11, 2013 at 06:54:00PM +0300, Greg Vishnepolsky wrote: > > [...] > > > However, in case of SSL connections, it is insufficient to identify a > peer > > according to the sockaddr. The hostname is important.
For examlple: > > https://a.host.com resolves to 1.1.1.1:443 > > https://b.host.com also resoves to 1.1.1.1:443 > > If the server at 1.1.1.1 holds an SSL cert _only_ for a.host.com, it > would > > be wrong to use keepalive connections that were opened to this sockaddr > for > > requests for b.host.com. If a connection will not be reused, during SSL > > handshake the host cert can be properly verified for each new host. > > The solution that I implemented for this is to add a "host" field to > > "ngx_http_upstream_keepalive_cache_t" and > > "ngx_http_upstream_keepalive_peer_data_t". The function > > "ngx_http_upstream_get_keepalive_peer" now also checks that the host > > matches, as well as the sockaddr to reuse a keepalive connection. > > As of now, there is no SSL certificate verification in proxy, and > hence there is no need for a check here. > > With ceritificate verification introduction some check will be > needed, but just a host equality check might be suboptimal - e.g., a > certificate might be for *.example.com, and both a.example.com and > b.example.com are valid hostnames for a connection, but a host > check won't allow a connection reuse. Possible solution would be > to check SSL peer name on an already established connection. > > SNI will also complicate things once introduced. But much like > the certificate verification, it's a separate problem. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kyprizel at gmail.com Sun Sep 15 20:51:38 2013 From: kyprizel at gmail.com (kyprizel) Date: Mon, 16 Sep 2013 00:51:38 +0400 Subject: Distributed SSL session cache In-Reply-To: <20130914190640.GE29076@mdounin.ru> References: <20130914190640.GE29076@mdounin.ru> Message-ID: SSL session tickets are not good enough b/c they don't support modern cipher modes (like GCM) and they don't work with PFS. Is it generally possible to implement session lookup in non-blocking way in this case? If yes - is there any good example of OpenSSL's non-blocking callbacks? P.S. As an alternative (and I don't like this idea) - we can distribute sessions to nginx cache via custom-written module, something like it's done in stud. On Sat, Sep 14, 2013 at 11:06 PM, Maxim Dounin wrote: > Hello! > > On Sat, Sep 14, 2013 at 02:49:49PM +0400, kyprizel wrote: > > > Hi, > > I'm thinking on design of patch for adding distributed SSL session cache > > and have a question - > > is it possible and ok to create keepalive upstream to some storage > > (memcached/redis/etc), then use it from > > ngx_ssl_new_session/ngx_ssl_get_cached_session ? > > As far as I remember, OpenSSL doesn't provide a non-blocking > interface to session lookup (I've just did a quick look though > code, and it seems I remeber it right). This basically ruins the > the idea unless you are brave enough to implement needed > interfaces in OpenSSL. > > I would rather focus on a support for SSL session tickets shared > between multiple servers. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From piotr at cloudflare.com Mon Sep 16 08:30:30 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 16 Sep 2013 01:30:30 -0700 Subject: Distributed SSL session cache In-Reply-To: References: <20130914190640.GE29076@mdounin.ru> Message-ID: Hello, > SSL session tickets are not good enough b/c they don't support modern cipher modes (like GCM) and they don't work with PFS. Neither is true. Below is the output of nginx's debug log for two SSL handshakes. First connection creates new session (and does full handshake), while the second one successfully reuses session (and is doing only abbreviated handshake) using Session Ticket from the first connection. As you can see, there was no problem with negotiating TLS 1.2 or PFS cipher suite. [debug] 20655#0: *1 SSL_accept: before/accept initialization [debug] 20655#0: *1 SSL server name: "localhost" [debug] 20655#0: *1 SSL_accept: SSLv3 read client hello A [debug] 20655#0: *1 SSL_accept: SSLv3 write server hello A [debug] 20655#0: *1 SSL_accept: SSLv3 write certificate A [debug] 20655#0: *1 SSL_accept: SSLv3 write key exchange A [debug] 20655#0: *1 SSL_accept: SSLv3 write server done A [debug] 20655#0: *1 SSL_accept: SSLv3 flush data [debug] 20655#0: *1 SSL_do_handshake: -1 [debug] 20655#0: *1 SSL_get_error: 2 [debug] 20655#0: *1 SSL handshake handler: 0 [debug] 20655#0: *1 SSL_accept: SSLv3 read client key exchange A [debug] 20655#0: *1 SSL_accept: SSLv3 read finished A [debug] 20655#0: *1 SSL_accept: SSLv3 write session ticket A [debug] 20655#0: *1 SSL_accept: SSLv3 write change cipher spec A [debug] 20655#0: *1 SSL_accept: SSLv3 write finished A [debug] 20655#0: *1 SSL_accept: SSLv3 flush data [debug] 20655#0: *1 SSL_do_handshake: 1 [debug] 20655#0: *1 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD" [debug] 20655#0: *2 SSL_accept: before/accept initialization [debug] 20655#0: *2 SSL server name: "localhost" [debug] 20655#0: *2 SSL_accept: SSLv3 read client 
hello A [debug] 20655#0: *2 SSL_accept: SSLv3 write server hello A [debug] 20655#0: *2 SSL_accept: SSLv3 write change cipher spec A [debug] 20655#0: *2 SSL_accept: SSLv3 write finished A [debug] 20655#0: *2 SSL_accept: SSLv3 flush data [debug] 20655#0: *2 SSL_do_handshake: -1 [debug] 20655#0: *2 SSL_get_error: 2 [debug] 20655#0: *2 SSL handshake handler: 0 [debug] 20655#0: *2 SSL_accept: SSLv3 read finished A [debug] 20655#0: *2 SSL_do_handshake: 1 [debug] 20655#0: *2 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD" [debug] 20655#0: *2 SSL reused session Best regards, Piotr Sikora From kyprizel at gmail.com Mon Sep 16 09:03:09 2013 From: kyprizel at gmail.com (kyprizel) Date: Mon, 16 Sep 2013 13:03:09 +0400 Subject: Distributed SSL session cache In-Reply-To: References: <20130914190640.GE29076@mdounin.ru> Message-ID: Piotr, are we talking about "session tickets" ( http://tools.ietf.org/html/rfc4507) ? On Mon, Sep 16, 2013 at 12:30 PM, Piotr Sikora wrote: > Hello, > > > SSL session tickets are not good enough b/c they don't support modern > cipher modes (like GCM) and they don't work with PFS. > > Neither is true. Below is the output of nginx's debug log for two SSL > handshakes. First connection creates new session (and does full > handshake), while the second one successfully reuses session (and is > doing only abbreviated handshake) using Session Ticket from the first > connection. As you can see, there was no problem with negotiating TLS > 1.2 or PFS cipher suite. 
> > [debug] 20655#0: *1 SSL_accept: before/accept initialization > [debug] 20655#0: *1 SSL server name: "localhost" > [debug] 20655#0: *1 SSL_accept: SSLv3 read client hello A > [debug] 20655#0: *1 SSL_accept: SSLv3 write server hello A > [debug] 20655#0: *1 SSL_accept: SSLv3 write certificate A > [debug] 20655#0: *1 SSL_accept: SSLv3 write key exchange A > [debug] 20655#0: *1 SSL_accept: SSLv3 write server done A > [debug] 20655#0: *1 SSL_accept: SSLv3 flush data > [debug] 20655#0: *1 SSL_do_handshake: -1 > [debug] 20655#0: *1 SSL_get_error: 2 > [debug] 20655#0: *1 SSL handshake handler: 0 > [debug] 20655#0: *1 SSL_accept: SSLv3 read client key exchange A > [debug] 20655#0: *1 SSL_accept: SSLv3 read finished A > [debug] 20655#0: *1 SSL_accept: SSLv3 write session ticket A > [debug] 20655#0: *1 SSL_accept: SSLv3 write change cipher spec A > [debug] 20655#0: *1 SSL_accept: SSLv3 write finished A > [debug] 20655#0: *1 SSL_accept: SSLv3 flush data > [debug] 20655#0: *1 SSL_do_handshake: 1 > [debug] 20655#0: *1 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 > TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD" > > [debug] 20655#0: *2 SSL_accept: before/accept initialization > [debug] 20655#0: *2 SSL server name: "localhost" > [debug] 20655#0: *2 SSL_accept: SSLv3 read client hello A > [debug] 20655#0: *2 SSL_accept: SSLv3 write server hello A > [debug] 20655#0: *2 SSL_accept: SSLv3 write change cipher spec A > [debug] 20655#0: *2 SSL_accept: SSLv3 write finished A > [debug] 20655#0: *2 SSL_accept: SSLv3 flush data > [debug] 20655#0: *2 SSL_do_handshake: -1 > [debug] 20655#0: *2 SSL_get_error: 2 > [debug] 20655#0: *2 SSL handshake handler: 0 > [debug] 20655#0: *2 SSL_accept: SSLv3 read finished A > [debug] 20655#0: *2 SSL_do_handshake: 1 > [debug] 20655#0: *2 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 > TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD" > [debug] 20655#0: *2 SSL reused session > > Best regards, > Piotr Sikora > > 
_______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Mon Sep 16 09:12:06 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 16 Sep 2013 02:12:06 -0700 Subject: Distributed SSL session cache In-Reply-To: References: <20130914190640.GE29076@mdounin.ru> Message-ID: Hey, > Piotr, are we talking about "session tickets" > (http://tools.ietf.org/html/rfc4507) ? Yes, we are. Best regards, Piotr Sikora From mdounin at mdounin.ru Mon Sep 16 11:55:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Sep 2013 15:55:26 +0400 Subject: Distributed SSL session cache In-Reply-To: References: <20130914190640.GE29076@mdounin.ru> Message-ID: <20130916115526.GA57081@mdounin.ru> Hello! On Mon, Sep 16, 2013 at 12:51:38AM +0400, kyprizel wrote: > SSL session tickets are not good enough b/c they don't support modern > cipher modes (like GCM) and they don't work with PFS. Piotr has already replied to this. Session tickets are just a way to store the SSL session on the client, hence I see no problems with any ciphers. Forward secrecy might be a problem if you use long-term session ticket keys, but that's a matter of session ticket key rotation. > Is it generally possible to implement session lookup in non-blocking way in > this case? > If yes - is there any good example of OpenSSL's non-blocking callbacks? It should be possible, but it will likely require non-trivial changes in OpenSSL. And I don't know any good examples. > P.S. As an alternative (and I don't like this idea) - we can distribute > sessions to nginx cache via custom-written module, something like it's done > in stud. This should be doable, and probably it's the simplest solution if you want to stick with a server-side session store.
-- Maxim Dounin http://nginx.org/en/donation.html From daniel.black at openquery.com Mon Sep 16 12:58:47 2013 From: daniel.black at openquery.com (Daniel Black) Date: Mon, 16 Sep 2013 22:58:47 +1000 (EST) Subject: Distributed SSL session cache In-Reply-To: Message-ID: <1617692855.67.1379336327594.JavaMail.root@zimbra.lentz.com.au> For reference, some explicit session ticket support was written by me here: https://github.com/grooverdan/nginx Work on distributing the opaque session tickets still needs to be done. I hope this saves someone some implementation effort. This was previously reviewed. Please look up the comments previously received from Maxim (in the mailing list archives) before re-presenting this. I strongly encourage a distributed session ticket implementation. I wish I'd had the time to finish it myself. -- Daniel Black, Engineer @ Open Query (http://openquery.com) Remote expertise & maintenance for MySQL/MariaDB server environments. From daniel.black at openquery.com Mon Sep 16 13:21:25 2013 From: daniel.black at openquery.com (Daniel Black) Date: Mon, 16 Sep 2013 23:21:25 +1000 (EST) Subject: Distributed SSL session cache In-Reply-To: <20130916115526.GA57081@mdounin.ru> Message-ID: <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> ----- Original Message ----- > Hello! > > On Mon, Sep 16, 2013 at 12:51:38AM +0400, kyprizel wrote: > > > SSL session tickets are not good enough b/c they don't support > > modern > > cipher modes (like GCM) and they don't work with PFS. > > Piotr has already replied to this. Session tickets are just a way > to store the SSL session on the client, hence I see no problems with > any ciphers. Forward secrecy might be a problem if you use > long-term session ticket keys, but that's a matter of session > ticket key rotation. agree > > Is it generally possible to implement session lookup in non-blocking > > way in > > this case? > > If yes - is there any good example of OpenSSL's non-blocking > > callbacks?
> It should be possible, but it will likely require non-trivial > changes in OpenSSL. And I don't know any good examples. http://twistedmatrix.com/trac/browser/trunk/twisted/protocols/tls.py is in Python and uses Python-wrapped OpenSSL calls; however, it is non-blocking. > > P.S. As an alternative (and I don't like this idea) - we can > > distribute > > sessions to nginx cache via custom-written module, something like > > it's done > > in stud. > This should be doable, and probably it's the simplest solution if you > want to stick with a server-side session store. I was considering namespace allocation in the TLS ticket name amongst servers and an async distribution mechanism amongst servers (multicast?). Since there are only about 120 bytes of session ticket keys per server, allocating this on every web/mail server in a cluster probably isn't a high memory overhead, and since the session key info is reused, it's not bandwidth-intensive either. It also solves some non-blocking aspects associated with key retrieval. On client incompatibility (on ticket renewals), the GnuTLS devs fixed it right away, OpenSSL had already done a fix, and with NSS I had trouble replicating the problem. -- Daniel Black, Engineer @ Open Query (http://openquery.com) Remote expertise & maintenance for MySQL/MariaDB server environments. From mdounin at mdounin.ru Mon Sep 16 13:37:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Sep 2013 17:37:27 +0400 Subject: Distributed SSL session cache In-Reply-To: <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> Message-ID: <20130916133727.GF57081@mdounin.ru> Hello! On Mon, Sep 16, 2013 at 11:21:25PM +1000, Daniel Black wrote: [...] > > > Is it generally possible to implement session lookup in non-blocking > > > way in > > > this case? > > > If yes - is there any good example of OpenSSL's non-blocking > > > callbacks?
> > It should be possible, but it will likely require non-trivial > > changes in OpenSSL. And I don't know any good examples. > > > http://twistedmatrix.com/trac/browser/trunk/twisted/protocols/tls.py is in Python and uses Python-wrapped OpenSSL calls; however, it is non-blocking. We are talking about implementing the session lookup callbacks in OpenSSL in a non-blocking way. Using OpenSSL for non-blocking communication is what nginx already does. > > > P.S. As an alternative (and I don't like this idea) - we can > > > distribute > > > sessions to nginx cache via custom-written module, something like > > > it's done > > > in stud. > > > > This should be doable, and probably it's the simplest solution if you > > want to stick with a server-side session store. > > I was considering namespace allocation in the TLS ticket name > amongst servers and an async distribution mechanism amongst > servers (multicast?). Since there are only about 120 bytes of > session ticket keys per server, allocating this on every web/mail > server in a cluster probably isn't a high memory overhead, and > since the session key info is reused, it's not bandwidth-intensive > either. It also solves some non-blocking aspects associated with > key retrieval. > > On client incompatibility (on ticket renewals), the GnuTLS devs > fixed it right away, OpenSSL had already done a fix, and with NSS I > had trouble replicating the problem. This, again, is about distributing sessions, not session ticket keys. If we are considering distribution of session ticket keys, the simplest solution would be to just load the keys from the configuration. This way there is no need to bother with the security of the distribution mechanism, which otherwise is a major problem. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Mon Sep 16 14:34:50 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 16 Sep 2013 14:34:50 +0000 Subject: [nginx] Events: support for EPOLLRDHUP (ticket #320).
Message-ID: details: http://hg.nginx.org/nginx/rev/36b58ddb566d branches: changeset: 5372:36b58ddb566d user: Valentin Bartenev date: Fri Jul 12 14:51:07 2013 +0400 description: Events: support for EPOLLRDHUP (ticket #320). Since Linux 2.6.17, epoll is able to report about peer half-closed connection using special EPOLLRDHUP flag on a read event. diffstat: auto/modules | 1 + auto/os/linux | 16 ++++++++++++++++ src/event/modules/ngx_epoll_module.c | 18 +++++++++++++----- src/event/ngx_event.h | 9 +++++++-- 4 files changed, 37 insertions(+), 7 deletions(-) diffs (130 lines): diff -r b95e70ae6bcd -r 36b58ddb566d auto/modules --- a/auto/modules Thu Sep 05 16:53:02 2013 +0400 +++ b/auto/modules Fri Jul 12 14:51:07 2013 +0400 @@ -42,6 +42,7 @@ fi if [ $NGX_TEST_BUILD_EPOLL = YES ]; then have=NGX_HAVE_EPOLL . auto/have + have=NGX_HAVE_EPOLLRDHUP . auto/have have=NGX_HAVE_EVENTFD . auto/have have=NGX_TEST_BUILD_EPOLL . auto/have EVENT_MODULES="$EVENT_MODULES $EPOLL_MODULE" diff -r b95e70ae6bcd -r 36b58ddb566d auto/os/linux --- a/auto/os/linux Thu Sep 05 16:53:02 2013 +0400 +++ b/auto/os/linux Fri Jul 12 14:51:07 2013 +0400 @@ -65,6 +65,22 @@ if [ $ngx_found = yes ]; then CORE_SRCS="$CORE_SRCS $EPOLL_SRCS" EVENT_MODULES="$EVENT_MODULES $EPOLL_MODULE" EVENT_FOUND=YES + + + # EPOLLRDHUP appeared in Linux 2.6.17, glibc 2.8 + + ngx_feature="EPOLLRDHUP" + ngx_feature_name="NGX_HAVE_EPOLLRDHUP" + ngx_feature_run=no + ngx_feature_incs="#include " + ngx_feature_path= + ngx_feature_libs= + ngx_feature_test="int efd = 0, fd = 0; + struct epoll_event ee; + ee.events = EPOLLIN|EPOLLRDHUP|EPOLLET; + ee.data.ptr = NULL; + epoll_ctl(efd, EPOLL_CTL_ADD, fd, &ee)" + . 
auto/feature fi diff -r b95e70ae6bcd -r 36b58ddb566d src/event/modules/ngx_epoll_module.c --- a/src/event/modules/ngx_epoll_module.c Thu Sep 05 16:53:02 2013 +0400 +++ b/src/event/modules/ngx_epoll_module.c Fri Jul 12 14:51:07 2013 +0400 @@ -25,6 +25,8 @@ #define EPOLLERR 0x008 #define EPOLLHUP 0x010 +#define EPOLLRDHUP 0x2000 + #define EPOLLET 0x80000000 #define EPOLLONESHOT 0x40000000 @@ -396,13 +398,13 @@ ngx_epoll_add_event(ngx_event_t *ev, ngx if (event == NGX_READ_EVENT) { e = c->write; prev = EPOLLOUT; -#if (NGX_READ_EVENT != EPOLLIN) - events = EPOLLIN; +#if (NGX_READ_EVENT != EPOLLIN|EPOLLRDHUP) + events = EPOLLIN|EPOLLRDHUP; #endif } else { e = c->read; - prev = EPOLLIN; + prev = EPOLLIN|EPOLLRDHUP; #if (NGX_WRITE_EVENT != EPOLLOUT) events = EPOLLOUT; #endif @@ -466,7 +468,7 @@ ngx_epoll_del_event(ngx_event_t *ev, ngx } else { e = c->read; - prev = EPOLLIN; + prev = EPOLLIN|EPOLLRDHUP; } if (e->active) { @@ -501,7 +503,7 @@ ngx_epoll_add_connection(ngx_connection_ { struct epoll_event ee; - ee.events = EPOLLIN|EPOLLOUT|EPOLLET; + ee.events = EPOLLIN|EPOLLOUT|EPOLLET|EPOLLRDHUP; ee.data.ptr = (void *) ((uintptr_t) c | c->read->instance); ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, @@ -666,6 +668,12 @@ ngx_epoll_process_events(ngx_cycle_t *cy if ((revents & EPOLLIN) && rev->active) { +#if (NGX_HAVE_EPOLLRDHUP) + if (revents & EPOLLRDHUP) { + rev->pending_eof = 1; + } +#endif + if ((flags & NGX_POST_THREAD_EVENTS) && !rev->accept) { rev->posted_ready = 1; diff -r b95e70ae6bcd -r 36b58ddb566d src/event/ngx_event.h --- a/src/event/ngx_event.h Thu Sep 05 16:53:02 2013 +0400 +++ b/src/event/ngx_event.h Fri Jul 12 14:51:07 2013 +0400 @@ -71,7 +71,7 @@ struct ngx_event_s { unsigned deferred_accept:1; - /* the pending eof reported by kqueue or in aio chain operation */ + /* the pending eof reported by kqueue, epoll or in aio chain operation */ unsigned pending_eof:1; #if !(NGX_THREADS) @@ -349,6 +349,11 @@ extern ngx_event_actions_t ngx_event_a #define 
NGX_VNODE_EVENT 0 +#if (NGX_HAVE_EPOLL) && !(NGX_HAVE_EPOLLRDHUP) +#define EPOLLRDHUP 0 +#endif + + #if (NGX_HAVE_KQUEUE) #define NGX_READ_EVENT EVFILT_READ @@ -392,7 +397,7 @@ extern ngx_event_actions_t ngx_event_a #elif (NGX_HAVE_EPOLL) -#define NGX_READ_EVENT EPOLLIN +#define NGX_READ_EVENT (EPOLLIN|EPOLLRDHUP) #define NGX_WRITE_EVENT EPOLLOUT #define NGX_LEVEL_EVENT 0 From vbart at nginx.com Mon Sep 16 14:34:52 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 16 Sep 2013 14:34:52 +0000 Subject: [nginx] Upstream: use EPOLLRDHUP to check broken connections (ti... Message-ID: details: http://hg.nginx.org/nginx/rev/46bdbca10dfc branches: changeset: 5373:46bdbca10dfc user: Valentin Bartenev date: Mon Sep 16 18:33:39 2013 +0400 description: Upstream: use EPOLLRDHUP to check broken connections (ticket #320). This allows to detect client connection close with pending data on Linux while processing upstream. diffstat: src/http/ngx_http_upstream.c | 49 ++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 49 insertions(+), 0 deletions(-) diffs (59 lines): diff -r 36b58ddb566d -r 46bdbca10dfc src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Jul 12 14:51:07 2013 +0400 +++ b/src/http/ngx_http_upstream.c Mon Sep 16 18:33:39 2013 +0400 @@ -1070,6 +1070,55 @@ ngx_http_upstream_check_broken_connectio #endif +#if (NGX_HAVE_EPOLLRDHUP) + + if ((ngx_event_flags & NGX_USE_EPOLL_EVENT) && ev->pending_eof) { + socklen_t len; + + ev->eof = 1; + c->error = 1; + + err = 0; + len = sizeof(ngx_err_t); + + /* + * BSDs and Linux return 0 and set a pending error in err + * Solaris returns -1 and sets errno + */ + + if (getsockopt(c->fd, SOL_SOCKET, SO_ERROR, (void *) &err, &len) + == -1) + { + err = ngx_errno; + } + + if (err) { + ev->error = 1; + } + + if (!u->cacheable && u->peer.connection) { + ngx_log_error(NGX_LOG_INFO, ev->log, err, + "epoll_wait() reported that client prematurely closed " + "connection, so upstream connection is closed too"); + 
ngx_http_upstream_finalize_request(r, u, + NGX_HTTP_CLIENT_CLOSED_REQUEST); + return; + } + + ngx_log_error(NGX_LOG_INFO, ev->log, err, + "epoll_wait() reported that client prematurely closed " + "connection"); + + if (u->peer.connection == NULL) { + ngx_http_upstream_finalize_request(r, u, + NGX_HTTP_CLIENT_CLOSED_REQUEST); + } + + return; + } + +#endif + n = recv(c->fd, buf, 1, MSG_PEEK); err = ngx_socket_errno; From vbart at nginx.com Mon Sep 16 14:34:53 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 16 Sep 2013 14:34:53 +0000 Subject: [nginx] Use EPOLLRDHUP in ngx_http_test_reading() (ticket #320). Message-ID: details: http://hg.nginx.org/nginx/rev/ef3d094bb6d3 branches: changeset: 5374:ef3d094bb6d3 user: Valentin Bartenev date: Mon Sep 16 18:33:39 2013 +0400 description: Use EPOLLRDHUP in ngx_http_test_reading() (ticket #320). This allows to detect client connection close with pending data when the ngx_http_test_reading() request event handler is set. diffstat: src/http/ngx_http_request.c | 27 +++++++++++++++++++++++++++ 1 files changed, 27 insertions(+), 0 deletions(-) diffs (37 lines): diff -r 46bdbca10dfc -r ef3d094bb6d3 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Mon Sep 16 18:33:39 2013 +0400 +++ b/src/http/ngx_http_request.c Mon Sep 16 18:33:39 2013 +0400 @@ -2694,6 +2694,33 @@ ngx_http_test_reading(ngx_http_request_t #endif +#if (NGX_HAVE_EPOLLRDHUP) + + if ((ngx_event_flags & NGX_USE_EPOLL_EVENT) && rev->pending_eof) { + socklen_t len; + + rev->eof = 1; + c->error = 1; + + err = 0; + len = sizeof(ngx_err_t); + + /* + * BSDs and Linux return 0 and set a pending error in err + * Solaris returns -1 and sets errno + */ + + if (getsockopt(c->fd, SOL_SOCKET, SO_ERROR, (void *) &err, &len) + == -1) + { + err = ngx_errno; + } + + goto closed; + } + +#endif + n = recv(c->fd, buf, 1, MSG_PEEK); if (n == 0) { From vbart at nginx.com Mon Sep 16 15:07:21 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 16 Sep 2013 
15:07:21 +0000 Subject: [nginx] Use ngx_pcalloc() in ngx_conf_merge_path_value(). Message-ID: details: http://hg.nginx.org/nginx/rev/7d8770196436 branches: changeset: 5375:7d8770196436 user: Valentin Bartenev date: Mon Sep 16 18:49:10 2013 +0400 description: Use ngx_pcalloc() in ngx_conf_merge_path_value(). It initializes the "data" pointer of ngx_path_t that will be checked after subsequent changes. diffstat: src/core/ngx_file.c | 6 +----- 1 files changed, 1 insertions(+), 5 deletions(-) diffs (23 lines): diff -r ef3d094bb6d3 -r 7d8770196436 src/core/ngx_file.c --- a/src/core/ngx_file.c Mon Sep 16 18:33:39 2013 +0400 +++ b/src/core/ngx_file.c Mon Sep 16 18:49:10 2013 +0400 @@ -402,7 +402,7 @@ ngx_conf_merge_path_value(ngx_conf_t *cf return NGX_CONF_OK; } - *path = ngx_palloc(cf->pool, sizeof(ngx_path_t)); + *path = ngx_pcalloc(cf->pool, sizeof(ngx_path_t)); if (*path == NULL) { return NGX_CONF_ERROR; } @@ -421,10 +421,6 @@ ngx_conf_merge_path_value(ngx_conf_t *cf + init->level[1] + (init->level[1] ? 1 : 0) + init->level[2] + (init->level[2] ? 1 : 0); - (*path)->manager = NULL; - (*path)->loader = NULL; - (*path)->conf_file = NULL; - if (ngx_add_path(cf, path) != NGX_OK) { return NGX_CONF_ERROR; } From vbart at nginx.com Mon Sep 16 15:07:22 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 16 Sep 2013 15:07:22 +0000 Subject: [nginx] Removed surplus initializations from ngx_conf_set_path_s... Message-ID: details: http://hg.nginx.org/nginx/rev/dd9cb4edf499 branches: changeset: 5376:dd9cb4edf499 user: Valentin Bartenev date: Mon Sep 16 18:49:22 2013 +0400 description: Removed surplus initializations from ngx_conf_set_path_slot(). An instance of ngx_path_t is already zeroed by ngx_pcalloc(). 
diffstat: src/core/ngx_file.c | 3 --- 1 files changed, 0 insertions(+), 3 deletions(-) diffs (13 lines): diff -r 7d8770196436 -r dd9cb4edf499 src/core/ngx_file.c --- a/src/core/ngx_file.c Mon Sep 16 18:49:10 2013 +0400 +++ b/src/core/ngx_file.c Mon Sep 16 18:49:22 2013 +0400 @@ -359,9 +359,6 @@ ngx_conf_set_path_slot(ngx_conf_t *cf, n return NULL; } - path->len = 0; - path->manager = NULL; - path->loader = NULL; path->conf_file = cf->conf_file->file.name.data; path->line = cf->conf_file->line; From vbart at nginx.com Mon Sep 16 15:07:23 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 16 Sep 2013 15:07:23 +0000 Subject: [nginx] Improved check for duplicate path names in ngx_add_path(). Message-ID: details: http://hg.nginx.org/nginx/rev/cec155f07c84 branches: changeset: 5377:cec155f07c84 user: Valentin Bartenev date: Mon Sep 16 18:49:23 2013 +0400 description: Improved check for duplicate path names in ngx_add_path(). The same path names with different "data" context should not be allowed. 
In particular it rejects configurations like this: proxy_cache_path /var/cache/ keys_zone=one:10m max_size=1g inactive=5m; proxy_cache_path /var/cache/ keys_zone=two:20m max_size=4m inactive=30s; diffstat: src/core/ngx_file.c | 8 ++++++++ 1 files changed, 8 insertions(+), 0 deletions(-) diffs (18 lines): diff -r dd9cb4edf499 -r cec155f07c84 src/core/ngx_file.c --- a/src/core/ngx_file.c Mon Sep 16 18:49:22 2013 +0400 +++ b/src/core/ngx_file.c Mon Sep 16 18:49:23 2013 +0400 @@ -501,6 +501,14 @@ ngx_add_path(ngx_conf_t *cf, ngx_path_t if (p[i]->name.len == path->name.len && ngx_strcmp(p[i]->name.data, path->name.data) == 0) { + if (p[i]->data != path->data) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "the same path name \"%V\" " + "used in %s:%ui and", + &p[i]->name, p[i]->conf_file, p[i]->line); + return NGX_ERROR; + } + for (n = 0; n < 3; n++) { if (p[i]->level[n] != path->level[n]) { if (path->conf_file == NULL) { From piotr at cloudflare.com Mon Sep 16 21:33:44 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 16 Sep 2013 14:33:44 -0700 Subject: [PATCH] SSL: guard use of SSL_OP_MSIE_SSLV2_RSA_PADDING. Message-ID: Hello, while OpenSSL-1.0.1f isn't released just yet, the change that removes SSL_OP_MSIE_SSLV2_RSA_PADDING is already backported to OpenSSL_1_0_1-stable branch and I believe that it's better to proactively guard against this than to wait for people to complain that nginx doesn't compile with new OpenSSL. Best regards, Piotr Sikora # HG changeset patch # User Piotr Sikora # Date 1379366678 25200 # Mon Sep 16 14:24:38 2013 -0700 # Node ID a73678f5f96ffead0b616b2c03dfcfd5445d443b # Parent cec155f07c84953138455b65dfe678bb514e33ca SSL: guard use of SSL_OP_MSIE_SSLV2_RSA_PADDING. This option had no effect since 0.9.7h / 0.9.8b and it was removed in recent OpenSSL. 
Signed-off-by: Piotr Sikora diff -r cec155f07c84 -r a73678f5f96f src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Mon Sep 16 18:49:23 2013 +0400 +++ b/src/event/ngx_event_openssl.c Mon Sep 16 14:24:38 2013 -0700 @@ -185,8 +185,10 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ SSL_CTX_set_options(ssl->ctx, SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG); SSL_CTX_set_options(ssl->ctx, SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER); +#ifdef SSL_OP_MSIE_SSLV2_RSA_PADDING /* this option allow a potential SSL 2.0 rollback (CAN-2005-2969) */ SSL_CTX_set_options(ssl->ctx, SSL_OP_MSIE_SSLV2_RSA_PADDING); +#endif SSL_CTX_set_options(ssl->ctx, SSL_OP_SSLEAY_080_CLIENT_DH_BUG); SSL_CTX_set_options(ssl->ctx, SSL_OP_TLS_D5_BUG); From mdounin at mdounin.ru Tue Sep 17 02:33:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Sep 2013 02:33:14 +0000 Subject: [nginx] SSL: guard use of SSL_OP_MSIE_SSLV2_RSA_PADDING. Message-ID: details: http://hg.nginx.org/nginx/rev/a73678f5f96f branches: changeset: 5378:a73678f5f96f user: Piotr Sikora date: Mon Sep 16 14:24:38 2013 -0700 description: SSL: guard use of SSL_OP_MSIE_SSLV2_RSA_PADDING. This option had no effect since 0.9.7h / 0.9.8b and it was removed in recent OpenSSL. 
Signed-off-by: Piotr Sikora diffstat: src/event/ngx_event_openssl.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (14 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -185,8 +185,10 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ SSL_CTX_set_options(ssl->ctx, SSL_OP_SSLREF2_REUSE_CERT_TYPE_BUG); SSL_CTX_set_options(ssl->ctx, SSL_OP_MICROSOFT_BIG_SSLV3_BUFFER); +#ifdef SSL_OP_MSIE_SSLV2_RSA_PADDING /* this option allow a potential SSL 2.0 rollback (CAN-2005-2969) */ SSL_CTX_set_options(ssl->ctx, SSL_OP_MSIE_SSLV2_RSA_PADDING); +#endif SSL_CTX_set_options(ssl->ctx, SSL_OP_SSLEAY_080_CLIENT_DH_BUG); SSL_CTX_set_options(ssl->ctx, SSL_OP_TLS_D5_BUG); From mdounin at mdounin.ru Tue Sep 17 02:34:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Sep 2013 06:34:25 +0400 Subject: [PATCH] SSL: guard use of SSL_OP_MSIE_SSLV2_RSA_PADDING. In-Reply-To: References: Message-ID: <20130917023425.GM57081@mdounin.ru> Hello! On Mon, Sep 16, 2013 at 02:33:44PM -0700, Piotr Sikora wrote: > Hello, > while OpenSSL-1.0.1f isn't released just yet, the change that > removes SSL_OP_MSIE_SSLV2_RSA_PADDING is already backported to > OpenSSL_1_0_1-stable branch and I believe that it's better to > proactively guard against this than to wait for people to > complain that nginx doesn't compile with new OpenSSL. Sure. Committed, thanks. -- Maxim Dounin http://nginx.org/en/donation.html From vadim.lazovskiy at gmail.com Tue Sep 17 06:58:18 2013 From: vadim.lazovskiy at gmail.com (Vadim Lazovskiy) Date: Tue, 17 Sep 2013 10:58:18 +0400 Subject: How to return back to handler routine Message-ID: Hello, I'm trying to implement some kind of streaming module. I've opened udp socket at worker process start and reading from it infinitely to a ring buffer. Now I would like to install location handler and stream the buffer contents to client. 
I'm unable to comprehend how to return back to the handler routine after sending and flushing a chain with a first buffer (ngx_http_output_filter). Would you please briefly explain how to process a request in this way or provide an example/piece of code that implements similar functionality. Thank you! -- Best Regards, Vadim Lazovskiy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Sep 17 13:37:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Sep 2013 13:37:48 +0000 Subject: [nginx] nginx-1.5.5-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/60e0409b9ec7 branches: changeset: 5379:60e0409b9ec7 user: Maxim Dounin date: Tue Sep 17 17:31:00 2013 +0400 description: nginx-1.5.5-RELEASE diffstat: docs/xml/nginx/changes.xml | 79 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 79 insertions(+), 0 deletions(-) diffs (89 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,85 @@ + + + + +теперь nginx по умолчанию использует HTTP/1.0, +если точно определить протокол не удаётся. + + +now nginx assumes HTTP/1.0 by default +if it is not able to detect protocol reliably. + + + + + +директива disable_symlinks теперь использует O_PATH на Linux. + + +the "disable_symlinks" directive now uses O_PATH on Linux. + + + + + +для определения того, что клиент закрыл соединение, +при использовании метода epoll +теперь используются события EPOLLRDHUP. + + +now nginx uses EPOLLRDHUP events +to detect premature connection close by clients +if the "epoll" method is used. + + + + + +в директиве valid_referers при использовании параметра server_names. + + +in the "valid_referers" directive if the "server_names" parameter was used. + + + + + +переменная $request_time не работала в nginx/Windows. + + +the $request_time variable did not work in nginx/Windows. + + + + + +в директиве image_filter.
+Спасибо Lanshun Zhou. +
+ +in the "image_filter" directive.
+Thanks to Lanshun Zhou. +
+
+ + + +совместимость с OpenSSL 1.0.1f.
+Спасибо Piotr Sikora. +
+ +OpenSSL 1.0.1f compatibility.
+Thanks to Piotr Sikora. +
+
+ + +
+ + From mdounin at mdounin.ru Tue Sep 17 13:37:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Sep 2013 13:37:49 +0000 Subject: [nginx] release-1.5.5 tag Message-ID: details: http://hg.nginx.org/nginx/rev/15d823bf6d3e branches: changeset: 5380:15d823bf6d3e user: Maxim Dounin date: Tue Sep 17 17:31:00 2013 +0400 description: release-1.5.5 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -360,3 +360,4 @@ 99eed1a88fc33f32d66e2ec913874dfef3e12fcc 5bdca4812974011731e5719a6c398b54f14a6d61 release-1.5.2 644a079526295aca11c52c46cb81e3754e6ad4ad release-1.5.3 376a5e7694004048a9d073e4feb81bb54ee3ba91 release-1.5.4 +60e0409b9ec7ee194c6d8102f0656598cc4a6cfe release-1.5.5 From piotr at cloudflare.com Wed Sep 18 02:43:03 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 17 Sep 2013 19:43:03 -0700 Subject: [PATCH] MIME: added application/json MIME type. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1379472064 25200 # Tue Sep 17 19:41:04 2013 -0700 # Node ID 95fe40e55d7fddc1c38ac309d4128bae3d9da485 # Parent 15d823bf6d3e3df5e202222b3a1832f67c024bfe MIME: added application/json MIME type. Signed-off-by: Piotr Sikora diff -r 15d823bf6d3e -r 95fe40e55d7f conf/mime.types --- a/conf/mime.types Tue Sep 17 17:31:00 2013 +0400 +++ b/conf/mime.types Tue Sep 17 19:41:04 2013 -0700 @@ -6,6 +6,7 @@ types { image/gif gif; image/jpeg jpeg jpg; application/javascript js; + application/json json; application/atom+xml atom; application/rss+xml rss; From mdounin at mdounin.ru Wed Sep 18 14:54:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Sep 2013 18:54:51 +0400 Subject: [PATCH] MIME: added application/json MIME type. In-Reply-To: References: Message-ID: <20130918145451.GM57081@mdounin.ru> Hello! 
On Tue, Sep 17, 2013 at 07:43:03PM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1379472064 25200 > # Tue Sep 17 19:41:04 2013 -0700 > # Node ID 95fe40e55d7fddc1c38ac309d4128bae3d9da485 > # Parent 15d823bf6d3e3df5e202222b3a1832f67c024bfe > MIME: added application/json MIME type. > > Signed-off-by: Piotr Sikora > > diff -r 15d823bf6d3e -r 95fe40e55d7f conf/mime.types > --- a/conf/mime.types Tue Sep 17 17:31:00 2013 +0400 > +++ b/conf/mime.types Tue Sep 17 19:41:04 2013 -0700 > @@ -6,6 +6,7 @@ types { > image/gif gif; > image/jpeg jpeg jpg; > application/javascript js; > + application/json json; > application/atom+xml atom; > application/rss+xml rss; I'm not sure we need this at all (but likely we do), but at least this seems to be a wrong section of the file. -- Maxim Dounin http://nginx.org/en/donation.html From pluknet at nginx.com Wed Sep 18 14:57:37 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 18 Sep 2013 14:57:37 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/3adbd23bf79e branches: changeset: 5381:3adbd23bf79e user: Sergey Kandaurov date: Wed Sep 18 18:53:24 2013 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 15d823bf6d3e -r 3adbd23bf79e src/core/nginx.h --- a/src/core/nginx.h Tue Sep 17 17:31:00 2013 +0400 +++ b/src/core/nginx.h Wed Sep 18 18:53:24 2013 +0400 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1005005 -#define NGINX_VERSION "1.5.5" +#define nginx_version 1005006 +#define NGINX_VERSION "1.5.6" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" From pluknet at nginx.com Wed Sep 18 14:57:38 2013 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 18 Sep 2013 14:57:38 +0000 Subject: [nginx] Fixed response line formatting with empty reason phrase. 
Message-ID: details: http://hg.nginx.org/nginx/rev/e8d24b6d7f73 branches: changeset: 5382:e8d24b6d7f73 user: Sergey Kandaurov date: Wed Sep 18 18:53:26 2013 +0400 description: Fixed response line formatting with empty reason phrase. As per RFC 2616 sec 6.1 the response status code is always followed by SP. diffstat: src/http/ngx_http_header_filter_module.c | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diffs (28 lines): diff -r 3adbd23bf79e -r e8d24b6d7f73 src/http/ngx_http_header_filter_module.c --- a/src/http/ngx_http_header_filter_module.c Wed Sep 18 18:53:24 2013 +0400 +++ b/src/http/ngx_http_header_filter_module.c Wed Sep 18 18:53:26 2013 +0400 @@ -264,13 +264,13 @@ ngx_http_header_filter(ngx_http_request_ len += ngx_http_status_lines[status].len; } else { - len += NGX_INT_T_LEN; + len += NGX_INT_T_LEN + 1 /* SP */; status_line = NULL; } if (status_line && status_line->len == 0) { status = r->headers_out.status; - len += NGX_INT_T_LEN; + len += NGX_INT_T_LEN + 1 /* SP */; status_line = NULL; } } @@ -451,7 +451,7 @@ ngx_http_header_filter(ngx_http_request_ b->last = ngx_copy(b->last, status_line->data, status_line->len); } else { - b->last = ngx_sprintf(b->last, "%03ui", status); + b->last = ngx_sprintf(b->last, "%03ui ", status); } *b->last++ = CR; *b->last++ = LF; From piotr at cloudflare.com Wed Sep 18 20:07:20 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 18 Sep 2013 13:07:20 -0700 Subject: [PATCH] MIME: added application/json MIME type. In-Reply-To: <20130918145451.GM57081@mdounin.ru> References: <20130918145451.GM57081@mdounin.ru> Message-ID: Hi Maxim, > I'm not sure we need this at all (but likely we do), but at least > this seems to be a wrong section of the file. It's probably more of a "nice to have" than "needed", but this MIME type is already being used by the image filter and JSON is definitely more popular than some of the other MIME types in that file, so I don't see a reason why it shouldn't be there. 
Definition moved to (hopefully) correct section of the file. Best regards, Piotr Sikora # HG changeset patch # User Piotr Sikora # Date 1379534387 25200 # Wed Sep 18 12:59:47 2013 -0700 # Node ID 1adfe7a260ebe6fa3519b5229847c53470321990 # Parent e8d24b6d7f7304df77ccde7fc8223434c91b5322 MIME: added application/json MIME type. Signed-off-by: Piotr Sikora diff -r e8d24b6d7f73 -r 1adfe7a260eb conf/mime.types --- a/conf/mime.types Wed Sep 18 18:53:26 2013 +0400 +++ b/conf/mime.types Wed Sep 18 12:59:47 2013 -0700 @@ -26,6 +26,7 @@ types { application/font-woff woff; application/java-archive jar war ear; + application/json json; application/mac-binhex40 hqx; application/msword doc; application/pdf pdf; From piotr at cloudflare.com Wed Sep 18 23:55:25 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 18 Sep 2013 16:55:25 -0700 Subject: [PATCH] SSL: fixed possible memory and file descriptor leak on HUP signal. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1379548290 25200 # Wed Sep 18 16:51:30 2013 -0700 # Node ID c0be8de389be2012875a19a812ebf3ccc66c147d # Parent e8d24b6d7f7304df77ccde7fc8223434c91b5322 SSL: fixed possible memory and file descriptor leak on HUP signal. The problem appeared in 386a06a22c40 (1.3.7). Signed-off-by: Piotr Sikora diff -r e8d24b6d7f73 -r c0be8de389be src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Wed Sep 18 18:53:26 2013 +0400 +++ b/src/event/ngx_event_openssl.c Wed Sep 18 16:51:30 2013 -0700 @@ -280,6 +280,8 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "SSL_CTX_set_ex_data() failed"); + X509_free(x509); + BIO_free(bio); return NGX_ERROR; } From mdounin at mdounin.ru Thu Sep 19 13:26:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Sep 2013 13:26:24 +0000 Subject: [nginx] MIME: added application/json MIME type. 
Message-ID: details: http://hg.nginx.org/nginx/rev/1adfe7a260eb branches: changeset: 5383:1adfe7a260eb user: Piotr Sikora date: Wed Sep 18 12:59:47 2013 -0700 description: MIME: added application/json MIME type. Signed-off-by: Piotr Sikora diffstat: conf/mime.types | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/conf/mime.types b/conf/mime.types --- a/conf/mime.types +++ b/conf/mime.types @@ -26,6 +26,7 @@ types { application/font-woff woff; application/java-archive jar war ear; + application/json json; application/mac-binhex40 hqx; application/msword doc; application/pdf pdf; From mdounin at mdounin.ru Thu Sep 19 13:26:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Sep 2013 13:26:25 +0000 Subject: [nginx] SSL: fixed possible memory and file descriptor leak on H... Message-ID: details: http://hg.nginx.org/nginx/rev/cfbf1d1cc233 branches: changeset: 5384:cfbf1d1cc233 user: Piotr Sikora date: Wed Sep 18 16:51:30 2013 -0700 description: SSL: fixed possible memory and file descriptor leak on HUP signal. The problem appeared in 386a06a22c40 (1.3.7). Signed-off-by: Piotr Sikora diffstat: src/event/ngx_event_openssl.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (12 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -280,6 +280,8 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "SSL_CTX_set_ex_data() failed"); + X509_free(x509); + BIO_free(bio); return NGX_ERROR; } From mdounin at mdounin.ru Thu Sep 19 13:26:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Sep 2013 17:26:41 +0400 Subject: [PATCH] SSL: fixed possible memory and file descriptor leak on HUP signal. In-Reply-To: References: Message-ID: <20130919132641.GV57081@mdounin.ru> Hello! 
On Wed, Sep 18, 2013 at 04:55:25PM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1379548290 25200 > # Wed Sep 18 16:51:30 2013 -0700 > # Node ID c0be8de389be2012875a19a812ebf3ccc66c147d > # Parent e8d24b6d7f7304df77ccde7fc8223434c91b5322 > SSL: fixed possible memory and file descriptor leak on HUP signal. > > The problem appeared in 386a06a22c40 (1.3.7). > > Signed-off-by: Piotr Sikora > > diff -r e8d24b6d7f73 -r c0be8de389be src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c Wed Sep 18 18:53:26 2013 +0400 > +++ b/src/event/ngx_event_openssl.c Wed Sep 18 16:51:30 2013 -0700 > @@ -280,6 +280,8 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ > { > ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, > "SSL_CTX_set_ex_data() failed"); > + X509_free(x509); > + BIO_free(bio); > return NGX_ERROR; > } Committed, thanks. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Sep 19 13:26:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Sep 2013 17:26:32 +0400 Subject: [PATCH] MIME: added application/json MIME type. In-Reply-To: References: <20130918145451.GM57081@mdounin.ru> Message-ID: <20130919132632.GU57081@mdounin.ru> Hello! On Wed, Sep 18, 2013 at 01:07:20PM -0700, Piotr Sikora wrote: > Hi Maxim, > > > I'm not sure we need this at all (but likely we do), but at least > > this seems to be a wrong section of the file. > > It's probably more of a "nice to have" than "needed", but this > MIME type is already being used by the image filter and JSON > is definitely more popular than some of the other MIME types > in that file, so I don't see a reason why it shouldn't be there. The json is mostly used as a MIME type of dynamic responses, and it's popularity as a MIME type of files is very unclear for me. But locate suggests that my own notebook has more than 1k .json files, so it probably worth adding. 
:) > # HG changeset patch > # User Piotr Sikora > # Date 1379534387 25200 > # Wed Sep 18 12:59:47 2013 -0700 > # Node ID 1adfe7a260ebe6fa3519b5229847c53470321990 > # Parent e8d24b6d7f7304df77ccde7fc8223434c91b5322 > MIME: added application/json MIME type. > > Signed-off-by: Piotr Sikora Committed, thanks. -- Maxim Dounin http://nginx.org/en/donation.html From defan at nginx.com Thu Sep 19 14:31:56 2013 From: defan at nginx.com (Andrei Belov) Date: Thu, 19 Sep 2013 14:31:56 +0000 Subject: [nginx] Proxy: added the "proxy_ssl_protocols" directive. Message-ID: details: http://hg.nginx.org/nginx/rev/7c1f4977d8a0 branches: changeset: 5385:7c1f4977d8a0 user: Andrei Belov date: Thu Sep 19 18:30:33 2013 +0400 description: Proxy: added the "proxy_ssl_protocols" directive. diffstat: src/http/modules/ngx_http_proxy_module.c | 50 +++++++++++++++++++++++++------ 1 files changed, 40 insertions(+), 10 deletions(-) diffs (109 lines): diff -r cfbf1d1cc233 -r 7c1f4977d8a0 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Sep 18 16:51:30 2013 -0700 +++ b/src/http/modules/ngx_http_proxy_module.c Thu Sep 19 18:30:33 2013 +0400 @@ -76,6 +76,11 @@ typedef struct { ngx_uint_t headers_hash_max_size; ngx_uint_t headers_hash_bucket_size; + +#if (NGX_HTTP_SSL) + ngx_uint_t ssl; + ngx_uint_t ssl_protocols; +#endif } ngx_http_proxy_loc_conf_t; @@ -186,6 +191,20 @@ static ngx_conf_bitmask_t ngx_http_prox }; +#if (NGX_HTTP_SSL) + +static ngx_conf_bitmask_t ngx_http_proxy_ssl_protocols[] = { + { ngx_string("SSLv2"), NGX_SSL_SSLv2 }, + { ngx_string("SSLv3"), NGX_SSL_SSLv3 }, + { ngx_string("TLSv1"), NGX_SSL_TLSv1 }, + { ngx_string("TLSv1.1"), NGX_SSL_TLSv1_1 }, + { ngx_string("TLSv1.2"), NGX_SSL_TLSv1_2 }, + { ngx_null_string, 0 } +}; + +#endif + + static ngx_conf_enum_t ngx_http_proxy_http_version[] = { { ngx_string("1.0"), NGX_HTTP_VERSION_10 }, { ngx_string("1.1"), NGX_HTTP_VERSION_11 }, @@ -512,6 +531,13 @@ static ngx_command_t ngx_http_proxy_com 
offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_session_reuse), NULL }, + { ngx_string("proxy_ssl_protocols"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_1MORE, + ngx_conf_set_bitmask_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, ssl_protocols), + &ngx_http_proxy_ssl_protocols }, + #endif ngx_null_command @@ -2386,6 +2412,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->body_set = NULL; * conf->body_source = { 0, NULL }; * conf->redirects = NULL; + * conf->ssl = 0; + * conf->ssl_protocols = 0; */ conf->upstream.store = NGX_CONF_UNSET; @@ -2701,6 +2729,15 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t #if (NGX_HTTP_SSL) ngx_conf_merge_value(conf->upstream.ssl_session_reuse, prev->upstream.ssl_session_reuse, 1); + + ngx_conf_merge_bitmask_value(conf->ssl_protocols, prev->ssl_protocols, + (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3 + |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 + |NGX_SSL_TLSv1_2)); + + if (conf->ssl && ngx_http_proxy_set_ssl(cf, conf) != NGX_OK) { + return NGX_CONF_ERROR; + } #endif ngx_conf_merge_value(conf->redirect, prev->redirect, 1); @@ -3146,9 +3183,7 @@ ngx_http_proxy_pass(ngx_conf_t *cf, ngx_ } #if (NGX_HTTP_SSL) - if (ngx_http_proxy_set_ssl(cf, plcf) != NGX_OK) { - return NGX_CONF_ERROR; - } + plcf->ssl = 1; #endif return NGX_CONF_OK; @@ -3161,9 +3196,7 @@ ngx_http_proxy_pass(ngx_conf_t *cf, ngx_ } else if (ngx_strncasecmp(url->data, (u_char *) "https://", 8) == 0) { #if (NGX_HTTP_SSL) - if (ngx_http_proxy_set_ssl(cf, plcf) != NGX_OK) { - return NGX_CONF_ERROR; - } + plcf->ssl = 1; add = 8; port = 443; @@ -3745,10 +3778,7 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n plcf->upstream.ssl->log = cf->log; - if (ngx_ssl_create(plcf->upstream.ssl, - NGX_SSL_SSLv2|NGX_SSL_SSLv3|NGX_SSL_TLSv1 - |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2, - NULL) + if (ngx_ssl_create(plcf->upstream.ssl, plcf->ssl_protocols, NULL) != NGX_OK) { return NGX_ERROR; From savages at mozapps.com Fri Sep 20 16:39:39 2013 From: savages at mozapps.com (sv) 
Date: Fri, 20 Sep 2013 09:39:39 -0700 Subject: auth and security In-Reply-To: References: Message-ID: <523C7A4B.2010306@mozapps.com> I have a configuration that is working but I would like a second( third, fourth...) opinion. what I want to do it protect a location /zot. zot contains static pages /zot/ws is a web socket connection only with a cookie can a person access /zot and /zot/ws if no cookie rewrite to /login login serves a page to login. the reply is back to /login if login is accessed with $args rewite to /auth /auth validates the credentials and returns /zot/index and cookie not valid returns /login /zot/ws is websocket connection /auth is a cgi /login servers login static pages /zot returns static app pages that is the basic idea. code ********************** location /auth { if ($login = '') { return 403; } proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # maybe all cookies? proxy_set_header Cookie $cookie_hzc; # I tried database, did not work, $nextval was always nothing #postgres_pass database; #postgres_query HEAD GET "select nextval('nextsession')"; #postgres_rewrite no_rows 403; #postgres_output text; #postgres_set $nextval 0 0 required; #set $args $args&sess=$nextval; # if I could authorize and make a secure cookie here # that the back end knows about is OK proxy_pass http://localhost:8088; } location /login { if ($args) { # maybe check the args? user=??? passwd=""" * lua here OK set $login 1; rewrite ^/login/login(.*)$ /auth$1; } alias /var/www/login; } location /zot/ws { if ($http_cookie !~* 'hzc') { # maybe check cookie? * lua is OK rewrite ^/hzc(.*)$ /login$1; } # maybe all cookies? 
proxy_set_header Cookie $cookie_hzc; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_http_version 1.1; proxy_pass http://localhost:8088; } location /zot { if ($http_cookie !~* 'hzc') { # maybe check cookie? * lua is OK rewrite ^/hzc(.*)$ /login$1; } alias /var/www/zot; } From Markus.Linnala at cybercom.com Fri Sep 20 19:48:03 2013 From: Markus.Linnala at cybercom.com (Markus Linnala) Date: Fri, 20 Sep 2013 22:48:03 +0300 Subject: [PATCH] Core: fix misallocation at ngx_crypt_apr1 Message-ID: <4e7279d4c9c418168337.1379706483@maage-hp-ep.localdomain> # HG changeset patch # User Markus Linnala # Date 1379689041 -10800 # Fri Sep 20 17:57:21 2013 +0300 # Node ID 4e7279d4c9c4181683373df3947749a7727b89a4 # Parent 7c1f4977d8a0bf49075139c4b8ac4fbd7bef4a63 Core: fix misallocation at ngx_crypt_apr1 Found by using auth_basic.t from mdounin nginx-tests under valgrind. 
==10470== Invalid write of size 1 ==10470== at 0x43603D: ngx_crypt_to64 (ngx_crypt.c:168) ==10470== by 0x43648E: ngx_crypt (ngx_crypt.c:153) ==10470== by 0x489D8B: ngx_http_auth_basic_crypt_handler (ngx_http_auth_basic_module.c:297) ==10470== by 0x48A24A: ngx_http_auth_basic_handler (ngx_http_auth_basic_module.c:240) ==10470== by 0x44EAB9: ngx_http_core_access_phase (ngx_http_core_module.c:1121) ==10470== by 0x44A822: ngx_http_core_run_phases (ngx_http_core_module.c:895) ==10470== by 0x44A932: ngx_http_handler (ngx_http_core_module.c:878) ==10470== by 0x455EEF: ngx_http_process_request (ngx_http_request.c:1852) ==10470== by 0x456527: ngx_http_process_request_headers (ngx_http_request.c:1283) ==10470== by 0x456A91: ngx_http_process_request_line (ngx_http_request.c:964) ==10470== by 0x457097: ngx_http_wait_request_handler (ngx_http_request.c:486) ==10470== by 0x4411EE: ngx_epoll_process_events (ngx_epoll_module.c:691) ==10470== Address 0x5866fab is 0 bytes after a block of size 27 alloc'd ==10470== at 0x4A074CD: malloc (vg_replace_malloc.c:236) ==10470== by 0x43B251: ngx_alloc (ngx_alloc.c:22) ==10470== by 0x421B0D: ngx_malloc (ngx_palloc.c:119) ==10470== by 0x421B65: ngx_pnalloc (ngx_palloc.c:147) ==10470== by 0x436368: ngx_crypt (ngx_crypt.c:140) ==10470== by 0x489D8B: ngx_http_auth_basic_crypt_handler (ngx_http_auth_basic_module.c:297) ==10470== by 0x48A24A: ngx_http_auth_basic_handler (ngx_http_auth_basic_module.c:240) ==10470== by 0x44EAB9: ngx_http_core_access_phase (ngx_http_core_module.c:1121) ==10470== by 0x44A822: ngx_http_core_run_phases (ngx_http_core_module.c:895) ==10470== by 0x44A932: ngx_http_handler (ngx_http_core_module.c:878) ==10470== by 0x455EEF: ngx_http_process_request (ngx_http_request.c:1852) ==10470== by 0x456527: ngx_http_process_request_headers (ngx_http_request.c:1283) ==10470== This fixes ticket #412 diff -r 7c1f4977d8a0 -r 4e7279d4c9c4 src/core/ngx_crypt.c --- a/src/core/ngx_crypt.c Thu Sep 19 18:30:33 2013 +0400 +++ 
b/src/core/ngx_crypt.c Fri Sep 20 17:57:21 2013 +0300 @@ -137,7 +137,7 @@ /* output */ - *encrypted = ngx_pnalloc(pool, sizeof("$apr1$") - 1 + saltlen + 16 + 1); + *encrypted = ngx_pnalloc(pool, sizeof("$apr1$") - 1 + saltlen + 1 + 22 + 1); if (*encrypted == NULL) { return NGX_ERROR; } From Markus.Linnala at cybercom.com Fri Sep 20 19:48:12 2013 From: Markus.Linnala at cybercom.com (Markus Linnala) Date: Fri, 20 Sep 2013 22:48:12 +0300 Subject: [PATCH] Mail: fix STARTTLS misalloc Message-ID: <79cea900573997a74400.1379706492@maage-hp-ep.localdomain> # HG changeset patch # User Markus Linnala # Date 1379691757 -10800 # Fri Sep 20 18:42:37 2013 +0300 # Node ID 79cea900573997a74400dcef925de41ec6c150e7 # Parent 4e7279d4c9c4181683373df3947749a7727b89a4 Mail: fix STARTTLS misalloc Found by mail_imap.t from mdounin nginx-tests when running under valgrind. ==10647== Invalid write of size 1 ==10647== at 0x4B1493: ngx_mail_smtp_merge_srv_conf (ngx_mail_smtp_module.c:280) ==10647== by 0x4AB363: ngx_mail_block (ngx_mail.c:209) ==10647== by 0x4303BE: ngx_conf_parse (ngx_conf_file.c:391) ==10647== by 0x42DF03: ngx_init_cycle (ngx_cycle.c:265) ==10647== by 0x4206A9: main (nginx.c:333) ==10647== Address 0x550fb84 is 0 bytes after a block of size 68 alloc'd ==10647== at 0x4A074CD: malloc (vg_replace_malloc.c:236) ==10647== by 0x43B251: ngx_alloc (ngx_alloc.c:22) ==10647== by 0x421B0D: ngx_malloc (ngx_palloc.c:119) ==10647== by 0x421B65: ngx_pnalloc (ngx_palloc.c:147) ==10647== by 0x4B1447: ngx_mail_smtp_merge_srv_conf (ngx_mail_smtp_module.c:269) ==10647== by 0x4AB363: ngx_mail_block (ngx_mail.c:209) ==10647== by 0x4303BE: ngx_conf_parse (ngx_conf_file.c:391) ==10647== by 0x42DF03: ngx_init_cycle (ngx_cycle.c:265) ==10647== by 0x4206A9: main (nginx.c:333) ==10647== I choose to retain extra CRLF as I could not test protocol change easily. As per RFC 2487 there is no extra CRLF. But it was not obvious why it was there from history. 
This fixes ticket #411 diff -r 4e7279d4c9c4 -r 79cea9005739 src/mail/ngx_mail_smtp_module.c --- a/src/mail/ngx_mail_smtp_module.c Fri Sep 20 17:57:21 2013 +0300 +++ b/src/mail/ngx_mail_smtp_module.c Fri Sep 20 18:42:37 2013 +0300 @@ -264,7 +264,7 @@ last[3] = ' '; } - size += sizeof("250 STARTTLS" CRLF) - 1; + size += sizeof("250 STARTTLS" CRLF CRLF) - 1; p = ngx_pnalloc(cf->pool, size); if (p == NULL) { @@ -276,8 +276,7 @@ p = ngx_cpymem(p, conf->capability.data, conf->capability.len); - p = ngx_cpymem(p, "250 STARTTLS" CRLF, sizeof("250 STARTTLS" CRLF) - 1); - *p++ = CR; *p = LF; + p = ngx_cpymem(p, "250 STARTTLS" CRLF CRLF, sizeof("250 STARTTLS" CRLF CRLF) - 1); p = conf->starttls_capability.data + (last - conf->capability.data) + 3; From eowner at gmail.com Sat Sep 21 00:13:14 2013 From: eowner at gmail.com (Kohei Ozaki) Date: Sat, 21 Sep 2013 09:13:14 +0900 Subject: Patch to handle 204 on limit_req_status Message-ID: Hello nginx-devel guys, I've made small changes in the bounds-check of limit_req_status/limit_conn_status directives. In OpenRTB (Real-time Bidding) protocol, "No-Bids" on all impressions are indicated as HTTP 204 response. It would be useful to handle 204 on limit_req_status/limit_conn_status directives. * ref: "OpenRTB API Specification" http://www.iab.net/media/file/OpenRTB-API-Specification-Version-2-1-FINAL.pdf I hope it will be accepted if it is suitable. 
The patch to handle HTTP 2xx (not include 200) response is the following: ```` diff --git a/src/http/modules/ngx_http_limit_conn_module.c b/src/http/modules/ngx_http_limit_conn_module.c index 7f0eea7..0d29f6b 100644 --- a/src/http/modules/ngx_http_limit_conn_module.c +++ b/src/http/modules/ngx_http_limit_conn_module.c @@ -76,7 +76,7 @@ static ngx_conf_enum_t ngx_http_limit_conn_log_levels[] = { static ngx_conf_num_bounds_t ngx_http_limit_conn_status_bounds = { - ngx_conf_check_num_bounds, 400, 599 + ngx_conf_check_num_bounds, 201, 599 }; diff --git a/src/http/modules/ngx_http_limit_req_module.c b/src/http/modules/ngx_http_limit_req_module.c index 90434c9..f1899ef 100644 --- a/src/http/modules/ngx_http_limit_req_module.c +++ b/src/http/modules/ngx_http_limit_req_module.c @@ -86,7 +86,7 @@ static ngx_conf_enum_t ngx_http_limit_req_log_levels[] = { static ngx_conf_num_bounds_t ngx_http_limit_req_status_bounds = { - ngx_conf_check_num_bounds, 400, 599 + ngx_conf_check_num_bounds, 201, 599 }; ```` Regards, Kohei Ozaki -------------- next part -------------- A non-text attachment was scrubbed... Name: Nginx_limit_req_status_handle_204_res.patch Type: application/octet-stream Size: 949 bytes Desc: not available URL: From mdounin at mdounin.ru Sat Sep 21 01:20:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Sep 2013 05:20:21 +0400 Subject: Patch to handle 204 on limit_req_status In-Reply-To: References: Message-ID: <20130921012021.GA57081@mdounin.ru> Hello! On Sat, Sep 21, 2013 at 09:13:14AM +0900, Kohei Ozaki wrote: > Hello nginx-devel guys, > > I've made small changes in the bounds-check of > limit_req_status/limit_conn_status directives. > > In OpenRTB (Real-time Bidding) protocol, "No-Bids" on all impressions > are indicated as HTTP 204 response. > It would be useful to handle 204 on limit_req_status/limit_conn_status > directives. 
> > * ref: "OpenRTB API Specification" > http://www.iab.net/media/file/OpenRTB-API-Specification-Version-2-1-FINAL.pdf > > I hope it will be accepted if it is suitable. > The patch to handle HTTP 2xx (not include 200) response is the following: No, please. If you want to return 204, just use appropriate error_page redirection. -- Maxim Dounin http://nginx.org/en/donation.html From eowner at gmail.com Sat Sep 21 02:43:45 2013 From: eowner at gmail.com (Kohei Ozaki) Date: Sat, 21 Sep 2013 11:43:45 +0900 Subject: Patch to handle 204 on limit_req_status In-Reply-To: <20130921012021.GA57081@mdounin.ru> References: <20130921012021.GA57081@mdounin.ru> Message-ID: > No, please. If you want to return 204, just use appropriate > error_page redirection. That's right! I didn't notice the `error_page redirection`. Thanks to answer my request. -- Kohei Ozaki On Sat, Sep 21, 2013 at 10:20 AM, Maxim Dounin wrote: > Hello! > > On Sat, Sep 21, 2013 at 09:13:14AM +0900, Kohei Ozaki wrote: > >> Hello nginx-devel guys, >> >> I've made small changes in the bounds-check of >> limit_req_status/limit_conn_status directives. >> >> In OpenRTB (Real-time Bidding) protocol, "No-Bids" on all impressions >> are indicated as HTTP 204 response. >> It would be useful to handle 204 on limit_req_status/limit_conn_status >> directives. >> >> * ref: "OpenRTB API Specification" >> http://www.iab.net/media/file/OpenRTB-API-Specification-Version-2-1-FINAL.pdf >> >> I hope it will be accepted if it is suitable. >> The patch to handle HTTP 2xx (not include 200) response is the following: > > No, please. If you want to return 204, just use appropriate > error_page redirection. 
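The error_page approach Maxim suggests would look roughly like this (a hedged sketch, not from the thread: the zone name "bids", the upstream "backend", the internal location "/empty", and the choice of 598 as an otherwise unused internal code are all illustrative):

```nginx
limit_req_zone $binary_remote_addr zone=bids:10m rate=100r/s;

server {
    location /bid {
        limit_req zone=bids;

        # limit_req_status must stay within the stock 400..599 bounds,
        # so pick an unused error code and translate it to 204
        limit_req_status 598;
        error_page 598 /empty;

        proxy_pass http://backend;
    }

    location = /empty {
        # rejected bid requests end up here and get a bodyless 204
        return 204;
    }
}
```

This keeps the 4xx/5xx semantics inside nginx while the client still sees a 204 "No-Bid" response, without patching the bounds check.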
> > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Sat Sep 21 03:13:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Sep 2013 07:13:04 +0400 Subject: [PATCH] Mail: fix STARTTLS misalloc In-Reply-To: <79cea900573997a74400.1379706492@maage-hp-ep.localdomain> References: <79cea900573997a74400.1379706492@maage-hp-ep.localdomain> Message-ID: <20130921031304.GC57081@mdounin.ru> Hello! On Fri, Sep 20, 2013 at 10:48:12PM +0300, Markus Linnala wrote: > # HG changeset patch > # User Markus Linnala > # Date 1379691757 -10800 > # Fri Sep 20 18:42:37 2013 +0300 > # Node ID 79cea900573997a74400dcef925de41ec6c150e7 > # Parent 4e7279d4c9c4181683373df3947749a7727b89a4 > Mail: fix STARTTLS misalloc Trailing dot, please. > Found by mail_imap.t from mdounin nginx-tests when running under valgrind. > > ==10647== Invalid write of size 1 > ==10647== at 0x4B1493: ngx_mail_smtp_merge_srv_conf (ngx_mail_smtp_module.c:280) > ==10647== by 0x4AB363: ngx_mail_block (ngx_mail.c:209) > ==10647== by 0x4303BE: ngx_conf_parse (ngx_conf_file.c:391) > ==10647== by 0x42DF03: ngx_init_cycle (ngx_cycle.c:265) > ==10647== by 0x4206A9: main (nginx.c:333) > ==10647== Address 0x550fb84 is 0 bytes after a block of size 68 alloc'd > ==10647== at 0x4A074CD: malloc (vg_replace_malloc.c:236) > ==10647== by 0x43B251: ngx_alloc (ngx_alloc.c:22) > ==10647== by 0x421B0D: ngx_malloc (ngx_palloc.c:119) > ==10647== by 0x421B65: ngx_pnalloc (ngx_palloc.c:147) > ==10647== by 0x4B1447: ngx_mail_smtp_merge_srv_conf (ngx_mail_smtp_module.c:269) > ==10647== by 0x4AB363: ngx_mail_block (ngx_mail.c:209) > ==10647== by 0x4303BE: ngx_conf_parse (ngx_conf_file.c:391) > ==10647== by 0x42DF03: ngx_init_cycle (ngx_cycle.c:265) > ==10647== by 0x4206A9: main (nginx.c:333) > ==10647== > > I choose to retain extra CRLF as I could 
not test protocol change easily. > As per RFC 2487 there is no extra CRLF. But it was not obvious why it was > there from history. As I already said in the ticket, your patch looks wrong to me. It doesn't retain the extra CRLF but rather adds one to the output - previously, size (and hence conf->starttls_capability.len) was correct, and the problem was unneeded overrun of unallocated memory. With your patch, size becomes wrong - and the output changes. > > This fixes ticket #411 Just a "... (ticket #411)." in a summary line, please. > > diff -r 4e7279d4c9c4 -r 79cea9005739 src/mail/ngx_mail_smtp_module.c > --- a/src/mail/ngx_mail_smtp_module.c Fri Sep 20 17:57:21 2013 +0300 > +++ b/src/mail/ngx_mail_smtp_module.c Fri Sep 20 18:42:37 2013 +0300 > @@ -264,7 +264,7 @@ > last[3] = ' '; > } > > - size += sizeof("250 STARTTLS" CRLF) - 1; > + size += sizeof("250 STARTTLS" CRLF CRLF) - 1; > > p = ngx_pnalloc(cf->pool, size); > if (p == NULL) { > @@ -276,8 +276,7 @@ > > p = ngx_cpymem(p, conf->capability.data, conf->capability.len); > > - p = ngx_cpymem(p, "250 STARTTLS" CRLF, sizeof("250 STARTTLS" CRLF) - 1); > - *p++ = CR; *p = LF; > + p = ngx_cpymem(p, "250 STARTTLS" CRLF CRLF, sizeof("250 STARTTLS" CRLF CRLF) - 1); > > p = conf->starttls_capability.data > + (last - conf->capability.data) + 3; See above. As I already suggested, the correct patch seems to be: --- a/src/mail/ngx_mail_smtp_module.c +++ b/src/mail/ngx_mail_smtp_module.c @@ -277,7 +277,6 @@ ngx_mail_smtp_merge_srv_conf(ngx_conf_t p = ngx_cpymem(p, conf->capability.data, conf->capability.len); p = ngx_cpymem(p, "250 STARTTLS" CRLF, sizeof("250 STARTTLS" CRLF) - 1); - *p++ = CR; *p = LF; p = conf->starttls_capability.data + (last - conf->capability.data) + 3; -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Sep 21 03:17:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Sep 2013 03:17:55 +0000 Subject: [nginx] Core: fix misallocation at ngx_crypt_apr1 (ticket #412). 
Message-ID: details: http://hg.nginx.org/nginx/rev/2d947c2e3ea1 branches: changeset: 5386:2d947c2e3ea1 user: Markus Linnala date: Fri Sep 20 17:57:21 2013 +0300 description: Core: fix misallocation at ngx_crypt_apr1 (ticket #412). Found by using auth_basic.t from mdounin nginx-tests under valgrind. ==10470== Invalid write of size 1 ==10470== at 0x43603D: ngx_crypt_to64 (ngx_crypt.c:168) ==10470== by 0x43648E: ngx_crypt (ngx_crypt.c:153) ==10470== by 0x489D8B: ngx_http_auth_basic_crypt_handler (ngx_http_auth_basic_module.c:297) ==10470== by 0x48A24A: ngx_http_auth_basic_handler (ngx_http_auth_basic_module.c:240) ==10470== by 0x44EAB9: ngx_http_core_access_phase (ngx_http_core_module.c:1121) ==10470== by 0x44A822: ngx_http_core_run_phases (ngx_http_core_module.c:895) ==10470== by 0x44A932: ngx_http_handler (ngx_http_core_module.c:878) ==10470== by 0x455EEF: ngx_http_process_request (ngx_http_request.c:1852) ==10470== by 0x456527: ngx_http_process_request_headers (ngx_http_request.c:1283) ==10470== by 0x456A91: ngx_http_process_request_line (ngx_http_request.c:964) ==10470== by 0x457097: ngx_http_wait_request_handler (ngx_http_request.c:486) ==10470== by 0x4411EE: ngx_epoll_process_events (ngx_epoll_module.c:691) ==10470== Address 0x5866fab is 0 bytes after a block of size 27 alloc'd ==10470== at 0x4A074CD: malloc (vg_replace_malloc.c:236) ==10470== by 0x43B251: ngx_alloc (ngx_alloc.c:22) ==10470== by 0x421B0D: ngx_malloc (ngx_palloc.c:119) ==10470== by 0x421B65: ngx_pnalloc (ngx_palloc.c:147) ==10470== by 0x436368: ngx_crypt (ngx_crypt.c:140) ==10470== by 0x489D8B: ngx_http_auth_basic_crypt_handler (ngx_http_auth_basic_module.c:297) ==10470== by 0x48A24A: ngx_http_auth_basic_handler (ngx_http_auth_basic_module.c:240) ==10470== by 0x44EAB9: ngx_http_core_access_phase (ngx_http_core_module.c:1121) ==10470== by 0x44A822: ngx_http_core_run_phases (ngx_http_core_module.c:895) ==10470== by 0x44A932: ngx_http_handler (ngx_http_core_module.c:878) ==10470== by 0x455EEF: 
ngx_http_process_request (ngx_http_request.c:1852) ==10470== by 0x456527: ngx_http_process_request_headers (ngx_http_request.c:1283) ==10470== diffstat: src/core/ngx_crypt.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/core/ngx_crypt.c b/src/core/ngx_crypt.c --- a/src/core/ngx_crypt.c +++ b/src/core/ngx_crypt.c @@ -137,7 +137,7 @@ ngx_crypt_apr1(ngx_pool_t *pool, u_char /* output */ - *encrypted = ngx_pnalloc(pool, sizeof("$apr1$") - 1 + saltlen + 16 + 1); + *encrypted = ngx_pnalloc(pool, sizeof("$apr1$") - 1 + saltlen + 1 + 22 + 1); if (*encrypted == NULL) { return NGX_ERROR; } From mdounin at mdounin.ru Sat Sep 21 03:19:07 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Sep 2013 07:19:07 +0400 Subject: [PATCH] Core: fix misallocation at ngx_crypt_apr1 In-Reply-To: <4e7279d4c9c418168337.1379706483@maage-hp-ep.localdomain> References: <4e7279d4c9c418168337.1379706483@maage-hp-ep.localdomain> Message-ID: <20130921031906.GD57081@mdounin.ru> Hello! On Fri, Sep 20, 2013 at 10:48:03PM +0300, Markus Linnala wrote: > # HG changeset patch > # User Markus Linnala > # Date 1379689041 -10800 > # Fri Sep 20 17:57:21 2013 +0300 > # Node ID 4e7279d4c9c4181683373df3947749a7727b89a4 > # Parent 7c1f4977d8a0bf49075139c4b8ac4fbd7bef4a63 > Core: fix misallocation at ngx_crypt_apr1 This one committed with minor description changes, thanks. -- Maxim Dounin http://nginx.org/en/donation.html From al-nginx at none.at Sat Sep 21 21:25:22 2013 From: al-nginx at none.at (Aleksandar Lazic) Date: Sat, 21 Sep 2013 23:25:22 +0200 Subject: Question about another SSL-Library Message-ID: Hi all. Are there any plans to add another SSL-Library into nginx? [ ] axtls http://axtls.sourceforge.net/ [ ] cyassl http://www.wolfssl.com/yaSSL/Home.html [ ] gnutls http://www.gnutls.org/ [ ] polarssl https://polarssl.org/ [ ] other: ... 
Best regards Aleksandar Lazic From piotr at cloudflare.com Mon Sep 23 05:37:21 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Sun, 22 Sep 2013 22:37:21 -0700 Subject: [PATCH] SSL: stop loading configs with invalid "ssl_ciphers" values. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1379914571 25200 # Sun Sep 22 22:36:11 2013 -0700 # Node ID 0fbcfab0bfd72dbc40c3ee75665e81a08ed2fa0b # Parent 2d947c2e3ea1b3144239f028c8e2af895d95fff4 SSL: stop loading configs with invalid "ssl_ciphers" values. While there, remove unnecessary check in ngx_mail_ssl_module. Signed-off-by: Piotr Sikora diff -r 2d947c2e3ea1 -r 0fbcfab0bfd7 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Fri Sep 20 17:57:21 2013 +0300 +++ b/src/http/modules/ngx_http_ssl_module.c Sun Sep 22 22:36:11 2013 -0700 @@ -561,6 +561,7 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, "SSL_CTX_set_cipher_list(\"%V\") failed", &conf->ciphers); + return NGX_CONF_ERROR; } if (conf->verify) { diff -r 2d947c2e3ea1 -r 0fbcfab0bfd7 src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c Fri Sep 20 17:57:21 2013 +0300 +++ b/src/mail/ngx_mail_ssl_module.c Sun Sep 22 22:36:11 2013 -0700 @@ -287,15 +287,14 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, return NGX_CONF_ERROR; } - if (conf->ciphers.len) { - if (SSL_CTX_set_cipher_list(conf->ssl.ctx, - (const char *) conf->ciphers.data) - == 0) - { - ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, - "SSL_CTX_set_cipher_list(\"%V\") failed", - &conf->ciphers); - } + if (SSL_CTX_set_cipher_list(conf->ssl.ctx, + (const char *) conf->ciphers.data) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, + "SSL_CTX_set_cipher_list(\"%V\") failed", + &conf->ciphers); + return NGX_CONF_ERROR; } if (conf->prefer_server_ciphers) { From piotr at cloudflare.com Mon Sep 23 05:40:23 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Sun, 22 Sep 2013 22:40:23 -0700 Subject: [PATCH] Proxy: added the 
"proxy_ssl_ciphers" directive. Message-ID: # HG changeset patch # User Piotr Sikora # Date 1379914582 25200 # Sun Sep 22 22:36:22 2013 -0700 # Node ID 1039d5b5365dd553a5cc3fbca95a6f3aa9ff6dc2 # Parent 0fbcfab0bfd72dbc40c3ee75665e81a08ed2fa0b Proxy: added the "proxy_ssl_ciphers" directive. Signed-off-by: Piotr Sikora diff -r 0fbcfab0bfd7 -r 1039d5b5365d src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Sun Sep 22 22:36:11 2013 -0700 +++ b/src/http/modules/ngx_http_proxy_module.c Sun Sep 22 22:36:22 2013 -0700 @@ -10,6 +10,9 @@ #include +#define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" + + typedef struct ngx_http_proxy_rewrite_s ngx_http_proxy_rewrite_t; typedef ngx_int_t (*ngx_http_proxy_rewrite_pt)(ngx_http_request_t *r, @@ -80,6 +83,7 @@ typedef struct { #if (NGX_HTTP_SSL) ngx_uint_t ssl; ngx_uint_t ssl_protocols; + ngx_str_t ssl_ciphers; #endif } ngx_http_proxy_loc_conf_t; @@ -538,6 +542,13 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, ssl_protocols), &ngx_http_proxy_ssl_protocols }, + { ngx_string("proxy_ssl_ciphers"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, ssl_ciphers), + NULL }, + #endif ngx_null_command @@ -2414,6 +2425,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->redirects = NULL; * conf->ssl = 0; * conf->ssl_protocols = 0; + * conf->ssl_ciphers = { 0, NULL }; */ conf->upstream.store = NGX_CONF_UNSET; @@ -2735,6 +2747,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 |NGX_SSL_TLSv1_2)); + ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, + NGX_DEFAULT_CIPHERS); + if (conf->ssl && ngx_http_proxy_set_ssl(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -3784,6 +3799,16 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n return NGX_ERROR; } + if (SSL_CTX_set_cipher_list(plcf->upstream.ssl->ctx, + (const char *) 
plcf->ssl_ciphers.data) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, + "SSL_CTX_set_cipher_list(\"%V\") failed", + &plcf->ssl_ciphers); + return NGX_ERROR; + } + cln = ngx_pool_cleanup_add(cf->pool, 0); if (cln == NULL) { return NGX_ERROR; From piotr at cloudflare.com Mon Sep 23 05:47:05 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Sun, 22 Sep 2013 22:47:05 -0700 Subject: [nginx] Proxy: added the "proxy_ssl_protocols" directive. In-Reply-To: References: Message-ID: Hi Andrei, > +#if (NGX_HTTP_SSL) > + > +static ngx_conf_bitmask_t ngx_http_proxy_ssl_protocols[] = { > + { ngx_string("SSLv2"), NGX_SSL_SSLv2 }, > + { ngx_string("SSLv3"), NGX_SSL_SSLv3 }, > + { ngx_string("TLSv1"), NGX_SSL_TLSv1 }, > + { ngx_string("TLSv1.1"), NGX_SSL_TLSv1_1 }, > + { ngx_string("TLSv1.2"), NGX_SSL_TLSv1_2 }, > + { ngx_null_string, 0 } > +}; > + > +#endif I'm a bit biased, because I was cleaning up patchset with "proxy_ssl_protocols" and "proxy_ssl_ciphers" directives to send to the mailing list when you committed this, but wouldn't it make more sense to either expose & reuse ngx_http_ssl_protocols or ideally push this and other definitions back to ngx_event_openssl module instead of having exactly the same bitmask & NGX_DEFAULT_CIPHERS defined in 3 different places (ngx_http_ssl_module, ngx_http_proxy_ssl_module & ngx_mail_ssl_module)? Best regards, Piotr Sikora From mdounin at mdounin.ru Mon Sep 23 13:06:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 17:06:57 +0400 Subject: [nginx] Proxy: added the "proxy_ssl_protocols" directive. In-Reply-To: References: Message-ID: <20130923130657.GD2170@mdounin.ru> Hello! 
On Sun, Sep 22, 2013 at 10:47:05PM -0700, Piotr Sikora wrote: > Hi Andrei, > > > +#if (NGX_HTTP_SSL) > > + > > +static ngx_conf_bitmask_t ngx_http_proxy_ssl_protocols[] = { > > + { ngx_string("SSLv2"), NGX_SSL_SSLv2 }, > > + { ngx_string("SSLv3"), NGX_SSL_SSLv3 }, > > + { ngx_string("TLSv1"), NGX_SSL_TLSv1 }, > > + { ngx_string("TLSv1.1"), NGX_SSL_TLSv1_1 }, > > + { ngx_string("TLSv1.2"), NGX_SSL_TLSv1_2 }, > > + { ngx_null_string, 0 } > > +}; > > + > > +#endif > > I'm a bit biased, because I was cleaning up patchset with > "proxy_ssl_protocols" and "proxy_ssl_ciphers" directives to send to > the mailing list when you committed this, but wouldn't it make more > sense to either expose & reuse ngx_http_ssl_protocols or ideally push > this and other definitions back to ngx_event_openssl module instead of > having exactly the same bitmask & NGX_DEFAULT_CIPHERS defined in 3 > different places (ngx_http_ssl_module, ngx_http_proxy_ssl_module & > ngx_mail_ssl_module)? As of now, ngx_event_openssl.c mostly doesn't know about configuration parsing (the only exception seems to be ngx_conf_t used by ngx_ssl_certificate() and others to expand file name). Please also note that ngx_event_openssl isn't a module, but rather an SSL-library interface. While moving ssl protocols list into ngx_event_openssl.[ch] is possible, it's certainly not how things are currently done. BTW, could you please clarify reasons for proxy_ssl_ciphers? Andrei added proxy_ssl_protocols mostly as a workaround, because previously used default resulted in connection failures with some backends as seen by our customer. Not sure if adding proxy_ssl_ciphers worth the effort from this point of view, and actually that's why I stopped myself from asking him to add it. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Sep 23 14:27:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 18:27:27 +0400 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. 
In-Reply-To: References: Message-ID: <20130923142727.GF2170@mdounin.ru> Hello! On Sun, Sep 22, 2013 at 10:40:23PM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1379914582 25200 > # Sun Sep 22 22:36:22 2013 -0700 > # Node ID 1039d5b5365dd553a5cc3fbca95a6f3aa9ff6dc2 > # Parent 0fbcfab0bfd72dbc40c3ee75665e81a08ed2fa0b > Proxy: added the "proxy_ssl_ciphers" directive. I already asked in another thread whether it is really worth adding. [...] > +#define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5" [...] > + ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, > + NGX_DEFAULT_CIPHERS); This modifies current behaviour, and only allows use of HIGH:!aNULL:!MD5 ciphers by default. Are there any specific reasons to do so? The "!aNULL" looks especially weird, as we don't check peer certificates anyway. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Sep 23 15:23:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 15:23:51 +0000 Subject: [nginx] SSL: stop loading configs with invalid "ssl_ciphers" val... Message-ID: details: http://hg.nginx.org/nginx/rev/0fbcfab0bfd7 branches: changeset: 5387:0fbcfab0bfd7 user: Piotr Sikora date: Sun Sep 22 22:36:11 2013 -0700 description: SSL: stop loading configs with invalid "ssl_ciphers" values. While there, remove unnecessary check in ngx_mail_ssl_module. 
Signed-off-by: Piotr Sikora diffstat: src/http/modules/ngx_http_ssl_module.c | 1 + src/mail/ngx_mail_ssl_module.c | 17 ++++++++--------- 2 files changed, 9 insertions(+), 9 deletions(-) diffs (38 lines): diff --git a/src/http/modules/ngx_http_ssl_module.c b/src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -561,6 +561,7 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, "SSL_CTX_set_cipher_list(\"%V\") failed", &conf->ciphers); + return NGX_CONF_ERROR; } if (conf->verify) { diff --git a/src/mail/ngx_mail_ssl_module.c b/src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c +++ b/src/mail/ngx_mail_ssl_module.c @@ -287,15 +287,14 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, return NGX_CONF_ERROR; } - if (conf->ciphers.len) { - if (SSL_CTX_set_cipher_list(conf->ssl.ctx, - (const char *) conf->ciphers.data) - == 0) - { - ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, - "SSL_CTX_set_cipher_list(\"%V\") failed", - &conf->ciphers); - } + if (SSL_CTX_set_cipher_list(conf->ssl.ctx, + (const char *) conf->ciphers.data) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, + "SSL_CTX_set_cipher_list(\"%V\") failed", + &conf->ciphers); + return NGX_CONF_ERROR; } if (conf->prefer_server_ciphers) { From mdounin at mdounin.ru Mon Sep 23 15:24:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 19:24:18 +0400 Subject: [PATCH] SSL: stop loading configs with invalid "ssl_ciphers" values. In-Reply-To: References: Message-ID: <20130923152418.GH2170@mdounin.ru> Hello! On Sun, Sep 22, 2013 at 10:37:21PM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1379914571 25200 > # Sun Sep 22 22:36:11 2013 -0700 > # Node ID 0fbcfab0bfd72dbc40c3ee75665e81a08ed2fa0b > # Parent 2d947c2e3ea1b3144239f028c8e2af895d95fff4 > SSL: stop loading configs with invalid "ssl_ciphers" values. 
> > While there, remove unnecessary check in ngx_mail_ssl_module. > > Signed-off-by: Piotr Sikora [...] Committed, thanks. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Sep 23 17:01:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 17:01:11 +0000 Subject: [nginx] Caseless location tree construction (ticket #90). Message-ID: details: http://hg.nginx.org/nginx/rev/fbaae7d1c033 branches: changeset: 5388:fbaae7d1c033 user: Maxim Dounin date: Mon Sep 23 19:37:06 2013 +0400 description: Caseless location tree construction (ticket #90). Location tree was always constructed using case-sensitive comparison, even on case-insensitive systems. This resulted in incorrect operation if uppercase letters were used in location directives. Notably, the following config: location /a { ... } location /B { ... } failed to properly map requests to "/B" into "location /B". diffstat: src/http/ngx_http.c | 11 +++++++---- src/http/ngx_http_core_module.c | 4 ++-- 2 files changed, 9 insertions(+), 6 deletions(-) diffs (50 lines): diff --git a/src/http/ngx_http.c b/src/http/ngx_http.c --- a/src/http/ngx_http.c +++ b/src/http/ngx_http.c @@ -949,7 +949,8 @@ ngx_http_cmp_locations(const ngx_queue_t #endif - rc = ngx_strcmp(first->name.data, second->name.data); + rc = ngx_filename_cmp(first->name.data, second->name.data, + ngx_min(first->name.len, second->name.len) + 1); if (rc == 0 && !first->exact_match && second->exact_match) { /* an exact match must be before the same inclusive one */ @@ -975,8 +976,10 @@ ngx_http_join_exact_locations(ngx_conf_t lq = (ngx_http_location_queue_t *) q; lx = (ngx_http_location_queue_t *) x; - if (ngx_strcmp(lq->name->data, lx->name->data) == 0) { - + if (lq->name->len == lx->name->len + && ngx_filename_cmp(lq->name->data, lx->name->data, lx->name->len) + == 0) + { if ((lq->exact && lx->exact) || (lq->inclusive && lx->inclusive)) { ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "duplicate location \"%V\" in 
%s:%ui", @@ -1028,7 +1031,7 @@ ngx_http_create_locations_list(ngx_queue lx = (ngx_http_location_queue_t *) x; if (len > lx->name->len - || (ngx_strncmp(name, lx->name->data, len) != 0)) + || ngx_filename_cmp(name, lx->name->data, len) != 0) { break; } diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -3219,9 +3219,9 @@ ngx_http_core_location(ngx_conf_t *cf, n #if (NGX_PCRE) if (clcf->regex == NULL - && ngx_strncmp(clcf->name.data, pclcf->name.data, len) != 0) + && ngx_filename_cmp(clcf->name.data, pclcf->name.data, len) != 0) #else - if (ngx_strncmp(clcf->name.data, pclcf->name.data, len) != 0) + if (ngx_filename_cmp(clcf->name.data, pclcf->name.data, len) != 0) #endif { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, From mdounin at mdounin.ru Mon Sep 23 17:01:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 17:01:12 +0000 Subject: [nginx] Added ngx_filename_cmp() with "/" sorted to the left. Message-ID: details: http://hg.nginx.org/nginx/rev/72e31d88defa branches: changeset: 5389:72e31d88defa user: Maxim Dounin date: Mon Sep 23 19:37:13 2013 +0400 description: Added ngx_filename_cmp() with "/" sorted to the left. This patch fixes incorrect handling of auto redirect in configurations like: location /0 { } location /a- { } location /a/ { proxy_pass ... } With previously used sorting, this resulted in the following locations tree (as "-" is less than "/"): "/a-" "/0" "/a/" and a request to "/a" didn't match "/a/" with auto_redirect, as it didn't traverse the relevant tree node during lookup (it tested "/a-", then "/0", and then fell back to the null location). To preserve locale use for non-ASCII characters on case-insensitive systems, libc's tolower() is used. 
diffstat: src/core/ngx_string.c | 40 ++++++++++++++++++++++++++++++++++++++++ src/core/ngx_string.h | 1 + src/os/unix/ngx_darwin_config.h | 1 + src/os/unix/ngx_files.h | 11 ----------- src/os/unix/ngx_freebsd_config.h | 1 + src/os/unix/ngx_linux_config.h | 1 + src/os/unix/ngx_posix_config.h | 1 + src/os/unix/ngx_solaris_config.h | 1 + src/os/win32/ngx_files.h | 5 ----- src/os/win32/ngx_win32_config.h | 6 +++++- 10 files changed, 51 insertions(+), 17 deletions(-) diffs (182 lines): diff --git a/src/core/ngx_string.c b/src/core/ngx_string.c --- a/src/core/ngx_string.c +++ b/src/core/ngx_string.c @@ -853,6 +853,46 @@ ngx_dns_strcmp(u_char *s1, u_char *s2) ngx_int_t +ngx_filename_cmp(u_char *s1, u_char *s2, size_t n) +{ + ngx_uint_t c1, c2; + + while (n) { + c1 = (ngx_uint_t) *s1++; + c2 = (ngx_uint_t) *s2++; + +#if (NGX_HAVE_CASELESS_FILESYSTEM) + c1 = tolower(c1); + c2 = tolower(c2); +#endif + + if (c1 == c2) { + + if (c1) { + n--; + continue; + } + + return 0; + } + + /* we need '/' to be the lowest character */ + + if (c1 == 0 || c2 == 0) { + return c1 - c2; + } + + c1 = (c1 == '/') ? 0 : c1; + c2 = (c2 == '/') ? 
0 : c2; + + return c1 - c2; + } + + return 0; +} + + +ngx_int_t ngx_atoi(u_char *line, size_t n) { ngx_int_t value; diff --git a/src/core/ngx_string.h b/src/core/ngx_string.h --- a/src/core/ngx_string.h +++ b/src/core/ngx_string.h @@ -167,6 +167,7 @@ ngx_int_t ngx_rstrncmp(u_char *s1, u_cha ngx_int_t ngx_rstrncasecmp(u_char *s1, u_char *s2, size_t n); ngx_int_t ngx_memn2cmp(u_char *s1, u_char *s2, size_t n1, size_t n2); ngx_int_t ngx_dns_strcmp(u_char *s1, u_char *s2); +ngx_int_t ngx_filename_cmp(u_char *s1, u_char *s2, size_t n); ngx_int_t ngx_atoi(u_char *line, size_t n); ngx_int_t ngx_atofp(u_char *line, size_t n, size_t point); diff --git a/src/os/unix/ngx_darwin_config.h b/src/os/unix/ngx_darwin_config.h --- a/src/os/unix/ngx_darwin_config.h +++ b/src/os/unix/ngx_darwin_config.h @@ -20,6 +20,7 @@ #include /* offsetof() */ #include #include +#include #include #include #include diff --git a/src/os/unix/ngx_files.h b/src/os/unix/ngx_files.h --- a/src/os/unix/ngx_files.h +++ b/src/os/unix/ngx_files.h @@ -192,17 +192,6 @@ ngx_int_t ngx_create_file_mapping(ngx_fi void ngx_close_file_mapping(ngx_file_mapping_t *fm); -#if (NGX_HAVE_CASELESS_FILESYSTEM) - -#define ngx_filename_cmp(s1, s2, n) strncasecmp((char *) s1, (char *) s2, n) - -#else - -#define ngx_filename_cmp ngx_memcmp - -#endif - - #define ngx_realpath(p, r) (u_char *) realpath((char *) p, (char *) r) #define ngx_realpath_n "realpath()" #define ngx_getcwd(buf, size) (getcwd((char *) buf, size) != NULL) diff --git a/src/os/unix/ngx_freebsd_config.h b/src/os/unix/ngx_freebsd_config.h --- a/src/os/unix/ngx_freebsd_config.h +++ b/src/os/unix/ngx_freebsd_config.h @@ -16,6 +16,7 @@ #include /* offsetof() */ #include #include +#include #include #include #include diff --git a/src/os/unix/ngx_linux_config.h b/src/os/unix/ngx_linux_config.h --- a/src/os/unix/ngx_linux_config.h +++ b/src/os/unix/ngx_linux_config.h @@ -22,6 +22,7 @@ #include /* offsetof() */ #include #include +#include #include #include #include diff 
--git a/src/os/unix/ngx_posix_config.h b/src/os/unix/ngx_posix_config.h --- a/src/os/unix/ngx_posix_config.h +++ b/src/os/unix/ngx_posix_config.h @@ -39,6 +39,7 @@ #include /* offsetof() */ #include #include +#include #include #include #include diff --git a/src/os/unix/ngx_solaris_config.h b/src/os/unix/ngx_solaris_config.h --- a/src/os/unix/ngx_solaris_config.h +++ b/src/os/unix/ngx_solaris_config.h @@ -22,6 +22,7 @@ #include /* offsetof() */ #include #include +#include #include #include #include diff --git a/src/os/win32/ngx_files.h b/src/os/win32/ngx_files.h --- a/src/os/win32/ngx_files.h +++ b/src/os/win32/ngx_files.h @@ -172,11 +172,6 @@ ngx_int_t ngx_create_file_mapping(ngx_fi void ngx_close_file_mapping(ngx_file_mapping_t *fm); -#define NGX_HAVE_CASELESS_FILESYSTEM 1 - -#define ngx_filename_cmp(s1, s2, n) _strnicmp((char *) s1, (char *) s2, n) - - u_char *ngx_realpath(u_char *path, u_char *resolved); #define ngx_realpath_n "" #define ngx_getcwd(buf, size) GetCurrentDirectory(size, (char *) buf) diff --git a/src/os/win32/ngx_win32_config.h b/src/os/win32/ngx_win32_config.h --- a/src/os/win32/ngx_win32_config.h +++ b/src/os/win32/ngx_win32_config.h @@ -45,6 +45,7 @@ #include #include #include +#include #include #ifdef __WATCOMC__ @@ -123,7 +124,6 @@ typedef unsigned __int32 uint32_t; typedef __int32 int32_t; typedef unsigned __int16 uint16_t; #define ngx_libc_cdecl __cdecl -#define _strnicmp strnicmp #else /* __WATCOMC__ */ typedef unsigned int uint32_t; @@ -196,6 +196,10 @@ typedef int sig_atomic_t #define NGX_HAVE_INHERITED_NONBLOCK 1 #endif +#ifndef NGX_HAVE_CASELESS_FILESYSTEM +#define NGX_HAVE_CASELESS_FILESYSTEM 1 +#endif + #ifndef NGX_HAVE_WIN32_TRANSMITPACKETS #define NGX_HAVE_WIN32_TRANSMITPACKETS 1 #define NGX_HAVE_WIN32_TRANSMITFILE 0 From piotr at cloudflare.com Mon Sep 23 22:55:36 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 23 Sep 2013 15:55:36 -0700 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. 
In-Reply-To: <20130923142727.GF2170@mdounin.ru> References: <20130923142727.GF2170@mdounin.ru> Message-ID: Hi Maxim, >> Proxy: added the "proxy_ssl_ciphers" directive. > > Already asked in another thread if it's really worth adding. Yes, it is, and in my experience this one is much more useful than "proxy_ssl_protocols". Basically, there are 2 categories of broken SSL servers: 1. cannot accept a ClientHello that's > 255 bytes, 2. cannot downgrade gracefully to a common supported TLS version. The first category wasn't an issue until recently, because clients using anything older than TLS 1.2 fit nicely below that limit (ClientHello size is 205 bytes for "DEFAULT" / 233 bytes for "ALL" without SNI). However, starting with OpenSSL-1.0.1, clients started talking TLS 1.2 and advertising support for a much bigger list of cipher suites, which doesn't fit within that limit anymore (316 bytes for "DEFAULT" / 358 bytes for "ALL" without SNI). Broken servers just drop such packets and time out. Offenders in that category include F5 load balancers (without the fix for that issue applied) and some ancient OpenSSL versions. While lowering the TLS version via "proxy_ssl_protocols" decreases the number of advertised cipher suites and brings the ClientHello size < 256 bytes, it's a suboptimal solution, because the same result can be achieved by limiting the number of advertised cipher suites via "proxy_ssl_ciphers" while still using TLS 1.2, hence providing much better security. Chrome does the same thing, btw. Servers from the second category accept a TLS 1.2 ClientHello and downgrade to TLS 1.0 via ServerHello, but send corrupted packets afterwards. The only offender in that category that I know of is Oracle's KSSL, and that's the only case when you need to use "proxy_ssl_protocols" to lower the TLS version. > This modifies current behaviour, and only allows to use > HIGH:!aNULL:!MD5 ciphers by default. Are there any specific > reasons to? 
> > The "!aNULL" looks especially weird, as we don't check peers' > certificates anyway. Good catch! Because of the issues above, we specify our own (rather limited) list of cipher suites that we advertise to the backend servers during the SSL handshake, so I didn't notice that the defaults I provided are much stricter than necessary. In that case, I'd probably stick with "DEFAULT" (updated patch will follow)... Just keep in mind that nginx compiled against OpenSSL-1.0.1 will be sending a ClientHello that's 316 bytes in size and will have issues with broken SSL servers... Whether or not that's something that nginx should worry about is another matter, but just to give you some perspective, last time I checked it was ~0.15% of servers that didn't like big ClientHello messages. Best regards, Piotr Sikora From piotr at cloudflare.com Mon Sep 23 22:59:37 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 23 Sep 2013 15:59:37 -0700 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. In-Reply-To: References: <20130923142727.GF2170@mdounin.ru> Message-ID: # HG changeset patch # User Piotr Sikora # Date 1379977108 25200 # Mon Sep 23 15:58:28 2013 -0700 # Node ID 80ae4ce8a7a08393e09458cf74cc4f469218679f # Parent 72e31d88defadc94a17ce208c487aac98632e8f2 Proxy: added the "proxy_ssl_ciphers" directive. 
Signed-off-by: Piotr Sikora diff -r 72e31d88defa -r 80ae4ce8a7a0 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Mon Sep 23 19:37:13 2013 +0400 +++ b/src/http/modules/ngx_http_proxy_module.c Mon Sep 23 15:58:28 2013 -0700 @@ -10,6 +10,9 @@ #include +#define NGX_DEFAULT_CIPHERS "DEFAULT" + + typedef struct ngx_http_proxy_rewrite_s ngx_http_proxy_rewrite_t; typedef ngx_int_t (*ngx_http_proxy_rewrite_pt)(ngx_http_request_t *r, @@ -80,6 +83,7 @@ typedef struct { #if (NGX_HTTP_SSL) ngx_uint_t ssl; ngx_uint_t ssl_protocols; + ngx_str_t ssl_ciphers; #endif } ngx_http_proxy_loc_conf_t; @@ -538,6 +542,13 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, ssl_protocols), &ngx_http_proxy_ssl_protocols }, + { ngx_string("proxy_ssl_ciphers"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, ssl_ciphers), + NULL }, + #endif ngx_null_command @@ -2414,6 +2425,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->redirects = NULL; * conf->ssl = 0; * conf->ssl_protocols = 0; + * conf->ssl_ciphers = { 0, NULL }; */ conf->upstream.store = NGX_CONF_UNSET; @@ -2735,6 +2747,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 |NGX_SSL_TLSv1_2)); + ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, + NGX_DEFAULT_CIPHERS); + if (conf->ssl && ngx_http_proxy_set_ssl(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -3784,6 +3799,16 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n return NGX_ERROR; } + if (SSL_CTX_set_cipher_list(plcf->upstream.ssl->ctx, + (const char *) plcf->ssl_ciphers.data) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, + "SSL_CTX_set_cipher_list(\"%V\") failed", + &plcf->ssl_ciphers); + return NGX_ERROR; + } + cln = ngx_pool_cleanup_add(cf->pool, 0); if (cln == NULL) { return NGX_ERROR; From piotr at cloudflare.com Mon Sep 23 23:16:30 2013 From: piotr 
at cloudflare.com (Piotr Sikora) Date: Mon, 23 Sep 2013 16:16:30 -0700 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. In-Reply-To: References: <20130923142727.GF2170@mdounin.ru> Message-ID: Hi Maxim, >> This modifies current behaviour, and only allows to use >> HIGH:!aNULL:!MD5 chipers by default. Are there any specific >> reasons to? >> >> The "!aNULL" looks especially wierd, as we don't check peers >> certificates anyway. > > (...) > > In that case, I'd probably stick with "DEFAULT" (updated patch will > follow)... Just keep in mind that nginx compiled against OpenSSL-1.0.1 > will be sending ClientHello that's 316 bytes in size and will have > issue with broken SSL servers... Whether or not that's something that > nginx should worry about it's another matter, but just to give you > some perspective, last time I checked it was ~0.15% of servers that > didn't like big ClientHello messages. Forgot to mention - "DEFAULT" is the value OpenSSL uses when you don't specify cipher list yourself (i.e. current behavior) and it's defined as "ALL:!aNULL:!eNULL", which means that "!aNULL" is there already. Best regards, Piotr Sikora From mdounin at mdounin.ru Tue Sep 24 13:37:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Sep 2013 17:37:56 +0400 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. In-Reply-To: References: <20130923142727.GF2170@mdounin.ru> Message-ID: <20130924133756.GN2170@mdounin.ru> Hello! On Mon, Sep 23, 2013 at 03:55:36PM -0700, Piotr Sikora wrote: > Hi Maxim, > > >> Proxy: added the "proxy_ssl_ciphers" directive. > > > > Already asked in another thread if it really worth adding. > > Yes, it is, and in my experience this one is much more useful than > "proxy_ssl_protocols". > > Basically, there are 2 categories of broken SSL servers: > 1. cannot accept ClientHello that's > 255 bytes, > 2. cannot downgrade gracefully to a common supported TLS version. Fair enough, thanks for detailed answer. [...] 
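For illustration, the trade-off discussed above would look roughly like this in configuration once the directive is available; the backend name is a placeholder and the cipher string is only an example, not a recommendation:

```nginx
location / {
    proxy_pass https://backend.example.com;

    # Keep the ClientHello small for broken servers by advertising
    # fewer cipher suites, instead of downgrading the protocol
    # version and losing TLS 1.2.
    proxy_ssl_ciphers    HIGH:!aNULL:!MD5;
    proxy_ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
}
```

(Note that, as mentioned later in the thread, even "HIGH:!aNULL:!MD5" produces a 300+ byte ClientHello with recent OpenSSL, so the cipher string alone is no guarantee of staying under the 256-byte limit.)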
> > This modifies current behaviour, and only allows to use > > HIGH:!aNULL:!MD5 chipers by default. Are there any specific > > reasons to? > > > > The "!aNULL" looks especially wierd, as we don't check peers > > certificates anyway. > > Good catch! Because of the issues above, we specify our own (rather > limited) list of cipher suites that we advertise to the backend > servers during SSL handshake, so I didn't notice that the defaults I > provided are much stricter than necessary. > > In that case, I'd probably stick with "DEFAULT" (updated patch will > follow)... Just keep in mind that nginx compiled against OpenSSL-1.0.1 > will be sending ClientHello that's 316 bytes in size and will have > issue with broken SSL servers... Whether or not that's something that > nginx should worry about it's another matter, but just to give you > some perspective, last time I checked it was ~0.15% of servers that > didn't like big ClientHello messages. Given the fact that even with "HIGH:!aNULL:!MD5" nginx with recent OpenSSL results in the 300+ bytes client hello messages, preserving "DEFAULT" is probably good enough. We may consider adding relevant hints to the documentation if there will be many problem reports. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Sep 24 13:38:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Sep 2013 17:38:35 +0400 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. In-Reply-To: References: <20130923142727.GF2170@mdounin.ru> Message-ID: <20130924133835.GO2170@mdounin.ru> Hello! On Mon, Sep 23, 2013 at 03:59:37PM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1379977108 25200 > # Mon Sep 23 15:58:28 2013 -0700 > # Node ID 80ae4ce8a7a08393e09458cf74cc4f469218679f > # Parent 72e31d88defadc94a17ce208c487aac98632e8f2 > Proxy: added the "proxy_ssl_ciphers" directive. 
> > Signed-off-by: Piotr Sikora > > diff -r 72e31d88defa -r 80ae4ce8a7a0 src/http/modules/ngx_http_proxy_module.c > --- a/src/http/modules/ngx_http_proxy_module.c Mon Sep 23 19:37:13 2013 +0400 > +++ b/src/http/modules/ngx_http_proxy_module.c Mon Sep 23 15:58:28 2013 -0700 > @@ -10,6 +10,9 @@ > #include > > > +#define NGX_DEFAULT_CIPHERS "DEFAULT" > + > + I tend to think it would be better to omit this for clarity, and just use the "DEFAULT" string constant in ngx_conf_merge_str_value(): --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -10,9 +10,6 @@ #include -#define NGX_DEFAULT_CIPHERS "DEFAULT" - - typedef struct ngx_http_proxy_rewrite_s ngx_http_proxy_rewrite_t; typedef ngx_int_t (*ngx_http_proxy_rewrite_pt)(ngx_http_request_t *r, @@ -2748,7 +2745,7 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t |NGX_SSL_TLSv1_2)); ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, - NGX_DEFAULT_CIPHERS); + "DEFAULT"); if (conf->ssl && ngx_http_proxy_set_ssl(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; You are ok with this? If yes, I'll just push the fixed version. -- Maxim Dounin http://nginx.org/en/donation.html From alex.garzao at azion.com Tue Sep 24 21:15:36 2013 From: alex.garzao at azion.com (=?ISO-8859-1?Q?Alex_Garz=E3o?=) Date: Tue, 24 Sep 2013 18:15:36 -0300 Subject: Sharing data when download the same object from upstream (take 2) Message-ID: Hello guys, I have some doubts, and I will appreciate if someone help me :-) I posted something about this some days ago [1]. Basically, in the node tree that keeps the objects in the cache, I inserted a list that keeps all listeners [2] and a FD that points to tempfile. In ngx_http_upstream.c, after the call to ngx_event_pipe(), I try to send data to all listeners. The list and FD are in shared memory, and all processs can use them. When NGINX starts with 1 worker, my patch works well. But, when it starts with more than 1 worker, I have some problems. 
In fact, if a request is in process 1 (P1), and requests from another process (P2) are added in the listeners list, P1 can iterate in this list, but it cannot send data to requests from P2 :-/ If possible, I would like some suggestions about how I can address this issue. Thanks for your attention. [1] - http://mailman.nginx.org/pipermail/nginx-devel/2013-August/004112.html [2] - Listeners are requests that are waiting for data from upstream [3] - Request that connected to upstream -- Alex Garzão Projetista de Software Azion Technologies alex.garzao (at) azion.com From piotr at cloudflare.com Wed Sep 25 00:02:45 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 24 Sep 2013 17:02:45 -0700 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. In-Reply-To: <20130924133835.GO2170@mdounin.ru> References: <20130923142727.GF2170@mdounin.ru> <20130924133835.GO2170@mdounin.ru> Message-ID: Hi Maxim, >> +#define NGX_DEFAULT_CIPHERS "DEFAULT" >> + >> + > > I tend to think it would be better to omit this for clarity, and > just use the "DEFAULT" string constant in > ngx_conf_merge_str_value(): > > (...) > > You are ok with this? If yes, I'll just push the fixed version. I think that's more consistent with ngx_{mail,http}_ssl_module this way, but I don't feel strongly about it, so feel free to commit your version... I won't mind :) Best regards, Piotr Sikora From B22173 at freescale.com Wed Sep 25 04:25:26 2013 From: B22173 at freescale.com (Myla John-B22173) Date: Wed, 25 Sep 2013 04:25:26 +0000 Subject: JSON configuration APIs for NGINX Message-ID: Hi, Are there any JSON APIs defined for Nginx Configuration? Regards, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at tvdw.eu Wed Sep 25 06:39:32 2013 From: info at tvdw.eu (Tom van der Woerdt) Date: Wed, 25 Sep 2013 08:39:32 +0200 Subject: JSON configuration APIs for NGINX In-Reply-To: References: Message-ID: <5DF948D5-8FDD-4BE2-81B1-1F85C7530CAB@tvdw.eu> No. 
Nginx doesn't do anything with dynamic configuration or JSON. Write some logic that stores the config files and then run 'nginx -s reload'. Works for me, just avoid doing it too often. Tom > On 25 sep. 2013, at 06:25, Myla John-B22173 wrote: > > Hi, > > Are there any JSON APIs defined for Nginx Configuration? > > Regards, > John > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 25 12:41:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Sep 2013 12:41:49 +0000 Subject: [nginx] Proxy: added the "proxy_ssl_ciphers" directive. Message-ID: details: http://hg.nginx.org/nginx/rev/919d230ecdbe branches: changeset: 5390:919d230ecdbe user: Piotr Sikora date: Mon Sep 23 15:58:28 2013 -0700 description: Proxy: added the "proxy_ssl_ciphers" directive. Signed-off-by: Piotr Sikora diffstat: src/http/modules/ngx_http_proxy_module.c | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diffs (60 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -80,6 +80,7 @@ typedef struct { #if (NGX_HTTP_SSL) ngx_uint_t ssl; ngx_uint_t ssl_protocols; + ngx_str_t ssl_ciphers; #endif } ngx_http_proxy_loc_conf_t; @@ -538,6 +539,13 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, ssl_protocols), &ngx_http_proxy_ssl_protocols }, + { ngx_string("proxy_ssl_ciphers"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, ssl_ciphers), + NULL }, + #endif ngx_null_command @@ -2414,6 +2422,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->redirects = NULL; 
* conf->ssl = 0; * conf->ssl_protocols = 0; + * conf->ssl_ciphers = { 0, NULL }; */ conf->upstream.store = NGX_CONF_UNSET; @@ -2735,6 +2744,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t |NGX_SSL_TLSv1|NGX_SSL_TLSv1_1 |NGX_SSL_TLSv1_2)); + ngx_conf_merge_str_value(conf->ssl_ciphers, prev->ssl_ciphers, + "DEFAULT"); + if (conf->ssl && ngx_http_proxy_set_ssl(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -3784,6 +3796,16 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n return NGX_ERROR; } + if (SSL_CTX_set_cipher_list(plcf->upstream.ssl->ctx, + (const char *) plcf->ssl_ciphers.data) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, cf->log, 0, + "SSL_CTX_set_cipher_list(\"%V\") failed", + &plcf->ssl_ciphers); + return NGX_ERROR; + } + cln = ngx_pool_cleanup_add(cf->pool, 0); if (cln == NULL) { return NGX_ERROR; From mdounin at mdounin.ru Wed Sep 25 12:43:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Sep 2013 16:43:53 +0400 Subject: [PATCH] Proxy: added the "proxy_ssl_ciphers" directive. In-Reply-To: References: <20130923142727.GF2170@mdounin.ru> <20130924133835.GO2170@mdounin.ru> Message-ID: <20130925124353.GW2170@mdounin.ru> Hello! On Tue, Sep 24, 2013 at 05:02:45PM -0700, Piotr Sikora wrote: > Hi Maxim, > > >> +#define NGX_DEFAULT_CIPHERS "DEFAULT" > >> + > >> + > > > > I tend to think it would be better to omit this for clarity, and > > just use the "DEFAULT" string constant in > > ngx_conf_merge_str_value(): > > > > (...) > > > > You are ok with this? If yes, I'll just push the fixed version. > > I think that's more consistent with ngx_{mail,http}_ssl_module this > way, but I don't feel strongly about it, so feel free to commit your > version... I won't mind :) I don't like the idea of NGX_DEFAULT_CIPHERS being defined into different strings in different modules, hence the difference with ngx_mail_ssl_module/ngx_http_ssl_module looks reasonable for me. Committed, thanks. 
-- Maxim Dounin http://nginx.org/en/donation.html From lcolina at cenditel.gob.ve Wed Sep 11 13:23:08 2013 From: lcolina at cenditel.gob.ve (lcolina at cenditel.gob.ve) Date: Wed, 11 Sep 2013 08:53:08 -0430 Subject: [PATCH 0 of 1] Changes in the destination header in MOVE and DELETE methods for files and/or folders Message-ID: Dear nginx developers, While testing the ngx_http_dav_module, it was found that in rename and delete folder operations the destination path contains no final slash, causing the DELETE method to generate HTTP_CONFLICT and the MOVE method to generate HTTP_BAD_REQUEST. A conflict with the Destination header was also found when renaming files and/or folders, because the Nautilus client sends the URI with the user name included, but the MOVE method does not support a user name in the Destination header. A small patch that fixes these issues follows. Best regards, Laura Colina Centro Nacional de Desarrollo e Investigación en Tecnologías Libres (CENDITEL) Ministerio del Poder Popular para Ciencia, Tecnología e Innovación República Bolivariana de Venezuela From lcolina at cenditel.gob.ve Wed Sep 11 13:23:09 2013 From: lcolina at cenditel.gob.ve (lcolina at cenditel.gob.ve) Date: Wed, 11 Sep 2013 08:53:09 -0430 Subject: [PATCH 1 of 1] Changes in the destination header in MOVE and DELETE methods for files and/or folders In-Reply-To: References: Message-ID: # HG changeset patch # User Laura Colina # Date 1378836243 16200 # Node ID c6e3ea382a3ab5f98350c5810c3ebc080ae0f0ae # Parent 72e31d88defadc94a17ce208c487aac98632e8f2 Changes in the destination header in MOVE and DELETE methods for files and/or folders diff -r 72e31d88defa -r c6e3ea382a3a src/http/modules/ngx_http_dav_module.c --- a/src/http/modules/ngx_http_dav_module.c Mon Sep 23 19:37:13 2013 +0400 +++ b/src/http/modules/ngx_http_dav_module.c Tue Sep 10 13:34:03 2013 -0430 @@ -338,10 +338,9 @@ if (ngx_is_dir(&fi)) { - if (r->uri.data[r->uri.len - 1] != '/') { - ngx_log_error(NGX_LOG_ERR, r->connection->log, 
NGX_EISDIR, - "DELETE \"%s\" failed", path.data); - return NGX_HTTP_CONFLICT; + if (path.data[path.len - 1] == '/') { + path.len--; + path.data[path.len] = '\0'; } depth = ngx_http_dav_depth(r, NGX_HTTP_DAV_INFINITY_DEPTH); @@ -352,7 +351,7 @@ return NGX_HTTP_BAD_REQUEST; } - path.len -= 2; /* omit "/\0" */ + path.len -= 1; /* omit "\0" */ dir = 1; @@ -579,6 +578,16 @@ host = dest->value.data + sizeof("http://") - 1; } + for (p = host; *p!='\0'; p++) { + if (*p == '/') { + break; + } + else if (*p == '@') { + host = p + 1; + break; + } + } + if (ngx_strncmp(host, r->headers_in.server.data, len) != 0) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "\"Destination\" URI \"%V\" is handled by " @@ -736,10 +745,9 @@ if (ngx_is_dir(&fi)) { - if (r->uri.data[r->uri.len - 1] != '/') { - ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, - "\"%V\" is collection", &r->uri); - return NGX_HTTP_BAD_REQUEST; + if (path.data[path.len - 1] == '/') { + path.len--; + path.data[path.len] = '\0'; } if (overwrite) { @@ -756,7 +764,7 @@ if (ngx_is_dir(&fi)) { - path.len -= 2; /* omit "/\0" */ + path.len -= 1; /* omit "\0" */ if (r->method == NGX_HTTP_MOVE) { if (ngx_rename_file(path.data, copy.path.data) != NGX_FILE_ERROR) { From mdounin at mdounin.ru Thu Sep 26 12:57:07 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Sep 2013 16:57:07 +0400 Subject: [PATCH 0 of 1] Changes in the destination header in MOVE and DELETE methods for files and/or folders In-Reply-To: References: Message-ID: <20130926125707.GD2271@mdounin.ru> Hello! On Wed, Sep 11, 2013 at 08:53:08AM -0430, lcolina at cenditel.gob.ve wrote: Just a side note: from the headers it looks like the message was sitting for two weeks on your host... Please don't blame us for a late reply. 
:) > Dear nginx developers, > > While testing the ngx_http_dav_module, it was found that in > rename and delete folder operations the destination path > contains no final slash, causing the method DELETE generate a > HTTP_CONFLICT and method MOVE generate a HTTP_BAD_REQUEST. > > It was also found conflict with the destination header in rename > files and/or operations folders, because Nautilus client sends > the URI with the user name, but not supported by the method MOVE > the user name in the destination header. > > A small patch that fixes these issues follows. Could you please be a bit more specific? I.e., what actually client sends, what happens, and why you think nginx behaviour is wrong and should be fixed? -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Sep 27 12:54:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 12:54:36 +0000 Subject: [nginx] Upstream: proxy_no_cache, fastcgi_no_cache warnings remo... Message-ID: details: http://hg.nginx.org/nginx/rev/e65be17e3a3e branches: changeset: 5391:e65be17e3a3e user: Maxim Dounin date: Fri Sep 27 16:50:13 2013 +0400 description: Upstream: proxy_no_cache, fastcgi_no_cache warnings removed. 
diffstat: src/http/modules/ngx_http_fastcgi_module.c | 6 ------ src/http/modules/ngx_http_proxy_module.c | 6 ------ 2 files changed, 0 insertions(+), 12 deletions(-) diffs (32 lines): diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -2347,12 +2347,6 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf ngx_conf_merge_ptr_value(conf->upstream.no_cache, prev->upstream.no_cache, NULL); - if (conf->upstream.no_cache && conf->upstream.cache_bypass == NULL) { - ngx_log_error(NGX_LOG_WARN, cf->log, 0, - "\"fastcgi_no_cache\" functionality has been changed in 0.8.46, " - "now it should be used together with \"fastcgi_cache_bypass\""); - } - ngx_conf_merge_ptr_value(conf->upstream.cache_valid, prev->upstream.cache_valid, NULL); diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -2697,12 +2697,6 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t ngx_conf_merge_ptr_value(conf->upstream.no_cache, prev->upstream.no_cache, NULL); - if (conf->upstream.no_cache && conf->upstream.cache_bypass == NULL) { - ngx_log_error(NGX_LOG_WARN, cf->log, 0, - "\"proxy_no_cache\" functionality has been changed in 0.8.46, " - "now it should be used together with \"proxy_cache_bypass\""); - } - ngx_conf_merge_ptr_value(conf->upstream.cache_valid, prev->upstream.cache_valid, NULL); From mdounin at mdounin.ru Fri Sep 27 12:54:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 12:54:38 +0000 Subject: [nginx] Upstream: subrequest_in_memory fix. Message-ID: details: http://hg.nginx.org/nginx/rev/f1caf7b8ae1d branches: changeset: 5392:f1caf7b8ae1d user: Maxim Dounin date: Fri Sep 27 16:50:26 2013 +0400 description: Upstream: subrequest_in_memory fix. 
With previous code only part of u->buffer might be emptied in case of special responses, resulting in partial responses seen by SSI set in case of simple protocols, or spurious errors like "upstream sent invalid chunked response" in case of complex ones. diffstat: src/http/ngx_http_upstream.c | 10 ++++++---- 1 files changed, 6 insertions(+), 4 deletions(-) diffs (27 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1711,10 +1711,6 @@ ngx_http_upstream_process_header(ngx_htt if (u->headers_in.status_n >= NGX_HTTP_SPECIAL_RESPONSE) { - if (r->subrequest_in_memory) { - u->buffer.last = u->buffer.pos; - } - if (ngx_http_upstream_test_next(r, u) == NGX_OK) { return; } @@ -3464,6 +3460,12 @@ ngx_http_upstream_finalize_request(ngx_h #endif + if (r->subrequest_in_memory + && u->headers_in.status_n >= NGX_HTTP_SPECIAL_RESPONSE) + { + u->buffer.last = u->buffer.pos; + } + if (rc == NGX_DECLINED) { return; } From mdounin at mdounin.ru Fri Sep 27 12:54:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 12:54:39 +0000 Subject: [nginx] Upstream: subrequest_in_memory support for SCGI and uwsg... Message-ID: details: http://hg.nginx.org/nginx/rev/1a070e89b97a branches: changeset: 5393:1a070e89b97a user: Maxim Dounin date: Fri Sep 27 16:50:34 2013 +0400 description: Upstream: subrequest_in_memory support for SCGI and uwsgi enabled. This was missed in 9d59a8eda373 when non-buffered support was added to SCGI and uwsgi. 
diffstat: src/http/modules/ngx_http_scgi_module.c | 7 ------- src/http/modules/ngx_http_uwsgi_module.c | 7 ------- 2 files changed, 0 insertions(+), 14 deletions(-) diffs (34 lines): diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -394,13 +394,6 @@ ngx_http_scgi_handler(ngx_http_request_t ngx_http_upstream_t *u; ngx_http_scgi_loc_conf_t *scf; - if (r->subrequest_in_memory) { - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "ngx_http_scgi_module does not support " - "subrequests in memory"); - return NGX_HTTP_INTERNAL_SERVER_ERROR; - } - if (ngx_http_upstream_create(r) != NGX_OK) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -427,13 +427,6 @@ ngx_http_uwsgi_handler(ngx_http_request_ ngx_http_upstream_t *u; ngx_http_uwsgi_loc_conf_t *uwcf; - if (r->subrequest_in_memory) { - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "ngx_http_uwsgi_module does not support " - "subrequests in memory"); - return NGX_HTTP_INTERNAL_SERVER_ERROR; - } - if (ngx_http_upstream_create(r) != NGX_OK) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } From mdounin at mdounin.ru Fri Sep 27 12:54:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 12:54:40 +0000 Subject: [nginx] FastCGI: non-buffered mode support. Message-ID: details: http://hg.nginx.org/nginx/rev/8c827bb1b2b6 branches: changeset: 5394:8c827bb1b2b6 user: Maxim Dounin date: Fri Sep 27 16:50:40 2013 +0400 description: FastCGI: non-buffered mode support. 
diffstat: src/http/modules/ngx_http_fastcgi_module.c | 238 ++++++++++++++++++++++++++++- 1 files changed, 230 insertions(+), 8 deletions(-) diffs (290 lines): diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -138,6 +138,8 @@ static ngx_int_t ngx_http_fastcgi_proces static ngx_int_t ngx_http_fastcgi_input_filter_init(void *data); static ngx_int_t ngx_http_fastcgi_input_filter(ngx_event_pipe_t *p, ngx_buf_t *buf); +static ngx_int_t ngx_http_fastcgi_non_buffered_filter(void *data, + ssize_t bytes); static ngx_int_t ngx_http_fastcgi_process_record(ngx_http_request_t *r, ngx_http_fastcgi_ctx_t *f); static void ngx_http_fastcgi_abort_request(ngx_http_request_t *r); @@ -233,6 +235,13 @@ static ngx_command_t ngx_http_fastcgi_c offsetof(ngx_http_fastcgi_loc_conf_t, upstream.store_access), NULL }, + { ngx_string("fastcgi_buffering"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.buffering), + NULL }, + { ngx_string("fastcgi_ignore_client_abort"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -579,13 +588,6 @@ ngx_http_fastcgi_handler(ngx_http_reques ngx_http_fastcgi_ctx_t *f; ngx_http_fastcgi_loc_conf_t *flcf; - if (r->subrequest_in_memory) { - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, - "ngx_http_fastcgi_module does not support " - "subrequest in memory"); - return NGX_HTTP_INTERNAL_SERVER_ERROR; - } - if (ngx_http_upstream_create(r) != NGX_OK) { return NGX_HTTP_INTERNAL_SERVER_ERROR; } @@ -622,7 +624,7 @@ ngx_http_fastcgi_handler(ngx_http_reques u->finalize_request = ngx_http_fastcgi_finalize_request; r->state = 0; - u->buffering = 1; + u->buffering = flcf->upstream.buffering; u->pipe = ngx_pcalloc(r->pool, sizeof(ngx_event_pipe_t)); if 
(u->pipe == NULL) { @@ -633,6 +635,8 @@ ngx_http_fastcgi_handler(ngx_http_reques u->pipe->input_ctx = r; u->input_filter_init = ngx_http_fastcgi_input_filter_init; + u->input_filter = ngx_http_fastcgi_non_buffered_filter; + u->input_filter_ctx = r; rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init); @@ -1915,6 +1919,222 @@ ngx_http_fastcgi_input_filter(ngx_event_ static ngx_int_t +ngx_http_fastcgi_non_buffered_filter(void *data, ssize_t bytes) +{ + u_char *m, *msg; + ngx_int_t rc; + ngx_buf_t *b, *buf; + ngx_chain_t *cl, **ll; + ngx_http_request_t *r; + ngx_http_upstream_t *u; + ngx_http_fastcgi_ctx_t *f; + + r = data; + f = ngx_http_get_module_ctx(r, ngx_http_fastcgi_module); + + u = r->upstream; + buf = &u->buffer; + + buf->pos = buf->last; + buf->last += bytes; + + for (cl = u->out_bufs, ll = &u->out_bufs; cl; cl = cl->next) { + ll = &cl->next; + } + + f->pos = buf->pos; + f->last = buf->last; + + for ( ;; ) { + if (f->state < ngx_http_fastcgi_st_data) { + + rc = ngx_http_fastcgi_process_record(r, f); + + if (rc == NGX_AGAIN) { + break; + } + + if (rc == NGX_ERROR) { + return NGX_ERROR; + } + + if (f->type == NGX_HTTP_FASTCGI_STDOUT && f->length == 0) { + f->state = ngx_http_fastcgi_st_padding; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http fastcgi closed stdout"); + + continue; + } + } + + if (f->state == ngx_http_fastcgi_st_padding) { + + if (f->type == NGX_HTTP_FASTCGI_END_REQUEST) { + + if (f->pos + f->padding < f->last) { + u->length = 0; + break; + } + + if (f->pos + f->padding == f->last) { + u->length = 0; + u->keepalive = 1; + break; + } + + f->padding -= f->last - f->pos; + + break; + } + + if (f->pos + f->padding < f->last) { + f->state = ngx_http_fastcgi_st_version; + f->pos += f->padding; + + continue; + } + + if (f->pos + f->padding == f->last) { + f->state = ngx_http_fastcgi_st_version; + + break; + } + + f->padding -= f->last - f->pos; + + break; + } + + + /* f->state == ngx_http_fastcgi_st_data */ + + if 
(f->type == NGX_HTTP_FASTCGI_STDERR) { + + if (f->length) { + + if (f->pos == f->last) { + break; + } + + msg = f->pos; + + if (f->pos + f->length <= f->last) { + f->pos += f->length; + f->length = 0; + f->state = ngx_http_fastcgi_st_padding; + + } else { + f->length -= f->last - f->pos; + f->pos = f->last; + } + + for (m = f->pos - 1; msg < m; m--) { + if (*m != LF && *m != CR && *m != '.' && *m != ' ') { + break; + } + } + + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "FastCGI sent in stderr: \"%*s\"", + m + 1 - msg, msg); + + } else { + f->state = ngx_http_fastcgi_st_padding; + } + + continue; + } + + if (f->type == NGX_HTTP_FASTCGI_END_REQUEST) { + + if (f->pos + f->length <= f->last) { + f->state = ngx_http_fastcgi_st_padding; + f->pos += f->length; + + continue; + } + + f->length -= f->last - f->pos; + + break; + } + + + /* f->type == NGX_HTTP_FASTCGI_STDOUT */ + + if (f->pos == f->last) { + break; + } + + cl = ngx_chain_get_free_buf(r->pool, &u->free_bufs); + if (cl == NULL) { + return NGX_ERROR; + } + + *ll = cl; + ll = &cl->next; + + b = cl->buf; + + b->flush = 1; + b->memory = 1; + + b->pos = f->pos; + b->tag = u->output.tag; + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http fastcgi output buf %p", b->pos); + + if (f->pos + f->length <= f->last) { + f->state = ngx_http_fastcgi_st_padding; + f->pos += f->length; + b->last = f->pos; + + continue; + } + + f->length -= f->last - f->pos; + b->last = f->last; + + break; + } + + /* provide continuous buffer for subrequests in memory */ + + if (r->subrequest_in_memory) { + + cl = u->out_bufs; + + if (cl) { + buf->pos = cl->buf->pos; + } + + buf->last = buf->pos; + + for (cl = u->out_bufs; cl; cl = cl->next) { + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http fastcgi in memory %p-%p %uz", + cl->buf->pos, cl->buf->last, ngx_buf_size(cl->buf)); + + if (buf->last == cl->buf->pos) { + buf->last = cl->buf->last; + continue; + } + + buf->last = ngx_movemem(buf->last, 
cl->buf->pos, + cl->buf->last - cl->buf->pos); + + cl->buf->pos = buf->last - (cl->buf->last - cl->buf->pos); + cl->buf->last = buf->last; + } + } + + return NGX_OK; +} + + +static ngx_int_t ngx_http_fastcgi_process_record(ngx_http_request_t *r, ngx_http_fastcgi_ctx_t *f) { @@ -2126,6 +2346,8 @@ ngx_http_fastcgi_create_loc_conf(ngx_con /* "fastcgi_cyclic_temp_file" is disabled */ conf->upstream.cyclic_temp_file = 0; + conf->upstream.change_buffering = 1; + conf->catch_stderr = NGX_CONF_UNSET_PTR; conf->keep_conn = NGX_CONF_UNSET; From mdounin at mdounin.ru Fri Sep 27 15:39:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 15:39:51 +0000 Subject: [nginx] SSL: adjust buffer used by OpenSSL during handshake (tic... Message-ID: details: http://hg.nginx.org/nginx/rev/a720f0b0e083 branches: changeset: 5395:a720f0b0e083 user: Maxim Dounin date: Fri Sep 27 19:39:33 2013 +0400 description: SSL: adjust buffer used by OpenSSL during handshake (ticket #413). diffstat: src/event/ngx_event_openssl.c | 26 ++++++++++++++++++++++++++ src/event/ngx_event_openssl.h | 1 + 2 files changed, 27 insertions(+), 0 deletions(-) diffs (54 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -521,6 +521,7 @@ ngx_ssl_verify_callback(int ok, X509_STO static void ngx_ssl_info_callback(const ngx_ssl_conn_t *ssl_conn, int where, int ret) { + BIO *rbio, *wbio; ngx_connection_t *c; if (where & SSL_CB_HANDSHAKE_START) { @@ -531,6 +532,31 @@ ngx_ssl_info_callback(const ngx_ssl_conn ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "SSL renegotiation"); } } + + if ((where & SSL_CB_ACCEPT_LOOP) == SSL_CB_ACCEPT_LOOP) { + c = ngx_ssl_get_connection((ngx_ssl_conn_t *) ssl_conn); + + if (!c->ssl->handshake_buffer_set) { + /* + * By default OpenSSL uses 4k buffer during a handshake, + * which is too low for long certificate chains and might + * result in extra round-trips. 
+ * + * To adjust a buffer size we detect that buffering was added + * to write side of the connection by comparing rbio and wbio. + * If they are different, we assume that it's due to buffering + * added to wbio, and set buffer size. + */ + + rbio = SSL_get_rbio(ssl_conn); + wbio = SSL_get_wbio(ssl_conn); + + if (rbio != wbio) { + (void) BIO_set_write_buffer_size(wbio, NGX_SSL_BUFSIZE); + c->ssl->handshake_buffer_set = 1; + } + } + } } diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h +++ b/src/event/ngx_event_openssl.h @@ -48,6 +48,7 @@ typedef struct { unsigned buffer:1; unsigned no_wait_shutdown:1; unsigned no_send_shutdown:1; + unsigned handshake_buffer_set:1; } ngx_ssl_connection_t; From mat999 at gmail.com Fri Sep 27 16:07:09 2013 From: mat999 at gmail.com (SplitIce) Date: Sat, 28 Sep 2013 01:37:09 +0930 Subject: [nginx] Fixed ngx_http_test_reading() to finalize request properly. In-Reply-To: References: Message-ID: I know this patch was made for 1.5.x however I patched our 1.4.x build (internal modules are in the process of being upgraded currently). However I am still getting 000 in the logs. I am currently crawling the change logs for similar patches. Any chance you can remember any similar issue being resolved in the 1.5.x branch? Thanks, Mathew On Thu, Jul 25, 2013 at 9:28 PM, Maxim Dounin wrote: > details: http://hg.nginx.org/nginx/rev/aadfadd5af2b > branches: > changeset: 5289:aadfadd5af2b > user: Maxim Dounin > date: Fri Jun 14 20:56:07 2013 +0400 > description: > Fixed ngx_http_test_reading() to finalize request properly. > > Previous code called ngx_http_finalize_request() with rc = 0. This is > ok if a response status was already set, but resulted in "000" being > logged if it wasn't. In particular this happened with limit_req > if a connection was prematurely closed during limit_req delay. 
> > diffstat: > > src/http/ngx_http_request.c | 2 +- > 1 files changed, 1 insertions(+), 1 deletions(-) > > diffs (12 lines): > > diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c > --- a/src/http/ngx_http_request.c > +++ b/src/http/ngx_http_request.c > @@ -2733,7 +2733,7 @@ closed: > ngx_log_error(NGX_LOG_INFO, c->log, err, > "client prematurely closed connection"); > > - ngx_http_finalize_request(r, 0); > + ngx_http_finalize_request(r, NGX_HTTP_CLIENT_CLOSED_REQUEST); > } > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyprizel at gmail.com Sat Sep 28 03:52:28 2013 From: kyprizel at gmail.com (kyprizel) Date: Sat, 28 Sep 2013 07:52:28 +0400 Subject: Distributed SSL session cache In-Reply-To: <20130916133727.GF57081@mdounin.ru> References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> Message-ID: Ok, made some kind of patch, testing it now: https://github.com/kyprizel/nginx_ssl_ticket_keys Not sure about server behaviour in case of invalid key file - should it be emergency or alert only. -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Sat Sep 28 09:55:36 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Sat, 28 Sep 2013 02:55:36 -0700 Subject: [PATCH] SSL: added support for TLS Session Tickets (RFC5077). Message-ID: # HG changeset patch # User Piotr Sikora # Date 1380361691 25200 # Sat Sep 28 02:48:11 2013 -0700 # Node ID 6d3710969a18e2d0d817e297c2e17f941a58cd40 # Parent a720f0b0e08345ebb01353250f4031bb6e141385 SSL: added support for TLS Session Tickets (RFC5077). 
Signed-off-by: Piotr Sikora diff -r a720f0b0e083 -r 6d3710969a18 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Fri Sep 27 19:39:33 2013 +0400 +++ b/src/event/ngx_event_openssl.c Sat Sep 28 02:48:11 2013 -0700 @@ -38,6 +38,12 @@ static void ngx_ssl_expire_sessions(ngx_ static void ngx_ssl_session_rbtree_insert_value(ngx_rbtree_node_t *temp, ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB +static int ngx_ssl_session_ticket_key_callback(ngx_ssl_conn_t *ssl_conn, + unsigned char *name, unsigned char *iv, EVP_CIPHER_CTX *ectx, + HMAC_CTX *hctx, int enc); +#endif + static void *ngx_openssl_create_conf(ngx_cycle_t *cycle); static char *ngx_openssl_engine(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static void ngx_openssl_exit(ngx_cycle_t *cycle); @@ -84,6 +90,9 @@ int ngx_ssl_server_conf_index; int ngx_ssl_session_cache_index; int ngx_ssl_certificate_index; int ngx_ssl_stapling_index; +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB +int ngx_ssl_session_ticket_keys_index; +#endif ngx_int_t @@ -155,6 +164,18 @@ ngx_ssl_init(ngx_log_t *log) return NGX_ERROR; } +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB + + ngx_ssl_session_ticket_keys_index = SSL_CTX_get_ex_new_index(0, NULL, NULL, + NULL, NULL); + if (ngx_ssl_session_ticket_keys_index == -1) { + ngx_ssl_error(NGX_LOG_ALERT, log, 0, + "SSL_CTX_get_ex_new_index() failed"); + return NGX_ERROR; + } + +#endif + return NGX_OK; } @@ -2240,6 +2261,122 @@ ngx_ssl_session_rbtree_insert_value(ngx_ } +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB + +ngx_int_t +ngx_ssl_session_ticket_keys(ngx_conf_t *cf, ngx_ssl_t *ssl, + ngx_ssl_session_ticket_keys_t *keys, time_t timeout) +{ + if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_session_ticket_keys_index, keys) + == 0) + { + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, + "SSL_CTX_set_ex_data() failed"); + return NGX_ERROR; + } + + SSL_CTX_set_timeout(ssl->ctx, (long) timeout); + + if (SSL_CTX_set_tlsext_ticket_key_cb(ssl->ctx, + 
ngx_ssl_session_ticket_key_callback) + == 0) + { + ngx_log_error(NGX_LOG_WARN, cf->log, 0, + "nginx was built with Session Tickets support, however, " + "now it is linked dynamically to an OpenSSL library " + "which has no tlsext support, therefore Session Tickets " + "are not available"); + } + + return NGX_OK; +} + + +static int +ngx_ssl_session_ticket_key_callback(ngx_ssl_conn_t *ssl_conn, + unsigned char *name, unsigned char *iv, EVP_CIPHER_CTX *ectx, + HMAC_CTX *hctx, int enc) +{ + int rc; + SSL_CTX *ssl_ctx; + ngx_uint_t i; + ngx_ssl_session_ticket_key_t *key; + ngx_ssl_session_ticket_keys_t *keys; +#if (NGX_DEBUG) + ngx_connection_t *c; +#endif + + ssl_ctx = SSL_get_SSL_CTX(ssl_conn); + + keys = SSL_CTX_get_ex_data(ssl_ctx, ngx_ssl_session_ticket_keys_index); + if (keys == NULL) { + return -1; + } + +#if (NGX_DEBUG) + c = ngx_ssl_get_connection(ssl_conn); +#endif + + if (enc == 1) { + /* encrypt session ticket */ + + key = keys->default_key; + + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, c->log, 0, + "ssl session ticket encrypt, key: \"%*s\" (%s session)", + key->name_len, key->name, + SSL_session_reused(ssl_conn) ? 
"reused" : "new"); + + RAND_pseudo_bytes(iv, 16); + EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, key->aes_key, iv); + HMAC_Init_ex(hctx, key->hmac_key, 16, ngx_ssl_session_ticket_md(), + NULL); + memcpy(name, key->name, 16); + + return 0; + + } else { + /* decrypt session ticket */ + + if (ngx_strncmp(name, keys->default_key->name, 16)) { + + key = keys->keys.elts; + for (i = 0; i < keys->keys.nelts; i++) { + if (ngx_strncmp(name, key[i].name, 16) == 0) { + break; + } + } + + if (i == keys->keys.nelts) { + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "ssl session ticket decrypt, key: \"%*s\" " + "not found", 16, name); + return 0; + } + + key = &key[i]; + rc = 2; /* success, renew */ + + } else { + key = keys->default_key; + rc = 1; /* success */ + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "ssl session ticket decrypt, key: \"%*s\"", + key->name_len, key->name); + + HMAC_Init_ex(hctx, key->hmac_key, 16, ngx_ssl_session_ticket_md(), + NULL); + EVP_DecryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, key->aes_key, iv); + + return rc; + } +} + +#endif + + void ngx_ssl_cleanup_ctx(void *data) { diff -r a720f0b0e083 -r 6d3710969a18 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Fri Sep 27 19:39:33 2013 +0400 +++ b/src/event/ngx_event_openssl.h Sat Sep 28 02:48:11 2013 -0700 @@ -83,6 +83,24 @@ typedef struct { } ngx_ssl_session_cache_t; +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB + +typedef struct { + size_t name_len; + u_char name[16]; + + u_char aes_key[16]; + u_char hmac_key[16]; +} ngx_ssl_session_ticket_key_t; + + +typedef struct { + ngx_ssl_session_ticket_key_t *default_key; + ngx_array_t keys; +} ngx_ssl_session_ticket_keys_t; + +#endif + #define NGX_SSL_SSLv2 0x0002 #define NGX_SSL_SSLv3 0x0004 @@ -136,6 +154,16 @@ ngx_int_t ngx_ssl_set_session(ngx_connec || n == X509_V_ERR_CERT_UNTRUSTED \ || n == X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE) +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB +ngx_int_t ngx_ssl_session_ticket_keys(ngx_conf_t 
*cf, ngx_ssl_t *ssl, + ngx_ssl_session_ticket_keys_t *keys, time_t timeout); + +#ifdef OPENSSL_NO_SHA256 +#define ngx_ssl_session_ticket_md EVP_sha1 +#else +#define ngx_ssl_session_ticket_md EVP_sha256 +#endif +#endif ngx_int_t ngx_ssl_get_protocol(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s); @@ -175,6 +203,9 @@ extern int ngx_ssl_server_conf_index; extern int ngx_ssl_session_cache_index; extern int ngx_ssl_certificate_index; extern int ngx_ssl_stapling_index; +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB +extern int ngx_ssl_session_ticket_keys_index; +#endif #endif /* _NGX_EVENT_OPENSSL_H_INCLUDED_ */ diff -r a720f0b0e083 -r 6d3710969a18 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Fri Sep 27 19:39:33 2013 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Sat Sep 28 02:48:11 2013 -0700 @@ -38,6 +38,11 @@ static char *ngx_http_ssl_enable(ngx_con static char *ngx_http_ssl_session_cache(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB +static char *ngx_http_ssl_session_ticket_key(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); +#endif + static ngx_int_t ngx_http_ssl_init(ngx_conf_t *cf); @@ -153,6 +158,17 @@ static ngx_command_t ngx_http_ssl_comma 0, NULL }, +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB + + { ngx_string("ssl_session_ticket_key"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE23, + ngx_http_ssl_session_ticket_key, + NGX_HTTP_SRV_CONF_OFFSET, + 0, + NULL }, + +#endif + { ngx_string("ssl_session_timeout"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_sec_slot, @@ -413,6 +429,7 @@ ngx_http_ssl_create_srv_conf(ngx_conf_t * sscf->shm_zone = NULL; * sscf->stapling_file = { 0, NULL }; * sscf->stapling_responder = { 0, NULL }; + * sscf->session_ticket_keys = NULL; */ sscf->enable = NGX_CONF_UNSET; @@ -634,6 +651,30 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * } +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB + + if (conf->session_ticket_keys == NULL) { + 
conf->session_ticket_keys = prev->session_ticket_keys; + } + + if (conf->session_ticket_keys) { + if (conf->session_ticket_keys->default_key == NULL) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "default \"ssl_session_ticket_key\" is not defined"); + return NGX_CONF_ERROR; + } + + if (ngx_ssl_session_ticket_keys(cf, &conf->ssl, + conf->session_ticket_keys, + conf->session_timeout) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + } + +#endif + return NGX_CONF_OK; } @@ -769,6 +810,146 @@ invalid: return NGX_CONF_ERROR; } +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB + +static char * +ngx_http_ssl_session_ticket_key(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_ssl_srv_conf_t *sscf = conf; + + char *rc; + u_char buf[32]; + ssize_t n; + ngx_str_t *value; + ngx_file_t file; + ngx_uint_t i; + ngx_file_info_t fi; + ngx_ssl_session_ticket_key_t *key, *k; + + if (sscf->session_ticket_keys == NULL) { + sscf->session_ticket_keys = ngx_pcalloc(cf->pool, + sizeof(ngx_ssl_session_ticket_keys_t)); + if (sscf->session_ticket_keys == NULL) { + return NGX_CONF_ERROR; + } + + if (ngx_array_init(&sscf->session_ticket_keys->keys, cf->pool, 4, + sizeof(ngx_ssl_session_ticket_key_t)) + != NGX_OK) + { + return NGX_CONF_ERROR; + } + } + + key = ngx_array_push(&sscf->session_ticket_keys->keys); + if (key == NULL) { + return NGX_CONF_ERROR; + } + + value = cf->args->elts; + + if (value[1].len > 16) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"ssl_session_ticket_key\" name \"%V\" too long, " + "it cannot exceed 16 characters", &value[1]); + return NGX_CONF_ERROR; + } + + if (cf->args->nelts == 4) { + + if (ngx_strcmp(value[3].data, "default")) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid parameter \"%V\"", &value[3]); + return NGX_CONF_ERROR; + } + + if (sscf->session_ticket_keys->default_key) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "default \"ssl_session_ticket_key\" is already " + "defined"); + return NGX_CONF_ERROR; + } + + 
sscf->session_ticket_keys->default_key = key; + } + + ngx_memzero(key->name, 16); + key->name_len = ngx_cpymem(key->name, value[1].data, value[1].len) + - key->name; + + k = sscf->session_ticket_keys->keys.elts; + for (i = 0; i < sscf->session_ticket_keys->keys.nelts - 1; i++) { + if (ngx_strncmp(key->name, k[i].name, 16) == 0) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"ssl_session_ticket_key\" named \"%V\" " + "is already defined", &value[1]); + return NGX_CONF_ERROR; + } + } + + if (ngx_conf_full_name(cf->cycle, &value[2], 1) != NGX_OK) { + return NGX_CONF_ERROR; + } + + ngx_memzero(&file, sizeof(ngx_file_t)); + file.name = value[2]; + file.log = cf->log; + + file.fd = ngx_open_file(file.name.data, NGX_FILE_RDONLY, 0, 0); + if (file.fd == NGX_INVALID_FILE) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, ngx_errno, + ngx_open_file_n " \"%V\" failed", &file.name); + + return NGX_CONF_ERROR; + } + + rc = NGX_CONF_ERROR; + + if (ngx_fd_info(file.fd, &fi) == NGX_FILE_ERROR) { + ngx_conf_log_error(NGX_LOG_CRIT, cf, ngx_errno, + ngx_fd_info_n " \"%V\" failed", &file.name); + goto failed; + } + + if (ngx_file_size(&fi) != 32) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"%V\" must be 32 bytes", &file.name); + goto failed; + } + + n = ngx_read_file(&file, buf, 32, 0); + + if (n == NGX_ERROR) { + ngx_conf_log_error(NGX_LOG_CRIT, cf, ngx_errno, + ngx_read_file_n " \"%V\" failed", &file.name); + goto failed; + } + + if (n != 32) { + ngx_conf_log_error(NGX_LOG_CRIT, cf, 0, + ngx_read_file_n " \"%V\" returned only %z bytes " + "instead of 32", &file.name, n); + goto failed; + } + + ngx_memcpy(key->aes_key, buf, 16); + ngx_memcpy(key->hmac_key, buf + 16, 16); + + rc = NGX_CONF_OK; + +failed: + + if (file.fd != NGX_INVALID_FILE) { + if (ngx_close_file(file.fd) == NGX_FILE_ERROR) { + ngx_log_error(NGX_LOG_ALERT, cf->log, ngx_errno, + ngx_close_file_n " \"%V\" failed", &file.name); + } + } + + return rc; +} + +#endif static ngx_int_t ngx_http_ssl_init(ngx_conf_t *cf) diff -r 
a720f0b0e083 -r 6d3710969a18 src/http/modules/ngx_http_ssl_module.h --- a/src/http/modules/ngx_http_ssl_module.h Fri Sep 27 19:39:33 2013 +0400 +++ b/src/http/modules/ngx_http_ssl_module.h Sat Sep 28 02:48:11 2013 -0700 @@ -49,6 +49,10 @@ typedef struct { u_char *file; ngx_uint_t line; + +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB + ngx_ssl_session_ticket_keys_t *session_ticket_keys; +#endif } ngx_http_ssl_srv_conf_t; From piotr at cloudflare.com Sat Sep 28 10:03:54 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Sat, 28 Sep 2013 03:03:54 -0700 Subject: Distributed SSL session cache In-Reply-To: References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> Message-ID: Hi, > Ok, made some kind of patch, testing it now: > https://github.com/kyprizel/nginx_ssl_ticket_keys > > Not sure about server behaviour in case of invalid key file - should it be > emergency or alert only. I've just pushed code that's been sitting in my tree for the last few months: http://mailman.nginx.org/pipermail/nginx-devel/2013-September/004290.html It's rather thoroughly tested, but it handles key rollover in different fashion than your code (multiple files with a single session key each vs single file with multiple session keys). Hopefully, it will be helpful. Best regards, Piotr Sikora From mdounin at mdounin.ru Sat Sep 28 11:09:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 28 Sep 2013 15:09:45 +0400 Subject: [PATCH] SSL: added support for TLS Session Tickets (RFC5077). In-Reply-To: References: Message-ID: <20130928110945.GP2271@mdounin.ru> Hello! On Sat, Sep 28, 2013 at 02:55:36AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1380361691 25200 > # Sat Sep 28 02:48:11 2013 -0700 > # Node ID 6d3710969a18e2d0d817e297c2e17f941a58cd40 > # Parent a720f0b0e08345ebb01353250f4031bb6e141385 > SSL: added support for TLS Session Tickets (RFC5077). 
I haven't looked into the code yet, but commit log is certainly misleading. There is support for TLS session tickets already. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Sep 28 11:34:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 28 Sep 2013 15:34:46 +0400 Subject: [nginx] Fixed ngx_http_test_reading() to finalize request properly. In-Reply-To: References: Message-ID: <20130928113445.GQ2271@mdounin.ru> Hello! On Sat, Sep 28, 2013 at 01:37:09AM +0930, SplitIce wrote: > I know this patch was made for 1.5.x however I patched our 1.4.x build > (internal modules are in the process of being upgraded currently). However > I am still getting 000 in the logs. I am currently crawling the change logs > for similar patches. > > Any chance you can remember any similar issue being resolved in the 1.5.x > branch? I don't think I remember similar issues. On the other hand, the "000" code appears if a request is terminated without proper status code set, and this can easily happen - e.g. due to minor problems in your internal modules. Or it may be even legitimate due to fatal problems with a connection. Debug logs will likely help to trace the problem. -- Maxim Dounin http://nginx.org/en/donation.html From mat999 at gmail.com Sat Sep 28 14:35:30 2013 From: mat999 at gmail.com (SplitIce) Date: Sun, 29 Sep 2013 00:05:30 +0930 Subject: [nginx] Fixed ngx_http_test_reading() to finalize request properly. In-Reply-To: <20130928113445.GQ2271@mdounin.ru> References: <20130928113445.GQ2271@mdounin.ru> Message-ID: Maxim, it happened during a flood so that is likely. The flood has ceased now, and I didn't have time to find the cause during the incident. I am not sure what the conditions are for replicating, it doesn't appear to be limit_req related. I fixed the issue for us by adding a guard to the access log to not log such cases (since our problems were related to the sheer amount of log entries being written).
Perhaps nginx should not log these cases to the access log, or provide an option for ignoring such cases. Thanks for your information, I thought I was going crazy with all the patches I was reading. Thanks, Mathew On Sat, Sep 28, 2013 at 9:04 PM, Maxim Dounin wrote: > Hello! > > On Sat, Sep 28, 2013 at 01:37:09AM +0930, SplitIce wrote: > > > I know this patch was made for 1.5.x however I patched our 1.4.x build > > (internal modules are in the process of being upgraded currently). > However > > I am still getting 000 in the logs. I am currently crawling the change > logs > > for similar patches. > > > > Any chance you can remember any similar issue being resolved in the 1.5.x > > branch? > > I don't think I remember similar issues. > > On the other hand, the "000" code appears if a request is > terminated without proper status code set, and this can easily > happen - e.g. due to minor problems in your internal modules. Or > it may be even legitimate due to fatal problems with a connection. > > Debug logs will likely help to trace the problem. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Sat Sep 28 16:55:28 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Sat, 28 Sep 2013 09:55:28 -0700 Subject: [PATCH] SSL: added support for TLS Session Tickets (RFC5077). In-Reply-To: <20130928110945.GP2271@mdounin.ru> References: <20130928110945.GP2271@mdounin.ru> Message-ID: Hi Maxim, > I haven't looked into the code yet, but commit log is certainly > misleading. There is support for TLS session tickets already. You're right. That's what I get for changing the commit message at the last minute. - SSL: added support for TLS Session Tickets (RFC5077).
+ SSL: added ability to set keys used for TLS Session Tickets (RFC5077). Best regards, Piotr Sikora From kyprizel at gmail.com Sat Sep 28 17:53:23 2013 From: kyprizel at gmail.com (kyprizel) Date: Sat, 28 Sep 2013 21:53:23 +0400 Subject: Distributed SSL session cache In-Reply-To: References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> Message-ID: Piotr, thanks for the share! Will your patch be accepted to the main tree or I've a chance? ;) My patch was designed not to use multiple keyfiles and keynames in nginx config so it's able to rotate keys with simple logic, only updating keyfile. On Sat, Sep 28, 2013 at 2:03 PM, Piotr Sikora wrote: > Hi, > > > Ok, made some kind of patch, testing it now: > > https://github.com/kyprizel/nginx_ssl_ticket_keys > > > > Not sure about server behaviour in case of invalid key file - should it > be > > emergency or alert only. > > I've just pushed code that's been sitting in my tree for the last few > months: > http://mailman.nginx.org/pipermail/nginx-devel/2013-September/004290.html > > It's rather thoroughly tested, but it handles key rollover in > different fashion than your code (multiple files with a single session > key each vs single file with multiple session keys). > > Hopefully, it will be helpful. > > Best regards, > Piotr Sikora > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From piotr at cloudflare.com Sat Sep 28 18:14:20 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Sat, 28 Sep 2013 11:14:20 -0700 Subject: Distributed SSL session cache In-Reply-To: References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> Message-ID: Hi, > My patch was designed not to use multiple keyfiles and keynames in nginx > config so it's able to rotate keys with simple logic, only updating keyfile. IMHO, that makes the key rollover much harder than it should be, that is: you need to regenerate keyfile with number of older keys + new one vs just add new key (and optionally remove some of the old ones). Best regards, Piotr Sikora From kyprizel at gmail.com Sat Sep 28 18:37:39 2013 From: kyprizel at gmail.com (kyprizel) Date: Sat, 28 Sep 2013 22:37:39 +0400 Subject: Distributed SSL session cache In-Reply-To: References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> Message-ID: On Sat, Sep 28, 2013 at 10:14 PM, Piotr Sikora wrote: > Hi, > > > My patch was designed not to use multiple keyfiles and keynames in nginx > > config so it's able to rotate keys with simple logic, only updating > keyfile. > > IMHO, that makes the key rollover much harder than it should be, that > is: you need to regenerate keyfile with number of older keys + new one > vs just add new key (and optionally remove some of the old ones). > > That depends on key distribution scheme - you can distribute only new keys and store old keys on nginx server only. But with your patch you should also rotate "default" key in nginx config and it complicates the logic (in my schema) a bit. Anyway - I'm not sure if keyname is meaningful parameter in periodic key rotation scheme. For me - it is not. 
> Best regards, > Piotr Sikora > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 30 14:26:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:26:55 +0400 Subject: [PATCH] SSL: added support for TLS Session Tickets (RFC5077). In-Reply-To: References: Message-ID: <20130930142655.GD56438@mdounin.ru> Hello! On Sat, Sep 28, 2013 at 02:55:36AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1380361691 25200 > # Sat Sep 28 02:48:11 2013 -0700 > # Node ID 6d3710969a18e2d0d817e297c2e17f941a58cd40 > # Parent a720f0b0e08345ebb01353250f4031bb6e141385 > SSL: added support for TLS Session Tickets (RFC5077). As previously noted, the patch description is wrong. It also makes sense to add some description of the directive added. See below for some code comments. [...] > +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB > + > +typedef struct { > + size_t name_len; > + u_char name[16]; > + > + u_char aes_key[16]; > + u_char hmac_key[16]; > +} ngx_ssl_session_ticket_key_t; > + > + > +typedef struct { > + ngx_ssl_session_ticket_key_t *default_key; > + ngx_array_t keys; > +} ngx_ssl_session_ticket_keys_t; > + > +#endif This looks needlessly complicated, see below. [...] > @@ -153,6 +158,17 @@ static ngx_command_t ngx_http_ssl_comma > 0, > NULL }, > > +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB > + > + { ngx_string("ssl_session_ticket_key"), > + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE23, > + ngx_http_ssl_session_ticket_key, > + NGX_HTTP_SRV_CONF_OFFSET, > + 0, > + NULL }, > + > +#endif > + This makes the directive unavailable without any meaningful diagnostics if nginx was built with old OpenSSL, which isn't very user-friendly. [...]
> @@ -769,6 +810,146 @@ invalid: > return NGX_CONF_ERROR; > } > > +#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB Style, there should be 2 blank lines before #ifdef. [...] > + if (value[1].len > 16) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + "\"ssl_session_ticket_key\" name \"%V\" too long, " > + "it cannot exceed 16 characters", &value[1]); > + return NGX_CONF_ERROR; > + } > + > + if (cf->args->nelts == 4) { > + > + if (ngx_strcmp(value[3].data, "default")) { Style: as ngx_strcmp() doesn't return a boolean value, I would recommend using an "!= 0" test instead. But actually I doubt we need an explicit mark for the default key at all. Just using the first one for encryption would probably be good enough. I also think it would be better not to rely on an explicitly written name, which will make automatic key rotation a pain - as one will have to update both the name in the configuration file and the file with keys. E.g. Apache uses a binary file with 48 bytes of random data, which is much easier to generate and rotate if needed. [...] > + if (ngx_conf_full_name(cf->cycle, &value[2], 1) != NGX_OK) { > + return NGX_CONF_ERROR; > + } > + > + ngx_memzero(&file, sizeof(ngx_file_t)); > + file.name = value[2]; > + file.log = cf->log; > + > + file.fd = ngx_open_file(file.name.data, NGX_FILE_RDONLY, 0, 0); > + if (file.fd == NGX_INVALID_FILE) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, ngx_errno, > + ngx_open_file_n " \"%V\" failed", &file.name); > + > + return NGX_CONF_ERROR; > + } Not sure if this code should be here. Other file operations are handled in ngx_event_openssl.c, and doing the same for session tickets might be a good idea as well. Especially if you'll consider adding relevant directives to the mail module.
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Sep 30 14:50:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:50:41 +0400 Subject: Distributed SSL session cache In-Reply-To: References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> Message-ID: <20130930145041.GF56438@mdounin.ru> Hello! On Sat, Sep 28, 2013 at 10:37:39PM +0400, kyprizel wrote: > On Sat, Sep 28, 2013 at 10:14 PM, Piotr Sikora wrote: > > > Hi, > > > > > My patch was designed not to use multiple keyfiles and keynames in nginx > > > config so it's able to rotate keys with simple logic, only updating > > keyfile. > > > > IMHO, that makes the key rollover much harder than it should be, that > > is: you need to regenerate keyfile with number of older keys + new one > > vs just add new key (and optionally remove some of the old ones). > > > > > That depends on key distribution scheme - you can distribute only new keys > and store old keys on nginx server only. > But with your patch you should also rotate "default" key in nginx config > and it complicates the logic (in my schema) a bit. > Anyway - I'm not sure if keyname is meaningful parameter in periodic key > rotation scheme. For me - it is not. I agree that the logic suggested by Piotr looks a bit too complicated. On the other hand, the one in your patch doesn't look easy for automation either. I don't think it would be trivial to generate keys in PEM format (feel free to prove I'm wrong), and rotate them once they are in a single file.
BTW, just in case somebody haven't seen this before, here is a link for relevant Apache directive which uses 48-byte binary file: http://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslsessionticketkeyfile -- Maxim Dounin http://nginx.org/en/donation.html From kyprizel at gmail.com Mon Sep 30 15:14:59 2013 From: kyprizel at gmail.com (kyprizel) Date: Mon, 30 Sep 2013 19:14:59 +0400 Subject: Distributed SSL session cache In-Reply-To: <20130930145041.GF56438@mdounin.ru> References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> <20130930145041.GF56438@mdounin.ru> Message-ID: $ openssl rand -base64 48 | awk '{print "-----BEGIN SESSION TICKET KEY-----"; print; print "-----END SESSION TICKET KEY-----"}' >> ticket.key.new && cat ticket.key >> ticket.key.new && mv ticket.key.new ticket.key There is no difference b/w binary and PEM form here, but I prefer to see config files in printable characters. On Mon, Sep 30, 2013 at 6:50 PM, Maxim Dounin wrote: > Hello! > > On Sat, Sep 28, 2013 at 10:37:39PM +0400, kyprizel wrote: > > > On Sat, Sep 28, 2013 at 10:14 PM, Piotr Sikora > wrote: > > > > > Hi, > > > > > > > My patch was designed not to use multiple keyfiles and keynames in > nginx > > > > config so it's able to rotate keys with simple logic, only updating > > > keyfile. > > > > > > IMHO, that makes the key rollover much harder than it should be, that > > > is: you need to regenerate keyfile with number of older keys + new one > > > vs just add new key (and optionally remove some of the old ones). > > > > > > > > That depends on key distribution scheme - you can distribute only new > keys > > and store old keys on nginx server only. > > But with your patch you should also rotate "default" key in nginx config > > and it complicates the logic (in my schema) a bit. > > Anyway - I'm not sure if keyname is meaningful parameter in periodic key > > rotation scheme. For me - it is not. 
> > I agree that logic suggested by Piotr looks a bit too complicated. > On the other hand, the one in your patch doesn't looks easy for > automation as well. I don't think it would be trivial to generate > keys in PEM format (feel free to prove I'm wrong), and rotate them > once they are in a single file. > > BTW, just in case somebody haven't seen this before, here is a > link for relevant Apache directive which uses 48-byte binary file: > > http://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslsessionticketkeyfile > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 30 15:31:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 19:31:36 +0400 Subject: Distributed SSL session cache In-Reply-To: References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> <20130930145041.GF56438@mdounin.ru> Message-ID: <20130930153136.GI56438@mdounin.ru> Hello! On Mon, Sep 30, 2013 at 07:14:59PM +0400, kyprizel wrote: > $ openssl rand -base64 48 | awk '{print "-----BEGIN SESSION TICKET > KEY-----"; print; print "-----END SESSION TICKET KEY-----"}' >> > ticket.key.new && cat ticket.key >> ticket.key.new && mv ticket.key.new > ticket.key > > There is no difference b/w binary and PEM form here, but I prefer to see > config files in printable characters. I would prefer printable configs as well. But I don't really think that adding PEM header/footer with awk counts as a trivial way to do things. It's not something an ordinary admin can do with at least 50% chance of getting a correct result for the first time. And, BTW, your key rotation lacks removing of an old key, which makes it unusable. 
A correct implementation will require keeping each key in its own file, which essentially makes the "single file per key" approach more natural. -- Maxim Dounin http://nginx.org/en/donation.html From kyprizel at gmail.com Mon Sep 30 16:15:34 2013 From: kyprizel at gmail.com (kyprizel) Date: Mon, 30 Sep 2013 20:15:34 +0400 Subject: Distributed SSL session cache In-Reply-To: <20130930153136.GI56438@mdounin.ru> References: <20130916115526.GA57081@mdounin.ru> <460598205.71.1379337685487.JavaMail.root@zimbra.lentz.com.au> <20130916133727.GF57081@mdounin.ru> <20130930145041.GF56438@mdounin.ru> <20130930153136.GI56438@mdounin.ru> Message-ID: $ openssl rand -base64 48 | awk '{print "-----BEGIN SESSION TICKET KEY-----"; print; print "-----END SESSION TICKET KEY-----"}' >> ticket.key.new && cat ticket.key | awk 'sa==1{n++;sa=1}/-----BEGIN SESSION TICKET KEY-----/{sa=1;X=2}{if(n<3*X){print;}}' >> ticket.key.new && mv ticket.key.new ticket.key This stores not more than X=2 old keys plus the new one; you can add it to a cron file. I know it's weird to use awk, but I'm only trying to illustrate that it's not a big problem to rotate keys with my schema ;) While it's not a big problem, it's certainly not something trivial. > But you can't rotate keys with a one-liner if you use the "one key per file" schema - there'll be too big a probability of a mistake during nginx config parsing. On Mon, Sep 30, 2013 at 7:31 PM, Maxim Dounin wrote: > Hello! > > On Mon, Sep 30, 2013 at 07:14:59PM +0400, kyprizel wrote: > > > $ openssl rand -base64 48 | awk '{print "-----BEGIN SESSION TICKET > > KEY-----"; print; print "-----END SESSION TICKET KEY-----"}' >> > > ticket.key.new && cat ticket.key >> ticket.key.new && mv ticket.key.new > > ticket.key > > > > There is no difference b/w binary and PEM form here, but I prefer to see > > config files in printable characters. > > I would prefer printable configs as well. But I don't really > think that adding PEM header/footer with awk counts as a trivial > way to do things.
It's not something an ordinary admin can do > with at least 50% chance of getting a correct result for the first > time. > > And, BTW, your key rotation lacks removing of an old key, which > makes it unusable. Correct implementation will require keeping > each key in it's own file - which essentially makes "single file > per key" aproach more natural. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 30 18:00:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 22:00:14 +0400 Subject: Distributed SSL session cache In-Reply-To: References: <20130916133727.GF57081@mdounin.ru> <20130930145041.GF56438@mdounin.ru> <20130930153136.GI56438@mdounin.ru> Message-ID: <20130930180014.GM56438@mdounin.ru> Hello! On Mon, Sep 30, 2013 at 08:15:34PM +0400, kyprizel wrote: > $ openssl rand -base64 48 | awk '{print "-----BEGIN SESSION TICKET > KEY-----"; print; print "-----END SESSION TICKET KEY-----"}' >> > ticket.key.new && cat ticket.key | awk 'sa==1{n++;sa=1}/-----BEGIN SESSION > TICKET KEY-----/{sa=1;X=2}{if(n<3*X){print;}}' >> ticket.key.new && mv > ticket.key.new ticket.key > > store not more than X=2 old keys + new one, you can add it to cron file. > > I know it's weird to use awk, but I only try to illustrate that it's not a > big problem to rotate keys with my schema ;) While it's not a big problem, it's certainly not something trivial. > But you can' rotate keys with > oneliner if you use "one key per file schema" - there'll be too big > probability of mistake during nginx config parsing. Huh? Even trivial $ mv key.new key.old && openssl rand 48 > key.new would be fine as in a worst case a new configuration will just fail to load. 
And $ cp key.new key.old.tmp && mv key.old.tmp key.old \ && openssl rand 48 > key.new.tmp && mv key.new.tmp key.new is atomic. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Sep 30 18:10:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:10:55 +0000 Subject: [nginx] Mail: added session close on smtp_greeting_delay violation. Message-ID: details: http://hg.nginx.org/nginx/rev/42f874c0b970 branches: changeset: 5396:42f874c0b970 user: Maxim Dounin date: Mon Sep 30 22:09:50 2013 +0400 description: Mail: added session close on smtp_greeting_delay violation. A server MUST send a greeting before other replies, while before this change, in case of an smtp_greeting_delay violation, the 220 greeting was sent after several 503 replies to commands received before the greeting, resulting in protocol synchronization loss. Moreover, further commands were accepted after the greeting. While closing a connection isn't strictly RFC compliant (RFC 5321 requires servers to wait for a QUIT before closing a connection), it's probably good enough for practical uses. diffstat: src/mail/ngx_mail_smtp_handler.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/src/mail/ngx_mail_smtp_handler.c b/src/mail/ngx_mail_smtp_handler.c --- a/src/mail/ngx_mail_smtp_handler.c +++ b/src/mail/ngx_mail_smtp_handler.c @@ -321,6 +321,7 @@ ngx_mail_smtp_invalid_pipelining(ngx_eve } ngx_str_set(&s->out, smtp_invalid_pipelining); + s->quit = 1; } ngx_mail_send(c->write); From mdounin at mdounin.ru Mon Sep 30 18:10:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:10:56 +0000 Subject: [nginx] Mail: mail dependencies are now honored while building a... Message-ID: details: http://hg.nginx.org/nginx/rev/ae73d7a4fcde branches: changeset: 5397:ae73d7a4fcde user: Maxim Dounin date: Mon Sep 30 22:09:54 2013 +0400 description: Mail: mail dependencies are now honored while building addons.
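For context, this change affects third-party modules built with --add-module: a module's "config" script registers its sources via NGX_ADDON_SRCS, and with this commit the resulting objects also depend on $(MAIL_DEPS), so they are rebuilt when mail headers change. A hypothetical module's config script might look like:

```sh
# "config" script of a hypothetical third-party module; the variables
# ngx_addon_name, ngx_addon_dir, NGX_ADDON_SRCS and NGX_ADDON_DEPS are
# real build-system variables, the module itself is made up
ngx_addon_name=ngx_mail_example_module

NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_mail_example_module.c"
NGX_ADDON_DEPS="$NGX_ADDON_DEPS $ngx_addon_dir/ngx_mail_example_module.h"
```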
diffstat: auto/modules | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diffs (12 lines): diff --git a/auto/modules b/auto/modules --- a/auto/modules +++ b/auto/modules @@ -483,6 +483,8 @@ if [ $MAIL = YES ]; then modules="$modules $MAIL_PROXY_MODULE" MAIL_SRCS="$MAIL_SRCS $MAIL_PROXY_SRCS" + + NGX_ADDON_DEPS="$NGX_ADDON_DEPS \$(MAIL_DEPS)" fi From mdounin at mdounin.ru Mon Sep 30 18:10:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:10:57 +0000 Subject: [nginx] Mail: smtp pipelining support. Message-ID: details: http://hg.nginx.org/nginx/rev/04e43d03e153 branches: changeset: 5398:04e43d03e153 user: Maxim Dounin date: Mon Sep 30 22:09:57 2013 +0400 description: Mail: smtp pipelining support. Basically, this does the following two changes (and corresponding modifications of related code): 1. Does not reset the session buffer unless it has reached its end, and always waits for LF to terminate a command (even if an invalid command was detected). 2. Records the command name to make it available for handlers (since now we can't assume that a command starts at s->buffer->start).
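The buffer handling in change (1) can be illustrated with a toy model (Python is used here for brevity; nginx does this in C on s->buffer, and this helper is hypothetical, not nginx code): commands are consumed one LF-terminated line at a time, and any pipelined data after a command stays in the buffer for the next pass:

```python
def parse_commands(buffer: bytes):
    """Toy model of pipelined command parsing: consume one
    LF-terminated command at a time without discarding the rest
    of the buffer."""
    pos = 0
    commands = []
    while True:
        lf = buffer.find(b"\n", pos)
        if lf == -1:
            break                      # incomplete command: wait for more data
        commands.append(buffer[pos:lf].rstrip(b"\r"))
        pos = lf + 1                   # keep pipelined data after this command
    return commands, buffer[pos:]      # leftover stays in the buffer
```

The second return value models why the real patch only resets s->buffer once pos has reached last: anything after the current command may already be the next pipelined command.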
diffstat: src/mail/ngx_mail.h | 2 + src/mail/ngx_mail_handler.c | 12 ++++- src/mail/ngx_mail_parse.c | 30 +++++++++++++- src/mail/ngx_mail_proxy_module.c | 7 ++- src/mail/ngx_mail_smtp_handler.c | 83 ++++++++++++--------------------------- 5 files changed, 71 insertions(+), 63 deletions(-) diffs (272 lines): diff --git a/src/mail/ngx_mail.h b/src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h +++ b/src/mail/ngx_mail.h @@ -234,6 +234,8 @@ typedef struct { ngx_str_t smtp_from; ngx_str_t smtp_to; + ngx_str_t cmd; + ngx_uint_t command; ngx_array_t args; diff --git a/src/mail/ngx_mail_handler.c b/src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c +++ b/src/mail/ngx_mail_handler.c @@ -620,7 +620,9 @@ ngx_mail_read_command(ngx_mail_session_t return NGX_ERROR; } - return NGX_AGAIN; + if (s->buffer->pos == s->buffer->last) { + return NGX_AGAIN; + } } cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); @@ -661,8 +663,12 @@ void ngx_mail_auth(ngx_mail_session_t *s, ngx_connection_t *c) { s->args.nelts = 0; - s->buffer->pos = s->buffer->start; - s->buffer->last = s->buffer->start; + + if (s->buffer->pos == s->buffer->last) { + s->buffer->pos = s->buffer->start; + s->buffer->last = s->buffer->start; + } + s->state = 0; if (c->read->timer_set) { diff --git a/src/mail/ngx_mail_parse.c b/src/mail/ngx_mail_parse.c --- a/src/mail/ngx_mail_parse.c +++ b/src/mail/ngx_mail_parse.c @@ -626,6 +626,8 @@ ngx_mail_smtp_parse_command(ngx_mail_ses ngx_str_t *arg; enum { sw_start = 0, + sw_command, + sw_invalid, sw_spaces_before_argument, sw_argument, sw_almost_done @@ -640,8 +642,14 @@ ngx_mail_smtp_parse_command(ngx_mail_ses /* SMTP command */ case sw_start: + s->cmd_start = p; + state = sw_command; + + /* fall through */ + + case sw_command: if (ch == ' ' || ch == CR || ch == LF) { - c = s->buffer->start; + c = s->cmd_start; if (p - c == 4) { @@ -719,6 +727,9 @@ ngx_mail_smtp_parse_command(ngx_mail_ses goto invalid; } + s->cmd.data = s->cmd_start; + s->cmd.len = p - 
s->cmd_start; + switch (ch) { case ' ': state = sw_spaces_before_argument; @@ -738,6 +749,9 @@ ngx_mail_smtp_parse_command(ngx_mail_ses break; + case sw_invalid: + goto invalid; + case sw_spaces_before_argument: switch (ch) { case ' ': @@ -824,9 +838,21 @@ done: invalid: - s->state = sw_start; + s->state = sw_invalid; s->arg_start = NULL; + /* skip invalid command till LF */ + + for (p = s->buffer->pos; p < s->buffer->last; p++) { + if (*p == LF) { + s->state = sw_start; + p++; + break; + } + } + + s->buffer->pos = p; + return NGX_MAIL_PARSE_INVALID_COMMAND; } diff --git a/src/mail/ngx_mail_proxy_module.c b/src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c +++ b/src/mail/ngx_mail_proxy_module.c @@ -657,7 +657,12 @@ ngx_mail_proxy_smtp_handler(ngx_event_t c->log->action = NULL; ngx_log_error(NGX_LOG_INFO, c->log, 0, "client logged in"); - ngx_mail_proxy_handler(s->connection->write); + if (s->buffer->pos == s->buffer->last) { + ngx_mail_proxy_handler(s->connection->write); + + } else { + ngx_mail_proxy_handler(c->write); + } return; diff --git a/src/mail/ngx_mail_smtp_handler.c b/src/mail/ngx_mail_smtp_handler.c --- a/src/mail/ngx_mail_smtp_handler.c +++ b/src/mail/ngx_mail_smtp_handler.c @@ -486,6 +486,10 @@ ngx_mail_smtp_auth_state(ngx_event_t *re } } + if (s->buffer->pos < s->buffer->last) { + s->blocked = 1; + } + switch (rc) { case NGX_DONE: @@ -505,11 +509,14 @@ ngx_mail_smtp_auth_state(ngx_event_t *re case NGX_OK: s->args.nelts = 0; - s->buffer->pos = s->buffer->start; - s->buffer->last = s->buffer->start; + + if (s->buffer->pos == s->buffer->last) { + s->buffer->pos = s->buffer->start; + s->buffer->last = s->buffer->start; + } if (s->state) { - s->arg_start = s->buffer->start; + s->arg_start = s->buffer->pos; } ngx_mail_send(c->write); @@ -652,9 +659,7 @@ ngx_mail_smtp_auth(ngx_mail_session_t *s static ngx_int_t ngx_mail_smtp_mail(ngx_mail_session_t *s, ngx_connection_t *c) { - u_char ch; - ngx_str_t l; - ngx_uint_t i; + ngx_str_t *arg, 
cmd; ngx_mail_smtp_srv_conf_t *sscf; sscf = ngx_mail_get_module_srv_conf(s, ngx_mail_smtp_module); @@ -672,37 +677,20 @@ ngx_mail_smtp_mail(ngx_mail_session_t *s return NGX_OK; } - l.len = s->buffer->last - s->buffer->start; - l.data = s->buffer->start; + arg = s->args.elts; + arg += s->args.nelts - 1; - for (i = 0; i < l.len; i++) { - ch = l.data[i]; + cmd.len = arg->data + arg->len - s->cmd.data; + cmd.data = s->cmd.data; - if (ch != CR && ch != LF) { - continue; - } + s->smtp_from.len = cmd.len; - l.data[i] = ' '; - } - - while (i) { - if (l.data[i - 1] != ' ') { - break; - } - - i--; - } - - l.len = i; - - s->smtp_from.len = l.len; - - s->smtp_from.data = ngx_pnalloc(c->pool, l.len); + s->smtp_from.data = ngx_pnalloc(c->pool, cmd.len); if (s->smtp_from.data == NULL) { return NGX_ERROR; } - ngx_memcpy(s->smtp_from.data, l.data, l.len); + ngx_memcpy(s->smtp_from.data, cmd.data, cmd.len); ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, "smtp mail from:\"%V\"", &s->smtp_from); @@ -716,46 +704,27 @@ ngx_mail_smtp_mail(ngx_mail_session_t *s static ngx_int_t ngx_mail_smtp_rcpt(ngx_mail_session_t *s, ngx_connection_t *c) { - u_char ch; - ngx_str_t l; - ngx_uint_t i; + ngx_str_t *arg, cmd; if (s->smtp_from.len == 0) { ngx_str_set(&s->out, smtp_bad_sequence); return NGX_OK; } - l.len = s->buffer->last - s->buffer->start; - l.data = s->buffer->start; + arg = s->args.elts; + arg += s->args.nelts - 1; - for (i = 0; i < l.len; i++) { - ch = l.data[i]; + cmd.len = arg->data + arg->len - s->cmd.data; + cmd.data = s->cmd.data; - if (ch != CR && ch != LF) { - continue; - } + s->smtp_to.len = cmd.len; - l.data[i] = ' '; - } - - while (i) { - if (l.data[i - 1] != ' ') { - break; - } - - i--; - } - - l.len = i; - - s->smtp_to.len = l.len; - - s->smtp_to.data = ngx_pnalloc(c->pool, l.len); + s->smtp_to.data = ngx_pnalloc(c->pool, cmd.len); if (s->smtp_to.data == NULL) { return NGX_ERROR; } - ngx_memcpy(s->smtp_to.data, l.data, l.len); + ngx_memcpy(s->smtp_to.data, cmd.data, cmd.len); 
ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, "smtp rcpt to:\"%V\"", &s->smtp_to); From mdounin at mdounin.ru Mon Sep 30 18:10:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:10:59 +0000 Subject: [nginx] Mail: handle smtp multiline replies. Message-ID: details: http://hg.nginx.org/nginx/rev/d3e09aa03a7a branches: changeset: 5399:d3e09aa03a7a user: Maxim Dounin date: Mon Sep 30 22:10:03 2013 +0400 description: Mail: handle smtp multiline replies. See here for details: http://nginx.org/pipermail/nginx/2010-August/021713.html http://nginx.org/pipermail/nginx/2010-August/021784.html http://nginx.org/pipermail/nginx/2010-August/021785.html diffstat: src/mail/ngx_mail_proxy_module.c | 21 ++++++++++++++++++++- 1 files changed, 20 insertions(+), 1 deletions(-) diffs (38 lines): diff --git a/src/mail/ngx_mail_proxy_module.c b/src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c +++ b/src/mail/ngx_mail_proxy_module.c @@ -707,7 +707,7 @@ ngx_mail_proxy_dummy_handler(ngx_event_t static ngx_int_t ngx_mail_proxy_read_response(ngx_mail_session_t *s, ngx_uint_t state) { - u_char *p; + u_char *p, *m; ssize_t n; ngx_buf_t *b; ngx_mail_proxy_conf_t *pcf; @@ -784,6 +784,25 @@ ngx_mail_proxy_read_response(ngx_mail_se break; default: /* NGX_MAIL_SMTP_PROTOCOL */ + + if (p[3] == '-') { + /* multiline reply, check if we got last line */ + + m = b->last - (sizeof(CRLF "200" CRLF) - 1); + + while (m > p) { + if (m[0] == CR && m[1] == LF) { + break; + } + + m--; + } + + if (m <= p || m[5] == '-') { + return NGX_AGAIN; + } + } + switch (state) { case ngx_smtp_start: From mdounin at mdounin.ru Mon Sep 30 18:11:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:11:00 +0000 Subject: [nginx] Mail: fixed overrun of allocated memory (ticket #411). 
Message-ID: details: http://hg.nginx.org/nginx/rev/baa705805138 branches: changeset: 5400:baa705805138 user: Maxim Dounin date: Mon Sep 30 22:10:08 2013 +0400 description: Mail: fixed overrun of allocated memory (ticket #411). Reported by Markus Linnala. diffstat: src/mail/ngx_mail_smtp_module.c | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diffs (11 lines): diff --git a/src/mail/ngx_mail_smtp_module.c b/src/mail/ngx_mail_smtp_module.c --- a/src/mail/ngx_mail_smtp_module.c +++ b/src/mail/ngx_mail_smtp_module.c @@ -277,7 +277,6 @@ ngx_mail_smtp_merge_srv_conf(ngx_conf_t p = ngx_cpymem(p, conf->capability.data, conf->capability.len); p = ngx_cpymem(p, "250 STARTTLS" CRLF, sizeof("250 STARTTLS" CRLF) - 1); - *p++ = CR; *p = LF; p = conf->starttls_capability.data + (last - conf->capability.data) + 3; From mdounin at mdounin.ru Mon Sep 30 18:11:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Sep 2013 18:11:02 +0000 Subject: [nginx] Mail: fixed segfault with ssl/starttls at mail{} level a... Message-ID: details: http://hg.nginx.org/nginx/rev/09fc4598fc8e branches: changeset: 5401:09fc4598fc8e user: Maxim Dounin date: Mon Sep 30 22:10:13 2013 +0400 description: Mail: fixed segfault with ssl/starttls at mail{} level and no cert. A configuration like "mail { starttls on; server {}}" triggered NULL pointer dereference in ngx_mail_ssl_merge_conf() as conf->file was not set. 
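The triggering configuration from the commit message, shown here as a standalone fragment (reproduced from the description above; any real deployment would also need listen/protocol directives):

```
# minimal configuration that used to crash: ssl/starttls enabled at
# mail{} level, with no certificate configured anywhere
mail {
    starttls on;

    server {
    }
}
```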
diffstat: src/mail/ngx_mail_ssl_module.c | 5 +++++ 1 files changed, 5 insertions(+), 0 deletions(-) diffs (15 lines): diff --git a/src/mail/ngx_mail_ssl_module.c b/src/mail/ngx_mail_ssl_module.c --- a/src/mail/ngx_mail_ssl_module.c +++ b/src/mail/ngx_mail_ssl_module.c @@ -235,6 +235,11 @@ ngx_mail_ssl_merge_conf(ngx_conf_t *cf, mode = ""; } + if (conf->file == NULL) { + conf->file = prev->file; + conf->line = prev->line; + } + if (*mode) { if (conf->certificate.len == 0) { From vbart at nginx.com Mon Sep 30 20:39:55 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 30 Sep 2013 20:39:55 +0000 Subject: [nginx] SPDY: fixed connection leak while waiting for request body. Message-ID: details: http://hg.nginx.org/nginx/rev/4d0c70541784 branches: changeset: 5402:4d0c70541784 user: Valentin Bartenev date: Tue Oct 01 00:00:57 2013 +0400 description: SPDY: fixed connection leak while waiting for request body. If an error occurs in a SPDY connection, the c->error flag is set on every fake request connection, and its read or write event handler is called, in order to finalize it. But while waiting for a request body, it was a no-op since the read event handler ngx_http_request_handler() calls r->read_event_handler that had been set to ngx_http_block_reading(). 
diffstat: src/http/ngx_http_spdy.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (21 lines): diff -r 09fc4598fc8e -r 4d0c70541784 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Mon Sep 30 22:10:13 2013 +0400 +++ b/src/http/ngx_http_spdy.c Tue Oct 01 00:00:57 2013 +0400 @@ -1214,6 +1214,7 @@ ngx_http_spdy_state_data(ngx_http_spdy_c } if (rb->post_handler) { + r->read_event_handler = ngx_http_block_reading; rb->post_handler(r); } } @@ -2607,6 +2608,9 @@ ngx_http_spdy_read_request_body(ngx_http r->request_body->post_handler = post_handler; + r->read_event_handler = ngx_http_test_reading; + r->write_event_handler = ngx_http_request_empty_handler; + return NGX_AGAIN; } From vbart at nginx.com Mon Sep 30 20:39:57 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 30 Sep 2013 20:39:57 +0000 Subject: [nginx] SPDY: fixed connection leak while waiting for request he... Message-ID: details: http://hg.nginx.org/nginx/rev/7e062646da6f branches: changeset: 5403:7e062646da6f user: Valentin Bartenev date: Tue Oct 01 00:04:00 2013 +0400 description: SPDY: fixed connection leak while waiting for request headers. If an error occurs in a SPDY connection, the c->error flag is set on every fake request connection, and its read or write event handler is called, in order to finalize it. But while waiting for request headers, it was a no-op since the read event handler had been set to ngx_http_empty_handler(). 
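The failure mode fixed by this changeset and the previous one can be modelled abstractly (a Python toy, not nginx code): connection finalization sets the error flag on every fake request connection and invokes its event handler, so a no-op handler means the stream is never closed and leaks:

```python
class Stream:
    """Toy model of a SPDY fake request connection (not nginx code)."""
    def __init__(self, read_handler):
        self.error = False
        self.closed = False
        self.read_handler = read_handler

def empty_handler(stream):
    # old behavior while waiting for request headers: the error flag
    # is set, but nothing reacts to it -> connection leak
    pass

def close_stream_handler(stream):
    # new behavior: finalize the stream when its handler is called
    stream.closed = True

def finalize_connection(streams):
    # on a fatal SPDY connection error, flag every fake request
    # connection and call its read handler
    for s in streams:
        s.error = True
        s.read_handler(s)
```

With empty_handler the stream keeps its resources forever; with close_stream_handler the same finalization pass closes it.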
diffstat: src/http/ngx_http_spdy.c | 20 +++++++++++++++++++- 1 files changed, 19 insertions(+), 1 deletions(-) diffs (44 lines): diff -r 4d0c70541784 -r 7e062646da6f src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Tue Oct 01 00:00:57 2013 +0400 +++ b/src/http/ngx_http_spdy.c Tue Oct 01 00:04:00 2013 +0400 @@ -145,6 +145,8 @@ static ngx_int_t ngx_http_spdy_construct static void ngx_http_spdy_run_request(ngx_http_request_t *r); static ngx_int_t ngx_http_spdy_init_request_body(ngx_http_request_t *r); +static void ngx_http_spdy_close_stream_handler(ngx_event_t *ev); + static void ngx_http_spdy_handle_connection_handler(ngx_event_t *rev); static void ngx_http_spdy_keepalive_handler(ngx_event_t *rev); static void ngx_http_spdy_finalize_connection(ngx_http_spdy_connection_t *sc, @@ -1825,7 +1827,7 @@ ngx_http_spdy_create_stream(ngx_http_spd rev->data = fc; rev->ready = 1; - rev->handler = ngx_http_empty_handler; + rev->handler = ngx_http_spdy_close_stream_handler; rev->log = log; ngx_memcpy(wev, rev, sizeof(ngx_event_t)); @@ -2615,6 +2617,22 @@ ngx_http_spdy_read_request_body(ngx_http } +static void +ngx_http_spdy_close_stream_handler(ngx_event_t *ev) +{ + ngx_connection_t *fc; + ngx_http_request_t *r; + + fc = ev->data; + r = fc->data; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "spdy close stream handler"); + + ngx_http_spdy_close_stream(r->spdy_stream, 0); +} + + void ngx_http_spdy_close_stream(ngx_http_spdy_stream_t *stream, ngx_int_t rc) { From vbart at nginx.com Mon Sep 30 20:39:58 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 30 Sep 2013 20:39:58 +0000 Subject: [nginx] SPDY: set empty write handler during connection finaliza... Message-ID: details: http://hg.nginx.org/nginx/rev/db85dacfa013 branches: changeset: 5404:db85dacfa013 user: Valentin Bartenev date: Tue Oct 01 00:12:30 2013 +0400 description: SPDY: set empty write handler during connection finalization. 
While ngx_http_spdy_write_handler() should not do any harm with the current code, calling it during finalization of a SPDY connection was not intended. diffstat: src/http/ngx_http_spdy.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r 7e062646da6f -r db85dacfa013 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Tue Oct 01 00:04:00 2013 +0400 +++ b/src/http/ngx_http_spdy.c Tue Oct 01 00:12:30 2013 +0400 @@ -2832,6 +2832,7 @@ ngx_http_spdy_finalize_connection(ngx_ht c->error = 1; c->read->handler = ngx_http_empty_handler; + c->write->handler = ngx_http_empty_handler; sc->last_out = NULL; From vbart at nginx.com Mon Sep 30 20:40:00 2013 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 30 Sep 2013 20:40:00 +0000 Subject: [nginx] SPDY: ignore priority when queuing blocked frames. Message-ID: details: http://hg.nginx.org/nginx/rev/620808518349 branches: changeset: 5405:620808518349 user: Valentin Bartenev date: Tue Oct 01 00:14:37 2013 +0400 description: SPDY: ignore priority when queuing blocked frames. With this change all such frames will be added in front of the output queue, and will be sent first. It prevents HOL blocking when a response with higher priority is blocked by a response with lower priority in the middle of the queue, because the order of their SYN_REPLY frames cannot be changed. Proposed by Yury Kirpichev. diffstat: src/http/ngx_http_spdy.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (15 lines): diff -r db85dacfa013 -r 620808518349 src/http/ngx_http_spdy.h --- a/src/http/ngx_http_spdy.h Tue Oct 01 00:12:30 2013 +0400 +++ b/src/http/ngx_http_spdy.h Tue Oct 01 00:14:37 2013 +0400 @@ -173,9 +173,9 @@ ngx_http_spdy_queue_blocked_frame(ngx_ht { ngx_http_spdy_out_frame_t **out; - for (out = &sc->last_out; *out && !(*out)->blocked; out = &(*out)->next) + for (out = &sc->last_out; *out; out = &(*out)->next) { - if (frame->priority >= (*out)->priority) { + if ((*out)->blocked) { break; } }