From wandenberg at gmail.com Sun Jun 1 02:43:33 2014 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Sat, 31 May 2014 23:43:33 -0300 Subject: Help with shared memory usage In-Reply-To: <20140528184116.GH1849@mdounin.ru> References: <20130729171109.GA2130@mdounin.ru> <20130730100931.GD2130@mdounin.ru> <20131006093708.GY62063@mdounin.ru> <20131220164923.GK95113@mdounin.ru> <20140122165150.GP1835@mdounin.ru> <20140528184116.GH1849@mdounin.ru> Message-ID: Hello Maxim, I executed my tests again and seems that your improved patch version is working fine too. Did you plan to merge it on nginx core soon? Regards On Wed, May 28, 2014 at 3:41 PM, Maxim Dounin wrote: > Hello! > > On Wed, Jan 22, 2014 at 08:51:50PM +0400, Maxim Dounin wrote: > > > Hello! > > > > On Wed, Jan 22, 2014 at 01:39:54AM -0200, Wandenberg Peixoto wrote: > > > > > Hello Maxim, > > > > > > did you have opportunity to take a look on this last patch? > > > > It looks more or less correct, though I don't happy with the > > checks done, and there are various style issues. I'm planning to > > look into it and build a better version as time permits. > > Sorry for long delay, see here for an improved patch: > > http://mailman.nginx.org/pipermail/nginx-devel/2014-May/005406.html > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wandenberg at gmail.com Sun Jun 1 02:46:28 2014 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Sat, 31 May 2014 23:46:28 -0300 Subject: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage) In-Reply-To: <20140528183836.GG1849@mdounin.ru> References: <20140528183836.GG1849@mdounin.ru> Message-ID: Hello Maxim, I executed my tests again and seems that your improved patch version is working fine too. Did you plan to merge it on nginx core soon? -agentzh Did you have opportunity to check if it works for you? Regards On Wed, May 28, 2014 at 3:38 PM, Maxim Dounin wrote: > Hello! > > On Sun, May 11, 2014 at 10:13:52PM -0700, Yichun Zhang (agentzh) wrote: > > > Hello! > > > > On Mon, Jul 29, 2013 at 10:11 AM, Maxim Dounin wrote: > > > Additionally, doing a full merge of all free blocks on a free > > > operation looks too much. It might be something we want to do on > > > allocation failure, but not on a normal path in > > > ngx_slab_free_pages(). And/or something lightweight may be done > > > in ngx_slab_free_pages(), e.g., checking if pages following pages > > > we are freeing are free too, and merging them in this case. > > > > > > > I'd propose an alternative patch taking the second approach, that is, > > merging adjacent free pages (for both the previous and next blocks) in > > ngx_slab_free_pages(). This approach has the following advantages: > > > > 1. It can effectively distribute the merging computations across all > > the page free operations, which can prevent potential frequent and > > long stalls when actually running out of large enough free blocks > > along the "free" list that is already very long for large zones (which > > usually consists of tons of one-page blocks upon allocation > > failures). > > > > 2. 
it can also make multi-page allocations generally faster because > > we're merging pages immediately when we can and thus it's more likely > > to find large enough free blocks along the (relatively short) free > > list for ngx_slab_alloc_pages(). > > > > The only downside is that I have to introduce an extra field > > "prev_slab" (8-byte for x86_64) in ngx_slab_page_t in my patch, which > > makes the slab page metadata a bit larger. > > Below is a patch which does mostly the same without introducing > any additional per-page fields. Please take a look if it works > for you. > > # HG changeset patch > # User Maxim Dounin > # Date 1401302011 -14400 > # Wed May 28 22:33:31 2014 +0400 > # Node ID 7fb45c6042324e6cd92b0fb230c67a9c8c75681c > # Parent 80bd391c90d11de707a05fcd0c9aa2a09c62877f > Core: slab allocator defragmentation. > > Large allocations from a slab pool result in free page blocks being > fragmented, > eventually leading to a situation when no further allocation larger than a > page > size are possible from the pool. While this isn't a problem for nginx > itself, > it is known to be bad for various 3rd party modules. Fix is to merge > adjacent > blocks of free pages in the ngx_slab_free_pages() function. > > Prodded by Wandenberg Peixoto and Yichun Zhang. > > diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c > --- a/src/core/ngx_slab.c > +++ b/src/core/ngx_slab.c > @@ -129,6 +129,8 @@ ngx_slab_init(ngx_slab_pool_t *pool) > pool->pages->slab = pages; > } > > + pool->last = pool->pages + pages; > + > pool->log_nomem = 1; > pool->log_ctx = &pool->zero; > pool->zero = '\0'; > @@ -626,6 +628,8 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po > if (page->slab >= pages) { > > if (page->slab > pages) { > + page[page->slab - 1].prev = (uintptr_t) &page[pages]; > + > page[pages].slab = page->slab - pages; > page[pages].next = page->next; > page[pages].prev = page->prev; > @@ -672,7 +676,8 @@ static void > ngx_slab_free_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page, > ngx_uint_t pages) > { > - ngx_slab_page_t *prev; > + ngx_uint_t type; > + ngx_slab_page_t *prev, *join; > > page->slab = pages--; > > @@ -686,6 +691,53 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo > page->next->prev = page->prev; > } > > + join = page + page->slab; > + > + if (join < pool->last) { > + type = join->prev & NGX_SLAB_PAGE_MASK; > + > + if (type == NGX_SLAB_PAGE && join->next != NULL) { > + pages += join->slab; > + page->slab += join->slab; > + > + prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK); > + prev->next = join->next; > + join->next->prev = join->prev; > + > + join->slab = NGX_SLAB_PAGE_FREE; > + join->next = NULL; > + join->prev = NGX_SLAB_PAGE; > + } > + } > + > + if (page > pool->pages) { > + join = page - 1; > + type = join->prev & NGX_SLAB_PAGE_MASK; > + > + if (type == NGX_SLAB_PAGE && join->slab == NGX_SLAB_PAGE_FREE) { > + join = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK); > + } > + > + if (type == NGX_SLAB_PAGE && join->next != NULL) { > + pages += join->slab; > + join->slab += page->slab; > + > + prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK); > + prev->next = join->next; > + join->next->prev = join->prev; > + > + page->slab = NGX_SLAB_PAGE_FREE; > + page->next = NULL; > + page->prev = NGX_SLAB_PAGE; > + > + page = join; > + } > + } > + > + if (pages) { > + page[pages].prev = (uintptr_t) page; > + } > + > page->prev = (uintptr_t) &pool->free; > page->next = pool->free.next; > > diff --git a/src/core/ngx_slab.h b/src/core/ngx_slab.h > --- 
a/src/core/ngx_slab.h > +++ b/src/core/ngx_slab.h > @@ -29,6 +29,7 @@ typedef struct { > size_t min_shift; > > ngx_slab_page_t *pages; > + ngx_slab_page_t *last; > ngx_slab_page_t free; > > u_char *start; > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Sun Jun 1 04:03:06 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 31 May 2014 21:03:06 -0700 Subject: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage) In-Reply-To: <20140528183836.GG1849@mdounin.ru> References: <20140528183836.GG1849@mdounin.ru> Message-ID: Hi Maxim! On Wed, May 28, 2014 at 11:38 AM, Maxim Dounin wrote: > > Below is a patch which does mostly the same without introducing > any additional per-page fields. Please take a look if it works > for you. > Thank you for looking into this! I've run my local test suite for this issue against an nginx with your patch and it is passing completely. We haven't tried it out in production though :) Best regards, -agentzh From s.martin49 at gmail.com Sun Jun 1 15:17:26 2014 From: s.martin49 at gmail.com (Samuel Martin) Date: Sun, 01 Jun 2014 17:17:26 +0200 Subject: [PATCH 0 of 4 v2] Cross-compilation support improvement Message-ID: Hi all, Here is the 2nd round of this short series improving nginx build-system support for cross-compilation. This series takes into account Maxim's comments, still tries to be as less intrusive as possible, and intends to make easier integration in cross-compilataion frameworks such as Buildroot [1]. These patches include most of the changes needed for porperly supporting the cross-compilation, except the sys_nerr guessing part [2], which cannot merged in nginx because it is too unix-oriented and based on fragile assumptions [1] http://buildroot.net/ [2] https://github.com/tSed/buildroot/blob/nginx-integration/package/nginx/nginx-0005-auto-unix-make-sys_nerr-guessing-cross-friendly.patch Yours, Samuel From s.martin49 at gmail.com Sun Jun 1 15:17:27 2014 From: s.martin49 at gmail.com (Samuel Martin) Date: Sun, 01 Jun 2014 17:17:27 +0200 Subject: [PATCH 1 of 4 v2] auto/type/sizeof: rework autotest to be cross-compilation friendly In-Reply-To: References: Message-ID: # HG changeset patch # User Samuel Martin # Date 1401633266 -7200 # Sun Jun 01 16:34:26 2014 +0200 # Branch sma/cross-compilation # Node ID e52b6d1510a8ce2570d29c56738189485e9c9a1e # Parent b9553b4b8e670a0231afc0484f23953c0d8b5f22 auto/type/sizeof: rework autotest to be cross-compilation friendly Rework the sizeof test to do the checks at compile time instead of at runtime. This way, it does not break when cross-compiling for a different CPU architecture. 
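(Illustration only, not part of the patch: the check boils down to the
classic negative-array-size trick, so the configure-time probe can simply
try each candidate size -- 1 2 4 8 16 in the patch below -- and keep the
first one whose test program compiles. A minimal standalone sketch,
assuming a plain C compiler and an LP64 target for the pointer example:

/* builds only when sizeof(type) matches the probed size; otherwise the
   array gets a negative size and the compiler rejects the program */
#define SAY_IF_SIZEOF(typename, type, size) \
    static char sizeof_##typename##_is_##size[(sizeof(type) == (size)) ? 1 : -1]

SAY_IF_SIZEOF(int_ptr, int *, 8);     /* ok on LP64: sizeof(int *) == 8 */
/* SAY_IF_SIZEOF(int_ptr, int *, 2);     would fail to compile */

int main(void)
{
    return 0;
}

The PASTE() indirection used in the patch itself is only needed because
the type name, type and size arrive via -D defines and have to be
macro-expanded before pasting; in the literal sketch above direct ##
pasting is enough.)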
diff -r b9553b4b8e67 -r e52b6d1510a8 auto/types/sizeof --- a/auto/types/sizeof Tue Apr 29 22:22:38 2014 +0200 +++ b/auto/types/sizeof Sun Jun 01 16:34:26 2014 +0200 @@ -14,7 +14,7 @@ ngx_size= -cat << END > $NGX_AUTOTEST.c +cat << _EOF > $NGX_AUTOTEST.c #include #include @@ -25,28 +25,42 @@ $NGX_INCLUDE_INTTYPES_H $NGX_INCLUDE_AUTO_CONFIG_H -int main() { - printf("%d", (int) sizeof($ngx_type)); +#if !defined( PASTE) +#define PASTE2( x, y) x##y +#define PASTE( x, y) PASTE2( x, y) +#endif /* PASTE */ + +#define SAY_IF_SIZEOF( typename, type, size) \\ + static char PASTE( PASTE( PASTE( sizeof_, typename), _is_), size) \\ + [(sizeof(type) == (size)) ? 1 : -1] + +SAY_IF_SIZEOF(TEST_TYPENAME, TEST_TYPE, TEST_SIZE); + +int main(void) +{ return 0; } -END +_EOF - -ngx_test="$CC $CC_TEST_FLAGS $CC_AUX_FLAGS \ - -o $NGX_AUTOTEST $NGX_AUTOTEST.c $NGX_LD_OPT $ngx_feature_libs" - -eval "$ngx_test >> $NGX_AUTOCONF_ERR 2>&1" - - -if [ -x $NGX_AUTOTEST ]; then - ngx_size=`$NGX_AUTOTEST` - echo " $ngx_size bytes" -fi - +_ngx_typename=`echo "$ngx_type" | sed 's/ /_/g;s/\*/p/'` +ngx_size="-1" +ngx_size=`for i in 1 2 4 8 16 ; do \ + $CC $CC_TEST_FLAGS $CC_AUX_FLAGS \ + -DTEST_TYPENAME="$_ngx_typename" -DTEST_TYPE="$ngx_type" -DTEST_SIZE="$i" \ + $NGX_AUTOTEST.c -o $NGX_AUTOTEST \ + $NGX_LD_OPT $ngx_feature_libs &>/dev/null || continue ;\ + echo $i ; break ; done` rm -rf $NGX_AUTOTEST* +if test -z $ngx_size ; then + ngx_size=-1 +fi + +if [ $ngx_size -gt 0 ]; then + echo " $ngx_size bytes" +fi case $ngx_size in 4) From s.martin49 at gmail.com Sun Jun 1 15:17:28 2014 From: s.martin49 at gmail.com (Samuel Martin) Date: Sun, 01 Jun 2014 17:17:28 +0200 Subject: [PATCH 2 of 4 v2] auto/feature: add mechanism allowing to force feature run test result In-Reply-To: References: Message-ID: <6b8ea834204dd2b2eacb.1401635848@bobook> # HG changeset patch # User Samuel Martin # Date 1401633266 -7200 # Sun Jun 01 16:34:26 2014 +0200 # Branch sma/cross-compilation # Node ID 6b8ea834204dd2b2eacbd3eb3d4e13e61e599525 # Parent e52b6d1510a8ce2570d29c56738189485e9c9a1e auto/feature: add mechanism allowing to force feature run test result Whenever a feature needs to run a test, the ngx_feature_run_force_result variable can be set to the desired test result, and thus skip the test. Therefore, the generated config.h file will honor these presets. This mechanism aims to make easier cross-compilation support. diff -r e52b6d1510a8 -r 6b8ea834204d auto/feature --- a/auto/feature Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/feature Sun Jun 01 16:34:26 2014 +0200 @@ -52,50 +52,88 @@ case "$ngx_feature_run" in yes) - # /bin/sh is used to intercept "Killed" or "Abort trap" messages - if /bin/sh -c $NGX_AUTOTEST >> $NGX_AUTOCONF_ERR 2>&1; then - echo " found" + if test -n "$ngx_feature_run_force_result" ; then + echo " not tested (maybe cross-compiling)" + if test -n "$ngx_feature_name" ; then + if test "$ngx_feature_run_force_result" = "yes" ; then + have=$ngx_have_feature . auto/have + fi + fi ngx_found=yes + else - if test -n "$ngx_feature_name"; then - have=$ngx_have_feature . auto/have + # /bin/sh is used to intercept "Killed" or "Abort trap" messages + if /bin/sh -c $NGX_AUTOTEST >> $NGX_AUTOCONF_ERR 2>&1; then + echo " found" + ngx_found=yes + + if test -n "$ngx_feature_name"; then + have=$ngx_have_feature . 
auto/have + fi + + else + echo " found but is not working" fi - else - echo " found but is not working" fi ;; value) - # /bin/sh is used to intercept "Killed" or "Abort trap" messages - if /bin/sh -c $NGX_AUTOTEST >> $NGX_AUTOCONF_ERR 2>&1; then - echo " found" + if test -n "$ngx_feature_run_force_result" ; then + echo " not tested (maybe cross-compiling)" + cat << END >> $NGX_AUTO_CONFIG_H + +#ifndef $ngx_feature_name +#define $ngx_feature_name $ngx_feature_run_force_result +#endif + +END ngx_found=yes + else - cat << END >> $NGX_AUTO_CONFIG_H + # /bin/sh is used to intercept "Killed" or "Abort trap" messages + if /bin/sh -c $NGX_AUTOTEST >> $NGX_AUTOCONF_ERR 2>&1; then + echo " found" + ngx_found=yes + + cat << END >> $NGX_AUTO_CONFIG_H #ifndef $ngx_feature_name #define $ngx_feature_name `$NGX_AUTOTEST` #endif END - else - echo " found but is not working" + else + echo " found but is not working" + fi + fi ;; bug) - # /bin/sh is used to intercept "Killed" or "Abort trap" messages - if /bin/sh -c $NGX_AUTOTEST >> $NGX_AUTOCONF_ERR 2>&1; then - echo " not found" + if test -n "$ngx_feature_run_force_result" ; then + echo " not tested (maybe cross-compiling)" + if test -n "$ngx_feature_name"; then + if test "$ngx_feature_run_force_result" = "yes" ; then + have=$ngx_have_feature . auto/have + fi + fi + ngx_found=yes + else - else - echo " found" - ngx_found=yes + # /bin/sh is used to intercept "Killed" or "Abort trap" messages + if /bin/sh -c $NGX_AUTOTEST >> $NGX_AUTOCONF_ERR 2>&1; then + echo " not found" - if test -n "$ngx_feature_name"; then - have=$ngx_have_feature . auto/have + else + echo " found" + ngx_found=yes + + if test -n "$ngx_feature_name"; then + have=$ngx_have_feature . auto/have + fi fi + fi ;; From s.martin49 at gmail.com Sun Jun 1 15:17:30 2014 From: s.martin49 at gmail.com (Samuel Martin) Date: Sun, 01 Jun 2014 17:17:30 +0200 Subject: [PATCH 4 of 4 v2] auto/lib/libxslt/conf: allow to override ngx_feature_path and ngx_feature_libs In-Reply-To: References: Message-ID: <772e3e58534c255dcc32.1401635850@bobook> # HG changeset patch # User Samuel Martin # Date 1401633266 -7200 # Sun Jun 01 16:34:26 2014 +0200 # Branch sma/cross-compilation # Node ID 772e3e58534c255dcc32276d9aa39232f0488b5c # Parent c307e2addb184e7f5bcf236472e8fe21097459d3 auto/lib/libxslt/conf: allow to override ngx_feature_path and ngx_feature_libs Because libxml2 headers are not in /usr/include by default, hardcoding the include directory to /usr/include/libxml2 does not play well when cross-compiling, or if libxml2 has been installed somewhere else. This patch allows to define/override the libxslt include directory, and the libxslt libs flags. Being able to override the include location is especially useful when cross-compiling to prevent gcc from complaining about unsafe include location for cross-compilation (-Wpoision-system-directories). So far, this warning is only triggered by libxslt. 
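For example, a cross-build might end up invoking configure along these
lines (the sysroot path, toolchain name and forced values below are made
up purely for illustration; this also relies on the ngx_force_* presets
introduced by the previous patch in the series, which are read from the
environment at configure time):

# hypothetical cross-compilation invocation; adjust paths and values
# to match the real toolchain and target libc
ngx_feature_path_libxslt="/opt/sysroot/usr/include/libxml2" \
ngx_feature_libs_libxslt="-lxml2 -lxslt" \
ngx_force_have_epoll=yes \
ngx_force_sys_nerr=135 \
./configure --with-cc=arm-linux-gnueabihf-gcc --with-http_xslt_module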
diff -r c307e2addb18 -r 772e3e58534c auto/lib/libxslt/conf --- a/auto/lib/libxslt/conf Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/lib/libxslt/conf Sun Jun 01 16:34:26 2014 +0200 @@ -12,8 +12,8 @@ #include #include #include " - ngx_feature_path="/usr/include/libxml2" - ngx_feature_libs="-lxml2 -lxslt" + ngx_feature_path="${ngx_feature_path_libxslt:=/usr/include/libxml2}" + ngx_feature_libs="${ngx_feature_libs_libxslt:='-lxml2 -lxslt'}" ngx_feature_test="xmlParserCtxtPtr ctxt = NULL; xsltStylesheetPtr sheet = NULL; xmlDocPtr doc; From s.martin49 at gmail.com Sun Jun 1 15:17:29 2014 From: s.martin49 at gmail.com (Samuel Martin) Date: Sun, 01 Jun 2014 17:17:29 +0200 Subject: [PATCH 3 of 4 v2] auto/*: set ngx_feature_run_force_result for each feature requiring run test In-Reply-To: References: Message-ID: # HG changeset patch # User Samuel Martin # Date 1401633266 -7200 # Sun Jun 01 16:34:26 2014 +0200 # Branch sma/cross-compilation # Node ID c307e2addb184e7f5bcf236472e8fe21097459d3 # Parent 6b8ea834204dd2b2eacbd3eb3d4e13e61e599525 auto/*: set ngx_feature_run_force_result for each feature requiring run test Each feature requiring a run test has a matching preset variable (called ngx_force_*) used to set ngx_feature_run_force_result. These ngx_force_* variables are passed through the environment at configure time. diff -r 6b8ea834204d -r c307e2addb18 auto/cc/conf --- a/auto/cc/conf Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/cc/conf Sun Jun 01 16:34:26 2014 +0200 @@ -159,6 +159,7 @@ ngx_feature="gcc builtin atomic operations" ngx_feature_name=NGX_HAVE_GCC_ATOMIC ngx_feature_run=yes + ngx_feature_run_force_result="$ngx_force_gcc_have_atomic" ngx_feature_incs= ngx_feature_path= ngx_feature_libs= @@ -179,6 +180,7 @@ ngx_feature="C99 variadic macros" ngx_feature_name="NGX_HAVE_C99_VARIADIC_MACROS" ngx_feature_run=yes + ngx_feature_run_force_result="$ngx_force_c99_have_variadic_macros" ngx_feature_incs="#include #define var(dummy, ...) sprintf(__VA_ARGS__)" ngx_feature_path= @@ -193,6 +195,7 @@ ngx_feature="gcc variadic macros" ngx_feature_name="NGX_HAVE_GCC_VARIADIC_MACROS" ngx_feature_run=yes + ngx_feature_run_force_result="$ngx_force_gcc_have_variadic_macros" ngx_feature_incs="#include #define var(dummy, args...) 
sprintf(args)" ngx_feature_path= diff -r 6b8ea834204d -r c307e2addb18 auto/cc/name --- a/auto/cc/name Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/cc/name Sun Jun 01 16:34:26 2014 +0200 @@ -8,6 +8,7 @@ ngx_feature="C compiler" ngx_feature_name= ngx_feature_run=yes + ngx_feature_run_force_result="$ngx_force_c_compiler" ngx_feature_incs= ngx_feature_path= ngx_feature_libs= diff -r 6b8ea834204d -r c307e2addb18 auto/lib/libatomic/conf --- a/auto/lib/libatomic/conf Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/lib/libatomic/conf Sun Jun 01 16:34:26 2014 +0200 @@ -15,6 +15,7 @@ ngx_feature="atomic_ops library" ngx_feature_name=NGX_HAVE_LIBATOMIC ngx_feature_run=yes + ngx_feature_run_force_result="$ngx_force_have_libatomic" ngx_feature_incs="#define AO_REQUIRE_CAS #include " ngx_feature_path= diff -r 6b8ea834204d -r c307e2addb18 auto/os/darwin --- a/auto/os/darwin Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/os/darwin Sun Jun 01 16:34:26 2014 +0200 @@ -27,6 +27,7 @@ ngx_feature="kqueue's EVFILT_TIMER" ngx_feature_name="NGX_HAVE_TIMER_EVENT" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_timer_event" ngx_feature_incs="#include #include " ngx_feature_path= @@ -57,6 +58,7 @@ ngx_feature="Darwin 64-bit kqueue millisecond timeout bug" ngx_feature_name=NGX_DARWIN_KEVENT_BUG ngx_feature_run=bug +ngx_feature_run_force_result="$ngx_force_kevent_bug" ngx_feature_incs="#include #include " ngx_feature_path= @@ -87,6 +89,7 @@ ngx_feature="sendfile()" ngx_feature_name="NGX_HAVE_SENDFILE" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_sendfile" ngx_feature_incs="#include #include #include diff -r 6b8ea834204d -r c307e2addb18 auto/os/linux --- a/auto/os/linux Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/os/linux Sun Jun 01 16:34:26 2014 +0200 @@ -49,6 +49,7 @@ ngx_feature="epoll" ngx_feature_name="NGX_HAVE_EPOLL" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_epoll" ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs= @@ -106,6 +107,7 @@ ngx_feature="sendfile()" ngx_feature_name="NGX_HAVE_SENDFILE" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_sendfile" ngx_feature_incs="#include #include " ngx_feature_path= @@ -127,6 +129,7 @@ ngx_feature="sendfile64()" ngx_feature_name="NGX_HAVE_SENDFILE64" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_sendfile64" ngx_feature_incs="#include #include " ngx_feature_path= @@ -145,6 +148,7 @@ ngx_feature="prctl(PR_SET_DUMPABLE)" ngx_feature_name="NGX_HAVE_PR_SET_DUMPABLE" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_pr_set_dumpable" ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs= diff -r 6b8ea834204d -r c307e2addb18 auto/unix --- a/auto/unix Sun Jun 01 16:34:26 2014 +0200 +++ b/auto/unix Sun Jun 01 16:34:26 2014 +0200 @@ -99,6 +99,7 @@ ngx_feature="kqueue's EVFILT_TIMER" ngx_feature_name="NGX_HAVE_TIMER_EVENT" ngx_feature_run=yes + ngx_feature_run_force_result="$ngx_force_have_timer_event" ngx_feature_incs="#include #include " ngx_feature_path= @@ -544,6 +545,7 @@ ngx_feature="sys_nerr" ngx_feature_name="NGX_SYS_NERR" ngx_feature_run=value +ngx_feature_run_force_result="$ngx_force_sys_nerr" ngx_feature_incs='#include #include ' ngx_feature_path= @@ -558,6 +560,7 @@ ngx_feature="_sys_nerr" ngx_feature_name="NGX_SYS_NERR" ngx_feature_run=value + ngx_feature_run_force_result="$ngx_force_sys_nerr" ngx_feature_incs='#include #include ' ngx_feature_path= @@ -573,6 +576,7 @@ ngx_feature='maximum errno' ngx_feature_name=NGX_SYS_NERR 
ngx_feature_run=value + ngx_feature_run_force_result="$ngx_force_sys_nerr" ngx_feature_incs='#include #include #include ' @@ -631,6 +635,7 @@ ngx_feature="mmap(MAP_ANON|MAP_SHARED)" ngx_feature_name="NGX_HAVE_MAP_ANON" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_map_anon" ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs= @@ -644,6 +649,7 @@ ngx_feature='mmap("/dev/zero", MAP_SHARED)' ngx_feature_name="NGX_HAVE_MAP_DEVZERO" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_map_devzero" ngx_feature_incs="#include #include #include " @@ -659,6 +665,7 @@ ngx_feature="System V shared memory" ngx_feature_name="NGX_HAVE_SYSVSHM" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_sysvshm" ngx_feature_incs="#include #include " ngx_feature_path= @@ -673,6 +680,7 @@ ngx_feature="POSIX semaphores" ngx_feature_name="NGX_HAVE_POSIX_SEM" ngx_feature_run=yes +ngx_feature_run_force_result="$ngx_force_have_posix_sem" ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs= From arut at nginx.com Mon Jun 2 12:21:56 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 02 Jun 2014 12:21:56 +0000 Subject: [nginx] Upstream: generic hash module. Message-ID: details: http://hg.nginx.org/nginx/rev/efc84a5723b3 branches: changeset: 5717:efc84a5723b3 user: Roman Arutyunyan date: Mon Jun 02 16:16:22 2014 +0400 description: Upstream: generic hash module. diffstat: auto/modules | 5 + auto/options | 4 + auto/sources | 4 + src/http/modules/ngx_http_upstream_hash_module.c | 631 +++++++++++++++++++++++ src/http/ngx_http_upstream.c | 1 + src/http/ngx_http_upstream.h | 1 + src/http/ngx_http_upstream_round_robin.c | 2 + src/http/ngx_http_upstream_round_robin.h | 1 + 8 files changed, 649 insertions(+), 0 deletions(-) diffs (truncated from 744 to 300 lines): diff -r 34d460c5d186 -r efc84a5723b3 auto/modules --- a/auto/modules Thu May 29 21:15:19 2014 +0400 +++ b/auto/modules Mon Jun 02 16:16:22 2014 +0400 @@ -371,6 +371,11 @@ if [ $HTTP_MP4 = YES ]; then HTTP_SRCS="$HTTP_SRCS $HTTP_MP4_SRCS" fi +if [ $HTTP_UPSTREAM_HASH = YES ]; then + HTTP_MODULES="$HTTP_MODULES $HTTP_UPSTREAM_HASH_MODULE" + HTTP_SRCS="$HTTP_SRCS $HTTP_UPSTREAM_HASH_SRCS" +fi + if [ $HTTP_UPSTREAM_IP_HASH = YES ]; then HTTP_MODULES="$HTTP_MODULES $HTTP_UPSTREAM_IP_HASH_MODULE" HTTP_SRCS="$HTTP_SRCS $HTTP_UPSTREAM_IP_HASH_SRCS" diff -r 34d460c5d186 -r efc84a5723b3 auto/options --- a/auto/options Thu May 29 21:15:19 2014 +0400 +++ b/auto/options Mon Jun 02 16:16:22 2014 +0400 @@ -99,6 +99,7 @@ HTTP_FLV=NO HTTP_MP4=NO HTTP_GUNZIP=NO HTTP_GZIP_STATIC=NO +HTTP_UPSTREAM_HASH=YES HTTP_UPSTREAM_IP_HASH=YES HTTP_UPSTREAM_LEAST_CONN=YES HTTP_UPSTREAM_KEEPALIVE=YES @@ -251,6 +252,7 @@ use the \"--without-http_limit_conn_modu --without-http_limit_req_module) HTTP_LIMIT_REQ=NO ;; --without-http_empty_gif_module) HTTP_EMPTY_GIF=NO ;; --without-http_browser_module) HTTP_BROWSER=NO ;; + --without-http_upstream_hash_module) HTTP_UPSTREAM_HASH=NO ;; --without-http_upstream_ip_hash_module) HTTP_UPSTREAM_IP_HASH=NO ;; --without-http_upstream_least_conn_module) HTTP_UPSTREAM_LEAST_CONN=NO ;; @@ -395,6 +397,8 @@ cat << END --without-http_limit_req_module disable ngx_http_limit_req_module --without-http_empty_gif_module disable ngx_http_empty_gif_module --without-http_browser_module disable ngx_http_browser_module + --without-http_upstream_hash_module + disable ngx_http_upstream_hash_module --without-http_upstream_ip_hash_module disable ngx_http_upstream_ip_hash_module 
--without-http_upstream_least_conn_module diff -r 34d460c5d186 -r efc84a5723b3 auto/sources --- a/auto/sources Thu May 29 21:15:19 2014 +0400 +++ b/auto/sources Mon Jun 02 16:16:22 2014 +0400 @@ -497,6 +497,10 @@ HTTP_GZIP_STATIC_MODULE=ngx_http_gzip_st HTTP_GZIP_STATIC_SRCS=src/http/modules/ngx_http_gzip_static_module.c +HTTP_UPSTREAM_HASH_MODULE=ngx_http_upstream_hash_module +HTTP_UPSTREAM_HASH_SRCS=src/http/modules/ngx_http_upstream_hash_module.c + + HTTP_UPSTREAM_IP_HASH_MODULE=ngx_http_upstream_ip_hash_module HTTP_UPSTREAM_IP_HASH_SRCS=src/http/modules/ngx_http_upstream_ip_hash_module.c diff -r 34d460c5d186 -r efc84a5723b3 src/http/modules/ngx_http_upstream_hash_module.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/http/modules/ngx_http_upstream_hash_module.c Mon Jun 02 16:16:22 2014 +0400 @@ -0,0 +1,631 @@ + +/* + * Copyright (C) Roman Arutyunyan + * Copyright (C) Nginx, Inc. + */ + + +#include +#include +#include + + +typedef struct { + uint32_t hash; + ngx_str_t *server; +} ngx_http_upstream_chash_point_t; + + +typedef struct { + ngx_uint_t number; + ngx_http_upstream_chash_point_t point[1]; +} ngx_http_upstream_chash_points_t; + + +typedef struct { + ngx_http_complex_value_t key; + ngx_http_upstream_chash_points_t *points; +} ngx_http_upstream_hash_srv_conf_t; + + +typedef struct { + /* the round robin data must be first */ + ngx_http_upstream_rr_peer_data_t rrp; + ngx_http_upstream_hash_srv_conf_t *conf; + ngx_str_t key; + ngx_uint_t tries; + ngx_uint_t rehash; + uint32_t hash; + ngx_event_get_peer_pt get_rr_peer; +} ngx_http_upstream_hash_peer_data_t; + + +static ngx_int_t ngx_http_upstream_init_hash(ngx_conf_t *cf, + ngx_http_upstream_srv_conf_t *us); +static ngx_int_t ngx_http_upstream_init_hash_peer(ngx_http_request_t *r, + ngx_http_upstream_srv_conf_t *us); +static ngx_int_t ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, + void *data); + +static ngx_int_t ngx_http_upstream_init_chash(ngx_conf_t *cf, + ngx_http_upstream_srv_conf_t *us); +static void ngx_http_upstream_add_chash_point( + ngx_http_upstream_chash_points_t *points, uint32_t hash, ngx_str_t *server); +static ngx_uint_t ngx_http_upstream_find_chash_point( + ngx_http_upstream_chash_points_t *points, uint32_t hash); +static ngx_int_t ngx_http_upstream_init_chash_peer(ngx_http_request_t *r, + ngx_http_upstream_srv_conf_t *us); +static ngx_int_t ngx_http_upstream_get_chash_peer(ngx_peer_connection_t *pc, + void *data); + +static void *ngx_http_upstream_hash_create_conf(ngx_conf_t *cf); +static char *ngx_http_upstream_hash(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); + + +static ngx_command_t ngx_http_upstream_hash_commands[] = { + + { ngx_string("hash"), + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE12, + ngx_http_upstream_hash, + NGX_HTTP_SRV_CONF_OFFSET, + 0, + NULL }, + + ngx_null_command +}; + + +static ngx_http_module_t ngx_http_upstream_hash_module_ctx = { + NULL, /* preconfiguration */ + NULL, /* postconfiguration */ + + NULL, /* create main configuration */ + NULL, /* init main configuration */ + + ngx_http_upstream_hash_create_conf, /* create server configuration */ + NULL, /* merge server configuration */ + + NULL, /* create location configuration */ + NULL /* merge location configuration */ +}; + + +ngx_module_t ngx_http_upstream_hash_module = { + NGX_MODULE_V1, + &ngx_http_upstream_hash_module_ctx, /* module context */ + ngx_http_upstream_hash_commands, /* module directives */ + NGX_HTTP_MODULE, /* module type */ + NULL, /* init master */ + NULL, /* init module */ + NULL, /* init process */ + 
NULL, /* init thread */ + NULL, /* exit thread */ + NULL, /* exit process */ + NULL, /* exit master */ + NGX_MODULE_V1_PADDING +}; + + +static ngx_int_t +ngx_http_upstream_init_hash(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us) +{ + if (ngx_http_upstream_init_round_robin(cf, us) != NGX_OK) { + return NGX_ERROR; + } + + us->peer.init = ngx_http_upstream_init_hash_peer; + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_upstream_init_hash_peer(ngx_http_request_t *r, + ngx_http_upstream_srv_conf_t *us) +{ + ngx_http_upstream_hash_srv_conf_t *hcf; + ngx_http_upstream_hash_peer_data_t *hp; + + hp = ngx_palloc(r->pool, sizeof(ngx_http_upstream_hash_peer_data_t)); + if (hp == NULL) { + return NGX_ERROR; + } + + r->upstream->peer.data = &hp->rrp; + + if (ngx_http_upstream_init_round_robin_peer(r, us) != NGX_OK) { + return NGX_ERROR; + } + + r->upstream->peer.get = ngx_http_upstream_get_hash_peer; + + hcf = ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_hash_module); + + if (ngx_http_complex_value(r, &hcf->key, &hp->key) != NGX_OK) { + return NGX_ERROR; + } + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "upstream hash key:\"%V\"", &hp->key); + + hp->conf = hcf; + hp->tries = 0; + hp->rehash = 0; + hp->hash = 0; + hp->get_rr_peer = ngx_http_upstream_get_round_robin_peer; + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_upstream_get_hash_peer(ngx_peer_connection_t *pc, void *data) +{ + ngx_http_upstream_hash_peer_data_t *hp = data; + + time_t now; + u_char buf[NGX_INT_T_LEN]; + size_t size; + uint32_t hash; + ngx_int_t w; + uintptr_t m; + ngx_uint_t i, n, p; + ngx_http_upstream_rr_peer_t *peer; + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "get hash peer, try: %ui", pc->tries); + + if (hp->tries > 20 || hp->rrp.peers->single) { + return hp->get_rr_peer(pc, &hp->rrp); + } + + now = ngx_time(); + + pc->cached = 0; + pc->connection = NULL; + + for ( ;; ) { + + /* + * Hash expression is compatible with Cache::Memcached: + * ((crc32([REHASH] KEY) >> 16) & 0x7fff) + PREV_HASH + * with REHASH omitted at the first iteration. + */ + + ngx_crc32_init(hash); + + if (hp->rehash > 0) { + size = ngx_sprintf(buf, "%ui", hp->rehash) - buf; + ngx_crc32_update(&hash, buf, size); + } + + ngx_crc32_update(&hash, hp->key.data, hp->key.len); + ngx_crc32_final(hash); + + hash = (hash >> 16) & 0x7fff; + + hp->hash += hash; + hp->rehash++; + + if (!hp->rrp.peers->weighted) { + p = hp->hash % hp->rrp.peers->number; + + } else { + w = hp->hash % hp->rrp.peers->total_weight; + + for (i = 0; i < hp->rrp.peers->number; i++) { + w -= hp->rrp.peers->peer[i].weight; + if (w < 0) { + break; + } + } + + p = i; + } + + n = p / (8 * sizeof(uintptr_t)); + m = (uintptr_t) 1 << p % (8 * sizeof(uintptr_t)); + + if (hp->rrp.tried[n] & m) { + goto next; + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, pc->log, 0, + "get hash peer, value:%uD, peer:%ui", hp->hash, p); + + peer = &hp->rrp.peers->peer[p]; + + if (peer->down) { From mdounin at mdounin.ru Mon Jun 2 16:26:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Jun 2014 20:26:57 +0400 Subject: best approach for content phase handler to run again with event timer & ngx_done? In-Reply-To: References: Message-ID: <20140602162657.GJ1849@mdounin.ru> Hello! On Fri, May 30, 2014 at 02:27:36PM -0700, bfranks781 at gmail.com wrote: > If a content phase handler needs to wait on some potentially delayed > result, my understanding is that it should return NGX_DONE so that it is > called again. 
> > I've been reading through the eval, echo, and http_limit_req modules to see > how to integrate an nginx_add_timer event prior to returning NGX_DONE. A > short timer event seems reasonable, because the content phase handler isn't > waiting on some other event type (subrequest result, timeout, etc). The > timer event seems fairly straight-forward -- configure the event in a > request context and set the event handler, data and log. > > I don't really want my timer event handler to do anything -- rather I just > want the same content phase handler that had previously returned NGX_DONE > to run again. In that case, should my timer event handler actually do > anything at all? Is there a best practice for this -- i.e. have it point > to the write_event_handler(), call ngx_http_core_run_phases() or > ngx_http_run_posted_requests(), etc? A content phase handler will not be called again (or at least it's not supposed to). If a content phase handler returns NGX_DONE, it means that it's responsible for further request handling, in particular: - you've already done proper request reference counting tweaks (normally, by just calling ngx_http_read_client_request_body(), which will do r->count++); - you are responsible for sending a response and then finalizing the request with ngx_http_finalize_request(). Modules based on the ngx_http_upstream.c (most simple one is memcached) are examples of content handlers which return NGX_DONE. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jun 2 16:42:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Jun 2014 20:42:50 +0400 Subject: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage) In-Reply-To: References: <20140528183836.GG1849@mdounin.ru> Message-ID: <20140602164250.GL1849@mdounin.ru> Hello! On Sat, May 31, 2014 at 11:46:28PM -0300, Wandenberg Peixoto wrote: > Hello Maxim, > > I executed my tests again and seems that your improved patch version is > working fine too. Good, thanks for testing. > Did you plan to merge it on nginx core soon? It's currently waiting for Igor's review, and will be committed once it's passed. -- Maxim Dounin http://nginx.org/ From bsfranks at gmail.com Mon Jun 2 18:05:14 2014 From: bsfranks at gmail.com (bsfranks at gmail.com) Date: Mon, 2 Jun 2014 11:05:14 -0700 Subject: best approach for content phase handler to run again with event timer & ngx_done? In-Reply-To: <20140602162657.GJ1849@mdounin.ru> References: <20140602162657.GJ1849@mdounin.ru> Message-ID: Maxim - thank you for your helpful response. I will review the memcached module. In the meantime, I had tried a few things and ended up with the following approach (as a rough example): static ngx_int_t content_phase_handler(ngx_http_request_t *r) { ... /* some function that returns AGAIN or OK to either wait or proceed */ rc = function(); if (rc == NGX_AGAIN) { r->main->count++; ctx->ev.handler = event_handler; ctx->ev.data = r; ctx->ev.log = r->connection->log; ngx_add_timer(&ctx->ev, 100); return NGX_DONE; } ...normal work of content phase handler... } static void event_handler(ngx_event_t *ev) { ngx_http_request *r; r = ev->data; r->write_event_handler = ngx_http_core_run_phases; ngx_http_core_run_phases(r); return; } I had noticed from the DEBUG log output sequence that nginx_finalize_request() was getting called after NGX_DONE was returned from the content phase handler. 
From the source, nginx_finalize_request() called nginx_http_finalize_connection() if (r->main->count != 1), which then called nginx_http_close_request() which decremented the count and returned without freeing the req or closing the connection. So, it seems like some of what you've recommended (incrementing req count, and having finalize_request called) is then being done. This seemed to work correctly and each time the event handler triggered, the content phase handler was called again. However, is calling core_run_phases() a poor/dangerous approach or not recommended? Also, is there a recommended lower bound for the millisecond timer? For example, don't make it smaller than X ms, otherwise the event cycle gets run too frequently? Thanks, Ben On Mon, Jun 2, 2014 at 9:26 AM, Maxim Dounin wrote: > Hello! > > On Fri, May 30, 2014 at 02:27:36PM -0700, bfranks781 at gmail.com wrote: > > > If a content phase handler needs to wait on some potentially delayed > > result, my understanding is that it should return NGX_DONE so that it is > > called again. > > > > I've been reading through the eval, echo, and http_limit_req modules to > see > > how to integrate an nginx_add_timer event prior to returning NGX_DONE. A > > short timer event seems reasonable, because the content phase handler > isn't > > waiting on some other event type (subrequest result, timeout, etc). The > > timer event seems fairly straight-forward -- configure the event in a > > request context and set the event handler, data and log. > > > > I don't really want my timer event handler to do anything -- rather I > just > > want the same content phase handler that had previously returned NGX_DONE > > to run again. In that case, should my timer event handler actually do > > anything at all? Is there a best practice for this -- i.e. have it point > > to the write_event_handler(), call ngx_http_core_run_phases() or > > ngx_http_run_posted_requests(), etc? > > A content phase handler will not be called again (or at least it's > not supposed to). If a content phase handler returns NGX_DONE, it > means that it's responsible for further request handling, in > particular: > > - you've already done proper request reference counting tweaks > (normally, by just calling ngx_http_read_client_request_body(), > which will do r->count++); > > - you are responsible for sending a response and then finalizing > the request with ngx_http_finalize_request(). > > Modules based on the ngx_http_upstream.c (most simple one is > memcached) are examples of content handlers which return NGX_DONE. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jun 3 11:57:10 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Jun 2014 15:57:10 +0400 Subject: best approach for content phase handler to run again with event timer & ngx_done? In-Reply-To: References: <20140602162657.GJ1849@mdounin.ru> Message-ID: <20140603115710.GP1849@mdounin.ru> Hello! On Mon, Jun 02, 2014 at 11:05:14AM -0700, bsfranks at gmail.com wrote: [...] > So, it seems like some of what you've recommended (incrementing req count, > and having finalize_request called) is then being done. > > This seemed to work correctly and each time the event handler triggered, > the content phase handler was called again. 
However, is calling > core_run_phases() a poor/dangerous approach or not recommended? While the code above will likely work (at least with the current nginx code), I don't think it's a good approach. I would rather recommend using a separate function to do a "delayed" work, much like it's done when reading a response body. > Also, is there a recommended lower bound for the millisecond timer? For > example, don't make it smaller than X ms, otherwise the event cycle gets > run too frequently? It's better to avoid timers altogether, if possible, and use events instead. If not possible, something like 100ms should be good enough. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jun 3 13:55:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 03 Jun 2014 13:55:53 +0000 Subject: [nginx] Core: slab allocator free pages defragmentation. Message-ID: details: http://hg.nginx.org/nginx/rev/c46657e391a3 branches: changeset: 5718:c46657e391a3 user: Maxim Dounin date: Tue Jun 03 17:53:03 2014 +0400 description: Core: slab allocator free pages defragmentation. Large allocations from a slab pool result in free page blocks being fragmented, eventually leading to a situation when no further allocation larger than a page size are possible from the pool. While this isn't a problem for nginx itself, it is known to be bad for various 3rd party modules. Fix is to merge adjacent blocks of free pages in the ngx_slab_free_pages() function. Prodded by Wandenberg Peixoto and Yichun Zhang. diffstat: src/core/ngx_slab.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++++++++- src/core/ngx_slab.h | 1 + 2 files changed, 60 insertions(+), 1 deletions(-) diffs (102 lines): diff --git a/src/core/ngx_slab.c b/src/core/ngx_slab.c --- a/src/core/ngx_slab.c +++ b/src/core/ngx_slab.c @@ -129,6 +129,8 @@ ngx_slab_init(ngx_slab_pool_t *pool) pool->pages->slab = pages; } + pool->last = pool->pages + pages; + pool->log_nomem = 1; pool->log_ctx = &pool->zero; pool->zero = '\0'; @@ -626,6 +628,8 @@ ngx_slab_alloc_pages(ngx_slab_pool_t *po if (page->slab >= pages) { if (page->slab > pages) { + page[page->slab - 1].prev = (uintptr_t) &page[pages]; + page[pages].slab = page->slab - pages; page[pages].next = page->next; page[pages].prev = page->prev; @@ -672,7 +676,8 @@ static void ngx_slab_free_pages(ngx_slab_pool_t *pool, ngx_slab_page_t *page, ngx_uint_t pages) { - ngx_slab_page_t *prev; + ngx_uint_t type; + ngx_slab_page_t *prev, *join; page->slab = pages--; @@ -686,6 +691,59 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo page->next->prev = page->prev; } + join = page + page->slab; + + if (join < pool->last) { + type = join->prev & NGX_SLAB_PAGE_MASK; + + if (type == NGX_SLAB_PAGE) { + + if (join->next != NULL) { + pages += join->slab; + page->slab += join->slab; + + prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK); + prev->next = join->next; + join->next->prev = join->prev; + + join->slab = NGX_SLAB_PAGE_FREE; + join->next = NULL; + join->prev = NGX_SLAB_PAGE; + } + } + } + + if (page > pool->pages) { + join = page - 1; + type = join->prev & NGX_SLAB_PAGE_MASK; + + if (type == NGX_SLAB_PAGE) { + + if (join->slab == NGX_SLAB_PAGE_FREE) { + join = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK); + } + + if (join->next != NULL) { + pages += join->slab; + join->slab += page->slab; + + prev = (ngx_slab_page_t *) (join->prev & ~NGX_SLAB_PAGE_MASK); + prev->next = join->next; + join->next->prev = join->prev; + + page->slab = NGX_SLAB_PAGE_FREE; + page->next = NULL; + page->prev = NGX_SLAB_PAGE; + 
+ page = join; + } + } + } + + if (pages) { + page[pages].prev = (uintptr_t) page; + } + page->prev = (uintptr_t) &pool->free; page->next = pool->free.next; diff --git a/src/core/ngx_slab.h b/src/core/ngx_slab.h --- a/src/core/ngx_slab.h +++ b/src/core/ngx_slab.h @@ -29,6 +29,7 @@ typedef struct { size_t min_shift; ngx_slab_page_t *pages; + ngx_slab_page_t *last; ngx_slab_page_t free; u_char *start; From mdounin at mdounin.ru Tue Jun 3 14:00:19 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Jun 2014 18:00:19 +0400 Subject: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage) In-Reply-To: <20140602164250.GL1849@mdounin.ru> References: <20140528183836.GG1849@mdounin.ru> <20140602164250.GL1849@mdounin.ru> Message-ID: <20140603140019.GR1849@mdounin.ru> Hello! On Mon, Jun 02, 2014 at 08:42:50PM +0400, Maxim Dounin wrote: > Hello! > > On Sat, May 31, 2014 at 11:46:28PM -0300, Wandenberg Peixoto wrote: > > > Hello Maxim, > > > > I executed my tests again and seems that your improved patch version is > > working fine too. > > Good, thanks for testing. > > > Did you plan to merge it on nginx core soon? > > It's currently waiting for Igor's review, and will be committed > once it's passed. Committed with minor changes after Igor's review: http://hg.nginx.org/nginx/rev/c46657e391a3 Thanks for prodding this. -- Maxim Dounin http://nginx.org/ From wandenberg at gmail.com Tue Jun 3 15:16:36 2014 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Tue, 3 Jun 2014 12:16:36 -0300 Subject: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage) In-Reply-To: <20140603140019.GR1849@mdounin.ru> References: <20140528183836.GG1849@mdounin.ru> <20140602164250.GL1849@mdounin.ru> <20140603140019.GR1849@mdounin.ru> Message-ID: Thanks agentzh, Maxim and Igor On Tue, Jun 3, 2014 at 11:00 AM, Maxim Dounin wrote: > Hello! > > On Mon, Jun 02, 2014 at 08:42:50PM +0400, Maxim Dounin wrote: > > > Hello! > > > > On Sat, May 31, 2014 at 11:46:28PM -0300, Wandenberg Peixoto wrote: > > > > > Hello Maxim, > > > > > > I executed my tests again and seems that your improved patch version is > > > working fine too. > > > > Good, thanks for testing. > > > > > Did you plan to merge it on nginx core soon? > > > > It's currently waiting for Igor's review, and will be committed > > once it's passed. > > Committed with minor changes after Igor's review: > > http://hg.nginx.org/nginx/rev/c46657e391a3 > > Thanks for prodding this. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Tue Jun 3 17:54:49 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 03 Jun 2014 10:54:49 -0700 Subject: [PATCH] Access log: fix default value, broken by cb308813b453 Message-ID: <7f425d67f91ae3966b4f.1401818089@piotrs-macbook-pro.local> # HG changeset patch # User Piotr Sikora # Date 1401818028 25200 # Tue Jun 03 10:53:48 2014 -0700 # Node ID 7f425d67f91ae3966b4f31b33dcd0386977a97a4 # Parent c46657e391a3710c4ea20f312d46ff6566d80aef Access log: fix default value, broken by cb308813b453. 
log->filter ("if" parameter) was uninitialized when the default value was being used, which would lead to a crash (SIGSEGV) when access_log directive wasn't specified in the configuration. Zero-fill the whole structure instead of zeroing fields one-by-one in order to prevent similar issues in the future. Signed-off-by: Piotr Sikora diff -r c46657e391a3 -r 7f425d67f91a src/http/modules/ngx_http_log_module.c --- a/src/http/modules/ngx_http_log_module.c Tue Jun 03 17:53:03 2014 +0400 +++ b/src/http/modules/ngx_http_log_module.c Tue Jun 03 10:53:48 2014 -0700 @@ -1109,16 +1109,13 @@ ngx_http_log_merge_loc_conf(ngx_conf_t * return NGX_CONF_ERROR; } + ngx_memzero(log, sizeof(ngx_http_log_t)); + log->file = ngx_conf_open_file(cf->cycle, &ngx_http_access_log); if (log->file == NULL) { return NGX_CONF_ERROR; } - log->script = NULL; - log->disk_full_time = 0; - log->error_log_time = 0; - log->syslog_peer = NULL; - lmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_log_module); fmt = lmcf->formats.elts; From ru at nginx.com Tue Jun 3 20:38:47 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 4 Jun 2014 00:38:47 +0400 Subject: [PATCH] Access log: fix default value, broken by cb308813b453 In-Reply-To: <7f425d67f91ae3966b4f.1401818089@piotrs-macbook-pro.local> References: <7f425d67f91ae3966b4f.1401818089@piotrs-macbook-pro.local> Message-ID: <20140603203847.GE25209@lo0.su> On Tue, Jun 03, 2014 at 10:54:49AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1401818028 25200 > # Tue Jun 03 10:53:48 2014 -0700 > # Node ID 7f425d67f91ae3966b4f31b33dcd0386977a97a4 > # Parent c46657e391a3710c4ea20f312d46ff6566d80aef > Access log: fix default value, broken by cb308813b453. > > log->filter ("if" parameter) was uninitialized when the default value > was being used, which would lead to a crash (SIGSEGV) when access_log > directive wasn't specified in the configuration. > > Zero-fill the whole structure instead of zeroing fields one-by-one > in order to prevent similar issues in the future. > > Signed-off-by: Piotr Sikora Looks good. From albertcasademont at gmail.com Thu Jun 5 10:14:02 2014 From: albertcasademont at gmail.com (Albert Casademont Filella) Date: Thu, 5 Jun 2014 12:14:02 +0200 Subject: [nginx] Adding Support for Weak ETags In-Reply-To: References: <20140421122647.GB34696@mdounin.ru> <20140425113558.GN34696@mdounin.ru> Message-ID: Another one here using dynamic etags generated by the backend that are lost during the gzip compression; it would indeed ve very nice to have the weak etag support. The Last Modified header is not always a viable option. On Tue, Apr 29, 2014 at 8:53 PM, Adam Arsenault < adam.arsenault at hootsuite.com> wrote: > Hi Maxim/Aaron, > > Would love to see support for Weak Etags in nginx. The lack of support for > gzip + etags has been a major problem for us as we use etags for caching > (and need gzip to work with it) in a bunch of different place in our > application. > > Let me know if there is anything I can do to help out here as well. > > Thanks, > Adam Arsenault > > > On Tue, Apr 29, 2014 at 11:47 AM, Aaron Peschel > wrote: > >> Hello Maxim, >> >> If you provide a copy of your newer draft patch, I am willing to spend >> time helping improve it as you see fit. >> >> -Aaron Peschel >> >> On Fri, Apr 25, 2014 at 4:35 AM, Maxim Dounin wrote: >> > Hello! 
>> > >> > On Thu, Apr 24, 2014 at 06:20:24PM -0700, Aaron Peschel wrote: >> > >> >> Hi Maxim, >> >> >> >> Is the draft patch the same as the one that your posted in the >> >> previous thread, or has more work been done since then? >> > >> > The one I've posted is to ignore weak etags. The draft one is to >> > downgrade strict etags to weak etags. >> > >> >> >> >> -Aaron Peschel >> >> >> >> On Mon, Apr 21, 2014 at 5:26 AM, Maxim Dounin >> wrote: >> >> > Hello! >> >> > >> >> > On Thu, Apr 17, 2014 at 05:39:40PM -0700, Aaron Peschel wrote: >> >> > >> >> >> Hello, >> >> >> >> >> >> I am interested in getting support for Weak ETags into the mainline. >> >> >> There was some discussion previously in here previously that >> developed >> >> >> a quick patch to add support. What additional functionality would be >> >> >> required and what steps should be followed to get weak etag >> >> >> functionality added to nginx? I am willing to do the work, I just >> need >> >> >> some help with heading in the right direction. >> >> > >> >> > I had a quick draft patch sitting in my patchqueue since previous >> >> > discussion (see [1]) to downgrade strict etags to weak ones. It >> >> > needs more work though, as I'm not yet happy with the code. I >> >> > hope I'll be able to find some time and finish it in 1.7.x. >> >> > >> >> > [1] >> http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004523.html >> >> > >> >> > -- >> >> > Maxim Dounin >> >> > http://nginx.org/ >> >> > >> >> > _______________________________________________ >> >> > nginx-devel mailing list >> >> > nginx-devel at nginx.org >> >> > http://mailman.nginx.org/mailman/listinfo/nginx-devel >> >> >> >> _______________________________________________ >> >> nginx-devel mailing list >> >> nginx-devel at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > >> > -- >> > Maxim Dounin >> > http://nginx.org/ >> > >> > _______________________________________________ >> > nginx-devel mailing list >> > nginx-devel at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx-devel >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > > > -- > [image: HootSuite] > > *Adam Arsenault* > Senior Software Engineer, Mobile Web | HootSuite > @Adam_Arsenault | hootsuite > | blog | facebook > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Thu Jun 5 12:17:01 2014 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 05 Jun 2014 12:17:01 +0000 Subject: [nginx] Access log: fix default value, broken by cb308813b453. Message-ID: details: http://hg.nginx.org/nginx/rev/7f425d67f91a branches: changeset: 5719:7f425d67f91a user: Piotr Sikora date: Tue Jun 03 10:53:48 2014 -0700 description: Access log: fix default value, broken by cb308813b453. log->filter ("if" parameter) was uninitialized when the default value was being used, which would lead to a crash (SIGSEGV) when access_log directive wasn't specified in the configuration. Zero-fill the whole structure instead of zeroing fields one-by-one in order to prevent similar issues in the future. 
Signed-off-by: Piotr Sikora diffstat: src/http/modules/ngx_http_log_module.c | 7 ++----- 1 files changed, 2 insertions(+), 5 deletions(-) diffs (22 lines): diff -r c46657e391a3 -r 7f425d67f91a src/http/modules/ngx_http_log_module.c --- a/src/http/modules/ngx_http_log_module.c Tue Jun 03 17:53:03 2014 +0400 +++ b/src/http/modules/ngx_http_log_module.c Tue Jun 03 10:53:48 2014 -0700 @@ -1109,16 +1109,13 @@ ngx_http_log_merge_loc_conf(ngx_conf_t * return NGX_CONF_ERROR; } + ngx_memzero(log, sizeof(ngx_http_log_t)); + log->file = ngx_conf_open_file(cf->cycle, &ngx_http_access_log); if (log->file == NULL) { return NGX_CONF_ERROR; } - log->script = NULL; - log->disk_full_time = 0; - log->error_log_time = 0; - log->syslog_peer = NULL; - lmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_log_module); fmt = lmcf->formats.elts; From pluknet at nginx.com Thu Jun 5 12:18:21 2014 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 5 Jun 2014 16:18:21 +0400 Subject: [PATCH] Access log: fix default value, broken by cb308813b453 In-Reply-To: <7f425d67f91ae3966b4f.1401818089@piotrs-macbook-pro.local> References: <7f425d67f91ae3966b4f.1401818089@piotrs-macbook-pro.local> Message-ID: <9C93C50C-5DEE-40DE-8074-A4E8D76DAD32@nginx.com> On Jun 3, 2014, at 9:54 PM, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1401818028 25200 > # Tue Jun 03 10:53:48 2014 -0700 > # Node ID 7f425d67f91ae3966b4f31b33dcd0386977a97a4 > # Parent c46657e391a3710c4ea20f312d46ff6566d80aef > Access log: fix default value, broken by cb308813b453. Committed, thanks. -- Sergey Kandaurov From yurnerola at gmail.com Fri Jun 6 02:38:33 2014 From: yurnerola at gmail.com (liubin) Date: Fri, 6 Jun 2014 10:38:33 +0800 Subject: Difference between NGX_DIRECT_CONF and NGX_MAIN_CONF Message-ID: <3465556F-0B36-4AE1-947F-3E0D7A06D97F@gmail.com> Hello: Who can tell me the Difference between NGX_DIRECT_CONF and NGX_MAIN_CONF ? If a command is NGX_DIRECT_CONF,it must be NGX_MAIN_CONF? Best regards, -yurnero -------------- next part -------------- An HTML attachment was scrubbed... URL: From ameirh at gmail.com Fri Jun 6 04:57:38 2014 From: ameirh at gmail.com (Ameir Abdeldayem) Date: Fri, 6 Jun 2014 00:57:38 -0400 Subject: nginx subrequests / background operations Message-ID: Hello, I am investigating the complexity of allowing for an additional parameter, "expired", to proxy_cache_use_stale. Instead of a request hitting the backend with an $upstream_cache_status of EXPIRED and making the client wait for the request to complete, the client would instead be given a stale version of that cached entry, and that entry would be updated in the background. The directive would look like: proxy_cache_use_stale expired updating error ...; I was looking into ngx_http_subrequest() as a potential route to take, but it looks like it's blocking (as in the client would have to wait for it to complete). I also looked at post_action, but it's unclear on whether it's blocking or not, and whether it'd work for this case. Could you advise me on the best route to take here? Varnish 4 supports this feature ("grace mode") and I'm getting some pressure to switch solely because of this feature, but because I love nginx so much, I figured I'd take a stab at getting this feature in. Any tips would be well-appreciated. Thanks! -Ameir -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Jun 6 11:52:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Jun 2014 15:52:58 +0400 Subject: Difference between NGX_DIRECT_CONF and NGX_MAIN_CONF In-Reply-To: <3465556F-0B36-4AE1-947F-3E0D7A06D97F@gmail.com> References: <3465556F-0B36-4AE1-947F-3E0D7A06D97F@gmail.com> Message-ID: <20140606115258.GG1849@mdounin.ru> Hello! On Fri, Jun 06, 2014 at 10:38:33AM +0800, liubin wrote: > Hello: > > Who can tell me the Difference between NGX_DIRECT_CONF and NGX_MAIN_CONF ? > > If a command is NGX_DIRECT_CONF,it must be NGX_MAIN_CONF? The NGX_MAIN_CONF specifies context of the directive. The NGX_DIRECT_CONF flag for NGX_MAIN_CONF directives means that the configuration should be access directly (as created with create_conf callback), instead of passing an indirect pointer to a directive handler (allowing the directive handler to create the configuration itself). As of now, the NGX_DIRECT_CONF flag make sense for NGX_MAIN_CONF directives only. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jun 6 12:16:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Jun 2014 16:16:22 +0400 Subject: nginx subrequests / background operations In-Reply-To: References: Message-ID: <20140606121622.GH1849@mdounin.ru> Hello! On Fri, Jun 06, 2014 at 12:57:38AM -0400, Ameir Abdeldayem wrote: > Hello, > > I am investigating the complexity of allowing for an additional parameter, > "expired", to proxy_cache_use_stale. > > Instead of a request hitting the backend with an $upstream_cache_status of > EXPIRED and making the client wait for the request to complete, the client > would instead be given a stale version of that cached entry, and that entry > would be updated in the background. This is what the "proxy_cache_use_stale updating" does for years, with the only difference that the update is done by the first request instead of background. So, basically, what you are trying to optimize is a single request per a resource expiration. I would recommend you to reconsider if it actually worth the effort. > I was looking into ngx_http_subrequest() as a potential route to take, but > it looks like it's blocking (as in the client would have to wait for it to > complete). I also looked at post_action, but it's unclear on whether it's > blocking or not, and whether it'd work for this case. The post_action functionality will block a connection as well. If you want something to happen "in the background", you'll have to introduce the "in the background" notion in the first place. Most simple approach seems to be to create a separate "fake" request with emulated properties and a closed connection. -- Maxim Dounin http://nginx.org/ From ameirh at gmail.com Sat Jun 7 05:36:59 2014 From: ameirh at gmail.com (Ameir Abdeldayem) Date: Sat, 7 Jun 2014 01:36:59 -0400 Subject: nginx subrequests / background operations Message-ID: Hello Maxim, Thanks for your feedback. Yes, "proxy_cache_use_stale updating" does do a great job, but "the first request" is the first request per TTL, which relates to the problem we're facing. We run some high-profile sites, oftentimes with low TTLs (1m or so). The queries we run on the backend are very complex and time-consuming, and oftentimes take on the order of 10s to complete. Because of the low TTL, although most users get immediate responses, the user who makes the request when the entry is EXPIRED has to suffer. 
Additionally, each page load requests several resources through nginx, and it's happened a number of times that a single user will be the victim of hitting more than one EXPIRED entry, hitting the backend more than once (very unlucky, I know). The end result is user complaints, which would be mitigated entirely if we could serve the STALE entry before updating it. If you have any thoughts on how to improve performance in this scenario, I would love to hear them. Thanks! -Ameir -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Jun 8 19:12:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 8 Jun 2014 23:12:17 +0400 Subject: nginx subrequests / background operations In-Reply-To: References: Message-ID: <20140608191217.GL1849@mdounin.ru> Hello! On Sat, Jun 07, 2014 at 01:36:59AM -0400, Ameir Abdeldayem wrote: > Hello Maxim, > > Thanks for your feedback. Yes, "proxy_cache_use_stale updating" does do a > great job, but "the first request" is the first request per TTL, which > relates to the problem we're facing. > > We run some high-profile sites, oftentimes with low TTLs (1m or so). The > queries we run on the backend are very complex and time-consuming, and > oftentimes take on the order of 10s to complete. Because of the low TTL, > although most users get immediate responses, the user who makes the request > when the entry is EXPIRED has to suffer. >From the description it looks like the real problem is that your backend response time is inadequate. Instead of trying to use nginx cache to mitigate this, you may want to focus on the real problem instead, and make backend to respond in a reasonable time. Following the "nginx mitigation" approach, a solution you may want to consider is a helper script to use proxy_cache_bypass to update cached items before they expire. -- Maxim Dounin http://nginx.org/ From bbarker5025 at gmail.com Sun Jun 8 20:00:06 2014 From: bbarker5025 at gmail.com (bbarker5025 .) Date: Sun, 08 Jun 2014 13:00:06 -0700 (PDT) Subject: nginx subrequests / background operations In-Reply-To: References: Message-ID: <1402257606377.0d6e57e@Nodemailer> On Fri, Jun 6, 2014 at 10:37 PM, Ameir Abdeldayem wrote: > Hello Maxim, > Thanks for your feedback. Yes, "proxy_cache_use_stale updating" does do a > great job, but "the first request" is the first request per TTL, which > relates to the problem we're facing. > We run some high-profile sites, oftentimes with low TTLs (1m or so). The > queries we run on the backend are very complex and time-consuming, and > oftentimes take on the order of 10s to complete. Because of the low TTL, > although most users get immediate responses, the user who makes the request > when the entry is EXPIRED has to suffer. > Additionally, each page load requests several resources through nginx, and > it's happened a number of times that a single user will be the victim of > hitting more than one EXPIRED entry, hitting the backend more than once > (very unlucky, I know). The end result is user complaints, which would be > mitigated entirely if we could serve the STALE entry before updating it. > If you have any thoughts on how to improve performance in this scenario, I > would love to hear them. > Thanks! > -Ameir -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From codeeply at gmail.com Mon Jun 9 16:54:12 2014 From: codeeply at gmail.com (Jianjun Zheng) Date: Tue, 10 Jun 2014 00:54:12 +0800 Subject: [PATCH] Unix: remove errno logging when send() returned zero. Message-ID: When send() returned zero, the errno won't be set. So, it's meaningless here. # HG changeset patch # User Jianjun Zheng # Date 1402330476 -28800 # Tue Jun 10 00:14:36 2014 +0800 # Node ID 77e5822468c8619dcc3c7ad35f906763d34292a1 # Parent 7f425d67f91ae3966b4f31b33dcd0386977a97a4 Unix: remove errno logging when send() returned zero. diff -r 7f425d67f91a -r 77e5822468c8 src/os/unix/ngx_send.c --- a/src/os/unix/ngx_send.c Tue Jun 03 10:53:48 2014 -0700 +++ b/src/os/unix/ngx_send.c Tue Jun 10 00:14:36 2014 +0800 @@ -46,14 +46,14 @@ return n; } - err = ngx_socket_errno; - if (n == 0) { - ngx_log_error(NGX_LOG_ALERT, c->log, err, "send() returned zero"); + ngx_log_error(NGX_LOG_ALERT, c->log, 0, "send() returned zero"); wev->ready = 0; return n; } + err = ngx_socket_errno; + if (err == NGX_EAGAIN || err == NGX_EINTR) { wev->ready = 0; -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jun 10 11:36:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Jun 2014 15:36:01 +0400 Subject: [PATCH] Unix: remove errno logging when send() returned zero. In-Reply-To: References: Message-ID: <20140610113601.GT1849@mdounin.ru> Hello! On Tue, Jun 10, 2014 at 12:54:12AM +0800, Jianjun Zheng wrote: > When send() returned zero, the errno won't be set. > So, it's meaningless here. This should never ever happen, and if it happens - logging errno make sense, as nobody knows why it happened in the first place. -- Maxim Dounin http://nginx.org/ From codeeply at gmail.com Tue Jun 10 12:02:26 2014 From: codeeply at gmail.com (Jianjun Zheng) Date: Tue, 10 Jun 2014 20:02:26 +0800 Subject: [PATCH] Unix: remove errno logging when send() returned zero. In-Reply-To: <20140610113601.GT1849@mdounin.ru> References: <20140610113601.GT1849@mdounin.ru> Message-ID: AFAIK, it happens when sends zero bytes. 2014-06-10 19:36 GMT+08:00 Maxim Dounin : > Hello! > > On Tue, Jun 10, 2014 at 12:54:12AM +0800, Jianjun Zheng wrote: > > > When send() returned zero, the errno won't be set. > > So, it's meaningless here. > > This should never ever happen, and if it happens - logging errno > make sense, as nobody knows why it happened in the first place. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jun 10 12:05:05 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Jun 2014 16:05:05 +0400 Subject: [PATCH] Unix: remove errno logging when send() returned zero. In-Reply-To: References: <20140610113601.GT1849@mdounin.ru> Message-ID: <20140610120505.GV1849@mdounin.ru> Hello! On Tue, Jun 10, 2014 at 08:02:26PM +0800, Jianjun Zheng wrote: > AFAIK, it happens when sends zero bytes. And this should never ever happen. And that's why the logging is here in the first place. > > > 2014-06-10 19:36 GMT+08:00 Maxim Dounin : > > > Hello! > > > > On Tue, Jun 10, 2014 at 12:54:12AM +0800, Jianjun Zheng wrote: > > > > > When send() returned zero, the errno won't be set. > > > So, it's meaningless here. 
> > > > This should never ever happen, and if it happens - logging errno > > make sense, as nobody knows why it happened in the first place. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From a.marinov at ucdn.com Wed Jun 11 10:49:34 2014 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Wed, 11 Jun 2014 13:49:34 +0300 Subject: I want to use $ in module argument Message-ID: Hello, I am working on custom module which uses pcre. It could be configured and one from the arguments is a regular expression. How can I use argument with $ sign into it. Every time when I try for example use it like " my_custom_module '\.mp4$' " I got the error: invalid variable name in nginx.conf:62 In line 62 I have my module configuration. I tried to escape $ with \$ and also tried $$ but they both don't work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jun 11 22:29:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Jun 2014 02:29:14 +0400 Subject: I want to use $ in module argument In-Reply-To: References: Message-ID: <20140611222914.GI1849@mdounin.ru> Hello! On Wed, Jun 11, 2014 at 01:49:34PM +0300, Anatoli Marinov wrote: > Hello, > I am working on custom module which uses pcre. It could be configured and > one from the arguments is a regular expression. How can I use argument with > $ sign into it. Every time when I try for example use it like " > my_custom_module '\.mp4$' " I got the error: > > invalid variable name in nginx.conf:62 > > In line 62 I have my module configuration. > I tried to escape $ with \$ and also tried $$ but they both don't work. http://mailman.nginx.org/pipermail/nginx/2011-November/030406.html -- Maxim Dounin http://nginx.org/ From a.marinov at ucdn.com Thu Jun 12 08:44:29 2014 From: a.marinov at ucdn.com (Anatoli Marinov) Date: Thu, 12 Jun 2014 11:44:29 +0300 Subject: I want to use $ in module argument In-Reply-To: <20140611222914.GI1849@mdounin.ru> References: <20140611222914.GI1849@mdounin.ru> Message-ID: Thanks Maxim! At the moment I am using @ instead of $ and internally in my module I replace it back. Its a different kind of workaround :) On Thu, Jun 12, 2014 at 1:29 AM, Maxim Dounin wrote: > Hello! > > On Wed, Jun 11, 2014 at 01:49:34PM +0300, Anatoli Marinov wrote: > > > Hello, > > I am working on custom module which uses pcre. It could be configured and > > one from the arguments is a regular expression. How can I use argument > with > > $ sign into it. Every time when I try for example use it like " > > my_custom_module '\.mp4$' " I got the error: > > > > invalid variable name in nginx.conf:62 > > > > In line 62 I have my module configuration. > > I tried to escape $ with \$ and also tried $$ but they both don't work. > > http://mailman.nginx.org/pipermail/nginx/2011-November/030406.html > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ru at nginx.com Thu Jun 12 19:15:16 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 12 Jun 2014 19:15:16 +0000 Subject: [nginx] Upstream: simplified some code that accesses peers. Message-ID: details: http://hg.nginx.org/nginx/rev/ab540dd44528 branches: changeset: 5720:ab540dd44528 user: Ruslan Ermilov date: Thu Jun 12 21:13:24 2014 +0400 description: Upstream: simplified some code that accesses peers. No functional changes. diffstat: src/http/ngx_http_upstream_round_robin.c | 116 ++++++++++++++++-------------- 1 files changed, 62 insertions(+), 54 deletions(-) diffs (208 lines): diff -r 7f425d67f91a -r ab540dd44528 src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c Tue Jun 03 10:53:48 2014 -0700 +++ b/src/http/ngx_http_upstream_round_robin.c Thu Jun 12 21:13:24 2014 +0400 @@ -30,6 +30,7 @@ ngx_http_upstream_init_round_robin(ngx_c ngx_url_t u; ngx_uint_t i, j, n, w; ngx_http_upstream_server_t *server; + ngx_http_upstream_rr_peer_t *peer; ngx_http_upstream_rr_peers_t *peers, *backup; us->peer.init = ngx_http_upstream_init_round_robin_peer; @@ -69,6 +70,7 @@ ngx_http_upstream_init_round_robin(ngx_c peers->name = &us->host; n = 0; + peer = peers->peer; for (i = 0; i < us->servers->nelts; i++) { if (server[i].backup) { @@ -76,16 +78,16 @@ ngx_http_upstream_init_round_robin(ngx_c } for (j = 0; j < server[i].naddrs; j++) { - peers->peer[n].sockaddr = server[i].addrs[j].sockaddr; - peers->peer[n].socklen = server[i].addrs[j].socklen; - peers->peer[n].name = server[i].addrs[j].name; - peers->peer[n].weight = server[i].weight; - peers->peer[n].effective_weight = server[i].weight; - peers->peer[n].current_weight = 0; - peers->peer[n].max_fails = server[i].max_fails; - peers->peer[n].fail_timeout = server[i].fail_timeout; - peers->peer[n].down = server[i].down; - peers->peer[n].server = server[i].name; + peer[n].sockaddr = server[i].addrs[j].sockaddr; + peer[n].socklen = server[i].addrs[j].socklen; + peer[n].name = server[i].addrs[j].name; + peer[n].weight = server[i].weight; + peer[n].effective_weight = server[i].weight; + peer[n].current_weight = 0; + peer[n].max_fails = server[i].max_fails; + peer[n].fail_timeout = server[i].fail_timeout; + peer[n].down = server[i].down; + peer[n].server = server[i].name; n++; } } @@ -124,6 +126,7 @@ ngx_http_upstream_init_round_robin(ngx_c backup->name = &us->host; n = 0; + peer = backup->peer; for (i = 0; i < us->servers->nelts; i++) { if (!server[i].backup) { @@ -131,16 +134,16 @@ ngx_http_upstream_init_round_robin(ngx_c } for (j = 0; j < server[i].naddrs; j++) { - backup->peer[n].sockaddr = server[i].addrs[j].sockaddr; - backup->peer[n].socklen = server[i].addrs[j].socklen; - backup->peer[n].name = server[i].addrs[j].name; - backup->peer[n].weight = server[i].weight; - backup->peer[n].effective_weight = server[i].weight; - backup->peer[n].current_weight = 0; - backup->peer[n].max_fails = server[i].max_fails; - backup->peer[n].fail_timeout = server[i].fail_timeout; - backup->peer[n].down = server[i].down; - backup->peer[n].server = server[i].name; + peer[n].sockaddr = server[i].addrs[j].sockaddr; + peer[n].socklen = server[i].addrs[j].socklen; + peer[n].name = server[i].addrs[j].name; + peer[n].weight = server[i].weight; + peer[n].effective_weight = server[i].weight; + peer[n].current_weight = 0; + peer[n].max_fails = server[i].max_fails; + peer[n].fail_timeout = server[i].fail_timeout; + peer[n].down = server[i].down; + peer[n].server = server[i].name; n++; } } @@ -189,15 +192,17 @@ 
ngx_http_upstream_init_round_robin(ngx_c peers->total_weight = n; peers->name = &us->host; + peer = peers->peer; + for (i = 0; i < u.naddrs; i++) { - peers->peer[i].sockaddr = u.addrs[i].sockaddr; - peers->peer[i].socklen = u.addrs[i].socklen; - peers->peer[i].name = u.addrs[i].name; - peers->peer[i].weight = 1; - peers->peer[i].effective_weight = 1; - peers->peer[i].current_weight = 0; - peers->peer[i].max_fails = 1; - peers->peer[i].fail_timeout = 10; + peer[i].sockaddr = u.addrs[i].sockaddr; + peer[i].socklen = u.addrs[i].socklen; + peer[i].name = u.addrs[i].name; + peer[i].weight = 1; + peer[i].effective_weight = 1; + peer[i].current_weight = 0; + peer[i].max_fails = 1; + peer[i].fail_timeout = 10; } us->peer.data = peers; @@ -271,6 +276,7 @@ ngx_http_upstream_create_round_robin_pee socklen_t socklen; ngx_uint_t i, n; struct sockaddr *sockaddr; + ngx_http_upstream_rr_peer_t *peer; ngx_http_upstream_rr_peers_t *peers; ngx_http_upstream_rr_peer_data_t *rrp; @@ -295,15 +301,17 @@ ngx_http_upstream_create_round_robin_pee peers->number = ur->naddrs; peers->name = &ur->host; + peer = peers->peer; + if (ur->sockaddr) { - peers->peer[0].sockaddr = ur->sockaddr; - peers->peer[0].socklen = ur->socklen; - peers->peer[0].name = ur->host; - peers->peer[0].weight = 1; - peers->peer[0].effective_weight = 1; - peers->peer[0].current_weight = 0; - peers->peer[0].max_fails = 1; - peers->peer[0].fail_timeout = 10; + peer[0].sockaddr = ur->sockaddr; + peer[0].socklen = ur->socklen; + peer[0].name = ur->host; + peer[0].weight = 1; + peer[0].effective_weight = 1; + peer[0].current_weight = 0; + peer[0].max_fails = 1; + peer[0].fail_timeout = 10; } else { @@ -335,15 +343,15 @@ ngx_http_upstream_create_round_robin_pee len = ngx_sock_ntop(sockaddr, socklen, p, NGX_SOCKADDR_STRLEN, 1); - peers->peer[i].sockaddr = sockaddr; - peers->peer[i].socklen = socklen; - peers->peer[i].name.len = len; - peers->peer[i].name.data = p; - peers->peer[i].weight = 1; - peers->peer[i].effective_weight = 1; - peers->peer[i].current_weight = 0; - peers->peer[i].max_fails = 1; - peers->peer[i].fail_timeout = 10; + peer[i].sockaddr = sockaddr; + peer[i].socklen = socklen; + peer[i].name.len = len; + peer[i].name.data = p; + peer[i].weight = 1; + peer[i].effective_weight = 1; + peer[i].current_weight = 0; + peer[i].max_fails = 1; + peer[i].fail_timeout = 10; } } @@ -389,13 +397,15 @@ ngx_http_upstream_get_round_robin_peer(n ngx_log_debug1(NGX_LOG_DEBUG_HTTP, pc->log, 0, "get rr peer, try: %ui", pc->tries); - /* ngx_lock_mutex(rrp->peers->mutex); */ - pc->cached = 0; pc->connection = NULL; - if (rrp->peers->single) { - peer = &rrp->peers->peer[0]; + peers = rrp->peers; + + /* ngx_lock_mutex(peers->mutex); */ + + if (peers->single) { + peer = &peers->peer[0]; if (peer->down) { goto failed; @@ -420,18 +430,16 @@ ngx_http_upstream_get_round_robin_peer(n pc->socklen = peer->socklen; pc->name = &peer->name; - /* ngx_unlock_mutex(rrp->peers->mutex); */ + /* ngx_unlock_mutex(peers->mutex); */ - if (pc->tries == 1 && rrp->peers->next) { - pc->tries += rrp->peers->next->number; + if (pc->tries == 1 && peers->next) { + pc->tries += peers->next->number; } return NGX_OK; failed: - peers = rrp->peers; - if (peers->next) { /* ngx_unlock_mutex(peers->mutex); */ From fdasilvayy at gmail.com Mon Jun 16 20:22:54 2014 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Mon, 16 Jun 2014 22:22:54 +0200 Subject: [PATCH] Mail: added support for SSL client certificate In-Reply-To: References: <5c7ccfc96070fc8b5d77.1392983328@FLEVIONNOIS2.dictao.com> 
<20140307113147.GY34696@mdounin.ru> Message-ID: Hi all. I rework a bit the mail ssl client certificate patch on two points : - Drop the AuthVerify header, as it is unhelpful. - Add a new certificate fingerprint header, feature that was added recently to http module. Please find attached a new patch to add the "auth_http_client_cert" setting. It allow now to configure which part(s) of the certificate will be sent to the authentication backend script. Comments, and review are welcomed. Regards, Filipe da Silva 2014-04-14 9:33 GMT+02:00 Franck Levionnois : > Hello, > > I forward Filipe's message, because it doesn't appear in forum's stack. > I'm ok with the proposal. > > Kind Regards. > Franck Levionnois. > > > 2014-04-07 10:35 GMT+02:00 Filipe Da Silva : > >> Hi, >> >> From the mail-auth-http module point of view, the Auth-Verify is a >> trivial information. >> Its value mostly depends of the current server configuration ( verify >> setting ). >> IMHO, it could be discard. >> >> About the various/duplicated headers related to the client >> certificate, a smart solution >> could be adding a 'auth_http_client_cert' setting. >> >> It could be either a kind of bit-field allowing to select the wanted >> headers one by one or a log level. >> >> Bit-field doesn't seems to be a part of nginx configuration usages. >> Instead, a short list of keywords could be defined, may be following >> the OpenSSL display one: >> http://www.openssl.org/docs/apps/x509.html#DISPLAY_OPTIONS >> >> Or, the auth_http_client_cert log levels could be : >> - none >> - basic -> just the Certificate Subject >> - detailed : Subject, Issuer >> - complete : Subject, Issuer, sha1 hash >> - full -> whole certificate >> IMHO, 'detailled' should be the default settings, if not configured. >> >> Regards, >> Filipe da Silva >> -------------- next part -------------- # HG changeset patch # Parent cc921a930c4aa0db1dc642ac0ce977e5734e59e5 Mail: Add 'SSL client auth header fields' configuration setting Added mail configuration directive : auth_http_client_cert Possible values are: none, cert, subject, issuer, serial, fingerprint. The 'none' option is exclusive to any other. 
diff -r cc921a930c4a src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Fri Jan 24 16:26:16 2014 +0100 +++ b/src/mail/ngx_mail_auth_http_module.c Mon Jun 16 21:59:52 2014 +0200 @@ -25,6 +25,7 @@ typedef struct { u_char *file; ngx_uint_t line; + ngx_uint_t cert_fields; } ngx_mail_auth_http_conf_t; @@ -83,6 +84,27 @@ static char *ngx_mail_auth_http_header(n void *conf); +#define NGX_MAIL_AUTH_HTTP_CERTIFICATE 0x0002 +#define NGX_MAIL_AUTH_HTTP_CERT_SUBJECT 0x0010 +#define NGX_MAIL_AUTH_HTTP_CERT_ISSUER 0x0020 +#define NGX_MAIL_AUTH_HTTP_CERT_SERIAL 0x0040 +#define NGX_MAIL_AUTH_HTTP_CERT_FINGERPRINT 0x0080 + +#define NGX_MAIL_AUTH_HTTP_CERT_NONE 0x8000 + +#define NGX_MAIL_AUTH_HTTP_CERT_DEFAULT \ + (NGX_MAIL_AUTH_HTTP_CERT_SUBJECT | NGX_MAIL_AUTH_HTTP_CERT_ISSUER) + +static ngx_conf_bitmask_t ngx_mail_auth_http_client_cert[] = { + { ngx_string("none"), NGX_MAIL_AUTH_HTTP_CERT_NONE }, + { ngx_string("cert"), NGX_MAIL_AUTH_HTTP_CERTIFICATE }, + { ngx_string("subject"), NGX_MAIL_AUTH_HTTP_CERT_SUBJECT }, + { ngx_string("issuer"), NGX_MAIL_AUTH_HTTP_CERT_ISSUER }, + { ngx_string("serial"), NGX_MAIL_AUTH_HTTP_CERT_SERIAL }, + { ngx_string("fingerprint"), NGX_MAIL_AUTH_HTTP_CERT_FINGERPRINT }, + { ngx_null_string, 0 } +}; + static ngx_command_t ngx_mail_auth_http_commands[] = { { ngx_string("auth_http"), @@ -106,6 +128,13 @@ static ngx_command_t ngx_mail_auth_http 0, NULL }, + { ngx_string("auth_http_client_cert"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_1MORE, + ngx_conf_set_bitmask_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_auth_http_conf_t, cert_fields), + ngx_mail_auth_http_client_cert }, + ngx_null_command }; @@ -1189,30 +1218,52 @@ ngx_mail_auth_http_create_request(ngx_ma cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); #if (NGX_MAIL_SSL) - if (s->connection->ssl) { - if (ngx_ssl_get_certificate_oneline(s->connection, pool, - &client_cert) != NGX_OK) { - return NULL; + if (s->connection->ssl && + !(ahcf->cert_fields & NGX_MAIL_AUTH_HTTP_CERT_NONE)) { + + if (ahcf->cert_fields & NGX_MAIL_AUTH_HTTP_CERTIFICATE) { + if (ngx_ssl_get_certificate_oneline(s->connection, pool, + &client_cert) != NGX_OK) { + return NULL; + } + } else { + client_cert.len = 0; } - if (ngx_ssl_get_subject_dn(s->connection, pool, - &client_subject) != NGX_OK) { - return NULL; + if (ahcf->cert_fields & NGX_MAIL_AUTH_HTTP_CERT_SUBJECT) { + if (ngx_ssl_get_subject_dn(s->connection, pool, + &client_subject) != NGX_OK) { + return NULL; + } + } else { + client_subject.len = 0; } - if (ngx_ssl_get_issuer_dn(s->connection, pool, - &client_issuer) != NGX_OK) { - return NULL; + if (ahcf->cert_fields & NGX_MAIL_AUTH_HTTP_CERT_ISSUER) { + if (ngx_ssl_get_issuer_dn(s->connection, pool, + &client_issuer) != NGX_OK) { + return NULL; + } + } else { + client_issuer.len = 0; } - if (ngx_ssl_get_serial_number(s->connection, pool, - &client_serial) != NGX_OK) { - return NULL; + if (ahcf->cert_fields & NGX_MAIL_AUTH_HTTP_CERT_SERIAL) { + if (ngx_ssl_get_serial_number(s->connection, pool, + &client_serial) != NGX_OK) { + return NULL; + } + } else { + client_serial.len = 0; } - if (ngx_ssl_get_fingerprint(s->connection, pool, - &client_fingerprint) != NGX_OK) { - return NULL; + if (ahcf->cert_fields & NGX_MAIL_AUTH_HTTP_CERT_FINGERPRINT) { + if (ngx_ssl_get_fingerprint(s->connection, pool, + &client_fingerprint) != NGX_OK) { + return NULL; + } + } else { + client_fingerprint.len = 0; } } else { @@ -1469,6 +1520,18 @@ ngx_mail_auth_http_merge_conf(ngx_conf_t } } + 
ngx_conf_merge_bitmask_value(conf->cert_fields, prev->cert_fields, + (NGX_CONF_BITMASK_SET + |NGX_MAIL_AUTH_HTTP_CERT_DEFAULT)); + + if ((conf->cert_fields & NGX_MAIL_AUTH_HTTP_CERT_NONE) + && conf->cert_fields != NGX_MAIL_AUTH_HTTP_CERT_NONE ) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "\"auth_http_client_cert none\" is an exclusive flag" + "%s:%ui", conf->file, conf->line); + return NGX_CONF_ERROR; + } + ngx_conf_merge_msec_value(conf->timeout, prev->timeout, 60000); if (conf->headers == NULL) { -------------- next part -------------- A non-text attachment was scrubbed... Name: Mail-SSL-MutualAuthentification.patch Type: text/x-diff Size: 14324 bytes Desc: not available URL: From mdounin at mdounin.ru Tue Jun 17 08:42:30 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jun 2014 08:42:30 +0000 Subject: [nginx] Updated OpenSSL used for win32 builds. Message-ID: details: http://hg.nginx.org/nginx/rev/6ce251c860ba branches: changeset: 5721:6ce251c860ba user: Maxim Dounin date: Tue Jun 17 11:38:55 2014 +0400 description: Updated OpenSSL used for win32 builds. diffstat: misc/GNUmakefile | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -5,7 +5,7 @@ NGINX = nginx-$(VER) TEMP = tmp OBJS = objs.msvc8 -OPENSSL = openssl-1.0.1g +OPENSSL = openssl-1.0.1h ZLIB = zlib-1.2.8 PCRE = pcre-8.34 From mdounin at mdounin.ru Tue Jun 17 08:42:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jun 2014 08:42:43 +0000 Subject: [nginx] Configure: workaround for system perl on OS X (ticket #5... Message-ID: details: http://hg.nginx.org/nginx/rev/baf2816d556d branches: changeset: 5722:baf2816d556d user: Maxim Dounin date: Tue Jun 17 12:07:06 2014 +0400 description: Configure: workaround for system perl on OS X (ticket #576). diffstat: auto/lib/perl/conf | 5 +++++ 1 files changed, 5 insertions(+), 0 deletions(-) diffs (15 lines): diff --git a/auto/lib/perl/conf b/auto/lib/perl/conf --- a/auto/lib/perl/conf +++ b/auto/lib/perl/conf @@ -52,6 +52,11 @@ if test -n "$NGX_PERL_VER"; then ngx_perl_ldopts=`echo $ngx_perl_ldopts | sed 's/ -pthread//'` fi + if [ "$NGX_SYSTEM" = "Darwin" ]; then + # OS X system perl wants to link universal binaries + ngx_perl_ldopts=`echo $ngx_perl_ldopts | sed -e 's/-arch x86_64 -arch i386//'` + fi + CORE_LINK="$CORE_LINK $ngx_perl_ldopts" LINK_DEPS="$LINK_DEPS $NGX_OBJS/src/http/modules/perl/blib/arch/auto/nginx/nginx.$ngx_perl_dlext" From piotr at cloudflare.com Tue Jun 17 09:56:38 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 17 Jun 2014 02:56:38 -0700 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X Message-ID: # HG changeset patch # User Piotr Sikora # Date 1402998740 25200 # Tue Jun 17 02:52:20 2014 -0700 # Node ID d325d3f2df583988567a979bc8736dbf08291f84 # Parent baf2816d556d26b79ecc745140b408d59908a182 Configure: fix build from sources for OpenSSL on OS X. 
Signed-off-by: Piotr Sikora diff -r baf2816d556d -r d325d3f2df58 auto/lib/openssl/make --- a/auto/lib/openssl/make Tue Jun 17 12:07:06 2014 +0400 +++ b/auto/lib/openssl/make Tue Jun 17 02:52:20 2014 -0700 @@ -51,12 +51,17 @@ END *) ngx_prefix="$PWD/$OPENSSL/.openssl" ;; esac + case $NGX_PLATFORM in + Darwin:*:x86_64) OPENSSL_CONFIG="./Configure darwin64-x86_64-cc" ;; + *) OPENSSL_CONFIG="./config" ;; + esac + cat << END >> $NGX_MAKEFILE $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE cd $OPENSSL \\ && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ + && $OPENSSL_CONFIG --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ && \$(MAKE) install LIBDIR=lib From ru at nginx.com Tue Jun 17 10:17:03 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Jun 2014 14:17:03 +0400 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: References: Message-ID: <20140617101703.GH52309@lo0.su> On Tue, Jun 17, 2014 at 02:56:38AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1402998740 25200 > # Tue Jun 17 02:52:20 2014 -0700 > # Node ID d325d3f2df583988567a979bc8736dbf08291f84 > # Parent baf2816d556d26b79ecc745140b408d59908a182 > Configure: fix build from sources for OpenSSL on OS X. > > Signed-off-by: Piotr Sikora > > diff -r baf2816d556d -r d325d3f2df58 auto/lib/openssl/make > --- a/auto/lib/openssl/make Tue Jun 17 12:07:06 2014 +0400 > +++ b/auto/lib/openssl/make Tue Jun 17 02:52:20 2014 -0700 > @@ -51,12 +51,17 @@ END > *) ngx_prefix="$PWD/$OPENSSL/.openssl" ;; > esac > > + case $NGX_PLATFORM in Nit: I'd suggest taking $NGX_PLATFORM in double quotes. > + Darwin:*:x86_64) OPENSSL_CONFIG="./Configure darwin64-x86_64-cc" ;; > + *) OPENSSL_CONFIG="./config" ;; > + esac > + > cat << END >> $NGX_MAKEFILE > > $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE > cd $OPENSSL \\ > && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ > - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ > + && $OPENSSL_CONFIG --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ > && \$(MAKE) \\ > && \$(MAKE) install LIBDIR=lib > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Ruslan Ermilov From piotr at cloudflare.com Tue Jun 17 10:28:12 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 17 Jun 2014 03:28:12 -0700 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: <20140617101703.GH52309@lo0.su> References: <20140617101703.GH52309@lo0.su> Message-ID: # HG changeset patch # User Piotr Sikora # Date 1403000738 25200 # Tue Jun 17 03:25:38 2014 -0700 # Node ID a5e387ac0357073df23e07e58ef1437fc0c15c1b # Parent baf2816d556d26b79ecc745140b408d59908a182 Configure: fix build from sources for OpenSSL on OS X. 
Signed-off-by: Piotr Sikora diff -r baf2816d556d -r a5e387ac0357 auto/lib/openssl/make --- a/auto/lib/openssl/make Tue Jun 17 12:07:06 2014 +0400 +++ b/auto/lib/openssl/make Tue Jun 17 03:25:38 2014 -0700 @@ -51,12 +51,17 @@ END *) ngx_prefix="$PWD/$OPENSSL/.openssl" ;; esac + case "$NGX_PLATFORM" in + Darwin:*:x86_64) OPENSSL_CONFIG="./Configure darwin64-x86_64-cc" ;; + *) OPENSSL_CONFIG="./config" ;; + esac + cat << END >> $NGX_MAKEFILE $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE cd $OPENSSL \\ && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ + && $OPENSSL_CONFIG --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ && \$(MAKE) install LIBDIR=lib From mdounin at mdounin.ru Tue Jun 17 10:35:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jun 2014 14:35:58 +0400 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: References: Message-ID: <20140617103558.GG1849@mdounin.ru> Hello! On Tue, Jun 17, 2014 at 02:56:38AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1402998740 25200 > # Tue Jun 17 02:52:20 2014 -0700 > # Node ID d325d3f2df583988567a979bc8736dbf08291f84 > # Parent baf2816d556d26b79ecc745140b408d59908a182 > Configure: fix build from sources for OpenSSL on OS X. > > Signed-off-by: Piotr Sikora > > diff -r baf2816d556d -r d325d3f2df58 auto/lib/openssl/make > --- a/auto/lib/openssl/make Tue Jun 17 12:07:06 2014 +0400 > +++ b/auto/lib/openssl/make Tue Jun 17 02:52:20 2014 -0700 > @@ -51,12 +51,17 @@ END > *) ngx_prefix="$PWD/$OPENSSL/.openssl" ;; > esac > > + case $NGX_PLATFORM in > + Darwin:*:x86_64) OPENSSL_CONFIG="./Configure darwin64-x86_64-cc" ;; > + *) OPENSSL_CONFIG="./config" ;; > + esac > + > cat << END >> $NGX_MAKEFILE > > $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE > cd $OPENSSL \\ > && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ > - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ > + && $OPENSSL_CONFIG --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ > && \$(MAKE) \\ > && \$(MAKE) install LIBDIR=lib This looks utterly wrong to me. The problem seems to be in OpenSSL as it fails to detect default arch to build for, and looking into config script suggests a quick fix is to set KERNEL_BITS=64 in the environment. I don't think nginx should try to do anything with this. -- Maxim Dounin http://nginx.org/ From piotr at cloudflare.com Tue Jun 17 10:48:29 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 17 Jun 2014 03:48:29 -0700 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: <20140617103558.GG1849@mdounin.ru> References: <20140617103558.GG1849@mdounin.ru> Message-ID: Hey Maxim, > The problem seems to be in OpenSSL as it fails to detect default > arch to build for Agreed. > and looking into config script suggests a quick > fix is to set KERNEL_BITS=64 in the environment. Except that KERNEL_BITS trick requires OpenSSL-1.0.1+. > I don't think nginx should try to do anything with this. So it's better to not build at all? And why is the workaround for pretty much the same issue fine for perl but not OpenSSL? It's really not that intrusive change... 
Best regards, Piotr Sikora From ru at nginx.com Tue Jun 17 10:57:05 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 17 Jun 2014 14:57:05 +0400 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: References: <20140617103558.GG1849@mdounin.ru> Message-ID: <20140617105705.GK52309@lo0.su> On Tue, Jun 17, 2014 at 03:48:29AM -0700, Piotr Sikora wrote: > Hey Maxim, > > > The problem seems to be in OpenSSL as it fails to detect default > > arch to build for > > Agreed. > > > and looking into config script suggests a quick > > fix is to set KERNEL_BITS=64 in the environment. > > Except that KERNEL_BITS trick requires OpenSSL-1.0.1+. > > > I don't think nginx should try to do anything with this. > > So it's better to not build at all? And why is the workaround for > pretty much the same issue fine for perl but not OpenSSL? It's really > not that intrusive change... Because dealing with open sourced OpenSSL might be easier that with close-sourced Apple? :) From mdounin at mdounin.ru Tue Jun 17 11:25:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jun 2014 15:25:17 +0400 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: References: <20140617103558.GG1849@mdounin.ru> Message-ID: <20140617112517.GI1849@mdounin.ru> Hello! On Tue, Jun 17, 2014 at 03:48:29AM -0700, Piotr Sikora wrote: > Hey Maxim, > > > The problem seems to be in OpenSSL as it fails to detect default > > arch to build for > > Agreed. > > > and looking into config script suggests a quick > > fix is to set KERNEL_BITS=64 in the environment. > > Except that KERNEL_BITS trick requires OpenSSL-1.0.1+. That's sad, but I don't think it matters. BTW, looking into OpenSSL 0.9.7 suggests it should be fine, and the change proposed will break build. > > I don't think nginx should try to do anything with this. > > So it's better to not build at all? And why is the workaround for > pretty much the same issue fine for perl but not OpenSSL? It's really > not that intrusive change... The workaround for perl is to make it possible to build nginx with a perl version available by default, and it changes nothing in other cases. Suggested patch for OpenSSL, in contrast, only required for build of OpenSSL by nginx (which is exotic on OS X anyway), only needed for old versions of OpenSSL, and will break building for i386 if needed for whatever reason, as well as building of at least some supported OpenSSL versions. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Jun 17 13:19:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jun 2014 13:19:55 +0000 Subject: [nginx] nginx-1.7.2-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/0bd223a54619 branches: changeset: 5723:0bd223a54619 user: Maxim Dounin date: Tue Jun 17 16:51:25 2014 +0400 description: nginx-1.7.2-RELEASE diffstat: docs/xml/nginx/changes.xml | 69 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 69 insertions(+), 0 deletions(-) diffs (79 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,75 @@ + + + + +????????? hash ? ????? upstream. + + +the "hash" directive inside the "upstream" block. + + + + + +?????????????? ????????? ?????? ??????????? ??????.
+??????? Wandenberg Peixoto ? Yichun Zhang. +
+ +defragmentation of free shared memory blocks.
+Thanks to Wandenberg Peixoto and Yichun Zhang. +
+
+ + + +? ??????? ???????? ??? ????????? segmentation fault, +???? ?????????????? ???????? access_log ?? ?????????; +?????? ????????? ? 1.7.0.
+??????? Piotr Sikora. +
+ +a segmentation fault might occur in a worker process +if the default value of the "access_log" directive was used; +the bug had appeared in 1.7.0.
+Thanks to Piotr Sikora. +
+
+ + + +??????????? ???? ???????? ???????? +?? ?????????? ????????? ????????? try_files. + + +trailing slash was mistakenly removed +from the last parameter of the "try_files" directive. + + + + + +nginx ??? ?? ?????????? ?? OS X. + + +nginx could not be built on OS X in some cases. + + + + + +? ?????? ngx_http_spdy_module. + + +in the ngx_http_spdy_module. + + + +
+ + From mdounin at mdounin.ru Tue Jun 17 13:20:03 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Jun 2014 13:20:03 +0000 Subject: [nginx] release-1.7.2 tag Message-ID: details: http://hg.nginx.org/nginx/rev/ec919574cc14 branches: changeset: 5724:ec919574cc14 user: Maxim Dounin date: Tue Jun 17 16:51:25 2014 +0400 description: release-1.7.2 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -371,3 +371,4 @@ 97b47d95e4449cbde976657cf8cbbc118351ffe0 fd722b890eabc600394349730a093f50dac31639 release-1.5.13 d161d68df8be32e5cbf72b07db1a707714827803 release-1.7.0 0351a6d89c3dbcc7a76295024ba6b70e27b9a497 release-1.7.1 +0bd223a546192fdf2e862f33938f4ec2a3b5b283 release-1.7.2 From yejingx at gmail.com Tue Jun 17 15:50:19 2014 From: yejingx at gmail.com (Jing Ye) Date: Tue, 17 Jun 2014 23:50:19 +0800 Subject: Only 64k sent when the first upstream failed in fastcgi_pass Message-ID: Hello all, I have encountered the following possible bug in Nginx when using the fastcgi_pass directive to put a file larger than 64k. I?m not sure if this is a bug or I missed something in the config file. My nginx version is 1.5.12, and the problem remains when i switched to the newest 1.7.2. Here is my nginx.conf file, upstream *api.php.com * { server 127.0.0.1:9902 max_fails=3 fail_timeout=30s; server 127.0.0.1:9901 max_fails=3 fail_timeout=30s; } server { listen 8080; server_name localhost; location ~ \.php$ { root html; fastcgi_pass *api.php.com *; fastcgi_param SCRIPT_FILENAME /usr/local/nginx/html/$fastcgi_script_name; fastcgi_index index.php; include fastcgi_params; } And the index.php file, We need two php-fpm servers listen on port 9901 and 9902, and importantly, the server listen on 9902 should be somehow reset to make the upstream module choose the next server listen on 9901. I made this by setting the request_terminate_timeout argument to 1s. Send a put request, test.png is a file sized 90833 bytes(>64K) $ curl -T test.png *http://127.0.0.1:8080/index.php * -v * About to connect() to 127.0.0.1 port 8080 (#0) * Trying 127.0.0.1... * Adding handle: conn: 0x7fda7b803a00 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x7fda7b803a00) send_pipe: 1, recv_pipe: 0 * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0) > PUT /index.php HTTP/1.1 > User-Agent: curl/7.30.0 > Host: 127.0.0.1:8080 > Accept: */* > Content-Length: 90833 > Expect: 100-continue > < HTTP/1.1 100 Continue * We are completely uploaded and fine < HTTP/1.1 200 OK * Server nginx/1.5.12 is not blacklisted < Server: nginx/1.5.12 < Date: Tue, 17 Jun 2014 13:55:30 GMT < Content-Type: text/html < Transfer-Encoding: chunked < Connection: keep-alive < X-Powered-By: PHP/5.4.24 < * Connection #0 to host 127.0.0.1 left intact 65536% The response status is 200 OK, but only 65536 bytes(64k) received. Is this a bug or have i made something wrong in the config file? I?m really confusing and hope if someone could help me figure it out. Many thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Jun 17 16:50:13 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 17 Jun 2014 20:50:13 +0400 Subject: Only 64k sent when the first upstream failed in fastcgi_pass In-Reply-To: References: Message-ID: <1532980.8VxNd81dtW@vbart-workstation> On Tuesday 17 June 2014 23:50:19 Jing Ye wrote: > Hello all, > > I have encountered the following possible bug in Nginx when using the > fastcgi_pass directive to put a file larger than 64k. I?m not sure if this > is a bug or I missed something in the config file. > > My nginx version is 1.5.12, and the problem remains when i switched to the > newest 1.7.2. > > > Here is my nginx.conf file, > > upstream *api.php.com * { > server 127.0.0.1:9902 max_fails=3 fail_timeout=30s; > server 127.0.0.1:9901 max_fails=3 fail_timeout=30s; > } > server { > listen 8080; > server_name localhost; > location ~ \.php$ { > root html; > fastcgi_pass *api.php.com *; > fastcgi_param SCRIPT_FILENAME > /usr/local/nginx/html/$fastcgi_script_name; > fastcgi_index index.php; > include fastcgi_params; > } > > > > And the index.php file, > > /* simplify get the request body and print its length. */ > $raw_post_data = file_get_contents('php://input', 'r'); > print(strlen($raw_post_data)); > sleep(3); /* waiting for the server to be reset after 1 > second */ > ?> > > [..] > > The response status is 200 OK, but only 65536 bytes(64k) received. > Is this a bug or have i made something wrong in the config file? > I?m really confusing and hope if someone could help me figure it out. > In the FastCGI protocol the data is transferred using "records". The maximum size of one record is 64k. So you're probably getting only the first record by calling file_get_contents() once. wbr, Valentin V. Bartenev From piotr at cloudflare.com Tue Jun 17 22:38:09 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 17 Jun 2014 15:38:09 -0700 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: <20140617112517.GI1849@mdounin.ru> References: <20140617103558.GG1849@mdounin.ru> <20140617112517.GI1849@mdounin.ru> Message-ID: Hey Maxim, > Suggested patch for OpenSSL, in contrast, only required for build > of OpenSSL by nginx (which is exotic on OS X anyway), only needed > for old versions of OpenSSL, and will break building for i386 if > needed for whatever reason, as well as building of at least some > supported OpenSSL versions. So, in summary, on 64-bit OS X: - 64-bit build (default) is broken with all versions of OpenSSL, - 32-bit build (non-default) works fine with all versions of OpenSSL, - env KERNEL_BITS=64 fixes 64-bit build, but only with OpenSSL-1.0.1, - my patch fixes 64-bit build (default) with OpenSSL-0.9.8, OpenSSL-1.0.0 and OpenSSL-1.0.1 at the cost of breaking 32-bit build (non-default), which can be fixed with a simple test for "arch i386" in CFLAGS. Yet it's still a no-go? Best regards, Piotr Sikora From piotr at cloudflare.com Tue Jun 17 22:40:17 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 17 Jun 2014 15:40:17 -0700 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: <20140617112517.GI1849@mdounin.ru> References: <20140617112517.GI1849@mdounin.ru> Message-ID: <3e29d10e56a059b2a8fe.1403044817@Piotrs-MacBook-Pro.local> # HG changeset patch # User Piotr Sikora # Date 1403044587 25200 # Tue Jun 17 15:36:27 2014 -0700 # Node ID 3e29d10e56a059b2a8fe54bd8b913a7e399a5672 # Parent ec919574cc14f7781c0ca212cffec586f88eec40 Configure: fix build from sources for OpenSSL on OS X. 
Signed-off-by: Piotr Sikora diff -r ec919574cc14 -r 3e29d10e56a0 auto/lib/openssl/make --- a/auto/lib/openssl/make Tue Jun 17 16:51:25 2014 +0400 +++ b/auto/lib/openssl/make Tue Jun 17 15:36:27 2014 -0700 @@ -51,12 +51,20 @@ END *) ngx_prefix="$PWD/$OPENSSL/.openssl" ;; esac + if [ "$NGX_SYSTEM" = "Darwin" -a "$NGX_MACHINE" = "x86_64" \ + -a -z "`echo $CFLAGS | grep 'arch i386'`" ]; + then + OPENSSL_CONFIG="./Configure darwin64-x86_64-cc" + else + OPENSSL_CONFIG="./config" + fi + cat << END >> $NGX_MAKEFILE $OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE cd $OPENSSL \\ && if [ -f Makefile ]; then \$(MAKE) clean; fi \\ - && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ + && $OPENSSL_CONFIG --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\ && \$(MAKE) \\ && \$(MAKE) install LIBDIR=lib From yejingx at gmail.com Wed Jun 18 01:46:36 2014 From: yejingx at gmail.com (Jing Ye) Date: Wed, 18 Jun 2014 09:46:36 +0800 Subject: Only 64k sent when the first upstream failed in fastcgi_pass In-Reply-To: References: Message-ID: vbart, Thanks for the advice, but I?m afraid this is not the case. When i remove the 9902 upstream and curl again, it works properly and print 90833 at the end with only calling file_get_contents once. In addition, in the error.log with debug mode, i found the following lines, ????. 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream request: "/index.php?" 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream send request handler 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream send request 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:584 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:32768 * In the ngx_http_upstream_reinit function, cl->buf->file_pos are all reset to 0 for every buf in the output chain. But i think file_post should be reset to 0, 32768, 65536 instead. PS: the output chain is initiated here with buf->file_pos = 0, 32768, 65536... *https://github.com/nginx/nginx/blob/v1.5.12/src/http/modules/ngx_http_fastcgi_module.c#L1085 * Is this the reason that cause the problem? > Date: Tue, 17 Jun 2014 20:50:13 +0400 > From: "Valentin V. Bartenev" > To: nginx-devel at nginx.org > Subject: Re: Only 64k sent when the first upstream failed in > fastcgi_pass > Message-ID: <1532980.8VxNd81dtW at vbart-workstation> > Content-Type: text/plain; charset="utf-8" > On Tuesday 17 June 2014 23:50:19 Jing Ye wrote: > > Hello all, > > > > I have encountered the following possible bug in Nginx when using the > > fastcgi_pass directive to put a file larger than 64k. I?m not sure if > this > > is a bug or I missed something in the config file. > > > > My nginx version is 1.5.12, and the problem remains when i switched to > the > > newest 1.7.2. > > > > > > Here is my nginx.conf file, > > > > upstream *api.php.com * { > > server 127.0.0.1:9902 max_fails=3 fail_timeout=30s; > > server 127.0.0.1:9901 max_fails=3 fail_timeout=30s; > > } > > server { > > listen 8080; > > server_name localhost; > > location ~ \.php$ { > > root html; > > fastcgi_pass *api.php.com *; > > fastcgi_param SCRIPT_FILENAME > > /usr/local/nginx/html/$fastcgi_script_name; > > fastcgi_index index.php; > > include fastcgi_params; > > } > > > > > > > > And the index.php file, > > > > > /* simplify get the request body and print its length. */ > > $raw_post_data = file_get_contents('php://input', 'r'); > > print(strlen($raw_post_data)); > > sleep(3); /* waiting for the server to be reset after 1 > > second */ > > ?> > > > > > [..] 
> > > > The response status is 200 OK, but only 65536 bytes(64k) received. > > Is this a bug or have i made something wrong in the config file? > > I?m really confusing and hope if someone could help me figure it out. > > > In the FastCGI protocol the data is transferred using "records". The > maximum > size of one record is 64k. So you're probably getting only the first > record > by calling file_get_contents() once. > wbr, Valentin V. Bartenev > > > On Tue, Jun 17, 2014 at 11:50 PM, Jing Ye wrote: > Hello all, > I have encountered the following possible bug in Nginx when using the > fastcgi_pass directive to put a file larger than 64k. I?m not sure if this > is a bug or I missed something in the config file. > My nginx version is 1.5.12, and the problem remains when i switched to the > newest 1.7.2. > > Here is my nginx.conf file, > server 127.0.0.1:9902 max_fails=3 fail_timeout=30s; > server 127.0.0.1:9901 max_fails=3 fail_timeout=30s; > } > server { > listen 8080; > server_name localhost; > location ~ \.php$ { > root html; > fastcgi_pass *api.php.com *; > fastcgi_param SCRIPT_FILENAME > /usr/local/nginx/html/$fastcgi_script_name; > fastcgi_index index.php; > include fastcgi_params; > } > > And the index.php file, > /* simplify get the request body and print its length. */ > $raw_post_data = file_get_contents('php://input', 'r'); > print(strlen($raw_post_data)); > sleep(3); /* waiting for the server to be reset after 1 > second */ > ?> > We need two php-fpm servers listen on port 9901 and 9902, and importantly, > the server listen on 9902 should be somehow reset to make the upstream > module choose the next server listen on 9901. I made this by setting the > request_terminate_timeout argument to 1s. > > > Send a put request, test.png is a file sized 90833 bytes(>64K) > * About to connect() to 127.0.0.1 port 8080 (#0) > * Trying 127.0.0.1... > * Adding handle: conn: 0x7fda7b803a00 > * Adding handle: send: 0 > * Adding handle: recv: 0 > * Curl_addHandleToPipeline: length: 1 > * - Conn 0 (0x7fda7b803a00) send_pipe: 1, recv_pipe: 0 > * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0) > > PUT /index.php HTTP/1.1 > > User-Agent: curl/7.30.0 > > Host: 127.0.0.1:8080 > > Accept: */* > > Content-Length: 90833 > > Expect: 100-continue > > > < HTTP/1.1 100 Continue > * We are completely uploaded and fine > < HTTP/1.1 200 OK > * Server nginx/1.5.12 is not blacklisted > < Server: nginx/1.5.12 > < Date: Tue, 17 Jun 2014 13:55:30 GMT > < Content-Type: text/html > < Transfer-Encoding: chunked > < Connection: keep-alive > < X-Powered-By: PHP/5.4.24 > < > * Connection #0 to host 127.0.0.1 left intact > 65536% > The response status is 200 OK, but only 65536 bytes(64k) received. > Is this a bug or have i made something wrong in the config file? > I?m really confusing and hope if someone could help me figure it out. > > Many thanks! > upstream *api.php.com * { > > > $ curl -T test.png *http://127.0.0.1:8080/index.php > * -v > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From codeeply at gmail.com Wed Jun 18 09:00:14 2014 From: codeeply at gmail.com (Jianjun Zheng) Date: Wed, 18 Jun 2014 17:00:14 +0800 Subject: [PATCH] Core: bugfix for the ngx_slab_max_size case Message-ID: At present, alloting memory with size of ngx_slab_max_size causes 1) an internal fragmentation, size of ngx_slab_max_size, comes into being 2) the slot with index of (ngx_pagesize_shift - pool->min_shift - 1) is the right slot for this size. 
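To put numbers on it, assuming the usual 4 KiB page size: ngx_pagesize_shift is 12, pool->min_shift is 3, and ngx_slab_max_size is ngx_pagesize / 2 = 2048. A request for exactly 2048 bytes currently takes the "size >= ngx_slab_max_size" branch and occupies a whole 4096-byte page, wasting the other 2048 bytes, although slot 12 - 3 - 1 = 8 is exactly the 2048-byte slot and could pack two such allocations into one page.
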
# HG changeset patch # User Jianjun Zheng # Date 1403080799 -28800 # Wed Jun 18 16:39:59 2014 +0800 # Node ID 1704335dd810e2e2abb2b393b4f7b7c9004c6012 # Parent ec919574cc14f7781c0ca212cffec586f88eec40 Core: bugfix for the ngx_slab_max_size case diff -r ec919574cc14 -r 1704335dd810 src/core/ngx_slab.c --- a/src/core/ngx_slab.c Tue Jun 17 16:51:25 2014 +0400 +++ b/src/core/ngx_slab.c Wed Jun 18 16:39:59 2014 +0800 @@ -160,7 +160,7 @@ ngx_uint_t i, slot, shift, map; ngx_slab_page_t *page, *prev, *slots; - if (size >= ngx_slab_max_size) { + if (size > ngx_slab_max_size) { ngx_log_debug1(NGX_LOG_DEBUG_ALLOC, ngx_cycle->log, 0, "slab alloc: %uz", size); -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Wed Jun 18 09:39:58 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 18 Jun 2014 09:39:58 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/eadf46f888e9 branches: changeset: 5725:eadf46f888e9 user: Ruslan Ermilov date: Wed Jun 18 13:39:20 2014 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r ec919574cc14 -r eadf46f888e9 src/core/nginx.h --- a/src/core/nginx.h Tue Jun 17 16:51:25 2014 +0400 +++ b/src/core/nginx.h Wed Jun 18 13:39:20 2014 +0400 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1007002 -#define NGINX_VERSION "1.7.2" +#define nginx_version 1007003 +#define NGINX_VERSION "1.7.3" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From ru at nginx.com Wed Jun 18 09:40:02 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 18 Jun 2014 09:40:02 +0000 Subject: [nginx] Core: added ngx_slab_calloc() and ngx_slab_calloc_locked(). Message-ID: details: http://hg.nginx.org/nginx/rev/25ade23cf281 branches: changeset: 5726:25ade23cf281 user: Ruslan Ermilov date: Wed Jun 04 15:09:19 2014 +0400 description: Core: added ngx_slab_calloc() and ngx_slab_calloc_locked(). These functions return zeroed memory, analogous to ngx_pcalloc(). 
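As a usage illustration only (not part of the commit; the shared pool pointer and node type below are hypothetical), callers that previously paired ngx_slab_alloc_locked() with ngx_memzero() can now write:

    node = ngx_slab_calloc_locked(shpool, sizeof(ngx_example_node_t));
    if (node == NULL) {
        return NGX_ERROR;
    }

    /* memory is already zeroed; only non-zero fields need to be set */
    node->uses = 1;

ngx_slab_calloc() does the same but takes the pool mutex itself, while the _locked() variant expects the caller to already hold it.
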
diffstat: src/core/ngx_slab.c | 29 +++++++++++++++++++++++++++++ src/core/ngx_slab.h | 2 ++ src/http/ngx_http_file_cache.c | 22 ++++++---------------- 3 files changed, 37 insertions(+), 16 deletions(-) diffs (112 lines): diff -r eadf46f888e9 -r 25ade23cf281 src/core/ngx_slab.c --- a/src/core/ngx_slab.c Wed Jun 18 13:39:20 2014 +0400 +++ b/src/core/ngx_slab.c Wed Jun 04 15:09:19 2014 +0400 @@ -398,6 +398,35 @@ done: } +void * +ngx_slab_calloc(ngx_slab_pool_t *pool, size_t size) +{ + void *p; + + ngx_shmtx_lock(&pool->mutex); + + p = ngx_slab_calloc_locked(pool, size); + + ngx_shmtx_unlock(&pool->mutex); + + return p; +} + + +void * +ngx_slab_calloc_locked(ngx_slab_pool_t *pool, size_t size) +{ + void *p; + + p = ngx_slab_alloc_locked(pool, size); + if (p) { + ngx_memzero(p, size); + } + + return p; +} + + void ngx_slab_free(ngx_slab_pool_t *pool, void *p) { diff -r eadf46f888e9 -r 25ade23cf281 src/core/ngx_slab.h --- a/src/core/ngx_slab.h Wed Jun 18 13:39:20 2014 +0400 +++ b/src/core/ngx_slab.h Wed Jun 04 15:09:19 2014 +0400 @@ -50,6 +50,8 @@ typedef struct { void ngx_slab_init(ngx_slab_pool_t *pool); void *ngx_slab_alloc(ngx_slab_pool_t *pool, size_t size); void *ngx_slab_alloc_locked(ngx_slab_pool_t *pool, size_t size); +void *ngx_slab_calloc(ngx_slab_pool_t *pool, size_t size); +void *ngx_slab_calloc_locked(ngx_slab_pool_t *pool, size_t size); void ngx_slab_free(ngx_slab_pool_t *pool, void *p); void ngx_slab_free_locked(ngx_slab_pool_t *pool, void *p); diff -r eadf46f888e9 -r 25ade23cf281 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Wed Jun 18 13:39:20 2014 +0400 +++ b/src/http/ngx_http_file_cache.c Wed Jun 04 15:09:19 2014 +0400 @@ -678,8 +678,8 @@ ngx_http_file_cache_exists(ngx_http_file goto done; } - fcn = ngx_slab_alloc_locked(cache->shpool, - sizeof(ngx_http_file_cache_node_t)); + fcn = ngx_slab_calloc_locked(cache->shpool, + sizeof(ngx_http_file_cache_node_t)); if (fcn == NULL) { ngx_shmtx_unlock(&cache->shpool->mutex); @@ -687,8 +687,8 @@ ngx_http_file_cache_exists(ngx_http_file ngx_shmtx_lock(&cache->shpool->mutex); - fcn = ngx_slab_alloc_locked(cache->shpool, - sizeof(ngx_http_file_cache_node_t)); + fcn = ngx_slab_calloc_locked(cache->shpool, + sizeof(ngx_http_file_cache_node_t)); if (fcn == NULL) { rc = NGX_ERROR; goto failed; @@ -704,8 +704,6 @@ ngx_http_file_cache_exists(ngx_http_file fcn->uses = 1; fcn->count = 1; - fcn->updating = 0; - fcn->deleting = 0; renew: @@ -1618,8 +1616,8 @@ ngx_http_file_cache_add(ngx_http_file_ca if (fcn == NULL) { - fcn = ngx_slab_alloc_locked(cache->shpool, - sizeof(ngx_http_file_cache_node_t)); + fcn = ngx_slab_calloc_locked(cache->shpool, + sizeof(ngx_http_file_cache_node_t)); if (fcn == NULL) { ngx_shmtx_unlock(&cache->shpool->mutex); return NGX_ERROR; @@ -1633,15 +1631,7 @@ ngx_http_file_cache_add(ngx_http_file_ca ngx_rbtree_insert(&cache->sh->rbtree, &fcn->node); fcn->uses = 1; - fcn->count = 0; - fcn->valid_msec = 0; - fcn->error = 0; fcn->exists = 1; - fcn->updating = 0; - fcn->deleting = 0; - fcn->uniq = 0; - fcn->valid_sec = 0; - fcn->body_start = 0; fcn->fs_size = c->fs_size; cache->sh->size += c->fs_size; From vbart at nginx.com Wed Jun 18 14:27:50 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Jun 2014 18:27:50 +0400 Subject: Only 64k sent when the first upstream failed in fastcgi_pass In-Reply-To: References: Message-ID: <2837485.mY35VVdHYd@vbart-workstation> On Wednesday 18 June 2014 09:46:36 Jing Ye wrote: > vbart, > > Thanks for the advice, but I?m afraid this is not the case. 
When i remove > the 9902 upstream and curl again, it works properly and print 90833 at the > end with only calling file_get_contents once. > > In addition, in the error.log with debug mode, i found the following lines, > > ????. > > 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream request: "/index.php?" > 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream send request handler > 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream send request > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:584 > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:32768 > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:32768 > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:25297 > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:15 > > ????. > > 2014/06/17 21:57:49 [error] 61130#0: *9 upstream prematurely closed > connection while reading response header from upstream, client: 127.0.0.1, > server: localhost, request: "PUT /index.php HTTP/1.1", upstream: "fastcgi:// > 127.0.0.1:9902", host: "127.0.0.1:8080? > ????.. > > 2014/06/17 21:57:49 [debug] 61130#0: *9 http next upstream, 2 > 2014/06/17 21:57:49 [debug] 61130#0: *9 http upstream request: "/index.php?" > 2014/06/17 21:57:49 [debug] 61130#0: *9 http upstream send request handler > 2014/06/17 21:57:49 [debug] 61130#0: *9 http upstream send request > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:584 > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:32768 > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:65536 > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:90833 > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:15 > ????.. > > Before 9902 failed, the request body was sent correctly in bufs sized > 32768, 32768 and 25297(totally 90833), but after 9902 failed, the upstream > module retried sending body with size 32768, 65536 and 90833. Maybe > something was wrong here. > > I guess after receiving the former 32768 bytes of the second buf(65536), > php-fpm tries to locate a new fastcgi header but failed, for the data right > behind is the body data of the image but a pre-constructed fastcgi header. > So, it mislead php-fpm to think of receiving the data end. > > Refer to the source code, i also found why this happen. > > *https://github.com/nginx/nginx/blob/v1.5.12/src/http/ngx_http_upstream.c#L1441 > * > > In the ngx_http_upstream_reinit function, cl->buf->file_pos are all reset > to 0 for every buf in the output chain. But i think file_post should be > reset to 0, 32768, 65536 instead. > > PS: the output chain is initiated here with buf->file_pos = 0, 32768, > 65536... > *https://github.com/nginx/nginx/blob/v1.5.12/src/http/modules/ngx_http_fastcgi_module.c#L1085 > * > > Is this the reason that cause the problem? [..] Yes, you're right. Thank you for the report. 
Please, try this patch: diff -r 25ade23cf281 src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Wed Jun 04 15:09:19 2014 +0400 +++ b/src/http/modules/ngx_http_fastcgi_module.c Wed Jun 18 18:25:20 2014 +0400 @@ -1126,6 +1126,13 @@ ngx_http_fastcgi_create_request(ngx_http len = (ngx_uint_t) (pos - b->pos); } + b->shadow = ngx_alloc_buf(r->pool); + if (b->shadow == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(b->shadow, b, sizeof(ngx_buf_t)); + padding = 8 - len % 8; padding = (padding == 8) ? 0 : padding; diff -r 25ade23cf281 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Jun 04 15:09:19 2014 +0400 +++ b/src/http/ngx_http_upstream.c Wed Jun 18 18:25:20 2014 +0400 @@ -1568,8 +1568,14 @@ ngx_http_upstream_reinit(ngx_http_reques /* reinit the request chain */ for (cl = u->request_bufs; cl; cl = cl->next) { - cl->buf->pos = cl->buf->start; - cl->buf->file_pos = 0; + + if (cl->buf->shadow) { + ngx_memcpy(cl->buf, cl->buf->shadow, sizeof(ngx_buf_t)); + + } else { + cl->buf->pos = cl->buf->start; + cl->buf->file_pos = 0; + } } /* reinit the subrequest's ngx_output_chain() context */ From xie.kenneth at gmail.com Wed Jun 18 16:28:49 2014 From: xie.kenneth at gmail.com (Ken) Date: Thu, 19 Jun 2014 00:28:49 +0800 Subject: why nginx clear environment when init process? Message-ID: in static void ngx_procs_process_init(ngx_cycle_t *cycle, ngx_proc_module_t *module, ngx_int_t priority) { ..... if (*ngx_set_environment(cycle, NULL)* == NULL) { /* fatal */ exit(2); } ..... why doing this? it make impossible to getenv in worker process. Best wishes, Kenneth Tse Email: xie.kenneth at gmail.com Twitter: kenneth_tse -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jun 18 19:02:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Jun 2014 23:02:46 +0400 Subject: why nginx clear environment when init process? In-Reply-To: References: Message-ID: <20140618190246.GW1849@mdounin.ru> Hello! On Thu, Jun 19, 2014 at 12:28:49AM +0800, Ken wrote: > in > > static void > ngx_procs_process_init(ngx_cycle_t *cycle, ngx_proc_module_t *module, > ngx_int_t priority) > { > ..... > > if (*ngx_set_environment(cycle, NULL)* == NULL) { > /* fatal */ > exit(2); > } > > ..... > > why doing this? it make impossible to getenv in worker process. http://nginx.org/r/env -- Maxim Dounin http://nginx.org/ From yejingx at gmail.com Thu Jun 19 01:59:07 2014 From: yejingx at gmail.com (Jing Ye) Date: Thu, 19 Jun 2014 09:59:07 +0800 Subject: Only 64k sent when the first upstream failed in fastcgi_pass In-Reply-To: <2837485.mY35VVdHYd@vbart-workstation> References: <2837485.mY35VVdHYd@vbart-workstation> Message-ID: Yes, It works. Thank you for the patch. On Wed, Jun 18, 2014 at 10:27 PM, Valentin V. Bartenev wrote: > On Wednesday 18 June 2014 09:46:36 Jing Ye wrote: > > vbart, > > > > Thanks for the advice, but I?m afraid this is not the case. When i remove > > the 9902 upstream and curl again, it works properly and print 90833 at > the > > end with only calling file_get_contents once. > > > > In addition, in the error.log with debug mode, i found the following > lines, > > > > ????. > > > > 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream request: > "/index.php?" 
> > 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream send request > handler > > 2014/06/17 21:57:48 [debug] 61130#0: *9 http upstream send request > > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:584 > > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:32768 > > > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:32768 > > > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:25297 > > > 2014/06/17 21:57:48 [debug] 61130#0: *9 chain writer buf fl:0 s:15 > > > > ????. > > > > 2014/06/17 21:57:49 [error] 61130#0: *9 upstream prematurely closed > > connection while reading response header from upstream, client: > 127.0.0.1, > > server: localhost, request: "PUT /index.php HTTP/1.1", upstream: > "fastcgi:// > > 127.0.0.1:9902", host: "127.0.0.1:8080? > > ????.. > > > > 2014/06/17 21:57:49 [debug] 61130#0: *9 http next upstream, 2 > > 2014/06/17 21:57:49 [debug] 61130#0: *9 http upstream request: > "/index.php?" > > 2014/06/17 21:57:49 [debug] 61130#0: *9 http upstream send request > handler > > 2014/06/17 21:57:49 [debug] 61130#0: *9 http upstream send request > > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:584 > > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:32768 > > > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:65536 > > > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:8 > > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:90833 > > > 2014/06/17 21:57:49 [debug] 61130#0: *9 chain writer buf fl:0 s:15 > > ????.. > > > > Before 9902 failed, the request body was sent correctly in bufs sized > > 32768, 32768 and 25297(totally 90833), but after 9902 failed, the > upstream > > module retried sending body with size 32768, 65536 and 90833. Maybe > > something was wrong here. > > > > I guess after receiving the former 32768 bytes of the second buf(65536), > > php-fpm tries to locate a new fastcgi header but failed, for the data > right > > behind is the body data of the image but a pre-constructed fastcgi > header. > > So, it mislead php-fpm to think of receiving the data end. > > > > Refer to the source code, i also found why this happen. > > > > * > https://github.com/nginx/nginx/blob/v1.5.12/src/http/ngx_http_upstream.c#L1441 > > < > https://github.com/nginx/nginx/blob/v1.5.12/src/http/ngx_http_upstream.c#L1441 > >* > > > > In the ngx_http_upstream_reinit function, cl->buf->file_pos are all reset > > to 0 for every buf in the output chain. But i think file_post should be > > reset to 0, 32768, 65536 instead. > > > > PS: the output chain is initiated here with buf->file_pos = 0, 32768, > > 65536... > > * > https://github.com/nginx/nginx/blob/v1.5.12/src/http/modules/ngx_http_fastcgi_module.c#L1085 > > < > https://github.com/nginx/nginx/blob/v1.5.12/src/http/modules/ngx_http_fastcgi_module.c#L1085 > >* > > > > Is this the reason that cause the problem? > [..] > > Yes, you're right. Thank you for the report. 
> > Please, try this patch: > > diff -r 25ade23cf281 src/http/modules/ngx_http_fastcgi_module.c > --- a/src/http/modules/ngx_http_fastcgi_module.c Wed Jun 04 > 15:09:19 2014 +0400 > +++ b/src/http/modules/ngx_http_fastcgi_module.c Wed Jun 18 > 18:25:20 2014 +0400 > @@ -1126,6 +1126,13 @@ ngx_http_fastcgi_create_request(ngx_http > len = (ngx_uint_t) (pos - b->pos); > } > > + b->shadow = ngx_alloc_buf(r->pool); > + if (b->shadow == NULL) { > + return NGX_ERROR; > + } > + > + ngx_memcpy(b->shadow, b, sizeof(ngx_buf_t)); > + > padding = 8 - len % 8; > padding = (padding == 8) ? 0 : padding; > > diff -r 25ade23cf281 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Wed Jun 04 15:09:19 2014 +0400 > +++ b/src/http/ngx_http_upstream.c Wed Jun 18 18:25:20 2014 +0400 > @@ -1568,8 +1568,14 @@ ngx_http_upstream_reinit(ngx_http_reques > /* reinit the request chain */ > > for (cl = u->request_bufs; cl; cl = cl->next) { > - cl->buf->pos = cl->buf->start; > - cl->buf->file_pos = 0; > + > + if (cl->buf->shadow) { > + ngx_memcpy(cl->buf, cl->buf->shadow, sizeof(ngx_buf_t)); > + > + } else { > + cl->buf->pos = cl->buf->start; > + cl->buf->file_pos = 0; > + } > } > > /* reinit the subrequest's ngx_output_chain() context */ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From codeeply at gmail.com Thu Jun 19 07:00:29 2014 From: codeeply at gmail.com (Jianjun Zheng) Date: Thu, 19 Jun 2014 15:00:29 +0800 Subject: [PATCH] Core: bugfix for the ngx_slab_max_size case In-Reply-To: References: Message-ID: add some words... 2) the slot with index of (ngx_pagesize_shift - pool->min_shift - 1) is the right slot for this size, and is never used. Suppose the page_size is 4K: There is 9 slots in the pool, the 9th slot (slots[8]) holds the size of ngx_slab_max_size. 2014-06-18 17:00 GMT+08:00 Jianjun Zheng : > At present, alloting memory with size of ngx_slab_max_size causes > > 1) an internal fragmentation, size of ngx_slab_max_size, comes into being > > 2) the slot with index of (ngx_pagesize_shift - pool->min_shift - 1) > is the right slot for this size. > > > # HG changeset patch > # User Jianjun Zheng > # Date 1403080799 -28800 > # Wed Jun 18 16:39:59 2014 +0800 > # Node ID 1704335dd810e2e2abb2b393b4f7b7c9004c6012 > # Parent ec919574cc14f7781c0ca212cffec586f88eec40 > Core: bugfix for the ngx_slab_max_size case > > diff -r ec919574cc14 -r 1704335dd810 src/core/ngx_slab.c > --- a/src/core/ngx_slab.c Tue Jun 17 16:51:25 2014 +0400 > +++ b/src/core/ngx_slab.c Wed Jun 18 16:39:59 2014 +0800 > @@ -160,7 +160,7 @@ > ngx_uint_t i, slot, shift, map; > ngx_slab_page_t *page, *prev, *slots; > > - if (size >= ngx_slab_max_size) { > + if (size > ngx_slab_max_size) { > > ngx_log_debug1(NGX_LOG_DEBUG_ALLOC, ngx_cycle->log, 0, > "slab alloc: %uz", size); > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Thu Jun 19 10:01:10 2014 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 19 Jun 2014 10:01:10 +0000 Subject: [nginx] FreeBSD has migrated to Bugzilla. Message-ID: details: http://hg.nginx.org/nginx/rev/675bda8dcfdb branches: changeset: 5727:675bda8dcfdb user: Sergey Kandaurov date: Thu Jun 19 13:55:59 2014 +0400 description: FreeBSD has migrated to Bugzilla. 
diffstat: src/os/unix/ngx_darwin_sendfile_chain.c | 2 +- src/os/unix/ngx_freebsd_sendfile_chain.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diffs (24 lines): diff -r 25ade23cf281 -r 675bda8dcfdb src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Wed Jun 04 15:09:19 2014 +0400 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Thu Jun 19 13:55:59 2014 +0400 @@ -13,7 +13,7 @@ /* * It seems that Darwin 9.4 (Mac OS X 1.5) sendfile() has the same * old bug as early FreeBSD sendfile() syscall: - * http://www.freebsd.org/cgi/query-pr.cgi?pr=33771 + * http://bugs.freebsd.org/33771 * * Besides sendfile() has another bug: if one calls sendfile() * with both a header and a trailer, then sendfile() ignores a file part diff -r 25ade23cf281 -r 675bda8dcfdb src/os/unix/ngx_freebsd_sendfile_chain.c --- a/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Jun 04 15:09:19 2014 +0400 +++ b/src/os/unix/ngx_freebsd_sendfile_chain.c Thu Jun 19 13:55:59 2014 +0400 @@ -265,7 +265,7 @@ ngx_freebsd_sendfile_chain(ngx_connectio /* * the "nbytes bug" of the old sendfile() syscall: - * http://www.freebsd.org/cgi/query-pr.cgi?pr=33771 + * http://bugs.freebsd.org/33771 */ if (!ngx_freebsd_sendfile_nbytes_bug) { From piotr at cloudflare.com Thu Jun 19 11:17:23 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Thu, 19 Jun 2014 04:17:23 -0700 Subject: [PATCH] Perl: NULL-terminate argument list Message-ID: <290f3fcb9cf552c235b9.1403176643@Piotrs-MacBook-Pro.local> # HG changeset patch # User Piotr Sikora # Date 1403176596 25200 # Thu Jun 19 04:16:36 2014 -0700 # Node ID 290f3fcb9cf552c235b9807cf0af3830b5add5af # Parent 675bda8dcfdbf66e4a17017839f39ed6c8cbb9f5 Perl: NULL-terminate argument list. perl_parse() function expects argv/argc-style argument list, which according to the C standard must be NULL-terminated, that is: argv[argc] == NULL. This change fixes a crash (SIGSEGV) that could happen because of the buffer overrun during perl module initialization. Signed-off-by: Piotr Sikora diff -r 675bda8dcfdb -r 290f3fcb9cf5 src/http/modules/perl/ngx_http_perl_module.c --- a/src/http/modules/perl/ngx_http_perl_module.c Thu Jun 19 13:55:59 2014 +0400 +++ b/src/http/modules/perl/ngx_http_perl_module.c Thu Jun 19 04:16:36 2014 -0700 @@ -577,7 +577,7 @@ ngx_http_perl_create_interpreter(ngx_con n = (pmcf->modules != NGX_CONF_UNSET_PTR) ? pmcf->modules->nelts * 2 : 0; - embedding = ngx_palloc(cf->pool, (4 + n) * sizeof(char *)); + embedding = ngx_palloc(cf->pool, (5 + n) * sizeof(char *)); if (embedding == NULL) { goto fail; } @@ -595,6 +595,7 @@ ngx_http_perl_create_interpreter(ngx_con embedding[n++] = "-Mnginx"; embedding[n++] = "-e"; embedding[n++] = "0"; + embedding[n] = NULL; n = perl_parse(perl, ngx_http_perl_xs_init, n, embedding, NULL); From mdounin at mdounin.ru Thu Jun 19 11:55:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jun 2014 15:55:11 +0400 Subject: [PATCH] Configure: fix build from sources for OpenSSL on OS X In-Reply-To: References: <20140617103558.GG1849@mdounin.ru> <20140617112517.GI1849@mdounin.ru> Message-ID: <20140619115511.GZ1849@mdounin.ru> Hello! On Tue, Jun 17, 2014 at 03:38:09PM -0700, Piotr Sikora wrote: > Hey Maxim, > > > Suggested patch for OpenSSL, in contrast, only required for build > > of OpenSSL by nginx (which is exotic on OS X anyway), only needed > > for old versions of OpenSSL, and will break building for i386 if > > needed for whatever reason, as well as building of at least some > > supported OpenSSL versions. 
> > So, in summary, on 64-bit OS X: > - 64-bit build (default) is broken with all versions of OpenSSL, > - 32-bit build (non-default) works fine with all versions of OpenSSL, > - env KERNEL_BITS=64 fixes 64-bit build, but only with OpenSSL-1.0.1, > - my patch fixes 64-bit build (default) with OpenSSL-0.9.8, > OpenSSL-1.0.0 and OpenSSL-1.0.1 at the cost of breaking 32-bit build > (non-default), which can be fixed with a simple test for "arch i386" > in CFLAGS. > > Yet it's still a no-go? As I already wrote: - There is no problem unless you are trying to ask nginx to compile OpenSSL for you. That is, all discussed cases are non-default. - The problem is clearly on the OpenSSL side. - An obvious workaround for all versions of OpenSSL is to compile OpenSSL yourself; workaround for recent versions of OpenSSL is to define KERNEL_BITS. - Suggested change breaks at least some builds (and/or requires additional checks to avoid breaking them), and doesn't look future-proof at all. So the answer is still no. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jun 19 13:26:41 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jun 2014 17:26:41 +0400 Subject: [PATCH] Perl: NULL-terminate argument list In-Reply-To: <290f3fcb9cf552c235b9.1403176643@Piotrs-MacBook-Pro.local> References: <290f3fcb9cf552c235b9.1403176643@Piotrs-MacBook-Pro.local> Message-ID: <20140619132641.GA1849@mdounin.ru> Hello! On Thu, Jun 19, 2014 at 04:17:23AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1403176596 25200 > # Thu Jun 19 04:16:36 2014 -0700 > # Node ID 290f3fcb9cf552c235b9807cf0af3830b5add5af > # Parent 675bda8dcfdbf66e4a17017839f39ed6c8cbb9f5 > Perl: NULL-terminate argument list. > > perl_parse() function expects argv/argc-style argument list, > which according to the C standard must be NULL-terminated, > that is: argv[argc] == NULL. > > This change fixes a crash (SIGSEGV) that could happen because > of the buffer overrun during perl module initialization. The perlembed manpage is full of examples without terminating NULL, and it's the only documentation available for the perl_parse() function, AFAIK. Could you please elaborate a bit more on the problem the patch tries to fix? > > Signed-off-by: Piotr Sikora > > diff -r 675bda8dcfdb -r 290f3fcb9cf5 src/http/modules/perl/ngx_http_perl_module.c > --- a/src/http/modules/perl/ngx_http_perl_module.c Thu Jun 19 13:55:59 2014 +0400 > +++ b/src/http/modules/perl/ngx_http_perl_module.c Thu Jun 19 04:16:36 2014 -0700 > @@ -577,7 +577,7 @@ ngx_http_perl_create_interpreter(ngx_con > > n = (pmcf->modules != NGX_CONF_UNSET_PTR) ? pmcf->modules->nelts * 2 : 0; > > - embedding = ngx_palloc(cf->pool, (4 + n) * sizeof(char *)); > + embedding = ngx_palloc(cf->pool, (5 + n) * sizeof(char *)); > if (embedding == NULL) { > goto fail; > } > @@ -595,6 +595,7 @@ ngx_http_perl_create_interpreter(ngx_con > embedding[n++] = "-Mnginx"; > embedding[n++] = "-e"; > embedding[n++] = "0"; > + embedding[n] = NULL; > > n = perl_parse(perl, ngx_http_perl_xs_init, n, embedding, NULL); The patch itself looks fine. 
-- Maxim Dounin http://nginx.org/ From fdasilvayy at gmail.com Thu Jun 19 20:10:30 2014 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Thu, 19 Jun 2014 22:10:30 +0200 Subject: [PATCH] Mail: send starttls flag value to auth script In-Reply-To: <877FD2F6-57CD-4C14-9F2B-4C9E909C3488@phpgangsta.de> References: <51fd90f96449c23af007.1394099969@HPC> <20140306162718.GL34696@mdounin.ru> <877FD2F6-57CD-4C14-9F2B-4C9E909C3488@phpgangsta.de> Message-ID: Hi, I forget to post the reworked version. Here is it. Regards, Filipe DA SILVA # HG changeset patch # Parent b2b5b1b741290adf60220f44f6e37cd8bd9d3885 Mail: send a secure connection flag to auth script. Allow to do logging (if logging takes place in the auth script) and or force some users to use STARTTLS while others can use unencrypted connection. diff -r b2b5b1b74129 src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Fri Mar 07 15:17:38 2014 +0400 +++ b/src/mail/ngx_mail_auth_http_module.c Wed Mar 12 15:49:21 2014 +0100 @@ -1165,6 +1165,9 @@ ngx_mail_auth_http_create_request(ngx_ma + sizeof("Auth-Salt: ") - 1 + s->salt.len + sizeof("Auth-Protocol: ") - 1 + cscf->protocol->name.len + sizeof(CRLF) - 1 +#if (NGX_MAIL_SSL) + + sizeof("Auth-Secured: ") - 1 + 1 + sizeof(CRLF) - 1 +#endif + sizeof("Auth-Login-Attempt: ") - 1 + NGX_INT_T_LEN + sizeof(CRLF) - 1 + sizeof("Client-IP: ") - 1 + s->connection->addr_text.len @@ -1219,6 +1222,13 @@ ngx_mail_auth_http_create_request(ngx_ma cscf->protocol->name.len); *b->last++ = CR; *b->last++ = LF; +#if (NGX_MAIL_SSL) + b->last = ngx_cpymem(b->last, "Auth-Secured: ", + sizeof("Auth-Secured: ") - 1); + *b->last++ = s->connection->ssl ? '1' : '0' ; + *b->last++ = CR; *b->last++ = LF; +#endif + b->last = ngx_sprintf(b->last, "Auth-Login-Attempt: %ui" CRLF, s->login_attempt); 2014-03-06 18:03 GMT+01:00 Michael Kliewe : > Hi Maxim, > > On Mar 6, 2014, at 5:27 PM, Maxim Dounin wrote: > >> Hello! >> >> On Thu, Mar 06, 2014 at 10:59:29AM +0100, Filipe da Silva wrote: >> >>> # HG changeset patch >>> # User Filipe da Silva >>> # Date 1394099468 -3600 >>> # Thu Mar 06 10:51:08 2014 +0100 >>> # Node ID 51fd90f96449c23af0076a19efbfdb1f88702125 >>> # Parent 24df9fa5868957c1fb9a2d1569271e0958327dad >>> Mail: send starttls flag value to auth script. >>> >>> Allow to do logging (if logging takes place in the auth script) and or force >>> some users to use STARTTLS while others can use unencrypted connection. >>> >> >> I don't think that it's a good idea to pass STARTTLS into auth >> script. If at all needed, it should be something like a flag "if >> SSL is used", not an explicit STARTTLS status. From auth script >> point of view there is no difference if a connection uses SSL on a >> dedicated port or encryption was negotiated using STARTLS. > > yes, it is needed ;-) > > You are right, that would also be possible, the auth script then can check which port has been used, and then has the information if it has been STARTTLS or SSL. In our case we want to distinguish between STARTTLS and SSL in the auth script. > > Both solutions are fine I think, so let's take Maxims ;-) (Sorry Filipe for the extra work) > > Hope this easy patch gets into nginx then, we need it ;-) > > Thanks! 
> Michael > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- # HG changeset patch # Parent b2b5b1b741290adf60220f44f6e37cd8bd9d3885 Mail: send a secure connection flag to auth script. Allow to do logging (if logging takes place in the auth script) and or force some users to use STARTTLS while others can use unencrypted connection. diff -r b2b5b1b74129 src/mail/ngx_mail_auth_http_module.c --- a/src/mail/ngx_mail_auth_http_module.c Fri Mar 07 15:17:38 2014 +0400 +++ b/src/mail/ngx_mail_auth_http_module.c Wed Mar 12 15:49:21 2014 +0100 @@ -1165,6 +1165,9 @@ ngx_mail_auth_http_create_request(ngx_ma + sizeof("Auth-Salt: ") - 1 + s->salt.len + sizeof("Auth-Protocol: ") - 1 + cscf->protocol->name.len + sizeof(CRLF) - 1 +#if (NGX_MAIL_SSL) + + sizeof("Auth-Secured: ") - 1 + 1 + sizeof(CRLF) - 1 +#endif + sizeof("Auth-Login-Attempt: ") - 1 + NGX_INT_T_LEN + sizeof(CRLF) - 1 + sizeof("Client-IP: ") - 1 + s->connection->addr_text.len @@ -1219,6 +1222,13 @@ ngx_mail_auth_http_create_request(ngx_ma cscf->protocol->name.len); *b->last++ = CR; *b->last++ = LF; +#if (NGX_MAIL_SSL) + b->last = ngx_cpymem(b->last, "Auth-Secured: ", + sizeof("Auth-Secured: ") - 1); + *b->last++ = s->connection->ssl ? '1' : '0' ; + *b->last++ = CR; *b->last++ = LF; +#endif + b->last = ngx_sprintf(b->last, "Auth-Login-Attempt: %ui" CRLF, s->login_attempt); From grrm77 at gmail.com Thu Jun 19 21:56:33 2014 From: grrm77 at gmail.com (grrm grrm) Date: Fri, 20 Jun 2014 00:56:33 +0300 Subject: Patch: Refactor ngx_http_write_request_body into a filter Message-ID: Hello, This patch removes some redundant ngx_http_request_body_filter calls, simplifies the ngx_http_do_read_client_request_body and ngx_http_read_client_request_body functions and removes some duplication of code. body.t and body_chunked.t test in nginx-tests are passing. Please kindly consider it. Thank you. -------------- next part -------------- A non-text attachment was scrubbed... Name: request_body.patch Type: application/octet-stream Size: 9905 bytes Desc: not available URL: From piotr at cloudflare.com Fri Jun 20 09:46:59 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 20 Jun 2014 02:46:59 -0700 Subject: [PATCH] Perl: NULL-terminate argument list In-Reply-To: <20140619132641.GA1849@mdounin.ru> References: <290f3fcb9cf552c235b9.1403176643@Piotrs-MacBook-Pro.local> <20140619132641.GA1849@mdounin.ru> Message-ID: Hey Maxim, > The perlembed manpage is full of examples without terminating > NULL, and it's the only documentation available for the > perl_parse() function, AFAIK. > > Could you please elaborate a bit more on the problem the patch > tries to fix? The problem is that perl_parse() tries to read value at argv[argc]. I don't think it uses it, so it doesn't really have to be NULL, but the memory must be allocated, otherwise it's reading past the allocation. I've started digging into perl's code yesterday, but it's an unreadable mess of macros, so I eventually gave up and didn't find confirmation for it in the code. This issue is quite hard to hit under normal circumstances, because nginx uses memory pools, so the 1 byte buffer overrun can only happen when argv[argc] == pool->d.last == pool->d.end. Furthermore, you need to use malloc that puts guard page right after each allocation, otherwise you won't be able to detect it. 
Regarding the perlembed man page, it's indeed lacking any details and the examples suggest that the argument list doesn't have to be NULL-terminated, however the consistent crashes I was seeing for a few configurations (like the one generated by empty_gif.t with an extra perl_modules directive passed in via globals) that were fixed with my patch suggest otherwise. Hope that explains it enough. Best regards, Piotr Sikora From ru at nginx.com Fri Jun 20 12:27:17 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 20 Jun 2014 12:27:17 +0000 Subject: [nginx] Upstream: reduced diffs to the plus version of nginx. Message-ID: details: http://hg.nginx.org/nginx/rev/63d7d69d0fe4 branches: changeset: 5728:63d7d69d0fe4 user: Ruslan Ermilov date: Fri Jun 20 12:55:41 2014 +0400 description: Upstream: reduced diffs to the plus version of nginx. No functional changes. diffstat: src/http/ngx_http_upstream.c | 174 +++++++++++++++++++++--------------------- 1 files changed, 86 insertions(+), 88 deletions(-) diffs (212 lines): diff -r 675bda8dcfdb -r 63d7d69d0fe4 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Thu Jun 19 13:55:59 2014 +0400 +++ b/src/http/ngx_http_upstream.c Fri Jun 20 12:55:41 2014 +0400 @@ -4851,6 +4851,12 @@ ngx_http_upstream(ngx_conf_t *cf, ngx_co } } + uscf->servers = ngx_array_create(cf->pool, 4, + sizeof(ngx_http_upstream_server_t)); + if (uscf->servers == NULL) { + return NGX_CONF_ERROR; + } + /* parse inside upstream{} */ @@ -4866,7 +4872,7 @@ ngx_http_upstream(ngx_conf_t *cf, ngx_co return rv; } - if (uscf->servers == NULL) { + if (uscf->servers->nelts == 0) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no servers are inside upstream"); return NGX_CONF_ERROR; @@ -4888,14 +4894,6 @@ ngx_http_upstream_server(ngx_conf_t *cf, ngx_uint_t i; ngx_http_upstream_server_t *us; - if (uscf->servers == NULL) { - uscf->servers = ngx_array_create(cf->pool, 4, - sizeof(ngx_http_upstream_server_t)); - if (uscf->servers == NULL) { - return NGX_CONF_ERROR; - } - } - us = ngx_array_push(uscf->servers); if (us == NULL) { return NGX_CONF_ERROR; @@ -4905,6 +4903,85 @@ ngx_http_upstream_server(ngx_conf_t *cf, value = cf->args->elts; + weight = 1; + max_fails = 1; + fail_timeout = 10; + + for (i = 2; i < cf->args->nelts; i++) { + + if (ngx_strncmp(value[i].data, "weight=", 7) == 0) { + + if (!(uscf->flags & NGX_HTTP_UPSTREAM_WEIGHT)) { + goto invalid; + } + + weight = ngx_atoi(&value[i].data[7], value[i].len - 7); + + if (weight == NGX_ERROR || weight == 0) { + goto invalid; + } + + continue; + } + + if (ngx_strncmp(value[i].data, "max_fails=", 10) == 0) { + + if (!(uscf->flags & NGX_HTTP_UPSTREAM_MAX_FAILS)) { + goto invalid; + } + + max_fails = ngx_atoi(&value[i].data[10], value[i].len - 10); + + if (max_fails == NGX_ERROR) { + goto invalid; + } + + continue; + } + + if (ngx_strncmp(value[i].data, "fail_timeout=", 13) == 0) { + + if (!(uscf->flags & NGX_HTTP_UPSTREAM_FAIL_TIMEOUT)) { + goto invalid; + } + + s.len = value[i].len - 13; + s.data = &value[i].data[13]; + + fail_timeout = ngx_parse_time(&s, 1); + + if (fail_timeout == (time_t) NGX_ERROR) { + goto invalid; + } + + continue; + } + + if (ngx_strcmp(value[i].data, "backup") == 0) { + + if (!(uscf->flags & NGX_HTTP_UPSTREAM_BACKUP)) { + goto invalid; + } + + us->backup = 1; + + continue; + } + + if (ngx_strcmp(value[i].data, "down") == 0) { + + if (!(uscf->flags & NGX_HTTP_UPSTREAM_DOWN)) { + goto invalid; + } + + us->down = 1; + + continue; + } + + goto invalid; + } + ngx_memzero(&u, sizeof(ngx_url_t)); u.url = value[1]; @@ 
-4919,85 +4996,6 @@ ngx_http_upstream_server(ngx_conf_t *cf, return NGX_CONF_ERROR; } - weight = 1; - max_fails = 1; - fail_timeout = 10; - - for (i = 2; i < cf->args->nelts; i++) { - - if (ngx_strncmp(value[i].data, "weight=", 7) == 0) { - - if (!(uscf->flags & NGX_HTTP_UPSTREAM_WEIGHT)) { - goto invalid; - } - - weight = ngx_atoi(&value[i].data[7], value[i].len - 7); - - if (weight == NGX_ERROR || weight == 0) { - goto invalid; - } - - continue; - } - - if (ngx_strncmp(value[i].data, "max_fails=", 10) == 0) { - - if (!(uscf->flags & NGX_HTTP_UPSTREAM_MAX_FAILS)) { - goto invalid; - } - - max_fails = ngx_atoi(&value[i].data[10], value[i].len - 10); - - if (max_fails == NGX_ERROR) { - goto invalid; - } - - continue; - } - - if (ngx_strncmp(value[i].data, "fail_timeout=", 13) == 0) { - - if (!(uscf->flags & NGX_HTTP_UPSTREAM_FAIL_TIMEOUT)) { - goto invalid; - } - - s.len = value[i].len - 13; - s.data = &value[i].data[13]; - - fail_timeout = ngx_parse_time(&s, 1); - - if (fail_timeout == (time_t) NGX_ERROR) { - goto invalid; - } - - continue; - } - - if (ngx_strcmp(value[i].data, "backup") == 0) { - - if (!(uscf->flags & NGX_HTTP_UPSTREAM_BACKUP)) { - goto invalid; - } - - us->backup = 1; - - continue; - } - - if (ngx_strcmp(value[i].data, "down") == 0) { - - if (!(uscf->flags & NGX_HTTP_UPSTREAM_DOWN)) { - goto invalid; - } - - us->down = 1; - - continue; - } - - goto invalid; - } - us->name = u.url; us->addrs = u.addrs; us->naddrs = u.naddrs; From mdounin at mdounin.ru Fri Jun 20 18:09:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jun 2014 22:09:26 +0400 Subject: Patch: Refactor ngx_http_write_request_body into a filter In-Reply-To: References: Message-ID: <20140620180926.GO1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 12:56:33AM +0300, grrm grrm wrote: > Hello, > > This patch removes some redundant ngx_http_request_body_filter calls, > simplifies the ngx_http_do_read_client_request_body and > ngx_http_read_client_request_body functions and removes some > duplication of code. > body.t and body_chunked.t test in nginx-tests are passing. > > Please kindly consider it. It looks like the patch introduces at least one serious enough problem: with the patch, if disk buffering is used, all reads from a client are mapped into disk writes, which is bad. Also, I don't think that it improves things from readability point of view. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jun 20 19:41:03 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jun 2014 23:41:03 +0400 Subject: [PATCH] Perl: NULL-terminate argument list In-Reply-To: References: <290f3fcb9cf552c235b9.1403176643@Piotrs-MacBook-Pro.local> <20140619132641.GA1849@mdounin.ru> Message-ID: <20140620194102.GQ1849@mdounin.ru> Hello! On Fri, Jun 20, 2014 at 02:46:59AM -0700, Piotr Sikora wrote: > Hey Maxim, > > > The perlembed manpage is full of examples without terminating > > NULL, and it's the only documentation available for the > > perl_parse() function, AFAIK. > > > > Could you please elaborate a bit more on the problem the patch > > tries to fix? > > The problem is that perl_parse() tries to read value at argv[argc]. I > don't think it uses it, so it doesn't really have to be NULL, but the > memory must be allocated, otherwise it's reading past the allocation. > I've started digging into perl's code yesterday, but it's an > unreadable mess of macros, so I eventually gave up and didn't find > confirmation for it in the code. 
> > This issue is quite hard to hit under normal circumstances, because > nginx uses memory pools, so the 1 byte buffer overrun can only happen > when argv[argc] == pool->d.last == pool->d.end. Furthermore, you need > to use malloc that puts guard page right after each allocation, > otherwise you won't be able to detect it. > > Regarding the perlembed man page, it's indeed lacking any details and > the examples suggest that the argument list doesn't have to be > NULL-terminated, however the consistent crashes I was seeing for a few > configurations (like the one generated by empty_gif.t with an extra > perl_modules directive passed in via globals) that were fixed with my > patch suggest otherwise. > > Hope that explains it enough. Ok, so it looks like a bug either in perl itself, or in the perlembed manpage. It may make sense to track it further, probably using valgrind and examples from perlembed. Most suspicious line I see in perl 5.18 sources is in toke.c: Copy(PL_origargv+1, newargv+2, PL_origargc+1, char*); I suspect that "+" in the "PL_origargc+1" is just a typo, it should be "-". I don't think that suggested patch will help if it's the reason (or, at least, it won't help in all cases), as it looks like 2 pointers overrun, not just 1 pointer you are adding. -- Maxim Dounin http://nginx.org/ From piotr at cloudflare.com Fri Jun 20 23:09:31 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 20 Jun 2014 16:09:31 -0700 Subject: [PATCH] Perl: NULL-terminate argument list In-Reply-To: <20140620194102.GQ1849@mdounin.ru> References: <290f3fcb9cf552c235b9.1403176643@Piotrs-MacBook-Pro.local> <20140619132641.GA1849@mdounin.ru> <20140620194102.GQ1849@mdounin.ru> Message-ID: Hey Maxim, > Most suspicious line I see in perl 5.18 sources is in toke.c: > > Copy(PL_origargv+1, newargv+2, PL_origargc+1, char*); > > I suspect that "+" in the "PL_origargc+1" is just a typo, it > should be "-". I don't think that suggested patch will help if > it's the reason (or, at least, it won't help in all cases), as it > looks like 2 pointers overrun, not just 1 pointer you are adding. I don't think that's what's really happening, though. I don't see the invalid access errors that I was getting for the 1 byte overrun if I position the input to detect 2 byte buffer overrun, that is: argv[argc+1] == pool->d.end Best regards, Piotr Sikora From flygoast at 126.com Wed Jun 25 11:23:32 2014 From: flygoast at 126.com (flygoast) Date: Wed, 25 Jun 2014 19:23:32 +0800 (CST) Subject: [PATCH]Upstream: fix possible request hang when "proxy_buffering" is off. Message-ID: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> # HG changeset patch # User FengGu # Date 1403694825 -28800 # Wed Jun 25 19:13:45 2014 +0800 # Node ID 12fd8ef2f6ea3167dd96cb000aafeb2665aeee14 # Parent 63d7d69d0fe48e030ff9fc520c7036dbd1ebc13f Upstream: fix possible request hang when "proxy_buffering" is off. In ngx_http_upstream_process_non_buffered_request(), when processing non buffered request, if write event has been delayed, deleting write timer event is likely to result in follow-up writing buffered in ngx_http_write_filter() ever since. 
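For context, the throttling code in ngx_http_write_filter() that this description refers to (it is quoted verbatim later in the thread) is:

    if (limit
        && c->write->ready
        && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))
    {
        c->write->delayed = 1;
        ngx_add_timer(c->write, 1);
    }

Here the limit comes from sendfile_max_chunk (the reporter runs with "sendfile_max_chunk 8k;", as noted further down): once the chunk limit is hit, the write event is marked delayed and a 1 ms timer is armed so output resumes on the next timer tick. If that timer is then deleted while the delayed flag is still set, nothing may be left to clear the flag, so further output stays buffered in the write filter, which is the hang described above; the patch below clears the flag before calling ngx_del_timer().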
diff -r 63d7d69d0fe4 -r 12fd8ef2f6ea src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Jun 20 12:55:41 2014 +0400 +++ b/src/http/ngx_http_upstream.c Wed Jun 25 19:13:45 2014 +0800 @@ -3058,6 +3058,7 @@ ngx_add_timer(downstream->write, clcf->send_timeout); } else if (downstream->write->timer_set) { + downstream->write->delayed = 0; ngx_del_timer(downstream->write); } -------------- next part -------------- An HTML attachment was scrubbed... URL: From flygoast at 126.com Wed Jun 25 11:27:25 2014 From: flygoast at 126.com (flygoast) Date: Wed, 25 Jun 2014 19:27:25 +0800 (CST) Subject: [PATCH]Upstream: fix possible request hang when "proxy_buffering" is off. In-Reply-To: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> References: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> Message-ID: <19a6ae80.11df.146d2c7c51c.Coremail.flygoast@126.com> Attach a log for this situation. Thanks. At 2014-06-25 19:23:32,flygoast wrote: # HG changeset patch # User FengGu # Date 1403694825 -28800 # Wed Jun 25 19:13:45 2014 +0800 # Node ID 12fd8ef2f6ea3167dd96cb000aafeb2665aeee14 # Parent 63d7d69d0fe48e030ff9fc520c7036dbd1ebc13f Upstream: fix possible request hang when "proxy_buffering" is off. In ngx_http_upstream_process_non_buffered_request(), when processing non buffered request, if write event has been delayed, deleting write timer event is likely to result in follow-up writing buffered in ngx_http_write_filter() ever since. diff -r 63d7d69d0fe4 -r 12fd8ef2f6ea src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Jun 20 12:55:41 2014 +0400 +++ b/src/http/ngx_http_upstream.c Wed Jun 25 19:13:45 2014 +0800 @@ -3058,6 +3058,7 @@ ngx_add_timer(downstream->write, clcf->send_timeout); } else if (downstream->write->timer_set) { + downstream->write->delayed = 0; ngx_del_timer(downstream->write); } -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: nginx.log URL: From mdounin at mdounin.ru Wed Jun 25 13:09:29 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 17:09:29 +0400 Subject: [PATCH]Upstream: fix possible request hang when "proxy_buffering" is off. In-Reply-To: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> References: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> Message-ID: <20140625130928.GC1849@mdounin.ru> Hello! On Wed, Jun 25, 2014 at 07:23:32PM +0800, flygoast wrote: > # HG changeset patch > # User FengGu > # Date 1403694825 -28800 > # Wed Jun 25 19:13:45 2014 +0800 > # Node ID 12fd8ef2f6ea3167dd96cb000aafeb2665aeee14 > # Parent 63d7d69d0fe48e030ff9fc520c7036dbd1ebc13f > Upstream: fix possible request hang when "proxy_buffering" is off. > > > In ngx_http_upstream_process_non_buffered_request(), when processing non > buffered request, if write event has been delayed, deleting write timer > event is likely to result in follow-up writing buffered in > ngx_http_write_filter() ever since. The question is "how write event got delayed?" In the non-buffered mode, the r->limit_rate is explcitly set to 0, and this shouldn't happen. -- Maxim Dounin http://nginx.org/ From flygoast at 126.com Wed Jun 25 14:00:35 2014 From: flygoast at 126.com (flygoast) Date: Wed, 25 Jun 2014 22:00:35 +0800 (CST) Subject: [PATCH]Upstream: fix possible request hang when "proxy_buffering" is off. 
In-Reply-To: <20140625130928.GC1849@mdounin.ru> References: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> <20140625130928.GC1849@mdounin.ru> Message-ID: <24a4a796.131f.146d353fcef.Coremail.flygoast@126.com> At here: in ngx_http_write_filter(): if (limit && c->write->ready && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) { c->write->delayed = 1; ngx_add_timer(c->write, 1); } limit's value from clcf->sendfile_max_chunk. In my nginx.conf, I set "sendfile_max_chunk 8k;". I attached a debug log for this situation in last mail. Thanks. At 2014-06-25 21:09:29,"Maxim Dounin" wrote: >Hello! > >On Wed, Jun 25, 2014 at 07:23:32PM +0800, flygoast wrote: > >> # HG changeset patch >> # User FengGu >> # Date 1403694825 -28800 >> # Wed Jun 25 19:13:45 2014 +0800 >> # Node ID 12fd8ef2f6ea3167dd96cb000aafeb2665aeee14 >> # Parent 63d7d69d0fe48e030ff9fc520c7036dbd1ebc13f >> Upstream: fix possible request hang when "proxy_buffering" is off. >> >> >> In ngx_http_upstream_process_non_buffered_request(), when processing non >> buffered request, if write event has been delayed, deleting write timer >> event is likely to result in follow-up writing buffered in >> ngx_http_write_filter() ever since. > >The question is "how write event got delayed?" >In the non-buffered mode, the r->limit_rate is explcitly set to 0, >and this shouldn't happen. > >-- >Maxim Dounin >http://nginx.org/ > >_______________________________________________ >nginx-devel mailing list >nginx-devel at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jun 25 18:12:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:12:33 +0400 Subject: [PATCH]Upstream: fix possible request hang when "proxy_buffering" is off. In-Reply-To: <24a4a796.131f.146d353fcef.Coremail.flygoast@126.com> References: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> <20140625130928.GC1849@mdounin.ru> <24a4a796.131f.146d353fcef.Coremail.flygoast@126.com> Message-ID: <20140625181233.GE1849@mdounin.ru> Hello! On Wed, Jun 25, 2014 at 10:00:35PM +0800, flygoast wrote: > At here: > in ngx_http_write_filter(): > > > if (limit > && c->write->ready > && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) > { > c->write->delayed = 1; > ngx_add_timer(c->write, 1); > } > > > limit's value from clcf->sendfile_max_chunk. In my nginx.conf, I set "sendfile_max_chunk 8k;". I attached a debug log for this situation in last mail. Well, so the problem happens with sendfile_max_chunk set lower than proxy_buffer_size. While I don't think that sendfile_max_chunk 8k is practical, it probably worth fixing. I don't think that suggested patch is right though - it will not prevent infinite stall of transfering a big enough data chunk, as timer set by write filter will be removed. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jun 25 22:39:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:39:59 +0000 Subject: [nginx] Not modified filter: debug log format fixed. Message-ID: details: http://hg.nginx.org/nginx/rev/bb3d74fc4aea branches: changeset: 5729:bb3d74fc4aea user: Maxim Dounin date: Thu Jun 26 02:19:55 2014 +0400 description: Not modified filter: debug log format fixed. 
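The one-line summary may be worth unpacking: nginx's own formatter (ngx_vslprintf(), which backs ngx_log_debug*() and ngx_sprintf()) treats %d as int and provides %T specifically for time_t values, so printing a last-modified time with %d reads the wrong width from the argument list on platforms where time_t is wider than int, and the logged timestamps come out wrong. The corrected calls, as in the diff below, are of the form:

    ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "http ims:%T lm:%T", ims, r->headers_out.last_modified_time);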
diffstat: src/http/modules/ngx_http_not_modified_filter_module.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (21 lines): diff --git a/src/http/modules/ngx_http_not_modified_filter_module.c b/src/http/modules/ngx_http_not_modified_filter_module.c --- a/src/http/modules/ngx_http_not_modified_filter_module.c +++ b/src/http/modules/ngx_http_not_modified_filter_module.c @@ -118,7 +118,7 @@ ngx_http_test_if_unmodified(ngx_http_req r->headers_in.if_unmodified_since->value.len); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "http iums:%d lm:%d", iums, r->headers_out.last_modified_time); + "http iums:%T lm:%T", iums, r->headers_out.last_modified_time); if (iums >= r->headers_out.last_modified_time) { return 1; @@ -144,7 +144,7 @@ ngx_http_test_if_modified(ngx_http_reque r->headers_in.if_modified_since->value.len); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "http ims:%d lm:%d", ims, r->headers_out.last_modified_time); + "http ims:%T lm:%T", ims, r->headers_out.last_modified_time); if (ims == r->headers_out.last_modified_time) { return 0; From mdounin at mdounin.ru Wed Jun 25 22:40:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:08 +0000 Subject: [nginx] Upstream: no need to clear r->headers_out.last_modified_... Message-ID: details: http://hg.nginx.org/nginx/rev/c8bdde1c8c97 branches: changeset: 5730:c8bdde1c8c97 user: Maxim Dounin date: Thu Jun 26 02:19:58 2014 +0400 description: Upstream: no need to clear r->headers_out.last_modified_time. Clearing of the r->headers_out.last_modified_time field if a response isn't cacheable in ngx_http_upstream_send_response() was introduced in 3b6afa999c2f, the commit to enable not modified filter for cacheable responses. It doesn't make sense though, as at this point header was already sent, and not modified filter was already executed. Therefore, the line was removed to simplify code. diffstat: src/http/ngx_http_upstream.c | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diffs (11 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2500,7 +2500,6 @@ ngx_http_upstream_send_response(ngx_http } else { u->cacheable = 0; - r->headers_out.last_modified_time = -1; } } From mdounin at mdounin.ru Wed Jun 25 22:40:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:11 +0000 Subject: [nginx] Upstream: removed unused offset to content_length. Message-ID: details: http://hg.nginx.org/nginx/rev/02674312be45 branches: changeset: 5731:02674312be45 user: Maxim Dounin date: Thu Jun 26 02:20:05 2014 +0400 description: Upstream: removed unused offset to content_length. It's not needed since introduction of ngx_http_upstream_content_length() in 103b0d9afe07. 
diffstat: src/http/ngx_http_upstream.c | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diffs (13 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -174,8 +174,7 @@ ngx_http_upstream_header_t ngx_http_ups ngx_http_upstream_copy_content_type, 0, 1 }, { ngx_string("Content-Length"), - ngx_http_upstream_process_content_length, - offsetof(ngx_http_upstream_headers_in_t, content_length), + ngx_http_upstream_process_content_length, 0, ngx_http_upstream_ignore_header_line, 0, 0 }, { ngx_string("Date"), From mdounin at mdounin.ru Wed Jun 25 22:40:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:14 +0000 Subject: [nginx] Upstream: fixed cache revalidation with SSI. Message-ID: details: http://hg.nginx.org/nginx/rev/d0ce06cb9be1 branches: changeset: 5732:d0ce06cb9be1 user: Maxim Dounin date: Thu Jun 26 02:20:09 2014 +0400 description: Upstream: fixed cache revalidation with SSI. Previous code in ngx_http_upstream_send_response() used last modified time from r->headers_out.last_modified_time after the header filter chain was already called. At this point, last_modified_time may be already cleared, e.g., with SSI, resulting in incorrect last modified time stored in a cache file. Fix is to introduce u->headers_in.last_modified_time instead. diffstat: src/http/ngx_http_upstream.c | 34 +++++++++++++++++++++++++++++----- src/http/ngx_http_upstream.h | 5 +++-- 2 files changed, 32 insertions(+), 7 deletions(-) diffs (90 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -87,6 +87,8 @@ static ngx_int_t ngx_http_upstream_proce ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_process_content_length(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); +static ngx_int_t ngx_http_upstream_process_last_modified(ngx_http_request_t *r, + ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t ngx_http_upstream_process_set_cookie(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset); static ngx_int_t @@ -184,8 +186,7 @@ ngx_http_upstream_header_t ngx_http_ups offsetof(ngx_http_headers_out_t, date), 0 }, { ngx_string("Last-Modified"), - ngx_http_upstream_process_header_line, - offsetof(ngx_http_upstream_headers_in_t, last_modified), + ngx_http_upstream_process_last_modified, 0, ngx_http_upstream_copy_last_modified, 0, 0 }, { ngx_string("ETag"), @@ -2491,7 +2492,7 @@ ngx_http_upstream_send_response(ngx_http } if (valid) { - r->cache->last_modified = r->headers_out.last_modified_time; + r->cache->last_modified = u->headers_in.last_modified_time; r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -3728,6 +3729,29 @@ ngx_http_upstream_process_content_length static ngx_int_t +ngx_http_upstream_process_last_modified(ngx_http_request_t *r, + ngx_table_elt_t *h, ngx_uint_t offset) +{ + ngx_http_upstream_t *u; + + u = r->upstream; + + u->headers_in.last_modified = h; + +#if (NGX_HTTP_CACHE) + + if (u->cacheable) { + u->headers_in.last_modified_time = ngx_http_parse_time(h->value.data, + h->value.len); + } + +#endif + + return NGX_OK; +} + + +static ngx_int_t ngx_http_upstream_process_set_cookie(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { @@ -4183,8 +4207,8 @@ ngx_http_upstream_copy_last_modified(ngx #if (NGX_HTTP_CACHE) if (r->upstream->cacheable) { - 
r->headers_out.last_modified_time = ngx_http_parse_time(h->value.data, - h->value.len); + r->headers_out.last_modified_time = + r->upstream->headers_in.last_modified_time; } #endif diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -246,11 +246,12 @@ typedef struct { ngx_table_elt_t *content_encoding; #endif - off_t content_length_n; - ngx_array_t cache_control; ngx_array_t cookies; + off_t content_length_n; + time_t last_modified_time; + unsigned connection_close:1; unsigned chunked:1; } ngx_http_upstream_headers_in_t; From mdounin at mdounin.ru Wed Jun 25 22:40:19 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:19 +0000 Subject: [nginx] Entity tags: downgrade strong etags to weak ones as needed. Message-ID: details: http://hg.nginx.org/nginx/rev/e491b26fa5a1 branches: changeset: 5733:e491b26fa5a1 user: Maxim Dounin date: Thu Jun 26 02:21:01 2014 +0400 description: Entity tags: downgrade strong etags to weak ones as needed. See http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004523.html. diffstat: src/http/modules/ngx_http_addition_filter_module.c | 2 +- src/http/modules/ngx_http_gunzip_filter_module.c | 2 +- src/http/modules/ngx_http_gzip_filter_module.c | 2 +- src/http/modules/ngx_http_ssi_filter_module.c | 5 ++- src/http/modules/ngx_http_sub_filter_module.c | 5 ++- src/http/modules/ngx_http_xslt_filter_module.c | 6 ++- src/http/ngx_http_core_module.c | 40 ++++++++++++++++++++++ src/http/ngx_http_core_module.h | 1 + 8 files changed, 56 insertions(+), 7 deletions(-) diffs (153 lines): diff --git a/src/http/modules/ngx_http_addition_filter_module.c b/src/http/modules/ngx_http_addition_filter_module.c --- a/src/http/modules/ngx_http_addition_filter_module.c +++ b/src/http/modules/ngx_http_addition_filter_module.c @@ -121,7 +121,7 @@ ngx_http_addition_header_filter(ngx_http ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + ngx_http_weak_etag(r); return ngx_http_next_header_filter(r); } diff --git a/src/http/modules/ngx_http_gunzip_filter_module.c b/src/http/modules/ngx_http_gunzip_filter_module.c --- a/src/http/modules/ngx_http_gunzip_filter_module.c +++ b/src/http/modules/ngx_http_gunzip_filter_module.c @@ -165,7 +165,7 @@ ngx_http_gunzip_header_filter(ngx_http_r ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + ngx_http_weak_etag(r); return ngx_http_next_header_filter(r); } diff --git a/src/http/modules/ngx_http_gzip_filter_module.c b/src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c +++ b/src/http/modules/ngx_http_gzip_filter_module.c @@ -306,7 +306,7 @@ ngx_http_gzip_header_filter(ngx_http_req ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); + ngx_http_weak_etag(r); return ngx_http_next_header_filter(r); } diff --git a/src/http/modules/ngx_http_ssi_filter_module.c b/src/http/modules/ngx_http_ssi_filter_module.c --- a/src/http/modules/ngx_http_ssi_filter_module.c +++ b/src/http/modules/ngx_http_ssi_filter_module.c @@ -369,10 +369,13 @@ ngx_http_ssi_header_filter(ngx_http_requ if (r == r->main) { ngx_http_clear_content_length(r); ngx_http_clear_accept_ranges(r); - ngx_http_clear_etag(r); if (!slcf->last_modified) { ngx_http_clear_last_modified(r); + ngx_http_clear_etag(r); + + } else { + ngx_http_weak_etag(r); } } diff --git a/src/http/modules/ngx_http_sub_filter_module.c 
b/src/http/modules/ngx_http_sub_filter_module.c --- a/src/http/modules/ngx_http_sub_filter_module.c +++ b/src/http/modules/ngx_http_sub_filter_module.c @@ -175,10 +175,13 @@ ngx_http_sub_header_filter(ngx_http_requ if (r == r->main) { ngx_http_clear_content_length(r); - ngx_http_clear_etag(r); if (!slcf->last_modified) { ngx_http_clear_last_modified(r); + ngx_http_clear_etag(r); + + } else { + ngx_http_weak_etag(r); } } diff --git a/src/http/modules/ngx_http_xslt_filter_module.c b/src/http/modules/ngx_http_xslt_filter_module.c --- a/src/http/modules/ngx_http_xslt_filter_module.c +++ b/src/http/modules/ngx_http_xslt_filter_module.c @@ -337,12 +337,14 @@ ngx_http_xslt_send(ngx_http_request_t *r r->headers_out.content_length = NULL; } - ngx_http_clear_etag(r); - conf = ngx_http_get_module_loc_conf(r, ngx_http_xslt_filter_module); if (!conf->last_modified) { ngx_http_clear_last_modified(r); + ngx_http_clear_etag(r); + + } else { + ngx_http_weak_etag(r); } } diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -1851,6 +1851,46 @@ ngx_http_set_etag(ngx_http_request_t *r) } +void +ngx_http_weak_etag(ngx_http_request_t *r) +{ + size_t len; + u_char *p; + ngx_table_elt_t *etag; + + etag = r->headers_out.etag; + + if (etag == NULL) { + return; + } + + if (etag->value.len > 2 + && etag->value.data[0] == 'W' + && etag->value.data[1] == '/') + { + return; + } + + if (etag->value.len < 1 || etag->value.data[0] != '"') { + r->headers_out.etag->hash = 0; + r->headers_out.etag = NULL; + return; + } + + p = ngx_pnalloc(r->pool, etag->value.len + 2); + if (p == NULL) { + r->headers_out.etag->hash = 0; + r->headers_out.etag = NULL; + return; + } + + len = ngx_sprintf(p, "W/%V", &etag->value) - p; + + etag->value.data = p; + etag->value.len = len; +} + + ngx_int_t ngx_http_send_response(ngx_http_request_t *r, ngx_uint_t status, ngx_str_t *ct, ngx_http_complex_value_t *cv) diff --git a/src/http/ngx_http_core_module.h b/src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h +++ b/src/http/ngx_http_core_module.h @@ -501,6 +501,7 @@ void *ngx_http_test_content_type(ngx_htt ngx_int_t ngx_http_set_content_type(ngx_http_request_t *r); void ngx_http_set_exten(ngx_http_request_t *r); ngx_int_t ngx_http_set_etag(ngx_http_request_t *r); +void ngx_http_weak_etag(ngx_http_request_t *r); ngx_int_t ngx_http_send_response(ngx_http_request_t *r, ngx_uint_t status, ngx_str_t *ct, ngx_http_complex_value_t *cv); u_char *ngx_http_map_uri_to_path(ngx_http_request_t *r, ngx_str_t *name, From mdounin at mdounin.ru Wed Jun 25 22:40:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:22 +0000 Subject: [nginx] Entity tags: weak comparison for If-None-Match. Message-ID: details: http://hg.nginx.org/nginx/rev/af229f8cf987 branches: changeset: 5734:af229f8cf987 user: Maxim Dounin date: Thu Jun 26 02:21:20 2014 +0400 description: Entity tags: weak comparison for If-None-Match. 
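For readers skimming the changelog: "weak comparison" is the entity-tag match defined for If-None-Match by the HTTP specification (RFC 7232), where a "W/" validator prefix on either side is ignored and only the opaque tags have to be identical. A minimal standalone sketch of the rule (illustration only; the committed code below operates on the raw header value with ngx_strncmp() rather than on NUL-terminated strings):

    #include <string.h>

    /* weak entity-tag comparison: strip an optional "W/" prefix from
     * both tags, then require the remaining opaque parts to be equal */
    static int
    etag_weak_match(const char *a, const char *b)
    {
        if (strncmp(a, "W/", 2) == 0) { a += 2; }
        if (strncmp(b, "W/", 2) == 0) { b += 2; }
        return strcmp(a, b) == 0;
    }

    /* etag_weak_match("W/\"x\"", "\"x\"") -> 1 (match)
     * etag_weak_match("\"x\"",  "\"y\"")  -> 0 (no match) */

Together with the previous changeset this lets a response whose ETag was downgraded to a weak one (for example after gzip or SSI processing) still be matched against an If-None-Match header sent by the client.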
diffstat: src/http/modules/ngx_http_not_modified_filter_module.c | 38 +++++++++++++---- 1 files changed, 28 insertions(+), 10 deletions(-) diffs (92 lines): diff --git a/src/http/modules/ngx_http_not_modified_filter_module.c b/src/http/modules/ngx_http_not_modified_filter_module.c --- a/src/http/modules/ngx_http_not_modified_filter_module.c +++ b/src/http/modules/ngx_http_not_modified_filter_module.c @@ -13,7 +13,7 @@ static ngx_uint_t ngx_http_test_if_unmodified(ngx_http_request_t *r); static ngx_uint_t ngx_http_test_if_modified(ngx_http_request_t *r); static ngx_uint_t ngx_http_test_if_match(ngx_http_request_t *r, - ngx_table_elt_t *header); + ngx_table_elt_t *header, ngx_uint_t weak); static ngx_int_t ngx_http_not_modified_filter_init(ngx_conf_t *cf); @@ -69,7 +69,7 @@ ngx_http_not_modified_header_filter(ngx_ } if (r->headers_in.if_match - && !ngx_http_test_if_match(r, r->headers_in.if_match)) + && !ngx_http_test_if_match(r, r->headers_in.if_match, 0)) { return ngx_http_filter_finalize_request(r, NULL, NGX_HTTP_PRECONDITION_FAILED); @@ -84,7 +84,7 @@ ngx_http_not_modified_header_filter(ngx_ } if (r->headers_in.if_none_match - && !ngx_http_test_if_match(r, r->headers_in.if_none_match)) + && !ngx_http_test_if_match(r, r->headers_in.if_none_match, 1)) { return ngx_http_next_header_filter(r); } @@ -161,10 +161,11 @@ ngx_http_test_if_modified(ngx_http_reque static ngx_uint_t -ngx_http_test_if_match(ngx_http_request_t *r, ngx_table_elt_t *header) +ngx_http_test_if_match(ngx_http_request_t *r, ngx_table_elt_t *header, + ngx_uint_t weak) { u_char *start, *end, ch; - ngx_str_t *etag, *list; + ngx_str_t etag, *list; list = &header->value; @@ -176,25 +177,42 @@ ngx_http_test_if_match(ngx_http_request_ return 0; } - etag = &r->headers_out.etag->value; + etag = r->headers_out.etag->value; ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, - "http im:\"%V\" etag:%V", list, etag); + "http im:\"%V\" etag:%V", list, &etag); + + if (weak + && etag.len > 2 + && etag.data[0] == 'W' + && etag.data[1] == '/') + { + etag.len -= 2; + etag.data += 2; + } start = list->data; end = list->data + list->len; while (start < end) { - if (etag->len > (size_t) (end - start)) { + if (weak + && end - start > 2 + && start[0] == 'W' + && start[1] == '/') + { + start += 2; + } + + if (etag.len > (size_t) (end - start)) { return 0; } - if (ngx_strncmp(start, etag->data, etag->len) != 0) { + if (ngx_strncmp(start, etag.data, etag.len) != 0) { goto skip; } - start += etag->len; + start += etag.len; while (start < end) { ch = *start; From mdounin at mdounin.ru Wed Jun 25 22:40:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:25 +0000 Subject: [nginx] Entity tags: explicit flag to skip not modified filter. Message-ID: details: http://hg.nginx.org/nginx/rev/5fb1e57c758a branches: changeset: 5735:5fb1e57c758a user: Maxim Dounin date: Thu Jun 26 02:27:11 2014 +0400 description: Entity tags: explicit flag to skip not modified filter. Previously, last_modified_time was tested against -1 to check if the not modified filter should be skipped. Notably, this prevented nginx from additional If-Modified-Since (et al.) checks on proxied responses. Such behaviour is suboptimal in some cases though, as checks are always skipped on responses from a cache with ETag only (without Last-Modified), resulting in If-None-Match being ignored in such cases. Additionally, it was not possible to return 412 from the If-Unmodified-Since if last modification time was not known for some reason. 
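One practical consequence, noted here as a hedged aside rather than as part of the changeset text: a module that wants conditional-request processing skipped for a particular response no longer has to clear r->headers_out.last_modified_time for that purpose; it can keep Last-Modified and ETag in the response and set the new flag instead:

    /* hypothetical module code: keep the validators in the response,
     * but tell the not-modified filter not to act on them */
    r->disable_not_modified = 1;

In the committed code the flag is set by ngx_http_upstream_process_headers() for non-cacheable upstream responses, as the diff below shows.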
This change introduces explicit r->disable_not_modified flag instead, which is set by ngx_http_upstream_process_headers(). diffstat: src/http/modules/ngx_http_not_modified_filter_module.c | 10 +++++++++- src/http/ngx_http_request.h | 1 + src/http/ngx_http_upstream.c | 2 ++ 3 files changed, 12 insertions(+), 1 deletions(-) diffs (57 lines): diff --git a/src/http/modules/ngx_http_not_modified_filter_module.c b/src/http/modules/ngx_http_not_modified_filter_module.c --- a/src/http/modules/ngx_http_not_modified_filter_module.c +++ b/src/http/modules/ngx_http_not_modified_filter_module.c @@ -56,7 +56,7 @@ ngx_http_not_modified_header_filter(ngx_ { if (r->headers_out.status != NGX_HTTP_OK || r != r->main - || r->headers_out.last_modified_time == -1) + || r->disable_not_modified) { return ngx_http_next_header_filter(r); } @@ -114,6 +114,10 @@ ngx_http_test_if_unmodified(ngx_http_req { time_t iums; + if (r->headers_out.last_modified_time == (time_t) -1) { + return 0; + } + iums = ngx_http_parse_time(r->headers_in.if_unmodified_since->value.data, r->headers_in.if_unmodified_since->value.len); @@ -134,6 +138,10 @@ ngx_http_test_if_modified(ngx_http_reque time_t ims; ngx_http_core_loc_conf_t *clcf; + if (r->headers_out.last_modified_time == (time_t) -1) { + return 1; + } + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); if (clcf->if_modified_since == NGX_HTTP_IMS_OFF) { diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h +++ b/src/http/ngx_http_request.h @@ -528,6 +528,7 @@ struct ngx_http_request_s { unsigned filter_need_temporary:1; unsigned allow_ranges:1; unsigned single_range:1; + unsigned disable_not_modified:1; #if (NGX_STAT_STUB) unsigned stat_reading:1; diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2238,6 +2238,8 @@ ngx_http_upstream_process_headers(ngx_ht r->headers_out.content_length_n = u->headers_in.content_length_n; + r->disable_not_modified = !u->cacheable; + u->length = -1; return NGX_OK; From mdounin at mdounin.ru Wed Jun 25 22:40:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:28 +0000 Subject: [nginx] Cache: version in cache files. Message-ID: details: http://hg.nginx.org/nginx/rev/2fe1967f8854 branches: changeset: 5736:2fe1967f8854 user: Maxim Dounin date: Thu Jun 26 02:27:21 2014 +0400 description: Cache: version in cache files. This allows to change the structure of cache files without spamming logs with false alerts. 
diffstat: src/http/ngx_http_cache.h | 3 +++ src/http/ngx_http_file_cache.c | 11 ++++++++++- 2 files changed, 13 insertions(+), 1 deletions(-) diffs (62 lines): diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -25,6 +25,8 @@ #define NGX_HTTP_CACHE_KEY_LEN 16 +#define NGX_HTTP_CACHE_VERSION 1 + typedef struct { ngx_uint_t status; @@ -97,6 +99,7 @@ struct ngx_http_cache_s { typedef struct { + ngx_uint_t version; time_t valid_sec; time_t last_modified; time_t date; diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -498,6 +498,12 @@ ngx_http_file_cache_read(ngx_http_reques h = (ngx_http_file_cache_header_t *) c->buf->pos; + if (h->version != NGX_HTTP_CACHE_VERSION) { + ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + "cache file \"%s\" version mismatch", c->file.name.data); + return NGX_DECLINED; + } + if (h->crc32 != c->crc32) { ngx_log_error(NGX_LOG_CRIT, r->connection->log, 0, "cache file \"%s\" has md5 collision", c->file.name.data); @@ -875,6 +881,7 @@ ngx_http_file_cache_set_header(ngx_http_ ngx_memzero(h, sizeof(ngx_http_file_cache_header_t)); + h->version = NGX_HTTP_CACHE_VERSION; h->valid_sec = c->valid_sec; h->last_modified = c->last_modified; h->date = c->date; @@ -1042,7 +1049,8 @@ ngx_http_file_cache_update_header(ngx_ht goto done; } - if (h.last_modified != c->last_modified + if (h.version != NGX_HTTP_CACHE_VERSION + || h.last_modified != c->last_modified || h.crc32 != c->crc32 || h.header_start != c->header_start || h.body_start != c->body_start) @@ -1060,6 +1068,7 @@ ngx_http_file_cache_update_header(ngx_ht ngx_memzero(&h, sizeof(ngx_http_file_cache_header_t)); + h.version = NGX_HTTP_CACHE_VERSION; h.valid_sec = c->valid_sec; h.last_modified = c->last_modified; h.date = c->date; From mdounin at mdounin.ru Wed Jun 25 22:40:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:31 +0000 Subject: [nginx] Cache: ETag now saved into cache header. Message-ID: details: http://hg.nginx.org/nginx/rev/44b9ab7752e3 branches: changeset: 5737:44b9ab7752e3 user: Maxim Dounin date: Thu Jun 26 02:28:23 2014 +0400 description: Cache: ETag now saved into cache header. 
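One detail worth noting before the diff: the cache header is a fixed-size binary structure, so the ETag is stored as a one-byte length plus a fixed buffer (NGX_HTTP_CACHE_ETAG_LEN, 42 bytes, in the hunks below), and a response ETag that does not fit is simply not saved. A standalone sketch of such a length-prefixed fixed field, with invented names (not nginx code):

    #include <stdio.h>
    #include <string.h>

    #define ETAG_FIELD_LEN  42

    typedef struct {
        unsigned char  etag_len;
        unsigned char  etag[ETAG_FIELD_LEN];
    } header_etag_t;

    /* Store the tag if it fits; otherwise leave the field empty (len 0). */
    static void
    set_etag(header_etag_t *h, const char *etag)
    {
        size_t  len = strlen(etag);

        if (len <= ETAG_FIELD_LEN) {
            h->etag_len = (unsigned char) len;
            memcpy(h->etag, etag, len);
        }
    }

    int
    main(void)
    {
        header_etag_t  h = { 0, { 0 } };

        set_etag(&h, "\"54ab3c2f-1d2\"");
        printf("stored %u bytes: %.*s\n",
               (unsigned) h.etag_len, (int) h.etag_len, (const char *) h.etag);

        return 0;
    }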
diffstat: src/http/ngx_http_cache.h | 7 ++++++- src/http/ngx_http_file_cache.c | 12 ++++++++++++ src/http/ngx_http_upstream.c | 4 ++++ 3 files changed, 22 insertions(+), 1 deletions(-) diffs (82 lines): diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -24,8 +24,9 @@ #define NGX_HTTP_CACHE_SCARCE 8 #define NGX_HTTP_CACHE_KEY_LEN 16 +#define NGX_HTTP_CACHE_ETAG_LEN 42 -#define NGX_HTTP_CACHE_VERSION 1 +#define NGX_HTTP_CACHE_VERSION 2 typedef struct { @@ -69,6 +70,8 @@ struct ngx_http_cache_s { time_t last_modified; time_t date; + ngx_str_t etag; + size_t header_start; size_t body_start; off_t length; @@ -107,6 +110,8 @@ typedef struct { u_short valid_msec; u_short header_start; u_short body_start; + u_char etag_len; + u_char etag[NGX_HTTP_CACHE_ETAG_LEN]; } ngx_http_file_cache_header_t; diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -525,6 +525,8 @@ ngx_http_file_cache_read(ngx_http_reques c->valid_msec = h->valid_msec; c->header_start = h->header_start; c->body_start = h->body_start; + c->etag.len = h->etag_len; + c->etag.data = h->etag; r->cached = 1; @@ -890,6 +892,11 @@ ngx_http_file_cache_set_header(ngx_http_ h->header_start = (u_short) c->header_start; h->body_start = (u_short) c->body_start; + if (c->etag.len <= NGX_HTTP_CACHE_ETAG_LEN) { + h->etag_len = (u_char) c->etag.len; + ngx_memcpy(h->etag, c->etag.data, c->etag.len); + } + p = buf + sizeof(ngx_http_file_cache_header_t); p = ngx_cpymem(p, ngx_http_file_cache_key, sizeof(ngx_http_file_cache_key)); @@ -1077,6 +1084,11 @@ ngx_http_file_cache_update_header(ngx_ht h.header_start = (u_short) c->header_start; h.body_start = (u_short) c->body_start; + if (c->etag.len <= NGX_HTTP_CACHE_ETAG_LEN) { + h.etag_len = (u_char) c->etag.len; + ngx_memcpy(h.etag, c->etag.data, c->etag.len); + } + (void) ngx_write_file(&file, (u_char *) &h, sizeof(ngx_http_file_cache_header_t), 0); diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2498,6 +2498,10 @@ ngx_http_upstream_send_response(ngx_http r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); + if (u->headers_in.etag) { + r->cache->etag = u->headers_in.etag->value; + } + ngx_http_file_cache_set_header(r, u->buffer.start); } else { From mdounin at mdounin.ru Wed Jun 25 22:40:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Jun 2014 22:40:34 +0000 Subject: [nginx] Upstream: cache revalidation using If-None-Match. Message-ID: details: http://hg.nginx.org/nginx/rev/c95d7882dfc9 branches: changeset: 5738:c95d7882dfc9 user: Maxim Dounin date: Thu Jun 26 02:35:01 2014 +0400 description: Upstream: cache revalidation using If-None-Match. 
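Taken together with the two previous commits this completes ETag-based revalidation: when proxy_cache_revalidate (or its fastcgi/scgi/uwsgi counterpart) is enabled and an expired entry is found in the cache, the ETag stored in the cache header is exposed through the new $upstream_cache_etag variable and, as the hunks below show, sent to the upstream server in an If-None-Match header, exactly like $upstream_cache_last_modified is already sent in If-Modified-Since; a 304 response from the upstream then lets nginx reuse the cached body instead of transferring it again. The variable is intentionally empty unless cache_revalidate is on, the entry is EXPIRED and an ETag was actually saved, so the header is not sent in other cases.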
diffstat: src/http/modules/ngx_http_fastcgi_module.c | 2 +- src/http/modules/ngx_http_proxy_module.c | 2 +- src/http/modules/ngx_http_scgi_module.c | 2 +- src/http/modules/ngx_http_uwsgi_module.c | 2 +- src/http/ngx_http_upstream.c | 29 +++++++++++++++++++++++++++++ 5 files changed, 33 insertions(+), 4 deletions(-) diffs (101 lines): diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -573,7 +573,7 @@ static ngx_keyval_t ngx_http_fastcgi_ca { ngx_string("HTTP_IF_MODIFIED_SINCE"), ngx_string("$upstream_cache_last_modified") }, { ngx_string("HTTP_IF_UNMODIFIED_SINCE"), ngx_string("") }, - { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("") }, + { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("$upstream_cache_etag") }, { ngx_string("HTTP_IF_MATCH"), ngx_string("") }, { ngx_string("HTTP_RANGE"), ngx_string("") }, { ngx_string("HTTP_IF_RANGE"), ngx_string("") }, diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -677,7 +677,7 @@ static ngx_keyval_t ngx_http_proxy_cach { ngx_string("If-Modified-Since"), ngx_string("$upstream_cache_last_modified") }, { ngx_string("If-Unmodified-Since"), ngx_string("") }, - { ngx_string("If-None-Match"), ngx_string("") }, + { ngx_string("If-None-Match"), ngx_string("$upstream_cache_etag") }, { ngx_string("If-Match"), ngx_string("") }, { ngx_string("Range"), ngx_string("") }, { ngx_string("If-Range"), ngx_string("") }, diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -379,7 +379,7 @@ static ngx_keyval_t ngx_http_scgi_cache { ngx_string("HTTP_IF_MODIFIED_SINCE"), ngx_string("$upstream_cache_last_modified") }, { ngx_string("HTTP_IF_UNMODIFIED_SINCE"), ngx_string("") }, - { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("") }, + { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("$upstream_cache_etag") }, { ngx_string("HTTP_IF_MATCH"), ngx_string("") }, { ngx_string("HTTP_RANGE"), ngx_string("") }, { ngx_string("HTTP_IF_RANGE"), ngx_string("") }, diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -507,7 +507,7 @@ static ngx_keyval_t ngx_http_uwsgi_cach { ngx_string("HTTP_IF_MODIFIED_SINCE"), ngx_string("$upstream_cache_last_modified") }, { ngx_string("HTTP_IF_UNMODIFIED_SINCE"), ngx_string("") }, - { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("") }, + { ngx_string("HTTP_IF_NONE_MATCH"), ngx_string("$upstream_cache_etag") }, { ngx_string("HTTP_IF_MATCH"), ngx_string("") }, { ngx_string("HTTP_RANGE"), ngx_string("") }, { ngx_string("HTTP_IF_RANGE"), ngx_string("") }, diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -19,6 +19,8 @@ static ngx_int_t ngx_http_upstream_cache ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_upstream_cache_last_modified(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_upstream_cache_etag(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); #endif static void 
ngx_http_upstream_init_request(ngx_http_request_t *r); @@ -367,6 +369,10 @@ static ngx_http_variable_t ngx_http_ups ngx_http_upstream_cache_last_modified, 0, NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, + { ngx_string("upstream_cache_etag"), NULL, + ngx_http_upstream_cache_etag, 0, + NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, + #endif { ngx_null_string, NULL, NULL, 0, 0, 0 } @@ -4792,6 +4798,29 @@ ngx_http_upstream_cache_last_modified(ng return NGX_OK; } + +static ngx_int_t +ngx_http_upstream_cache_etag(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + if (r->upstream == NULL + || !r->upstream->conf->cache_revalidate + || r->upstream->cache_status != NGX_HTTP_CACHE_EXPIRED + || r->cache->etag.len == 0) + { + v->not_found = 1; + return NGX_OK; + } + + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->len = r->cache->etag.len; + v->data = r->cache->etag.data; + + return NGX_OK; +} + #endif From mdounin at mdounin.ru Thu Jun 26 00:12:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Jun 2014 00:12:11 +0000 Subject: [nginx] Configure: style. Message-ID: details: http://hg.nginx.org/nginx/rev/6e4bb1d6679d branches: changeset: 5739:6e4bb1d6679d user: Maxim Dounin date: Thu Jun 26 03:34:02 2014 +0400 description: Configure: style. diffstat: auto/lib/perl/conf | 6 ++++-- 1 files changed, 4 insertions(+), 2 deletions(-) diffs (26 lines): diff --git a/auto/lib/perl/conf b/auto/lib/perl/conf --- a/auto/lib/perl/conf +++ b/auto/lib/perl/conf @@ -41,6 +41,7 @@ if test -n "$NGX_PERL_VER"; then ngx_perl_ldopts=`$NGX_PERL -MExtUtils::Embed -e ldopts` ngx_perl_dlext=`$NGX_PERL -MConfig -e 'print $Config{dlext}'` + ngx_perl_module="src/http/modules/perl/blib/arch/auto/nginx.$ngx_perl_dlext" if $NGX_PERL -V:usemultiplicity | grep define > /dev/null; then have=NGX_HAVE_PERL_MULTIPLICITY . auto/have @@ -54,11 +55,12 @@ if test -n "$NGX_PERL_VER"; then if [ "$NGX_SYSTEM" = "Darwin" ]; then # OS X system perl wants to link universal binaries - ngx_perl_ldopts=`echo $ngx_perl_ldopts | sed -e 's/-arch x86_64 -arch i386//'` + ngx_perl_ldopts=`echo $ngx_perl_ldopts \ + | sed -e 's/-arch x86_64 -arch i386//'` fi CORE_LINK="$CORE_LINK $ngx_perl_ldopts" - LINK_DEPS="$LINK_DEPS $NGX_OBJS/src/http/modules/perl/blib/arch/auto/nginx/nginx.$ngx_perl_dlext" + LINK_DEPS="$LINK_DEPS $NGX_OBJS/$ngx_perl_module" if test -n "$NGX_PERL_MODULES"; then have=NGX_PERL_MODULES value="(u_char *) \"$NGX_PERL_MODULES\"" From mdounin at mdounin.ru Thu Jun 26 00:12:15 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Jun 2014 00:12:15 +0000 Subject: [nginx] Core: plugged socket leak during configuration test. Message-ID: details: http://hg.nginx.org/nginx/rev/4440438eb086 branches: changeset: 5740:4440438eb086 user: Maxim Dounin date: Thu Jun 26 03:34:05 2014 +0400 description: Core: plugged socket leak during configuration test. This isn't really important as configuration testing shortly ends with a process termination which will free all sockets, though Coverity complains. Prodded by Coverity (CID 400872). 
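To spell out where the leak was: when "nginx -t" hit an address already in use, the old code took the "continue" right after the EADDRINUSE check, before the ngx_close_socket() call, so every such listening address left one open socket behind until the test process exited. The rewritten branch below always closes the socket and, in the configuration-test case, only suppresses the emergency log message and the "failed" flag, so the remaining addresses are still checked.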
diffstat: src/core/ngx_connection.c | 12 ++++++------ 1 files changed, 6 insertions(+), 6 deletions(-) diffs (31 lines): diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -411,13 +411,11 @@ ngx_open_listening_sockets(ngx_cycle_t * if (bind(s, ls[i].sockaddr, ls[i].socklen) == -1) { err = ngx_socket_errno; - if (err == NGX_EADDRINUSE && ngx_test_config) { - continue; + if (err != NGX_EADDRINUSE || !ngx_test_config) { + ngx_log_error(NGX_LOG_EMERG, log, err, + "bind() to %V failed", &ls[i].addr_text); } - ngx_log_error(NGX_LOG_EMERG, log, err, - "bind() to %V failed", &ls[i].addr_text); - if (ngx_close_socket(s) == -1) { ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno, ngx_close_socket_n " %V failed", @@ -428,7 +426,9 @@ ngx_open_listening_sockets(ngx_cycle_t * return NGX_ERROR; } - failed = 1; + if (!ngx_test_config) { + failed = 1; + } continue; } From mdounin at mdounin.ru Thu Jun 26 00:12:18 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Jun 2014 00:12:18 +0000 Subject: [nginx] Fixed wrong sizeof() in ngx_http_init_locations(). Message-ID: details: http://hg.nginx.org/nginx/rev/b490bfbf8cfa branches: changeset: 5741:b490bfbf8cfa user: Maxim Dounin date: Thu Jun 26 03:34:13 2014 +0400 description: Fixed wrong sizeof() in ngx_http_init_locations(). There is no real difference on all known platforms, but it's still wrong. Found by Coverity (CID 400876). diffstat: src/http/ngx_http.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (21 lines): diff --git a/src/http/ngx_http.c b/src/http/ngx_http.c --- a/src/http/ngx_http.c +++ b/src/http/ngx_http.c @@ -742,7 +742,7 @@ ngx_http_init_locations(ngx_conf_t *cf, if (named) { clcfp = ngx_palloc(cf->pool, - (n + 1) * sizeof(ngx_http_core_loc_conf_t **)); + (n + 1) * sizeof(ngx_http_core_loc_conf_t *)); if (clcfp == NULL) { return NGX_ERROR; } @@ -768,7 +768,7 @@ ngx_http_init_locations(ngx_conf_t *cf, if (regex) { clcfp = ngx_palloc(cf->pool, - (r + 1) * sizeof(ngx_http_core_loc_conf_t **)); + (r + 1) * sizeof(ngx_http_core_loc_conf_t *)); if (clcfp == NULL) { return NGX_ERROR; } From mdounin at mdounin.ru Thu Jun 26 00:12:21 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Jun 2014 00:12:21 +0000 Subject: [nginx] Core: removed meaningless check from ngx_palloc_block(). Message-ID: details: http://hg.nginx.org/nginx/rev/c45c9812cf11 branches: changeset: 5742:c45c9812cf11 user: Maxim Dounin date: Thu Jun 26 03:34:19 2014 +0400 description: Core: removed meaningless check from ngx_palloc_block(). The check became meaningless after refactoring in 2a92804f4109. With the loop currently in place, "current" can't be NULL, hence the check can be dropped. Additionally, the local variable "current" was removed to simplify code, and pool->current now used directly instead. Found by Coverity (CID 714236). 
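The reasoning in one line: "current" starts out as pool->current, which always points at least to the pool itself, and is only ever reassigned to p->d.next inside a loop whose condition guarantees that pointer is non-NULL, so the old "pool->current = current ? current : new" could never take the "new" branch.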
diffstat: src/core/ngx_palloc.c | 10 +++------- 1 files changed, 3 insertions(+), 7 deletions(-) diffs (33 lines): diff --git a/src/core/ngx_palloc.c b/src/core/ngx_palloc.c --- a/src/core/ngx_palloc.c +++ b/src/core/ngx_palloc.c @@ -181,7 +181,7 @@ ngx_palloc_block(ngx_pool_t *pool, size_ { u_char *m; size_t psize; - ngx_pool_t *p, *new, *current; + ngx_pool_t *p, *new; psize = (size_t) (pool->d.end - (u_char *) pool); @@ -200,18 +200,14 @@ ngx_palloc_block(ngx_pool_t *pool, size_ m = ngx_align_ptr(m, NGX_ALIGNMENT); new->d.last = m + size; - current = pool->current; - - for (p = current; p->d.next; p = p->d.next) { + for (p = pool->current; p->d.next; p = p->d.next) { if (p->d.failed++ > 4) { - current = p->d.next; + pool->current = p->d.next; } } p->d.next = new; - pool->current = current ? current : new; - return m; } From idinasabari at gmail.com Wed Jun 25 23:40:40 2014 From: idinasabari at gmail.com (idina) Date: Thu, 26 Jun 2014 07:40:40 +0800 Subject: [PATCH] Core: merge adjacent free slab pages to ameliorate fragmentation from multi-page blocks (Was Re: Help with shared memory usage) Message-ID: <265f2f54-4f2b-45f8-9daf-18a8f72e6cb0@email.android.com> Sent by Bird Mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From flygoast at 126.com Thu Jun 26 03:10:53 2014 From: flygoast at 126.com (flygoast) Date: Thu, 26 Jun 2014 11:10:53 +0800 (CST) Subject: [PATCH]Upstream: fix possible request hang when "proxy_buffering" is off. In-Reply-To: <20140625181233.GE1849@mdounin.ru> References: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> <20140625130928.GC1849@mdounin.ru> <24a4a796.131f.146d353fcef.Coremail.flygoast@126.com> <20140625181233.GE1849@mdounin.ru> Message-ID: <1166a6ec.c82.146d6278729.Coremail.flygoast@126.com> You're right. It still can hang. I have another question: What's the meaning for "c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))"? I don't understand it. Why "2 * ngx_pagesize"? Can you help to explain the logic? Thanks. At 2014-06-26 02:12:33,"Maxim Dounin" wrote: >Hello! > >On Wed, Jun 25, 2014 at 10:00:35PM +0800, flygoast wrote: > >> At here: >> in ngx_http_write_filter(): >> >> >> if (limit >> && c->write->ready >> && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize)) >> { >> c->write->delayed = 1; >> ngx_add_timer(c->write, 1); >> } >> >> >> limit's value from clcf->sendfile_max_chunk. In my nginx.conf, I set "sendfile_max_chunk 8k;". I attached a debug log for this situation in last mail. > >Well, so the problem happens with sendfile_max_chunk set lower >than proxy_buffer_size. While I don't think that >sendfile_max_chunk 8k is practical, it probably worth fixing. > >I don't think that suggested patch is right >though - it will not prevent infinite stall of transfering a big >enough data chunk, as timer set by write filter will be removed. > >-- >Maxim Dounin >http://nginx.org/ > >_______________________________________________ >nginx-devel mailing list >nginx-devel at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jun 26 11:22:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Jun 2014 11:22:22 +0000 Subject: [nginx] Configure: restored "nginx/" missed in 6e4bb1d6679d. 
Message-ID: details: http://hg.nginx.org/nginx/rev/dde2ae4701e1 branches: changeset: 5743:dde2ae4701e1 user: Maxim Dounin date: Thu Jun 26 05:08:59 2014 +0400 description: Configure: restored "nginx/" missed in 6e4bb1d6679d. diffstat: auto/lib/perl/conf | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diffs (13 lines): diff --git a/auto/lib/perl/conf b/auto/lib/perl/conf --- a/auto/lib/perl/conf +++ b/auto/lib/perl/conf @@ -41,7 +41,8 @@ if test -n "$NGX_PERL_VER"; then ngx_perl_ldopts=`$NGX_PERL -MExtUtils::Embed -e ldopts` ngx_perl_dlext=`$NGX_PERL -MConfig -e 'print $Config{dlext}'` - ngx_perl_module="src/http/modules/perl/blib/arch/auto/nginx.$ngx_perl_dlext" + ngx_perl_libdir="src/http/modules/perl/blib/arch/auto" + ngx_perl_module="$ngx_perl_libdir/nginx/nginx.$ngx_perl_dlext" if $NGX_PERL -V:usemultiplicity | grep define > /dev/null; then have=NGX_HAVE_PERL_MULTIPLICITY . auto/have From mdounin at mdounin.ru Thu Jun 26 12:08:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Jun 2014 16:08:06 +0400 Subject: [PATCH]Upstream: fix possible request hang when "proxy_buffering" is off. In-Reply-To: <1166a6ec.c82.146d6278729.Coremail.flygoast@126.com> References: <222b2ccf.11d4.146d2c4338f.Coremail.flygoast@126.com> <20140625130928.GC1849@mdounin.ru> <24a4a796.131f.146d353fcef.Coremail.flygoast@126.com> <20140625181233.GE1849@mdounin.ru> <1166a6ec.c82.146d6278729.Coremail.flygoast@126.com> Message-ID: <20140626120806.GN1849@mdounin.ru> Hello! On Thu, Jun 26, 2014 at 11:10:53AM +0800, flygoast wrote: > You're right. It still can hang. I have another question: > > > What's the meaning for "c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))"? > > > I don't understand it. Why "2 * ngx_pagesize"? Can you help to explain the logic? This logic was introduced by Igor in this commit: http://hg.nginx.org/nginx/rev/e67ef50c3176 Idea is to match cases when amout of disk-related work (almost) matches the one allowed by sendfile_max_chunk. AFAIK, the "ngx_pagesize" is due to the fact that reading even single unaligned byte is identical to reading the whole page from disk, and "2" as there are normally two unaligend ends. That is, if 4098 bytes were sent, it means that up to 3 * 4096 bytes were read from from disk. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Thu Jun 26 17:40:55 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Thu, 26 Jun 2014 17:40:55 +0000 Subject: [nginx] SSL: the "ssl_password_file" directive. Message-ID: details: http://hg.nginx.org/nginx/rev/42114bf12da0 branches: changeset: 5744:42114bf12da0 user: Valentin Bartenev date: Mon Jun 16 19:43:25 2014 +0400 description: SSL: the "ssl_password_file" directive. 
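In practice the new directive points at a plain-text file with one passphrase per line; when the certificate key is encrypted, the passphrases are tried in turn until one of them decrypts it, which is what the EVP_R_BAD_DECRYPT retry loop in ngx_ssl_certificate() below implements. A typical (purely illustrative) use is putting something like "ssl_password_file /etc/nginx/ssl/key.pass;" next to ssl_certificate_key, so that nginx no longer stops at the interactive PEM pass phrase prompt on startup or reload.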
diffstat: src/event/ngx_event_openssl.c | 223 +++++++++++++++++++++++++++++++- src/event/ngx_event_openssl.h | 3 +- src/http/modules/ngx_http_ssl_module.c | 37 +++++- src/http/modules/ngx_http_ssl_module.h | 2 + src/mail/ngx_mail_ssl_module.c | 37 +++++- src/mail/ngx_mail_ssl_module.h | 2 + 6 files changed, 293 insertions(+), 11 deletions(-) diffs (truncated from 480 to 300 lines): diff -r dde2ae4701e1 -r 42114bf12da0 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Thu Jun 26 05:08:59 2014 +0400 +++ b/src/event/ngx_event_openssl.c Mon Jun 16 19:43:25 2014 +0400 @@ -10,14 +10,20 @@ #include +#define NGX_SSL_PASSWORD_BUFFER_SIZE 4096 + + typedef struct { ngx_uint_t engine; /* unsigned engine:1; */ } ngx_openssl_conf_t; +static int ngx_ssl_password_callback(char *buf, int size, int rwflag, + void *userdata); static int ngx_ssl_verify_callback(int ok, X509_STORE_CTX *x509_store); static void ngx_ssl_info_callback(const ngx_ssl_conn_t *ssl_conn, int where, int ret); +static void ngx_ssl_passwords_cleanup(void *data); static void ngx_ssl_handshake_handler(ngx_event_t *ev); static ngx_int_t ngx_ssl_handle_recv(ngx_connection_t *c, int n); static void ngx_ssl_write_handler(ngx_event_t *wev); @@ -257,11 +263,13 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, - ngx_str_t *key) + ngx_str_t *key, ngx_array_t *passwords) { - BIO *bio; - X509 *x509; - u_long n; + BIO *bio; + X509 *x509; + u_long n; + ngx_str_t *pwd; + ngx_uint_t tries; if (ngx_conf_full_name(cf->cycle, cert, 1) != NGX_OK) { return NGX_ERROR; @@ -348,19 +356,76 @@ ngx_ssl_certificate(ngx_conf_t *cf, ngx_ return NGX_ERROR; } - if (SSL_CTX_use_PrivateKey_file(ssl->ctx, (char *) key->data, - SSL_FILETYPE_PEM) - == 0) - { + if (passwords) { + tries = passwords->nelts; + pwd = passwords->elts; + + SSL_CTX_set_default_passwd_cb(ssl->ctx, ngx_ssl_password_callback); + SSL_CTX_set_default_passwd_cb_userdata(ssl->ctx, pwd); + + } else { + tries = 1; +#if (NGX_SUPPRESS_WARN) + pwd = NULL; +#endif + } + + for ( ;; ) { + + if (SSL_CTX_use_PrivateKey_file(ssl->ctx, (char *) key->data, + SSL_FILETYPE_PEM) + != 0) + { + break; + } + + if (--tries) { + n = ERR_peek_error(); + + if (ERR_GET_LIB(n) == ERR_LIB_EVP + && ERR_GET_REASON(n) == EVP_R_BAD_DECRYPT) + { + ERR_clear_error(); + SSL_CTX_set_default_passwd_cb_userdata(ssl->ctx, ++pwd); + continue; + } + } + ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "SSL_CTX_use_PrivateKey_file(\"%s\") failed", key->data); return NGX_ERROR; } + SSL_CTX_set_default_passwd_cb(ssl->ctx, NULL); + return NGX_OK; } +static int +ngx_ssl_password_callback(char *buf, int size, int rwflag, void *userdata) +{ + ngx_str_t *pwd = userdata; + + if (rwflag) { + ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0, + "ngx_ssl_password_callback() is called for encryption"); + return 0; + } + + if (pwd->len > (size_t) size) { + ngx_log_error(NGX_LOG_ERR, ngx_cycle->log, 0, + "password is truncated to %d bytes", size); + } else { + size = pwd->len; + } + + ngx_memcpy(buf, pwd->data, size); + + return size; +} + + ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth) @@ -597,6 +662,148 @@ ngx_ssl_rsa512_key_callback(ngx_ssl_conn } +ngx_array_t * +ngx_ssl_read_password_file(ngx_conf_t *cf, ngx_str_t *file) +{ + u_char *p, *last, *end; + size_t len; + ssize_t n; + ngx_fd_t fd; + ngx_str_t *pwd; + ngx_array_t *passwords; + ngx_pool_cleanup_t *cln; + u_char buf[NGX_SSL_PASSWORD_BUFFER_SIZE]; + + if 
(ngx_conf_full_name(cf->cycle, file, 1) != NGX_OK) { + return NULL; + } + + cln = ngx_pool_cleanup_add(cf->temp_pool, 0); + passwords = ngx_array_create(cf->temp_pool, 4, sizeof(ngx_str_t)); + + if (cln == NULL || passwords == NULL) { + return NULL; + } + + cln->handler = ngx_ssl_passwords_cleanup; + cln->data = passwords; + + fd = ngx_open_file(file->data, NGX_FILE_RDONLY, NGX_FILE_OPEN, 0); + if (fd == NGX_INVALID_FILE) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, ngx_errno, + ngx_open_file_n " \"%s\" failed", file->data); + return NULL; + } + + len = 0; + last = buf; + + do { + n = ngx_read_fd(fd, last, NGX_SSL_PASSWORD_BUFFER_SIZE - len); + + if (n == -1) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, ngx_errno, + ngx_read_fd_n " \"%s\" failed", file->data); + passwords = NULL; + goto cleanup; + } + + end = last + n; + + if (len && n == 0) { + *end++ = LF; + } + + p = buf; + + for ( ;; ) { + last = ngx_strlchr(last, end, LF); + + if (last == NULL) { + break; + } + + len = last++ - p; + + if (len && p[len - 1] == CR) { + len--; + } + + if (len) { + pwd = ngx_array_push(passwords); + if (pwd == NULL) { + passwords = NULL; + goto cleanup; + } + + pwd->len = len; + pwd->data = ngx_pnalloc(cf->temp_pool, len); + + if (pwd->data == NULL) { + passwords->nelts--; + passwords = NULL; + goto cleanup; + } + + ngx_memcpy(pwd->data, p, len); + } + + p = last; + } + + len = end - p; + + if (len == NGX_SSL_PASSWORD_BUFFER_SIZE) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "too long line in \"%s\"", file->data); + passwords = NULL; + goto cleanup; + } + + ngx_memmove(buf, p, len); + last = buf + len; + + } while (n != 0); + + if (passwords->nelts == 0) { + pwd = ngx_array_push(passwords); + if (pwd == NULL) { + passwords = NULL; + goto cleanup; + } + + ngx_memzero(pwd, sizeof(ngx_str_t)); + } + +cleanup: + + if (ngx_close_file(fd) == NGX_FILE_ERROR) { + ngx_conf_log_error(NGX_LOG_ALERT, cf, ngx_errno, + ngx_close_file_n " \"%s\" failed", file->data); + } + + ngx_memzero(buf, NGX_SSL_PASSWORD_BUFFER_SIZE); + + return passwords; +} + + +static void +ngx_ssl_passwords_cleanup(void *data) +{ + ngx_array_t *passwords = data; + + ngx_str_t *pwd; + ngx_uint_t i; + + pwd = passwords->elts; + + for (i = 0; i < passwords->nelts; i++) { + ngx_memzero(pwd[i].data, pwd[i].len); + } +} + + ngx_int_t ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file) { diff -r dde2ae4701e1 -r 42114bf12da0 src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h Thu Jun 26 05:08:59 2014 +0400 +++ b/src/event/ngx_event_openssl.h Mon Jun 16 19:43:25 2014 +0400 @@ -112,7 +112,7 @@ typedef struct { ngx_int_t ngx_ssl_init(ngx_log_t *log); ngx_int_t ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_t protocols, void *data); ngx_int_t ngx_ssl_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, - ngx_str_t *cert, ngx_str_t *key); + ngx_str_t *cert, ngx_str_t *key, ngx_array_t *passwords); ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth); ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, @@ -124,6 +124,7 @@ ngx_int_t ngx_ssl_stapling_resolver(ngx_ ngx_resolver_t *resolver, ngx_msec_t resolver_timeout); RSA *ngx_ssl_rsa512_key_callback(ngx_ssl_conn_t *ssl_conn, int is_export, int key_length); +ngx_array_t *ngx_ssl_read_password_file(ngx_conf_t *cf, ngx_str_t *file); ngx_int_t ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file); ngx_int_t ngx_ssl_ecdh_curve(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *name); ngx_int_t ngx_ssl_session_cache(ngx_ssl_t *ssl, 
ngx_str_t *sess_ctx, diff -r dde2ae4701e1 -r 42114bf12da0 src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c Thu Jun 26 05:08:59 2014 +0400 +++ b/src/http/modules/ngx_http_ssl_module.c Mon Jun 16 19:43:25 2014 +0400 @@ -43,6 +43,8 @@ static char *ngx_http_ssl_merge_srv_conf static char *ngx_http_ssl_enable(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_http_ssl_password_file(ngx_conf_t *cf, ngx_command_t *cmd, From grrm77 at gmail.com Thu Jun 26 19:07:55 2014 From: grrm77 at gmail.com (grrm grrm) Date: Thu, 26 Jun 2014 22:07:55 +0300 Subject: Patch: Refactor ngx_http_write_request_body into a filter In-Reply-To: <20140620180926.GO1849@mdounin.ru> References: <20140620180926.GO1849@mdounin.ru> Message-ID: Hi! I managed to fix the write to disk issue, but as you said the code now looks quite convoluted. Those ifs are horrible. Understandably I guess, when you try to move logic from different places into one place but it still depends on external context (rb->buf). My patch was mostly a try to pave the way to non-buffered request body processing in way similar to the response processing pipeline where all the work is done by the filters. I saw this feature in the tengine fork of nginx, however there the work is still done by a handler similar to write_to_file. Also, all the body data need to be copied inside the memory at least one time, which is not good. I also looked at the repose pipeline and there are two main methods of reading from the client - the nonbuffered and ngx_event_pipe_t. Do you think the pipe could be used in reverse (client->upstream)? Or would it even make sense to do it that way? Also, do you have any work done into this direction (if you can comment on that)? Granted, my attempt was maybe too big a step. Thank you. 2014-06-20 21:09 GMT+03:00 Maxim Dounin : > Hello! > > On Fri, Jun 20, 2014 at 12:56:33AM +0300, grrm grrm wrote: > >> Hello, >> >> This patch removes some redundant ngx_http_request_body_filter calls, >> simplifies the ngx_http_do_read_client_request_body and >> ngx_http_read_client_request_body functions and removes some >> duplication of code. >> body.t and body_chunked.t test in nginx-tests are passing. >> >> Please kindly consider it. > > It looks like the patch introduces at least one serious enough > problem: with the patch, if disk buffering is used, all reads > from a client are mapped into disk writes, which is bad. Also, I > don't think that it improves things from readability point of > view. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From piotr at cloudflare.com Fri Jun 27 06:40:27 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Thu, 26 Jun 2014 23:40:27 -0700 Subject: [PATCH] Core: use uppercase hexadecimal digits for percent-encoding Message-ID: <177382006b7d7a421688.1403851227@Piotrs-MacBook-Pro.local> # HG changeset patch # User Piotr Sikora # Date 1403851163 25200 # Thu Jun 26 23:39:23 2014 -0700 # Node ID 177382006b7d7a421688831d5793b2e417074b48 # Parent 42114bf12da0cf3d428d0e695139f5366cbd0513 Core: use uppercase hexadecimal digits for percent-encoding. RFC3986 says that, for consistency, URI producers and normalizers should use uppercase hexadecimal digits for all percent-encodings. This is also what modern web browsers and other tools use. 
Using lowercase hexadecimal digits makes it harder to interact with those tools in case when use of the percent-encoded URI is required, for example when $request_uri is part of the cache key. Signed-off-by: Piotr Sikora diff -r 42114bf12da0 -r 177382006b7d src/core/ngx_string.c --- a/src/core/ngx_string.c Mon Jun 16 19:43:25 2014 +0400 +++ b/src/core/ngx_string.c Thu Jun 26 23:39:23 2014 -0700 @@ -1407,7 +1407,7 @@ ngx_escape_uri(u_char *dst, u_char *src, { ngx_uint_t n; uint32_t *escape; - static u_char hex[] = "0123456789abcdef"; + static u_char hex[] = "0123456789ABCDEF"; /* " ", "#", "%", "?", %00-%1F, %7F-%FF */ From mdounin at mdounin.ru Fri Jun 27 13:54:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Jun 2014 17:54:20 +0400 Subject: Patch: Refactor ngx_http_write_request_body into a filter In-Reply-To: References: <20140620180926.GO1849@mdounin.ru> Message-ID: <20140627135419.GR1849@mdounin.ru> Hello! On Thu, Jun 26, 2014 at 10:07:55PM +0300, grrm grrm wrote: > Hi! > > I managed to fix the write to disk issue, but as you said the code now > looks quite convoluted. Those ifs are horrible. Understandably I > guess, when you try to move logic from different places into one place > but it still depends on external context (rb->buf). My patch was > mostly a try to pave the way to non-buffered request body processing > in way similar to the response processing pipeline where all the work > is done by the filters. > > I saw this feature in the tengine fork of nginx, however there the > work is still done by a handler similar to write_to_file. Also, all > the body data need to be copied inside the memory at least one time, > which is not good. > > I also looked at the repose pipeline and there are two main methods of > reading from the client - the nonbuffered and ngx_event_pipe_t. Do you > think the pipe could be used in reverse (client->upstream)? Or would > it even make sense to do it that way? In theory it should be possible to use event pipe in any direction. But I don't think that it would be easy to integrate it with various request body requirements. > Also, do you have any work done into this direction (if you can > comment on that)? Granted, my attempt was maybe too big a step. I've previously posted an experimental patch which introduces an ability to insert filters into request body chain, which can be considered as a "work in this direction". I think it should be implemented as another filter in the request body filter chain. It will likely require various modification to the request body reading code though. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jun 27 17:57:54 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Jun 2014 21:57:54 +0400 Subject: [PATCH] Core: use uppercase hexadecimal digits for percent-encoding In-Reply-To: <177382006b7d7a421688.1403851227@Piotrs-MacBook-Pro.local> References: <177382006b7d7a421688.1403851227@Piotrs-MacBook-Pro.local> Message-ID: <20140627175754.GX1849@mdounin.ru> Hello! On Thu, Jun 26, 2014 at 11:40:27PM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1403851163 25200 > # Thu Jun 26 23:39:23 2014 -0700 > # Node ID 177382006b7d7a421688831d5793b2e417074b48 > # Parent 42114bf12da0cf3d428d0e695139f5366cbd0513 > Core: use uppercase hexadecimal digits for percent-encoding. > > RFC3986 says that, for consistency, URI producers and normalizers > should use uppercase hexadecimal digits for all percent-encodings. 
> > This is also what modern web browsers and other tools use. > > Using lowercase hexadecimal digits makes it harder to interact with > those tools in case when use of the percent-encoded URI is required, > for example when $request_uri is part of the cache key. > > Signed-off-by: Piotr Sikora > > diff -r 42114bf12da0 -r 177382006b7d src/core/ngx_string.c > --- a/src/core/ngx_string.c Mon Jun 16 19:43:25 2014 +0400 > +++ b/src/core/ngx_string.c Thu Jun 26 23:39:23 2014 -0700 > @@ -1407,7 +1407,7 @@ ngx_escape_uri(u_char *dst, u_char *src, > { > ngx_uint_t n; > uint32_t *escape; > - static u_char hex[] = "0123456789abcdef"; > + static u_char hex[] = "0123456789ABCDEF"; > > /* " ", "#", "%", "?", %00-%1F, %7F-%FF */ I can't say I like this change. I've considered this a while ago, and decided to keep it as is. This preserve compatibility with what nginx used to do for years. And it also looks like Apache does the same. Any other opinions? -- Maxim Dounin http://nginx.org/ From bmoran at onehub.com Fri Jun 27 18:05:04 2014 From: bmoran at onehub.com (Brian Moran) Date: Fri, 27 Jun 2014 11:05:04 -0700 Subject: [PATCH] Core: use uppercase hexadecimal digits for percent-encoding In-Reply-To: <20140627175754.GX1849@mdounin.ru> References: <177382006b7d7a421688.1403851227@Piotrs-MacBook-Pro.local> <20140627175754.GX1849@mdounin.ru> Message-ID: Perhaps a configuration item (ugh, another configuration option)? We noticed this issue a number of years ago, and we have been using a work-around for a number of years, as well. On Fri, Jun 27, 2014 at 10:57 AM, Maxim Dounin wrote: > Hello! > > On Thu, Jun 26, 2014 at 11:40:27PM -0700, Piotr Sikora wrote: > > > # HG changeset patch > > # User Piotr Sikora > > # Date 1403851163 25200 > > # Thu Jun 26 23:39:23 2014 -0700 > > # Node ID 177382006b7d7a421688831d5793b2e417074b48 > > # Parent 42114bf12da0cf3d428d0e695139f5366cbd0513 > > Core: use uppercase hexadecimal digits for percent-encoding. > > > > RFC3986 says that, for consistency, URI producers and normalizers > > should use uppercase hexadecimal digits for all percent-encodings. > > > > This is also what modern web browsers and other tools use. > > > > Using lowercase hexadecimal digits makes it harder to interact with > > those tools in case when use of the percent-encoded URI is required, > > for example when $request_uri is part of the cache key. > > > > Signed-off-by: Piotr Sikora > > > > diff -r 42114bf12da0 -r 177382006b7d src/core/ngx_string.c > > --- a/src/core/ngx_string.c Mon Jun 16 19:43:25 2014 +0400 > > +++ b/src/core/ngx_string.c Thu Jun 26 23:39:23 2014 -0700 > > @@ -1407,7 +1407,7 @@ ngx_escape_uri(u_char *dst, u_char *src, > > { > > ngx_uint_t n; > > uint32_t *escape; > > - static u_char hex[] = "0123456789abcdef"; > > + static u_char hex[] = "0123456789ABCDEF"; > > > > /* " ", "#", "%", "?", %00-%1F, %7F-%FF */ > > I can't say I like this change. I've considered this a while ago, > and decided to keep it as is. This preserve compatibility with > what nginx used to do for years. And it also looks like Apache > does the same. > > Any other opinions? > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- e: bmoran at onehub.com p: +1 206 390 4376 Onehub, Inc. www.onehub.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dcarlier at afilias.info Sat Jun 28 06:22:41 2014 From: dcarlier at afilias.info (David Carlier) Date: Sat, 28 Jun 2014 07:22:41 +0100 Subject: Development Message-ID: HI All, I am working as C/C++ developer for a company which makes nginx modules and would like to know if I can contribute a bit. Kind regards. David CARLIER dotMobi / Afilias Technologies DUBLIN -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdasilvayy at gmail.com Mon Jun 30 08:05:36 2014 From: fdasilvayy at gmail.com (Filipe Da Silva) Date: Mon, 30 Jun 2014 10:05:36 +0200 Subject: Development In-Reply-To: References: Message-ID: Hi, In short : http://nginx.org/en/docs/contributing_changes.html And, patch must be made with this option set in your hgrc file : [diff] showfunc = True Rgds, Filipe 2014-06-28 8:22 GMT+02:00 David Carlier : > HI All, > I am working as C/C++ developer for a company which makes nginx modules and > would like to know if I can contribute a bit. > > Kind regards. > David CARLIER > > dotMobi / Afilias Technologies DUBLIN > From vbart at nginx.com Mon Jun 30 12:41:26 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 30 Jun 2014 16:41:26 +0400 Subject: [PATCH] Core: use uppercase hexadecimal digits for percent-encoding In-Reply-To: <20140627175754.GX1849@mdounin.ru> References: <177382006b7d7a421688.1403851227@Piotrs-MacBook-Pro.local> <20140627175754.GX1849@mdounin.ru> Message-ID: <3338179.ENjqyOEFCX@vbart-workstation> On Friday 27 June 2014 21:57:54 Maxim Dounin wrote: > Hello! > > On Thu, Jun 26, 2014 at 11:40:27PM -0700, Piotr Sikora wrote: > > > # HG changeset patch > > # User Piotr Sikora > > # Date 1403851163 25200 > > # Thu Jun 26 23:39:23 2014 -0700 > > # Node ID 177382006b7d7a421688831d5793b2e417074b48 > > # Parent 42114bf12da0cf3d428d0e695139f5366cbd0513 > > Core: use uppercase hexadecimal digits for percent-encoding. > > > > RFC3986 says that, for consistency, URI producers and normalizers > > should use uppercase hexadecimal digits for all percent-encodings. > > > > This is also what modern web browsers and other tools use. > > > > Using lowercase hexadecimal digits makes it harder to interact with > > those tools in case when use of the percent-encoded URI is required, > > for example when $request_uri is part of the cache key. > > > > Signed-off-by: Piotr Sikora > > > > diff -r 42114bf12da0 -r 177382006b7d src/core/ngx_string.c > > --- a/src/core/ngx_string.c Mon Jun 16 19:43:25 2014 +0400 > > +++ b/src/core/ngx_string.c Thu Jun 26 23:39:23 2014 -0700 > > @@ -1407,7 +1407,7 @@ ngx_escape_uri(u_char *dst, u_char *src, > > { > > ngx_uint_t n; > > uint32_t *escape; > > - static u_char hex[] = "0123456789abcdef"; > > + static u_char hex[] = "0123456789ABCDEF"; > > > > /* " ", "#", "%", "?", %00-%1F, %7F-%FF */ > > I can't say I like this change. I've considered this a while ago, > and decided to keep it as is. This preserve compatibility with > what nginx used to do for years. And it also looks like Apache > does the same. > > Any other opinions? > I prefer to fix this instead of keeping it for another few years. Uppercase digits also look more distinctly, since all other parts of path are usually in lowercase. wbr, Valentin V. 
Bartenev From donatas.abraitis at gmail.com Mon Jun 30 12:43:56 2014 From: donatas.abraitis at gmail.com (Donatas Abraitis) Date: Mon, 30 Jun 2014 15:43:56 +0300 Subject: ngx resolver set custom name server Message-ID: Hello, is it possible to set resolver ( http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) writing own nginx module? I want to set custom name servers for querying by gethostbyaddr(). By default gethostbyaddr() uses /etc/resolv.conf, /etc/hosts. I want to override this by setting somehow. Anyone? -- Donatas -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Mon Jun 30 21:01:30 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 30 Jun 2014 14:01:30 -0700 Subject: [PATCH] Core: use uppercase hexadecimal digits for percent-encoding In-Reply-To: <20140627175754.GX1849@mdounin.ru> References: <177382006b7d7a421688.1403851227@Piotrs-MacBook-Pro.local> <20140627175754.GX1849@mdounin.ru> Message-ID: Hey Maxim, > I can't say I like this change. I've considered this a while ago, > and decided to keep it as is. This preserve compatibility with > what nginx used to do for years. And it also looks like Apache > does the same. > > Any other opinions? Compatibility for whom? This change is transparent for decoders (upper- and lowercase hex digits are equivalent, per RFC) and, as far as I can tell, it only affects people who try to encode URLs to match what nginx produces and/or do case-sensitive matching (like nginx-tests/autoindex.t). The code in both: nginx and Apache predates RFC, which explains why it wasn't uppercase from the beginning, but since RFC is out, there is no good reason for keeping it this way. Best regards, Piotr Sikora From agentzh at gmail.com Mon Jun 30 23:24:58 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 30 Jun 2014 16:24:58 -0700 Subject: [PATCH] Core: use uppercase hexadecimal digits for percent-encoding In-Reply-To: <20140627175754.GX1849@mdounin.ru> References: <177382006b7d7a421688.1403851227@Piotrs-MacBook-Pro.local> <20140627175754.GX1849@mdounin.ru> Message-ID: Hello! On Fri, Jun 27, 2014 at 10:57 AM, Maxim Dounin wrote: > I can't say I like this change. I've considered this a while ago, > and decided to keep it as is. This preserve compatibility with > what nginx used to do for years. And it also looks like Apache > does the same. > > Any other opinions? > I agree with Piotr Sikora and hope that nginx uses %DD instead of %dd for URI escaping. Right now ngx_lua also uses %dd for the consistency with the nginx core, which has already yielded several complaints from ngx_lua's user community (the users also pointed me to the RFC). Once the nginx core switches over to %DD, I can make a similar change to ngx_lua accordingly :) Best regards, -agentzh
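The difference under discussion in this thread is only in the case of the hex digits an encoder emits; RFC 3986 requires decoders to accept both. A minimal standalone encoder sketch that produces the uppercase form (an illustration only, not nginx's ngx_escape_uri(), which uses per-context bitmaps to decide what to escape):

    #include <stdio.h>
    #include <ctype.h>

    /* Percent-encode everything except RFC 3986 "unreserved" characters,
     * emitting uppercase hex digits as the RFC recommends for producers. */
    static void
    escape_uri(const char *src, char *dst, size_t dst_size)
    {
        static const char  hex[] = "0123456789ABCDEF";
        size_t             i = 0;
        unsigned char      c;

        while (*src && i + 4 < dst_size) {
            c = (unsigned char) *src++;

            if (isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~') {
                dst[i++] = (char) c;

            } else {
                dst[i++] = '%';
                dst[i++] = hex[c >> 4];
                dst[i++] = hex[c & 0x0f];
            }
        }

        dst[i] = '\0';
    }

    int
    main(void)
    {
        char  out[128];

        escape_uri("/foo bar/baz?x=1", out, sizeof(out));
        printf("%s\n", out);   /* prints %2Ffoo%20bar%2Fbaz%3Fx%3D1 */

        return 0;
    }

Either way, "%2f" and "%2F" decode to the same octet, so the change is transparent to conforming decoders and only matters for byte-wise comparisons such as cache keys built from $request_uri.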