From faskiri.devel at gmail.com Tue Jan 1 18:27:30 2013 From: faskiri.devel at gmail.com (Fasih) Date: Tue, 1 Jan 2013 23:57:30 +0530 Subject: Nginx multi-threaded plugin Message-ID: Hi guys I need to write a plugin which does some CPU-intensive work. I want to be able to create a thread which does some work and, once done, posts an event into the nginx thread via ngx_add_timer. I tried compiling my plugin with NGX_THREADS=pthread but that doesn't configure, saying that the threading support is broken. I can obviously switch to a single-threaded mode, but that would be significantly difficult for me, and I would prefer to use it only as a very last resort. Can you please advise how to go about doing this? I am considering pulling the usage of 'ngx_event_timer_mutex' out of #if NGX_THREADS. Frankly, I am surprised that no other plugin has had this kind of requirement; the "broken support" commit has been there for quite some time now. One final question: if I do the CPU-intensive work in the nginx thread and I have a number of workers, how will the system behave? Will accept_mutex on/off change the behavior? Regards, +Fasih -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jan 1 21:19:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Jan 2013 01:19:22 +0400 Subject: Nginx multi-threaded plugin In-Reply-To: References: Message-ID: <20130101211922.GY40452@mdounin.ru> Hello! On Tue, Jan 01, 2013 at 11:57:30PM +0530, Fasih wrote: > Hi guys > > I need to write a plugin which does some CPU-intensive work. I want to be > able to create a thread which does some work and, once done, posts an event > into the nginx thread via ngx_add_timer. I tried compiling my plugin with > NGX_THREADS=pthread but that doesn't configure, saying that the threading > support is broken. I can obviously switch to a single-threaded mode, but > that would be significantly difficult for me, and I would prefer to use it > only as a very last resort. 
> > Can you please advise how to go about doing this? I am considering pulling > the usage of 'ngx_event_timer_mutex' out of #if NGX_THREADS. Frankly, I > am surprised that no other plugin has had this kind of requirement; the "broken > support" commit has been there for quite some time now. Threading support as currently present was an experiment, and it has been broken for a long time (some very limited part of it is still used on win32, but that's a different story). I wouldn't expect you to be able to work with threads without problems. > One final question: if I do the CPU-intensive work in the nginx thread and > I have a number of workers, how will the system behave? Will > accept_mutex on/off change the behavior? Accept mutex only ensures that no two processes try to accept new connections at the same time; it doesn't care how you handle requests. Obviously enough it will change the behaviour, much like in any other case. -- Maxim Dounin http://nginx.com/support.html From faskiri.devel at gmail.com Wed Jan 2 05:07:48 2013 From: faskiri.devel at gmail.com (Fasih) Date: Wed, 2 Jan 2013 10:37:48 +0530 Subject: Nginx multi-threaded plugin In-Reply-To: <20130101211922.GY40452@mdounin.ru> References: <20130101211922.GY40452@mdounin.ru> Message-ID: Thank you, Maxim. I will try to see if I can get to the single-threaded model. Basically, when I was talking about accept_mutex, what I meant was: if the worker thread is busy with my CPU-intensive work, I would obviously want the other workers to take over. IIUC, if I hold the accept mutex I am the one who will take the requests; is it possible that my worker plugin is caught up doing the work *and* has not released the mutex? In other words, is it possible that my CPU-intensive operation prevents other workers who are free from taking requests, with any configuration option enabled/disabled? On Wed, Jan 2, 2013 at 2:49 AM, Maxim Dounin wrote: > Hello! 
> > On Tue, Jan 01, 2013 at 11:57:30PM +0530, Fasih wrote: > > > Hi guys > > > > I need to write a plugin which does some CPU intensive work. I want to be > > able to create a thread which does some work and once done post an event > > into nginx thread via ngx_add_timer. I tried compiling my plugin with > > NGX_THREADS=pthread but that doesnt configure saying that the threading > > support is broken. I can obviously switch to a single threaded mode but > > that would be significantly difficult for me and would prefer to use only > > as the very last resort. > > > > Can you please advice how to go about doing this? I am considering > pulling > > out the usage of 'ngx_event_timer_mutex' outside #if NGX_THREADS. > Frankly I > > am surprised how no other plugin had this kind of requirement, the > "broken > > support" commit has been there since quite some time now. > > Threading support as currently present was an experiment, and it's > broken for a long time (some very limited part of it is still used > on win32, but it's different story). I wouldn't expect you'll be > able to work with threads without problems. > > > One final question, if I do the CPU intensive work in the nginx thread > and > > I have number of workers, how would the system behave? Will > > accept_mutex on/off change the behavior? > > Accept mutex only ensures no two processes try to accept new > connections at the same time, and it doesn't care how do you > handle requests. Obviously enough it will change the behaviour, > much like in any other case. > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sagar.sonawane at rediffmail.com Wed Jan 2 16:02:05 2013 From: sagar.sonawane at rediffmail.com (Sagar) Date: 2 Jan 2013 16:02:05 -0000 Subject: Book for module Development Message-ID: <20130102160205.19550.qmail@f4mail-235-163.rediffmail.com> Hi all, I am a PHP developer with 6+ years of experience building websites, portals, applications, etc. I need some guidance in developing 3rd-party modules for nginx, but I couldn't find any specific book for that. Please help me onto the correct path, as I don't want to get frustrated by choosing the wrong direction at an early stage. Currently, I have the following article to start with: http://www.evanmiller.org/nginx-modules-guide-advanced.html Hoping for a favourable reply. Regards, Sagar Sonawane -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at gwynne.id.au Fri Jan 4 01:22:04 2013 From: david at gwynne.id.au (David Gwynne) Date: Fri, 4 Jan 2013 11:22:04 +1000 Subject: [PATCH] implement a $location variable In-Reply-To: References: <1B6DEF34-186D-4D03-B4BA-36B245EAA211@gwynne.id.au> <20121015145349.GG40452@mdounin.ru> <20121016144621.GT40452@mdounin.ru> Message-ID: <20130104012204.GA26729@animata.net> here's a diff that provides $location for use in non-regex location blocks. 
we're using it to provide easy to use mass hosting of drupals: xdlg at argon nginx$ cat drupal-controller.conf root /var/www/apps/drupal; try_files /index.php =404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $location$fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_pass localhost:9000; which is used in server blocks like this: server { listen 80; server_name www.example.com; location / { include drupal-controller.conf; } location /foo { include drupal-controller.conf; } location /bar { include drupal-controller.conf; } location /baz { include drupal-controller.conf; } } i cannot otherwise find a nice way to use the locations name as a parameter without specifying the value as a variable again within the location. --- src/http/ngx_http_variables.c.orig Tue Jul 3 03:41:52 2012 +++ src/http/ngx_http_variables.c Fri Jan 4 10:49:50 2013 @@ -65,6 +65,8 @@ static ngx_int_t ngx_http_variable_request_filename(ng ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_server_name(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_location(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_request_method(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_remote_user(ngx_http_request_t *r, @@ -206,6 +208,10 @@ static ngx_http_variable_t ngx_http_core_variables[] { ngx_string("server_name"), NULL, ngx_http_variable_server_name, 0, 0, 0 }, + { ngx_string("location"), NULL, + ngx_http_variable_location, 0, + NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("request_method"), NULL, ngx_http_variable_request_method, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, @@ -1382,6 +1388,28 @@ ngx_http_variable_server_name(ngx_http_request_t *r, v->no_cacheable = 0; v->not_found = 0; v->data = cscf->server_name.data; + + return NGX_OK; 
+} + + +static ngx_int_t +ngx_http_variable_location(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_http_core_loc_conf_t *clcf; + + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); + + if (clcf->regex) { + v->not_found = 1; + } else { + v->len = clcf->name.len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = clcf->name.data; + } return NGX_OK; } From appa at perusio.net Fri Jan 4 01:58:43 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 04 Jan 2013 02:58:43 +0100 Subject: [PATCH] implement a $location variable In-Reply-To: <20130104012204.GA26729@animata.net> References: <1B6DEF34-186D-4D03-B4BA-36B245EAA211@gwynne.id.au> <20121015145349.GG40452@mdounin.ru> <20121016144621.GT40452@mdounin.ru> <20130104012204.GA26729@animata.net> Message-ID: <87zk0pn1yk.wl%appa@perusio.net> On 4 Jan 2013 02h22 CET, david at gwynne.id.au wrote: > here's a diff that provides $location for use in not regex location > blocks. > > we're using it to provide easy to use mass hosting of drupals: > > xdlg at argon nginx$ cat drupal-controller.conf > root /var/www/apps/drupal; > try_files /index.php =404; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_param SCRIPT_NAME $location$fastcgi_script_name; > fastcgi_intercept_errors on; > fastcgi_pass localhost:9000; > > which is used in server blocks like this: > > server { > listen 80; > server_name www.example.com; > > location / { include drupal-controller.conf; } > location /foo { include drupal-controller.conf; } > location /bar { include drupal-controller.conf; } > location /baz { include drupal-controller.conf; } > } > > i cannot otherwise find a nice way to use the locations name as a > parameter without specifying the value as a variable again within > the location. I fail to see the need for a $location variable in a Drupal Nginx config. Can you elaborate why? 
The multiple inclusion is only needed if you redefine any of the fastcgi_param(s) in each location. Have you each site installed in a subdir? Is that the case? I think this will probably work in your case: location ~ ^([^/]*)/.*$ { include drupal-controller.conf; fastcgi_param SCRIPT_NAME $current_location_base/$fastcgi_script_name; } where drupal-controller.conf is: root /var/www/apps/drupal; try_files /index.php =404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_pass localhost:9000; Try it out. --- appa > --- src/http/ngx_http_variables.c.orig Tue Jul 3 03:41:52 2012 > +++ src/http/ngx_http_variables.c Fri Jan 4 10:49:50 2013 > @@ -65,6 +65,8 @@ static ngx_int_t ngx_http_variable_request_filename(ng > ngx_http_variable_value_t *v, uintptr_t data); > static ngx_int_t ngx_http_variable_server_name(ngx_http_request_t *r, > ngx_http_variable_value_t *v, uintptr_t data); > +static ngx_int_t ngx_http_variable_location(ngx_http_request_t *r, > + ngx_http_variable_value_t *v, uintptr_t data); > static ngx_int_t ngx_http_variable_request_method(ngx_http_request_t *r, > ngx_http_variable_value_t *v, uintptr_t data); > static ngx_int_t ngx_http_variable_remote_user(ngx_http_request_t *r, > @@ -206,6 +208,10 @@ static ngx_http_variable_t ngx_http_core_variables[] > > { ngx_string("server_name"), NULL, ngx_http_variable_server_name, 0, > 0, 0 }, > > + { ngx_string("location"), NULL, > + ngx_http_variable_location, 0, > + NGX_HTTP_VAR_NOCACHEABLE, 0 }, > + { ngx_string("request_method"), NULL, > ngx_http_variable_request_method, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, > @@ -1382,6 +1388,28 @@ > ngx_http_variable_server_name(ngx_http_request_t *r, > v->no_cacheable = 0; > v->not_found = 0; > v->data = cscf->server_name.data; > + > + return NGX_OK; > +} > + > + > +static ngx_int_t > +ngx_http_variable_location(ngx_http_request_t *r, > + 
ngx_http_variable_value_t *v, uintptr_t data) > +{ > + ngx_http_core_loc_conf_t *clcf; > + > + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); > + > + if (clcf->regex) { > + v->not_found = 1; > + } else { > + v->len = clcf->name.len; > + v->valid = 1; > + v->no_cacheable = 0; > + v->not_found = 0; > + v->data = clcf->name.data; > + } > > return NGX_OK; > } > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From appa at perusio.net Fri Jan 4 02:10:38 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 04 Jan 2013 03:10:38 +0100 Subject: [PATCH] implement a $location variable In-Reply-To: <87zk0pn1yk.wl%appa@perusio.net> References: <1B6DEF34-186D-4D03-B4BA-36B245EAA211@gwynne.id.au> <20121015145349.GG40452@mdounin.ru> <20121016144621.GT40452@mdounin.ru> <20130104012204.GA26729@animata.net> <87zk0pn1yk.wl%appa@perusio.net> Message-ID: <87y5g9n1ep.wl%appa@perusio.net> On 4 Jan 2013 02h58 CET, appa at perusio.net wrote: > On 4 Jan 2013 02h22 CET, david at gwynne.id.au wrote: > >> here's a diff that provides $location for use in not regex location >> blocks. 
>> >> we're using it to provide easy to use mass hosting of drupals: >> >> xdlg at argon nginx$ cat drupal-controller.conf >> root /var/www/apps/drupal; >> try_files /index.php =404; >> include fastcgi_params; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_param SCRIPT_NAME $location$fastcgi_script_name; >> fastcgi_intercept_errors on; >> fastcgi_pass localhost:9000; >> >> which is used in server blocks like this: >> >> server { >> listen 80; >> server_name www.example.com; >> >> location / { include drupal-controller.conf; } >> location /foo { include drupal-controller.conf; } >> location /bar { include drupal-controller.conf; } >> location /baz { include drupal-controller.conf; } >> } >> >> i cannot otherwise find a nice way to use the locations name as a >> parameter without specifying the value as a variable again within >> the location. > > I fail to see the need for a $location variable in a Drupal Nginx > config. Can you elaborate why? The multiple inclusion is only needed > if you redefine any of the fastcgi_param(s) in each location. > > Have you each site installed in a subdir? Is that the case? > > > I think this will probably work in your case: > > location ~ ^([^/]*)/.*$ { include > drupal-controller.conf; fastcgi_param SCRIPT_NAME > $current_location_base/$fastcgi_script_name; } Oops. Wrong regex. 
Rather: location / { include drupal-controller.conf; fastcgi_param SCRIPT_NAME $fastcgi_script_name; location ~ ^(/[^/]*).*$ { include drupal-controller.conf; fastcgi_param SCRIPT_NAME $current_location_base$fastcgi_script_name; } } --- appa From david at gwynne.id.au Fri Jan 4 02:18:58 2013 From: david at gwynne.id.au (David Gwynne) Date: Fri, 4 Jan 2013 12:18:58 +1000 Subject: [PATCH] implement a $location variable In-Reply-To: <87zk0pn1yk.wl%appa@perusio.net> References: <1B6DEF34-186D-4D03-B4BA-36B245EAA211@gwynne.id.au> <20121015145349.GG40452@mdounin.ru> <20121016144621.GT40452@mdounin.ru> <20130104012204.GA26729@animata.net> <87zk0pn1yk.wl%appa@perusio.net> Message-ID: On 04/01/2013, at 11:58 AM, António P. P. Almeida wrote: > On 4 Jan 2013 02h22 CET, david at gwynne.id.au wrote: > >> here's a diff that provides $location for use in non-regex location >> blocks. >> >> we're using it to provide easy-to-use mass hosting of drupals: >> >> xdlg at argon nginx$ cat drupal-controller.conf >> root /var/www/apps/drupal; >> try_files /index.php =404; >> include fastcgi_params; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_param SCRIPT_NAME $location$fastcgi_script_name; >> fastcgi_intercept_errors on; >> fastcgi_pass localhost:9000; >> >> which is used in server blocks like this: >> >> server { >> listen 80; >> server_name www.example.com; >> >> location / { include drupal-controller.conf; } >> location /foo { include drupal-controller.conf; } >> location /bar { include drupal-controller.conf; } >> location /baz { include drupal-controller.conf; } >> } >> >> i cannot otherwise find a nice way to use the location's name as a >> parameter without specifying the value as a variable again within >> the location. > > I fail to see the need for a $location variable in a Drupal Nginx > config. Can you elaborate why? 
need is a strong word; it's just a lot nicer. > Have you each site installed in a subdir? Is that the case? no. it is a single copy of the drupal codebase which is shared by all the sites; they just have separate settings.php files. the way nginx tells drupal which settings to use is via SCRIPT_NAME, based on $location. > > > I think this will probably work in your case: > > location ~ ^([^/]*)/.*$ { > include > drupal-controller.conf; fastcgi_param SCRIPT_NAME > $current_location_base/$fastcgi_script_name; } > > where drupal-controller.conf is: > > root /var/www/apps/drupal; > try_files /index.php =404; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_intercept_errors on; > fastcgi_pass localhost:9000; > > Try it out. i'm sure it would work; i'm just arguing that the configurations could be a lot more straightforward and readable using the patch below, and i wouldn't need to run pcre to get what is basically a copy of the value from the location block. 
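As a sketch of the regex alternative being discussed — assuming an nginx build with PCRE named-capture support in location regexes, and reusing David's drupal-controller.conf include — the repeated value can come from a named capture rather than being retyped in each block:

```nginx
# $loc captures the first path segment, approximating what the
# proposed $location variable would hold for a prefix location.
location ~ ^(?<loc>/[^/]+) {
    include drupal-controller.conf;
    fastcgi_param SCRIPT_NAME $loc$fastcgi_script_name;
}
```

This still evaluates PCRE per request, which is David's objection; his patch reads the value straight out of the matched location's configuration.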
> > --- appa > > > >> --- src/http/ngx_http_variables.c.orig Tue Jul 3 03:41:52 2012 >> +++ src/http/ngx_http_variables.c Fri Jan 4 10:49:50 2013 >> @@ -65,6 +65,8 @@ static ngx_int_t ngx_http_variable_request_filename(ng >> ngx_http_variable_value_t *v, uintptr_t data); >> static ngx_int_t ngx_http_variable_server_name(ngx_http_request_t *r, >> ngx_http_variable_value_t *v, uintptr_t data); >> +static ngx_int_t ngx_http_variable_location(ngx_http_request_t *r, >> + ngx_http_variable_value_t *v, uintptr_t data); >> static ngx_int_t ngx_http_variable_request_method(ngx_http_request_t *r, >> ngx_http_variable_value_t *v, uintptr_t data); >> static ngx_int_t ngx_http_variable_remote_user(ngx_http_request_t *r, >> @@ -206,6 +208,10 @@ static ngx_http_variable_t ngx_http_core_variables[] >> >> { ngx_string("server_name"), NULL, ngx_http_variable_server_name, 0, >> 0, 0 }, >> >> + { ngx_string("location"), NULL, >> + ngx_http_variable_location, 0, >> + NGX_HTTP_VAR_NOCACHEABLE, 0 }, >> + { ngx_string("request_method"), NULL, >> ngx_http_variable_request_method, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, >> @@ -1382,6 +1388,28 @@ >> ngx_http_variable_server_name(ngx_http_request_t *r, >> v->no_cacheable = 0; >> v->not_found = 0; >> v->data = cscf->server_name.data; >> + >> + return NGX_OK; >> +} >> + >> + >> +static ngx_int_t >> +ngx_http_variable_location(ngx_http_request_t *r, >> + ngx_http_variable_value_t *v, uintptr_t data) >> +{ >> + ngx_http_core_loc_conf_t *clcf; >> + >> + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); >> + >> + if (clcf->regex) { >> + v->not_found = 1; >> + } else { >> + v->len = clcf->name.len; >> + v->valid = 1; >> + v->no_cacheable = 0; >> + v->not_found = 0; >> + v->data = clcf->name.data; >> + } >> >> return NGX_OK; >> } >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ 
> nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Fri Jan 4 03:27:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Jan 2013 07:27:19 +0400 Subject: Nginx multi-threaded plugin In-Reply-To: References: <20130101211922.GY40452@mdounin.ru> Message-ID: <20130104032719.GB12313@mdounin.ru> Hello! On Wed, Jan 02, 2013 at 10:37:48AM +0530, Fasih wrote: > Thank you, Maxim. I will try to see if I can get to the single-threaded model. > Basically, when I was talking about accept_mutex, what I meant was: if the > worker thread is busy with my CPU-intensive work, I would obviously want > the other workers to take over. IIUC, if I hold the accept mutex I am the > one who will take the requests; is it possible that my worker plugin is > caught up doing the work *and* has not released the mutex? In other words, > is it possible that my CPU-intensive operation prevents other workers who > are free from taking requests, with any configuration option > enabled/disabled? The accept mutex's job is to prevent multiple workers from accepting connections simultaneously, thus preventing an unneeded wakeup of all worker processes for an accept() when a new connection arrives. The accept mutex is released right after a wakeup, as soon as nginx reads the list of events reported by the kernel. It's not a load-balancing mechanism. With the way things are normally done in nginx, busy workers are less likely to wait for kernel events and therefore less likely to get new connections (both with accept mutex enabled and with accept mutex disabled), thus ensuring balancing between workers. > On Wed, Jan 2, 2013 at 2:49 AM, Maxim Dounin wrote: > > > Hello! > > > > On Tue, Jan 01, 2013 at 11:57:30PM +0530, Fasih wrote: > > > > > Hi guys > > > > > > I need to write a plugin which does some CPU-intensive work. 
I want to be > > > able to create a thread which does some work and once done post an event > > > into nginx thread via ngx_add_timer. I tried compiling my plugin with > > > NGX_THREADS=pthread but that doesnt configure saying that the threading > > > support is broken. I can obviously switch to a single threaded mode but > > > that would be significantly difficult for me and would prefer to use only > > > as the very last resort. > > > > > > Can you please advice how to go about doing this? I am considering > > pulling > > > out the usage of 'ngx_event_timer_mutex' outside #if NGX_THREADS. > > Frankly I > > > am surprised how no other plugin had this kind of requirement, the > > "broken > > > support" commit has been there since quite some time now. > > > > Threading support as currently present was an experiment, and it's > > broken for a long time (some very limited part of it is still used > > on win32, but it's different story). I wouldn't expect you'll be > > able to work with threads without problems. > > > > > One final question, if I do the CPU intensive work in the nginx thread > > and > > > I have number of workers, how would the system behave? Will > > > accept_mutex on/off change the behavior? > > > > Accept mutex only ensures no two processes try to accept new > > connections at the same time, and it doesn't care how do you > > handle requests. Obviously enough it will change the behaviour, > > much like in any other case. 
> > > > -- > > Maxim Dounin > > http://nginx.com/support.html > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Fri Jan 4 04:47:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Jan 2013 08:47:30 +0400 Subject: Book for module Development In-Reply-To: <20130102160205.19550.qmail@f4mail-235-163.rediffmail.com> References: <20130102160205.19550.qmail@f4mail-235-163.rediffmail.com> Message-ID: <20130104044729.GD12313@mdounin.ru> Hello! On Wed, Jan 02, 2013 at 04:02:05PM -0000, Sagar wrote: > Hi all, > > I am a PHP developer with 6+ years of experience building > websites, portals, applications, etc. > > I need some guidance in developing 3rd-party modules for nginx, > but I couldn't find any specific book for that. > > Please help me onto the correct path, as I don't want to get > frustrated by choosing the wrong direction at an early stage. > > Currently, I have the following article to start with: > http://www.evanmiller.org/nginx-modules-guide-advanced.html > > Hoping for a favourable reply. Evan Miller's guide is good (although it might be a bit outdated). I would recommend you start with: http://www.evanmiller.org/nginx-modules-guide.html (The advanced guide you link to covers more advanced topics and assumes you are already familiar with the basics.) If in doubt, lots of module samples are available in the nginx sources. 
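To make that pointer concrete: a third-party module of this era is compiled into nginx with ./configure --add-module=/path/to/dir, where the directory holds the C sources plus a small shell fragment named config that the build system sources. A minimal sketch, with a placeholder module name:

```sh
# config -- sourced by nginx's ./configure when passed --add-module=<dir>
ngx_addon_name=ngx_http_example_module

# Register the module with the HTTP core and list its sources.
HTTP_MODULES="$HTTP_MODULES ngx_http_example_module"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_example_module.c"
```

The corresponding ngx_http_example_module.c then defines the module's ngx_module_t and context structures, as walked through in Evan Miller's basic guide.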
-- Maxim Dounin http://nginx.com/support.html From yaoweibin at gmail.com Fri Jan 4 05:12:50 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Fri, 4 Jan 2013 13:12:50 +0800 Subject: nginx custom module thread local ctx In-Reply-To: References: Message-ID: see this config as an example: https://github.com/pagespeed/ngx_pagespeed/blob/master/config You should link the stdc++ library with gcc. 2012/12/29 Ruslan Mullakhmetov > Hi! > > I am developing a module for nginx and need to use C++ source files in it. > > I found a way to compile it (extern "C" + making the compiler recognize .cpp files), > but now I need to specify some flags. > > Could you suggest a method to tell the build system which compiler (CXX) and > which flags to use in the module/config file? > > I saw in the Makefile that it uses $(CC), despite there being an unused definition > of $(CPP), which is actually the preprocessor, and no CXXFLAGS/CPPFLAGS. > > I will greatly appreciate your help. > > I could correct the Makefile myself, but that gets irritating with each > reconfigure. > > -- > BR, Ruslan Mullakhmetov > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From faskiri.devel at gmail.com Fri Jan 4 07:52:04 2013 From: faskiri.devel at gmail.com (Fasih) Date: Fri, 4 Jan 2013 13:22:04 +0530 Subject: Nginx multi-threaded plugin In-Reply-To: <20130104032719.GB12313@mdounin.ru> References: <20130101211922.GY40452@mdounin.ru> <20130104032719.GB12313@mdounin.ru> Message-ID: Thank you again. 
Makes perfect sense. On Fri, Jan 4, 2013 at 8:57 AM, Maxim Dounin wrote: > are normally done in nginx, busy workers are > less likely to wait for kernel events and therefore less likely to > get new connections (both with accept mutex enabled or accept > mutex disabled), thus ensuring balancing between workers. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Fri Jan 4 09:04:44 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 04 Jan 2013 10:04:44 +0100 Subject: [PATCH] implement a $location variable In-Reply-To: References: <1B6DEF34-186D-4D03-B4BA-36B245EAA211@gwynne.id.au> <20121015145349.GG40452@mdounin.ru> <20121016144621.GT40452@mdounin.ru> <20130104012204.GA26729@animata.net> <87zk0pn1yk.wl%appa@perusio.net> Message-ID: <87vcbdmi8j.wl%appa@perusio.net> On 4 Jan 2013 03h18 CET, david at gwynne.id.au wrote: > need is a strong word; it's just a lot nicer. > >> Have you each site installed in a subdir? Is that the case? > > no. it is a single copy of the drupal codebase which is shared by > all the sites; they just have separate settings.php files. the way > nginx tells drupal which settings to use is via SCRIPT_NAME, based on > $location. Ok. If I understood correctly, you have a typical multi-site setup. Why are you doing the host resolution at the server level? Drupal does it at the application level. It is capable of handling all the hosts that the vhost is configured to support. 
See: http://api.drupal.org/api/drupal/includes%21bootstrap.inc/function/conf_path/7 --- appa From david at gwynne.id.au Fri Jan 4 10:35:15 2013 From: david at gwynne.id.au (David Gwynne) Date: Fri, 4 Jan 2013 20:35:15 +1000 Subject: [PATCH] implement a $location variable In-Reply-To: <87vcbdmi8j.wl%appa@perusio.net> References: <1B6DEF34-186D-4D03-B4BA-36B245EAA211@gwynne.id.au> <20121015145349.GG40452@mdounin.ru> <20121016144621.GT40452@mdounin.ru> <20130104012204.GA26729@animata.net> <87zk0pn1yk.wl%appa@perusio.net> <87vcbdmi8j.wl%appa@perusio.net> Message-ID: <15ACD619-6E5B-4297-AFC3-BD3CD9A553D1@gwynne.id.au> On 04/01/2013, at 7:04 PM, António P. P. Almeida wrote: > On 4 Jan 2013 03h18 CET, david at gwynne.id.au wrote: > >> need is a strong word; it's just a lot nicer. >> >>> Have you each site installed in a subdir? Is that the case? >> >> no. it is a single copy of the drupal codebase which is shared by >> all the sites; they just have separate settings.php files. the way >> nginx tells drupal which settings to use is via SCRIPT_NAME, based on >> $location. > > Ok. If I understood correctly, you have a typical multi-site setup. Why > are you doing the host resolution at the server level? Drupal does it > at the application level. It is capable of handling all the hosts that > the vhost is configured to support. > > See: > http://api.drupal.org/api/drupal/includes%21bootstrap.inc/function/conf_path/7 that makes it look like i can pass $request_uri as SCRIPT_NAME. however, while drupal does get the right site settings.php file doing that, for some reason it can't figure out which page it's on. it either generates a 404 or only shows the home page no matter what URI i give it in a web browser. while i do take the point that there are other ways of making drupal do what i want with an unpatched nginx, that doesn't mean other things couldn't make good use of a $location variable. 
dlg > > --- appa > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From appa at perusio.net Fri Jan 4 11:06:56 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 04 Jan 2013 12:06:56 +0100 Subject: [PATCH] implement a $location variable In-Reply-To: <15ACD619-6E5B-4297-AFC3-BD3CD9A553D1@gwynne.id.au> References: <1B6DEF34-186D-4D03-B4BA-36B245EAA211@gwynne.id.au> <20121015145349.GG40452@mdounin.ru> <20121016144621.GT40452@mdounin.ru> <20130104012204.GA26729@animata.net> <87zk0pn1yk.wl%appa@perusio.net> <87vcbdmi8j.wl%appa@perusio.net> <15ACD619-6E5B-4297-AFC3-BD3CD9A553D1@gwynne.id.au> Message-ID: <87txqxmckv.wl%appa@perusio.net> On 4 Jan 2013 11h35 CET, david at gwynne.id.au wrote: > > On 04/01/2013, at 7:04 PM, Ant?nio P. P. Almeida > wrote: > >> On 4 Jan 2013 03h18 CET, david at gwynne.id.au wrote: >> >>> need is a strong work, its just a lot nicer. >>> >>>> Have you each site installed in a subdir? Is that the case? >>> >>> no. it is a single copy of the drupal codebase which is shared by >>> all the sites, they just have separate settings.php files. the way >>> nginx tells drupal which settings to use is via SCRIPT_NAME based >>> on $location. >> >> Ok. If I understood correctly you have a typical multi-site >> setup. Why are you doing the host resolution at the server level? >> Drupal does it at the application level. It's capable of handling >> all hosts for which the vhost is configured to support. >> >> See: >> http://api.drupal.org/api/drupal/includes%21bootstrap.inc/function/conf_path/7 > > that makes it look like i can pass $request_uri as > SCRIPT_NAME. however, while drupal does get the right site > settings.php file doing that, for some reason it cant figure out > which page its on. it either generates a 404 or only shows the home > page no matter what uri i give it in a web browser. 
That should work out of the box without any "tinkering". Here's a Drupal config that works with multisites as designed: https://github.com/perusio/drupal-with-nginx Also there's a Drupal group for Nginx: http://groups.drupal.org/nginx > while i do take the point that there are other ways of making drupal > do what i want with an unpatched nginx, that doesn't mean other > things couldn't make good use of a $location variable. Sure. You have to sell the idea to the Nginx devs. --- appa From kirilk at cloudxcel.com Fri Jan 4 13:41:18 2013 From: kirilk at cloudxcel.com (Kiril Kalchev) Date: Fri, 4 Jan 2013 15:41:18 +0200 Subject: Export all log specific variables Message-ID: Hi Guys, I made a patch for exporting all log-only variables as common nginx variables. I am wondering how I can submit the patch to the official nginx devs. I am attaching the patch in this email. If I have to do something more, please write me back. The patch was made against nginx 1.3.10, which is the newest nginx development release. -------------- next part -------------- A non-text attachment was scrubbed... Name: http_variables.patch Type: application/octet-stream Size: 4284 bytes Desc: not available URL: -------------- next part -------------- Regards, Kiril From orz at loli.my Fri Jan 4 15:29:08 2013 From: orz at loli.my (=?UTF-8?B?44OT44Oq44OT44Oq4oWk?=) Date: Fri, 4 Jan 2013 23:29:08 +0800 Subject: Nginx 1.3.10 Segfault Message-ID: #0 0x000000348a289087 in memcpy () from /lib64/libc.so.6 Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.80.el6_3.6.x86_64 keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.9-33.el6_3.3.x86_64 libcom_err-1.41.12-12.el6.x86_64 libselinux-2.0.94-5.3.el6.x86_64 openssl-1.0.0-25.el6_3.1.x86_64 zlib-1.2.3-27.el6.x86_64 (gdb) bt #0 0x000000348a289087 in memcpy () from /lib64/libc.so.6 #1 0x00000000004565c5 in ngx_http_log_variable (r=, buf=0xaf0fd08bab1b748d
, op=) at src/http/modules/ngx_http_log_module.c:901 #2 0x0000000000456ec5 in ngx_http_log_handler (r=0x1bf0070) at src/http/modules/ngx_http_log_module.c:308 #3 0x000000000044b847 in ngx_http_log_request (r=0x1bf0070, rc=) at src/http/ngx_http_request.c:3109 #4 ngx_http_free_request (r=0x1bf0070, rc=) at src/http/ngx_http_request.c:3064 #5 0x000000000044ca9b in ngx_http_close_request (r=0x1bf0070, rc=400) at src/http/ngx_http_request.c:3017 #6 0x000000000044ed1e in ngx_http_read_request_header (r=0x1bf0070) at src/http/ngx_http_request.c:1214 #7 0x000000000044f890 in ngx_http_process_request_line (rev=0x7f39d37d4750) at src/http/ngx_http_request.c:743 #8 0x0000000000438dc1 in ngx_epoll_process_events (cycle=0x15753e0, timer=, flags=) at src/event/modules/ngx_epoll_module.c:683 #9 0x000000000042e957 in ngx_process_events_and_timers (cycle=0x15753e0) at src/event/ngx_event.c:247 #10 0x0000000000436249 in ngx_worker_process_cycle (cycle=0x15753e0, data=) at src/os/unix/ngx_process_cycle.c:807 #11 0x00000000004347d4 in ngx_spawn_process (cycle=0x15753e0, proc=0x436180 , data=0x3, name=0x4e4875 "worker process", respawn=8) at src/os/unix/ngx_process.c:198 #12 0x000000000043723f in ngx_reap_children (cycle=0x15753e0) at src/os/unix/ngx_process_cycle.c:619 #13 ngx_master_process_cycle (cycle=0x15753e0) at src/os/unix/ngx_process_cycle.c:180 #14 0x0000000000414a0b in main (argc=, argv=) at src/core/nginx.c:412 (gdb) bt full #0 0x000000348a289087 in memcpy () from /lib64/libc.so.6 No symbol table info available. #1 0x00000000004565c5 in ngx_http_log_variable (r=, buf=0xaf0fd08bab1b748d
, op=) at src/http/modules/ngx_http_log_module.c:901 value = 0x1a82fe8 #2 0x0000000000456ec5 in ngx_http_log_handler (r=0x1bf0070) at src/http/modules/ngx_http_log_module.c:308 line = p = len = i = l = log = op = 0x1576bc0 buffer = 0x157e640 lcf = 0x1ae1868 #3 0x000000000044b847 in ngx_http_log_request (r=0x1bf0070, rc=) at src/http/ngx_http_request.c:3109 i = n = 1 log_handler = 0x1ca4cb8 #4 ngx_http_free_request (r=0x1bf0070, rc=) at src/http/ngx_http_request.c:3064 log = 0x1ba82b0 linger = {l_onoff = 28185920, l_linger = 0} cln = ctx = #5 0x000000000044ca9b in ngx_http_close_request (r=0x1bf0070, rc=400) at src/http/ngx_http_request.c:3017 c = 0x7f39d3978610 #6 0x000000000044ed1e in ngx_http_read_request_header (r=0x1bf0070) at src/http/ngx_http_request.c:1214 n = rev = 0x7f39d37d4750 c = 0x7f39d3978610 #7 0x000000000044f890 in ngx_http_process_request_line (rev=0x7f39d37d4750) at src/http/ngx_http_request.c:743 host = n = rc = -2 rv = c = 0x7f39d3978610 r = 0x1bf0070 #8 0x0000000000438dc1 in ngx_epoll_process_events (cycle=0x15753e0, timer=, flags=) at src/event/modules/ngx_epoll_module.c:683 events = revents = instance = i = level = err = rev = 0x7f39d37d4750 wev = queue = c = 0x7f39d3978610 #9 0x000000000042e957 in ngx_process_events_and_timers (cycle=0x15753e0) at src/event/ngx_event.c:247 flags = timer = delta = 1357311866947 #10 0x0000000000436249 in ngx_worker_process_cycle (cycle=0x15753e0, data=) at src/os/unix/ngx_process_cycle.c:807 worker = i = c = #11 0x00000000004347d4 in ngx_spawn_process (cycle=0x15753e0, proc=0x436180 , data=0x3, name=0x4e4875 "worker process", respawn=8) at src/os/unix/ngx_process.c:198 on = 1 pid = 0 s = #12 0x000000000043723f in ngx_reap_children (cycle=0x15753e0) at src/os/unix/ngx_process_cycle.c:619 i = live = n = ch = {command = 2, pid = 22951, slot = 8, fd = -1} ccf = #13 ngx_master_process_cycle (cycle=0x15753e0) at src/os/unix/ngx_process_cycle.c:180 title = p = size = i = ---Type to continue, or q to quit--- n = sigio 
= 0 set = {__val = {0 }} itv = {it_interval = {tv_sec = 0, tv_usec = 21766488}, it_value = {tv_sec = 0, tv_usec = 0}} live = delay = 0 ls = ccf = 0x1575c78 #14 0x0000000000414a0b in main (argc=, argv=) at src/core/nginx.c:412 i = log = cycle = 0x14c2140 init_cycle = {conf_ctx = 0x0, pool = 0x14c1360, log = 0x728f80, new_log = {log_level = 0, file = 0x0, connection = 0, handler = 0, data = 0x0, action = 0x0}, files = 0x0, free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = {len = 21, data = 0x4e0370 "/etc/nginx/nginx.conf"}, conf_param = {len = 0, data = 0x0}, conf_prefix = {len = 11, data = 0x4e0370 "/etc/nginx/nginx.conf"}, prefix = {len = 11, data = 0x4e0364 "/usr/local/"}, lock_file = {len = 0, data = 0x0}, hostname = { len = 0, data = 0x0}} ccf = 0x14c3058 (gdb) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Jan 4 15:55:51 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 4 Jan 2013 19:55:51 +0400 Subject: Nginx 1.3.10 Segfault In-Reply-To: References: Message-ID: <201301041955.51844.vbart@nginx.com> Please, try the patch from ticket 268: http://trac.nginx.org/nginx/ticket/268 wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html On Friday 04 January 2013 19:29:08 ????? 
wrote: > [full backtrace snipped; it is identical to the one quoted in full above] From orz at loli.my Fri Jan 4 16:14:56 2013 From: orz at loli.my (=?UTF-8?B?44OT44Oq44OT44Oq4oWk?=) Date: Sat, 5 Jan 2013 00:14:56 +0800 Subject: Nginx 1.3.10 Segfault In-Reply-To: <201301041955.51844.vbart@nginx.com> References: <201301041955.51844.vbart@nginx.com> Message-ID: ok, I will try that. 2013/1/4 Valentin V. Bartenev > Please, try the patch from ticket 268: > http://trac.nginx.org/nginx/ticket/268 > > wbr, Valentin V.
Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > > On Friday 04 January 2013 19:29:08 ビリビリⅤ wrote: > > [full backtrace snipped; it is identical to the one quoted in full above] > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part
-------------- An HTML attachment was scrubbed... URL: From vshebordaev at mail.ru Fri Jan 4 18:04:01 2013 From: vshebordaev at mail.ru (Vladimir Shebordaev) Date: Fri, 04 Jan 2013 22:04:01 +0400 Subject: Export all log specific variables In-Reply-To: References: Message-ID: <50E71991.8060409@mail.ru> Hi, On 04.01.2013 17:41, Kiril Kalchev wrote: > Hi Guys, > > I made a patch for exporting all log-only variables as common nginx variables. I am wondering how I can submit the patch to the official nginx devs. I am attaching the patch in this email. If I have to do something more, please write me back. The patch was made against nginx 1.3.10, which is the newest nginx development release. > I guess nginx-devel@ is the only proper place to send in a patch. Btw, please don't hesitate to check out this thread. Regards, Vladimir > > Regards, > Kiril > > From orz at loli.my Sat Jan 5 04:30:57 2013 From: orz at loli.my (=?UTF-8?B?44OT44Oq44OT44Oq4oWk?=) Date: Sat, 5 Jan 2013 12:30:57 +0800 Subject: Nginx 1.3.10 Segfault In-Reply-To: <201301041955.51844.vbart@nginx.com> References: <201301041955.51844.vbart@nginx.com> Message-ID: The patched version still throws the same segfault. I think it may be caused by this 1.3.9 feature: *) Feature: the $request_time and $msec variables can now be used not only in the "log_format" directive. 2013/1/4 Valentin V. Bartenev > Please, try the patch from ticket 268: > http://trac.nginx.org/nginx/ticket/268 > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > > On Friday 04 January 2013 19:29:08 ビリビリⅤ
wrote: > > [full backtrace snipped; it is identical to the one quoted in full above] > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part
-------------- An HTML attachment was scrubbed... URL: From orz at loli.my Sat Jan 5 04:35:11 2013 From: orz at loli.my (=?UTF-8?B?44OT44Oq44OT44Oq4oWk?=) Date: Sat, 5 Jan 2013 12:35:11 +0800 Subject: Nginx 1.3.10 Segfault In-Reply-To: <201301041955.51844.vbart@nginx.com> References: <201301041955.51844.vbart@nginx.com> Message-ID: here is my log_format log_format logger '$http_host||$remote_addr||$msec||$status||$request_length||$bytes_sent||"$request"||"$http_referer"||"$http_user_agent"||$upstream_cache_status||$upstream_status||$request_time||$upstream_response_time'; 2013/1/4 Valentin V. Bartenev > Please, try the patch from ticket 268: > http://trac.nginx.org/nginx/ticket/268 > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > > On Friday 04 January 2013 19:29:08 ビリビリⅤ wrote: > [full backtrace snipped; it is identical to the one quoted in full above] > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part
-------------- An HTML attachment was scrubbed... URL: From yaoweibin at gmail.com Sat Jan 5 05:32:50 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Sat, 5 Jan 2013 13:32:50 +0800 Subject: Nginx 1.3.10 Segfault In-Reply-To: References: <201301041955.51844.vbart@nginx.com> Message-ID: We confirm this bug. And the patch could fix this bug. 2013/1/5 ????? > here is my log_format > log_format logger > '$http_host||$remote_addr||$msec||$status||$request_length||$bytes_sent||"$request"||"$http_referer"||"$http_user_agent"||$upstream_cache_status||$upstream_status||$request_time||$upstream_response_time'; > > > > 2013/1/4 Valentin V. Bartenev > >> Please, try the patch from ticket 268: >> http://trac.nginx.org/nginx/ticket/268 >> >> >> wbr, Valentin V. Bartenev >> >> -- >> http://nginx.com/support.html >> http://nginx.org/en/donation.html >> >> >> On Friday 04 January 2013 19:29:08 ????? wrote: >> > [snip: gdb backtrace identical to the one quoted earlier in the thread] >> > (gdb)
_______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sat Jan 5 06:23:19 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 5 Jan 2013 10:23:19 +0400 Subject: Nginx 1.3.10 Segfault In-Reply-To: References: <201301041955.51844.vbart@nginx.com> Message-ID: <201301051023.19845.vbart@nginx.com> On Saturday 05 January 2013 08:30:57 ????? wrote: > the patched version still throws the same segfault. I think it may be caused by a 1.3.9 > feature: > Do you mean that the problem also occurs on 1.3.9? > *) Feature: the $request_time and $msec variables can now be used not > only in the "log_format" directive. > > No, this change has nothing to do with the access log module. Are you sure that you applied the patch correctly? Enabling buffered logs should also fix the problem. See: http://nginx.org/r/access_log ("buffer" or "gzip" parameters). wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From orz at loli.my Sat Jan 5 07:51:44 2013 From: orz at loli.my (=?UTF-8?B?44OT44Oq44OT44Oq4oWk?=) Date: Sat, 5 Jan 2013 15:51:44 +0800 Subject: Nginx 1.3.10 Segfault In-Reply-To: <201301051023.19845.vbart@nginx.com> References: <201301041955.51844.vbart@nginx.com> <201301051023.19845.vbart@nginx.com> Message-ID: I am checking 1.3.9 for this bug. I have confirmed that 1.3.8 does not have this bug. 2013/1/5 Valentin V. Bartenev > On Saturday 05 January 2013 08:30:57 ????? wrote: > > the patched version still throws the same segfault. I think it may be caused by a 1.3.9 > > feature: > > > > Do you mean that the problem also occurs on 1.3.9?
> > > *) Feature: the $request_time and $msec variables can now be used not > > only in the "log_format" directive. > > > > > > No, this change has nothing to do with the access log module. > > Are you sure that you applied the patch correctly? Enabling buffered logs should > also fix > the problem. See: http://nginx.org/r/access_log ("buffer" or "gzip" > parameters). > > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From crk_world at yahoo.com.cn Sat Jan 5 09:13:52 2013 From: crk_world at yahoo.com.cn (chen cw) Date: Sat, 5 Jan 2013 17:13:52 +0800 Subject: nginx custom module thread local ctx In-Reply-To: References: Message-ID: @Mr Yao Is there something connected with CPPFLAGS in the link you gave out? Is your actual idea to build C++ libraries separately and only link them in the nginx Makefile? On Fri, Jan 4, 2013 at 1:12 PM, ??? wrote: > see the config as an example: > https://github.com/pagespeed/ngx_pagespeed/blob/master/config > > You should add the library stdc++ with gcc. > > > 2012/12/29 Ruslan Mullakhmetov > >> Hi! >> >> I am developing a module for nginx and need to use C++ source files in it. >> >> I found the way to compile it (extern "C" + making the compiler recognize .cpp files), >> but now I need to specify some flags. >> >> Could you suggest a method to tell the build system which compiler (CXX) and >> which flags to use in the module/config file? >> >> I saw in the Makefile that it uses $(CC) despite there being an unused definition >> of $(CPP), which is actually the preprocessor, and no CXXFLAGS/CPPFLAGS. >> >> I will greatly appreciate your help. >> >> I could correct the Makefile myself but it gets irritating with each >> reconfigure.
>> >> -- >> BR, Ruslan Mullakhmetov >> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- -- Charles Chen Software Engineer Server Platforms Team at Taobao.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From crk_world at yahoo.com.cn Sat Jan 5 09:55:20 2013 From: crk_world at yahoo.com.cn (chen cw) Date: Sat, 5 Jan 2013 17:55:20 +0800 Subject: nginx is not compatible with gcov of gcc 4.1.2 Message-ID: Hi I happened to see all the nginx workers crashed when they were exiting. When I looked into this, I found they would crash every time they were exiting, on the function gcov_exit(). Now I have confirmed that it is connected with these options: "--with-cc-opt='-fprofile-arcs -ftest-coverage' --with-ld-opt=-lgcov", which are used for code coverage testing. gcov is a part of gcc; the version of gcc I use is 4.1.2. Moreover, I found that in most cases the nginx workers will crash, except when they run with a very simple configuration, e.g., only 7 empty locations in only one server in the configuration file. We confirmed that all nginx versions from 1.0.15 to 1.2.6 have this problem. -- Charles Chen Software Engineer Server Platforms Team at Taobao.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From crk_world at yahoo.com.cn Sat Jan 5 10:10:27 2013 From: crk_world at yahoo.com.cn (chen cw) Date: Sat, 5 Jan 2013 18:10:27 +0800 Subject: "http://nginx.org/r/directive" will not redirect to correct language version of directive Message-ID: Hi I am a Chinese user and my language is zh-CN; however, when I access the document http://nginx.org/r/allow, I see the English page even though there is a Chinese page over there. Is there any method to redirect us to the page in the correct language? Thank you -- -- Charles Chen Software Engineer Server Platforms Team at Taobao.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Jan 5 11:45:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 5 Jan 2013 15:45:49 +0400 Subject: "http://nginx.org/r/directive" will not redirect to correct language version of directive In-Reply-To: References: Message-ID: <20130105114549.GK12313@mdounin.ru> Hello! On Sat, Jan 05, 2013 at 06:10:27PM +0800, chen cw wrote: > Hi > I am a Chinese user and my language is zh-CN; however, when I access > the document http://nginx.org/r/allow, I see the English page even though there > is a Chinese page over there. > Is there any method to redirect us to the page in the correct > language? It doesn't do any automatic language detection intentionally, and defaults to English. It supports explicit definition of a desired language via an optional suffix, e.g. http://nginx.org/r/allow/ru will redirect to the Russian docs, but it doesn't currently recognize "/cn" as a valid suffix. It probably should be allowed now, as we have most of the docs translated to Chinese (thanks to all involved, BTW!). Ruslan?
-- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Sat Jan 5 12:35:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 5 Jan 2013 16:35:13 +0400 Subject: nginx is not compatible with gcov of gcc 4.1.2 In-Reply-To: References: Message-ID: <20130105123513.GL12313@mdounin.ru> Hello! On Sat, Jan 05, 2013 at 05:55:20PM +0800, chen cw wrote: > Hi > I happened to see all the nginx workers crashed when they were exiting. > When I looked into this, I found they would crash every time they were > exiting, on the function gcov_exit(). > Now I have confirmed that it is connected with these options: > "--with-cc-opt='-fprofile-arcs -ftest-coverage' --with-ld-opt=-lgcov", which > are used for code coverage testing. gcov is a part of gcc; the version of > gcc I use is 4.1.2. > Moreover, I found that in most cases the nginx workers will crash, except > when they run with a very simple configuration, e.g., only 7 empty locations in > only one server in the configuration file. > We confirmed that all nginx versions from 1.0.15 to 1.2.6 have this > problem. Works fine here (though I don't have gcc 4.1.2 on hand to test exactly the same gcc version). You may want to debug further what goes wrong in your case, or just upgrade to a recent version of gcc to see if it helps. -- Maxim Dounin http://nginx.com/support.html From sagar.sonawane at rediffmail.com Sat Jan 5 15:06:12 2013 From: sagar.sonawane at rediffmail.com (Sagar) Date: 5 Jan 2013 15:06:12 -0000 Subject: Book for module Development Message-ID: <1357274857.S.2084.13486.H.Tk1heGltIERvdW5pbgBSZTogQm9vayBmb3IgbW9kdWxlIERldmVsb3BtZW50.RU.rfs261, rfs261, 981, 253.f4-234-185.old.1357398372.12739@webmail.rediffmail.com> Hello sir, Thank you very much. I am already reading some of the modules' source code to get an understanding... though it is time consuming, it is better than nothing! I wonder why there are no books on nginx module development. Thank you very much for the guidance; I will definitely start with that article.
Regards, Sagar Sonawane From: Maxim Dounin <mdounin at mdounin.ru> Sent: Fri, 04 Jan 2013 10:17:37 To: nginx-devel at nginx.org, sagar.sonawane at rediffmail.com Subject: Re: Book for module Development Hello! On Wed, Jan 02, 2013 at 04:02:05PM -0000, Sagar wrote: > Hi all, > > I am a PHP developer with 6+ years of experience building > websites, portals, applications, etc. > > I need some guidance in developing 3rd-party modules for nginx. > But I couldn't find any specific book for that. > > Please help me to walk the correct path, as I don't want to get > messed up/frustrated by choosing wrong directions at an early stage. > > Currently, I have the following article to start with: > http://www.evanmiller.org/nginx-modules-guide-advanced.html > > Hoping for a favourable reply. Evan Miller's guide is good (although it might be a bit outdated).  I would recommend you start with: http://www.evanmiller.org/nginx-modules-guide.html (The advanced guide you link to covers more advanced topics and assumes you are already familiar with the basic things.) If in doubt, lots of module samples are available in the nginx sources. -- Maxim Dounin http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From orz at loli.my Sun Jan 6 04:51:03 2013 From: orz at loli.my (=?UTF-8?B?44OT44Oq44OT44Oq4oWk?=) Date: Sun, 6 Jan 2013 12:51:03 +0800 Subject: Nginx 1.3.10 Segfault In-Reply-To: References: <201301041955.51844.vbart@nginx.com> <201301051023.19845.vbart@nginx.com> Message-ID: Confirmed: only 1.3.10 has this bug; 1.3.9 has no problem. 2013/1/5 ????? > I am checking 1.3.9 for this bug. I have confirmed that 1.3.8 does not have this > bug. > > > 2013/1/5 Valentin V. Bartenev > > On Saturday 05 January 2013 08:30:57 ????? wrote: >> > the patched version still throws the same segfault. I think it may be caused by a 1.3.9 >> > feature: >> > >> >> Do you mean that the problem also occurs on 1.3.9?
>> >> > *) Feature: the $request_time and $msec variables can now be used not >> > only in the "log_format" directive. >> > >> > >> >> No, this change have nothing to do with the access log module. >> >> Are you sure that you applied patch well? Enabling buffered logs should >> also fix >> the problem. See: http://nginx.org/r/access_log ("buffer" or "gzip" >> parameters). >> >> >> wbr, Valentin V. Bartenev >> >> -- >> http://nginx.com/support.html >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Sun Jan 6 20:39:11 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 7 Jan 2013 00:39:11 +0400 Subject: "http://nginx.org/r/directive" will not redirect to correct language version of directive In-Reply-To: <20130105114549.GK12313@mdounin.ru> References: <20130105114549.GK12313@mdounin.ru> Message-ID: <20130106203911.GB20240@lo0.su> On Sat, Jan 05, 2013 at 03:45:49PM +0400, Maxim Dounin wrote: > On Sat, Jan 05, 2013 at 06:10:27PM +0800, chen cw wrote: > > > Hi > > I am a Chinese user and my language is zh-CN, however, when I access > > the document http://nginx.org/r/allow, I see the English page even if there > > is a Chinese page over there. > > Is there any method allowing to redirect us to the page with correct > > language? > > It doesn't do any automatic language detection intentionally, and > defaults to English. It supports explicit definition of a desired > language via optional suffix, e.g. > > http://nginx.org/r/allow/ru > > will redirect to a Russian docs, but it doesn't currently > recognize "/cn" as a valid suffix. It probably should be allowed > now as we have most of docs translated to Chinese (thanks for all > involved, BTW!). > > Ruslan? http://nginx.org/r/allow/cn should work now. 
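[Editor's note: the /r/ short-link scheme discussed in the thread above (an optional trailing language suffix, defaulting to English, with no automatic Accept-Language detection) can be sketched as plain nginx configuration. This is only an illustration under assumptions: the dirindex.html target pages and the server name are hypothetical, not nginx.org's real setup.]

```nginx
# Hypothetical sketch of /r/<directive>[/<lang>] short links,
# e.g. /r/allow -> English docs, /r/allow/cn -> Chinese docs.
# The dirindex.html target paths are assumptions for illustration.
server {
    listen      80;
    server_name docs.example.org;   # placeholder name

    # Explicit language suffix, as with http://nginx.org/r/allow/cn.
    # Enabling a new language amounts to extending the (ru|cn) alternation.
    location ~ "^/r/(?<directive>[a-z_0-9]+)/(?<lang>ru|cn)$" {
        return 302 "/$lang/docs/dirindex.html#$directive";
    }

    # No suffix: default to English; intentionally no Accept-Language
    # sniffing, matching the behaviour described in the thread.
    location ~ "^/r/(?<directive>[a-z_0-9]+)$" {
        return 302 "/en/docs/dirindex.html#$directive";
    }
}
```

[Using `return` inside regex locations with named captures keeps the mapping declarative; a real deployment would presumably also need a per-directive lookup to land on the right module page, which is omitted here.]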
From crk_world at yahoo.com.cn Mon Jan 7 05:10:44 2013 From: crk_world at yahoo.com.cn (chen cw) Date: Mon, 7 Jan 2013 13:10:44 +0800 Subject: "http://nginx.org/r/directive" will not redirect to correct language version of directive In-Reply-To: <20130106203911.GB20240@lo0.su> References: <20130105114549.GK12313@mdounin.ru> <20130106203911.GB20240@lo0.su> Message-ID: yes, thanks On Mon, Jan 7, 2013 at 4:39 AM, Ruslan Ermilov wrote: > On Sat, Jan 05, 2013 at 03:45:49PM +0400, Maxim Dounin wrote: > > On Sat, Jan 05, 2013 at 06:10:27PM +0800, chen cw wrote: > > > > > Hi > > > I am a Chinese user and my language is zh-CN, however, when I > access > > > the document http://nginx.org/r/allow, I see the English page even if > there > > > is a Chinese page over there. > > > Is there any method allowing to redirect us to the page with > correct > > > language? > > > > It doesn't do any automatic language detection intentionally, and > > defaults to English. It supports explicit definition of a desired > > language via optional suffix, e.g. > > > > http://nginx.org/r/allow/ru > > > > will redirect to a Russian docs, but it doesn't currently > > recognize "/cn" as a valid suffix. It probably should be allowed > > now as we have most of docs translated to Chinese (thanks for all > > involved, BTW!). > > > > Ruslan? > > http://nginx.org/r/allow/cn should work now. > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- -- Charles Chen Software Engineer Server Platforms Team at Taobao.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ssehic at armorlogic.com Mon Jan 7 11:06:15 2013 From: ssehic at armorlogic.com (=?UTF-8?B?U3JlYnJlbmtvIMWgZWhpxIc=?=) Date: Mon, 7 Jan 2013 12:06:15 +0100 Subject: totally transparent proxying with nginx on openbsd In-Reply-To: <20101028141447.GZ3116@animata.net> References: <20101028141447.GZ3116@animata.net> Message-ID: Hi, I tried this with 1.2.6 and it works just fine. Have you given this any scrutiny in real-life production systems? Thx, ssehic Best regards, Srebrenko Sehic Armorlogic Phone: +45 70 27 77 32 Mobile: +45 26 10 02 76 http://www.armorlogic.com On Thu, Oct 28, 2010 at 4:14 PM, David Gwynne wrote: > OpenBSD has a setsockopt option called SO_BINDANY that allows a > process to bind to any IP address, even if it is not local to the > system. The patch below uses it to allow nginx to connect to a > backend server using the IP of the client making the request. > > My main goal here is to allow the backend server to know the IP of > the client actually making the request without having to look at > extra HTTP headers. > > I thought I'd throw this out there to get some help since this is > my first attempt at tweaking nginx. There are a few issues with > this implementation: > > 1. It is completely specific to OpenBSD. > 2. It needs root privileges to use the SO_BINDANY sockopt. > 3. I'm not sure if connections to backends are cached. If so, then > it is probable that a different client will reuse a previous client's > proxy connection, so it will appear that the same client made both > requests to the backend. > > To use this you just configure nginx to run as root and add > "proxy_transparent on" to the sections you want this feature enabled > on. You will need to add appropriate "pass out proto tcp divert-reply" > rules to pf for the SO_BINDANY sockopt to work too. > > If anyone has some tips on how to handle problems 2 and 3 I would > be grateful.
> > cheers, > dlg > > > --- src/event/ngx_event_connect.c.orig Thu Nov 26 04:03:59 2009 > +++ src/event/ngx_event_connect.c Thu Oct 28 23:22:37 2010 > @@ -11,7 +11,7 @@ > > > ngx_int_t > -ngx_event_connect_peer(ngx_peer_connection_t *pc) > +ngx_event_connect_peer(ngx_peer_connection_t *pc, ngx_connection_t *cc) > { > int rc; > ngx_int_t event; > @@ -20,6 +20,7 @@ ngx_event_connect_peer(ngx_peer_connection_t *pc) > ngx_socket_t s; > ngx_event_t *rev, *wev; > ngx_connection_t *c; > + int bindany; > > rc = pc->get(pc, pc->data); > if (rc != NGX_OK) { > @@ -46,6 +47,40 @@ ngx_event_connect_peer(ngx_peer_connection_t *pc) > } > > return NGX_ERROR; > + } > + > + if (cc != NULL) { > + bindany = 1; > + if (setsockopt(s, SOL_SOCKET, SO_BINDANY, > + &bindany, sizeof(bindany)) == -1) > + { > + ngx_log_error(NGX_LOG_ALERT, pc->log, ngx_socket_errno, > + "setsockopt(SO_BINDANY) failed"); > + > + ngx_free_connection(c); > + > + if (ngx_close_socket(s) == -1) { > + ngx_log_error(NGX_LOG_ALERT, pc->log, ngx_socket_errno, > + ngx_close_socket_n " failed"); > + } > + > + return NGX_ERROR; > + } > + > + if (bind(s, cc->sockaddr, cc->socklen) == -1) > + { > + ngx_log_error(NGX_LOG_ALERT, pc->log, ngx_socket_errno, > + "bind() failed"); > + > + ngx_free_connection(c); > + > + if (ngx_close_socket(s) == -1) { > + ngx_log_error(NGX_LOG_ALERT, pc->log, ngx_socket_errno, > + ngx_close_socket_n " failed"); > + } > + > + return NGX_ERROR; > + } > } > > if (pc->rcvbuf) { > --- src/event/ngx_event_connect.h.orig Tue Nov 3 01:24:02 2009 > +++ src/event/ngx_event_connect.h Thu Oct 28 23:22:37 2010 > @@ -68,7 +68,8 @@ struct ngx_peer_connection_s { > }; > > > -ngx_int_t ngx_event_connect_peer(ngx_peer_connection_t *pc); > +ngx_int_t ngx_event_connect_peer(ngx_peer_connection_t *pc, > + ngx_connection_t *cc); > ngx_int_t ngx_event_get_peer(ngx_peer_connection_t *pc, void *data); > > > --- src/http/modules/ngx_http_proxy_module.c.orig Mon May 24 > 21:01:05 2010 > +++ 
src/http/modules/ngx_http_proxy_module.c Thu Oct 28 23:42:10 2010 > @@ -71,6 +71,7 @@ typedef struct { > ngx_http_proxy_vars_t vars; > > ngx_flag_t redirect; > + ngx_flag_t transparent; > > ngx_uint_t headers_hash_max_size; > ngx_uint_t headers_hash_bucket_size; > @@ -196,6 +197,13 @@ static ngx_command_t ngx_http_proxy_commands[] = { > 0, > NULL }, > > + { ngx_string("proxy_transparent"), > + > NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, > + ngx_conf_set_flag_slot, > + NGX_HTTP_LOC_CONF_OFFSET, > + offsetof(ngx_http_proxy_loc_conf_t, transparent), > + NULL }, > + > { ngx_string("proxy_store"), > > NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, > ngx_http_proxy_store, > @@ -626,6 +634,7 @@ ngx_http_proxy_handler(ngx_http_request_t *r) > u->abort_request = ngx_http_proxy_abort_request; > u->finalize_request = ngx_http_proxy_finalize_request; > r->state = 0; > + r->transparent = (plcf->transparent == 1); > > if (plcf->redirects) { > u->rewrite_redirect = ngx_http_proxy_rewrite_redirect; > @@ -1940,6 +1949,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_t *cf) > conf->upstream.cyclic_temp_file = 0; > > conf->redirect = NGX_CONF_UNSET; > + conf->transparent = NGX_CONF_UNSET; > conf->upstream.change_buffering = 1; > > conf->headers_hash_max_size = NGX_CONF_UNSET_UINT; > @@ -2214,6 +2224,8 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t *cf, void > *pa > } > } > } > + > + ngx_conf_merge_value(conf->transparent, prev->transparent, 0); > > /* STUB */ > if (prev->proxy_lengths) { > --- src/http/ngx_http_request.h.orig Mon May 24 22:35:10 2010 > +++ src/http/ngx_http_request.h Thu Oct 28 23:22:37 2010 > @@ -511,6 +511,8 @@ struct ngx_http_request_s { > unsigned stat_writing:1; > #endif > > + unsigned transparent:1; > + > /* used to parse HTTP headers */ > > ngx_uint_t state; > --- src/http/ngx_http_upstream.c.orig Mon May 24 22:35:10 2010 > +++ src/http/ngx_http_upstream.c Thu Oct 28 23:22:37 2010 > @@ -1066,7 +1066,8 @@ 
ngx_http_upstream_connect(ngx_http_request_t *r, > ngx_h > u->state->response_sec = tp->sec; > u->state->response_msec = tp->msec; > > - rc = ngx_event_connect_peer(&u->peer); > + rc = ngx_event_connect_peer(&u->peer, r->transparent ? > + r->connection : NULL); > > ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > "http upstream connect: %i", rc); > --- src/mail/ngx_mail_auth_http_module.c.orig Fri May 14 19:56:37 2010 > +++ src/mail/ngx_mail_auth_http_module.c Thu Oct 28 23:22:37 2010 > @@ -191,7 +191,7 @@ ngx_mail_auth_http_init(ngx_mail_session_t *s) > ctx->peer.log = s->connection->log; > ctx->peer.log_error = NGX_ERROR_ERR; > > - rc = ngx_event_connect_peer(&ctx->peer); > + rc = ngx_event_connect_peer(&ctx->peer, NULL); > > if (rc == NGX_ERROR || rc == NGX_BUSY || rc == NGX_DECLINED) { > if (ctx->peer.connection) { > --- src/mail/ngx_mail_proxy_module.c.orig Thu Oct 28 23:32:15 2010 > +++ src/mail/ngx_mail_proxy_module.c Thu Oct 28 23:30:53 2010 > @@ -147,7 +147,7 @@ ngx_mail_proxy_init(ngx_mail_session_t *s, ngx_addr_t > p->upstream.log = s->connection->log; > p->upstream.log_error = NGX_ERROR_ERR; > > - rc = ngx_event_connect_peer(&p->upstream); > + rc = ngx_event_connect_peer(&p->upstream, NULL); > > if (rc == NGX_ERROR || rc == NGX_BUSY || rc == NGX_DECLINED) { > ngx_mail_proxy_internal_server_error(s); > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ibobrik at gmail.com Mon Jan 7 13:01:56 2013 From: ibobrik at gmail.com (ivan babrou) Date: Mon, 7 Jan 2013 17:01:56 +0400 Subject: image_filter enhancement In-Reply-To: <20121225160542.GD40452@mdounin.ru> References: <20121127160328.GZ40452@mdounin.ru> <20121204130841.GI40452@mdounin.ru> <20121205145244.GO40452@mdounin.ru> <20121218105341.GK40452@mdounin.ru> <20121225160542.GD40452@mdounin.ru> Message-ID: Btw, here's latest version with zero-termination and other fixes: diff --git a/ngx_http_image_filter_module.c b/ngx_http_image_filter_module.c index 3aee1a4..b086e3c 100644 --- a/ngx_http_image_filter_module.c +++ b/ngx_http_image_filter_module.c @@ -32,6 +32,11 @@ #define NGX_HTTP_IMAGE_GIF 2 #define NGX_HTTP_IMAGE_PNG 3 +#define NGX_HTTP_IMAGE_OFFSET_CENTER 0 +#define NGX_HTTP_IMAGE_OFFSET_LEFT 1 +#define NGX_HTTP_IMAGE_OFFSET_RIGHT 2 +#define NGX_HTTP_IMAGE_OFFSET_TOP 3 +#define NGX_HTTP_IMAGE_OFFSET_BOTTOM 4 #define NGX_HTTP_IMAGE_BUFFERED 0x08 @@ -43,11 +48,15 @@ typedef struct { ngx_uint_t angle; ngx_uint_t jpeg_quality; ngx_uint_t sharpen; + ngx_uint_t offset_x; + ngx_uint_t offset_y; ngx_flag_t transparency; ngx_http_complex_value_t *wcv; ngx_http_complex_value_t *hcv; + ngx_http_complex_value_t *oxcv; + ngx_http_complex_value_t *oycv; ngx_http_complex_value_t *acv; ngx_http_complex_value_t *jqcv; ngx_http_complex_value_t *shcv; @@ -66,6 +75,8 @@ typedef struct { ngx_uint_t height; ngx_uint_t max_width; ngx_uint_t max_height; + ngx_uint_t offset_x; + ngx_uint_t offset_y; ngx_uint_t angle; ngx_uint_t phase; @@ -110,6 +121,8 @@ static char *ngx_http_image_filter_jpeg_quality(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_image_filter_sharpen(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_http_image_filter_offset(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf); static ngx_int_t ngx_http_image_filter_init(ngx_conf_t *cf); @@ -150,6 +163,13 @@ static ngx_command_t ngx_http_image_filter_commands[] = { 
offsetof(ngx_http_image_filter_conf_t, buffer_size), NULL }, + { ngx_string("image_filter_crop_offset"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE2, + ngx_http_image_filter_offset, + NGX_HTTP_LOC_CONF_OFFSET, + 0, + NULL }, + ngx_null_command }; @@ -737,7 +757,8 @@ ngx_http_image_resize(ngx_http_request_t *r, ngx_http_image_filter_ctx_t *ctx) { int sx, sy, dx, dy, ox, oy, ax, ay, size, colors, palette, transparent, sharpen, - red, green, blue, t; + red, green, blue, t, + offset_x, offset_y; u_char *out; ngx_buf_t *b; ngx_uint_t resize; @@ -932,8 +953,24 @@ transparent: return NULL; } - ox /= 2; - oy /= 2; + offset_x = ngx_http_image_filter_get_value(r, conf->oxcv, + conf->offset_x); + offset_y = ngx_http_image_filter_get_value(r, conf->oycv, + conf->offset_y); + + if (offset_x == NGX_HTTP_IMAGE_OFFSET_LEFT) { + ox = 0; + + } else if (offset_x == NGX_HTTP_IMAGE_OFFSET_CENTER) { + ox /= 2; + } + + if (offset_y == NGX_HTTP_IMAGE_OFFSET_TOP) { + oy = 0; + + } else if (offset_y == NGX_HTTP_IMAGE_OFFSET_CENTER) { + oy /= 2; + } ngx_log_debug4(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "image crop: %d x %d @ %d x %d", @@ -1141,17 +1178,43 @@ ngx_http_image_filter_get_value(ngx_http_request_t *r, static ngx_uint_t -ngx_http_image_filter_value(ngx_str_t *value) +ngx_http_image_filter_value(ngx_str_t *v) { ngx_int_t n; - if (value->len == 1 && value->data[0] == '-') { + if (v->len == 1 && v->data[0] == '-') { return (ngx_uint_t) -1; } - n = ngx_atoi(value->data, value->len); + n = ngx_atoi(v->data, v->len); + + if (n == NGX_ERROR) { + + if (v->len == sizeof("left") - 1 + && ngx_strncmp(v->data, "left", v->len) == 0) + { + return NGX_HTTP_IMAGE_OFFSET_LEFT; + + } else if (v->len == sizeof("right") - 1 + && ngx_strncmp(v->data, "right", sizeof("right") - 1) == 0) + { + return NGX_HTTP_IMAGE_OFFSET_RIGHT; + + } else if (v->len == sizeof("top") - 1 + && ngx_strncmp(v->data, "top", sizeof("top") - 1) == 0) + { + return NGX_HTTP_IMAGE_OFFSET_TOP; - if 
(n > 0) { + } else if (v->len == sizeof("bottom") - 1 + && ngx_strncmp(v->data, "bottom", sizeof("bottom") - 1) == 0) + { + return NGX_HTTP_IMAGE_OFFSET_BOTTOM; + + } else { + return NGX_HTTP_IMAGE_OFFSET_CENTER; + } + + } else if (n > 0) { return (ngx_uint_t) n; } @@ -1175,6 +1238,8 @@ ngx_http_image_filter_create_conf(ngx_conf_t *cf) conf->angle = NGX_CONF_UNSET_UINT; conf->transparency = NGX_CONF_UNSET; conf->buffer_size = NGX_CONF_UNSET_SIZE; + conf->offset_x = NGX_CONF_UNSET_UINT; + conf->offset_y = NGX_CONF_UNSET_UINT; return conf; } @@ -1230,6 +1295,24 @@ ngx_http_image_filter_merge_conf(ngx_conf_t *cf, void *parent, void *child) ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, 1 * 1024 * 1024); + if (conf->offset_x == NGX_CONF_UNSET_UINT) { + ngx_conf_merge_uint_value(conf->offset_x, prev->offset_x, + NGX_HTTP_IMAGE_OFFSET_CENTER); + + if (conf->oxcv == NULL) { + conf->oxcv = prev->oxcv; + } + } + + if (conf->offset_y == NGX_CONF_UNSET_UINT) { + ngx_conf_merge_uint_value(conf->offset_y, prev->offset_y, + NGX_HTTP_IMAGE_OFFSET_CENTER); + + if (conf->oycv == NULL) { + conf->oycv = prev->oycv; + } + } + return NGX_CONF_OK; } @@ -1481,6 +1564,66 @@ ngx_http_image_filter_sharpen(ngx_conf_t *cf, ngx_command_t *cmd, } +static char * +ngx_http_image_filter_offset(ngx_conf_t *cf, ngx_command_t *cmd, + void *conf) +{ + ngx_http_image_filter_conf_t *imcf = conf; + + ngx_str_t *value; + ngx_http_complex_value_t cv; + ngx_http_compile_complex_value_t ccv; + + value = cf->args->elts; + + ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t)); + + ccv.cf = cf; + ccv.value = &value[1]; + ccv.complex_value = &cv; + + if (ngx_http_compile_complex_value(&ccv) != NGX_OK) { + return NGX_CONF_ERROR; + } + + if (cv.lengths == NULL) { + imcf->offset_x = ngx_http_image_filter_value(&value[1]); + + } else { + imcf->oxcv = ngx_palloc(cf->pool, sizeof(ngx_http_complex_value_t)); + if (imcf->oxcv == NULL) { + return NGX_CONF_ERROR; + } + + *imcf->oxcv = cv; + } + + 
ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t)); + + ccv.cf = cf; + ccv.value = &value[2]; + ccv.complex_value = &cv; + + if (ngx_http_compile_complex_value(&ccv) != NGX_OK) { + return NGX_CONF_ERROR; + } + + if (cv.lengths == NULL) { + imcf->offset_y = ngx_http_image_filter_value(&value[2]); + + } else { + imcf->oycv = ngx_palloc(cf->pool, sizeof(ngx_http_complex_value_t)); + if (imcf->oycv == NULL) { + return NGX_CONF_ERROR; + } + + *imcf->oycv = cv; + } + + return NGX_CONF_OK; +} + + static ngx_int_t ngx_http_image_filter_init(ngx_conf_t *cf) { On 25 December 2012 20:05, Maxim Dounin wrote: > Hello! > > On Thu, Dec 20, 2012 at 09:01:38PM +0400, ivan babrou wrote: > > > I've send patch for configuration semantics in separate thread. > > > > Here's lastest version of my patch that may be applied after. Is there > > something else that I should fix? > > I would like to see clarification on how are you going to handle > just bare numbers specified. As of now the code seems to don't do > any special processing and seems to assume all values are correct, > which might not be true and will result in some mostly arbitray > crop offset selection. It might make sense to actually support > percents as in Maxim Bublis's patch[1]. > > [1] http://mailman.nginx.org/pipermail/nginx-devel/2012-April/002156.html > > [...] 
> > > @@ -1151,7 +1188,24 @@ ngx_http_image_filter_value(ngx_str_t *value) > > > > n = ngx_atoi(value->data, value->len); > > > > - if (n > 0) { > > + if (n == NGX_ERROR) { > > + if (ngx_strncmp(value->data, "left", value->len) == 0) { > > + return NGX_HTTP_IMAGE_OFFSET_LEFT; > > + > > + } else if (ngx_strncmp(value->data, "right", value->len) == 0) { > > + return NGX_HTTP_IMAGE_OFFSET_RIGHT; > > + > > + } else if (ngx_strncmp(value->data, "top", value->len) == 0) { > > + return NGX_HTTP_IMAGE_OFFSET_TOP; > > + > > + } else if (ngx_strncmp(value->data, "bottom", value->len) == 0) > { > > + return NGX_HTTP_IMAGE_OFFSET_BOTTOM; > > + > > + } else { > > + return NGX_HTTP_IMAGE_OFFSET_CENTER; > > + } > > + > > + } else if (n > 0) { > > return (ngx_uint_t) n; > > } > > > > The ngx_strncmp() checks are incorrect, see here: > http://mailman.nginx.org/pipermail/nginx-devel/2012-December/003065.html > > You may also move them below "if (n > 0)" the check for better > readability. > > > @@ -1175,6 +1229,8 @@ ngx_http_image_filter_create_conf(ngx_conf_t *cf) > > conf->angle = NGX_CONF_UNSET_UINT; > > conf->transparency = NGX_CONF_UNSET; > > conf->buffer_size = NGX_CONF_UNSET_SIZE; > > + conf->crop_offset_x = NGX_CONF_UNSET_UINT; > > + conf->crop_offset_y = NGX_CONF_UNSET_UINT; > > The "crop_" prefix probably should be dropped here for better > readability (and to match other names in the code like "oxcv"). > > [...] > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -- Regards, Ian Babrou http://bobrik.name http://twitter.com/ibobrik skype:i.babrou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From toli at webforge.bg Tue Jan 8 09:09:02 2013 From: toli at webforge.bg (Anatoli Marinov) Date: Tue, 08 Jan 2013 11:09:02 +0200 Subject: how ngx.shared.DICT could be locked Message-ID: <50EBE22E.80408@webforge.bg> Hello Colleagues, I am wondering is there a method for shared dictionary locking. In my script I have to flush all records from the dictionary and after that the script will put new records. In this time I do not want other workers to read the partially loaded dictionary. So is it possible to lock it for a very small period of time? Thanks in advance Anatoli Marinov From vshebordaev at mail.ru Tue Jan 8 10:01:25 2013 From: vshebordaev at mail.ru (Vladimir Shebordaev) Date: Tue, 08 Jan 2013 14:01:25 +0400 Subject: how ngx.shared.DICT could be locked In-Reply-To: <50EBE22E.80408@webforge.bg> References: <50EBE22E.80408@webforge.bg> Message-ID: <50EBEE75.1010802@mail.ru> Hi! On 08.01.2013 13:09, Anatoli Marinov wrote: > Hello Colleagues, > I am wondering is there a method for shared dictionary locking. > In my script I have to flush all records from the dictionary and > after that the script will put new records. In this time I do not > want other workers to read the partially loaded dictionary. > So is it possible to lock it for a very small period of time? > This heavily depends on how your dictionary is accessed from within nginx. If once the dictionary resides in shared memory region, you could also use standard system wide IPC primitive like a semaphore to lock it from your load script. > > Thanks in advance > Anatoli Marinov > Regards, Vladimir From niq at apache.org Tue Jan 8 12:34:43 2013 From: niq at apache.org (Nick Kew) Date: Tue, 8 Jan 2013 12:34:43 +0000 Subject: Filter insertion and ordering Message-ID: <8067EFB7-D199-41A1-BB5E-1B854C7CF4DE@apache.org> OK, so my module needs to insert filters. 
I put the usual in a post_config function: /* Insert headers_out filter */ ngx_http_next_header_filter = ngx_http_top_header_filter; ngx_http_top_header_filter = my_header_filter; /* Insert body_out filter */ ngx_http_next_body_filter = ngx_http_top_body_filter; ngx_http_top_body_filter = my_body_filter; Only they never get called! With the aid of gdb I put a breakpoint in my post_config, and I see ngx_http_top_[header|body]_filters are NULL at the point where the post_config is being called. Must mean my module is getting called first, right? Guess that figures if it was linked in last and works in Apache 1.x-style order. Setting a breakpoint in another module's filter, I get to: 613 return ngx_http_write_filter(r, &out); (gdb) bt #0 ngx_http_header_filter (r=0x7f7f728b6000) at ngx_http_header_filter_module.c:613 #1 0x0000000109020834 in ngx_http_chunked_header_filter (r=0x7f7f728b6000) at ngx_http_chunked_filter_module.c:68 ? [more header filters] ... #9 0x0000000108fe2945 in ngx_http_send_header (r=0x7f7f728b6000) at ngx_http_core_module.c:1941 #10 0x0000000109013def in ngx_http_upstream_send_response (r=0x7f7f728b6000, u=0x7f7f72838450) at ngx_http_upstream.c:2063 I infer ngx_http_header_filter must've inserted itself after my module, and is not calling a "next" filter. Clearly not useful! Apart from the obvious WTF, this raises another issue: how does my module determine its order in the chain? Its body filter needs to come before anything that might encode it (like compression or chunking), but the header filter should ideally come after those operations to see exactly what will be sent to the Client! How do I take control? 
-- Nick Kew From mdounin at mdounin.ru Tue Jan 8 13:52:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Jan 2013 17:52:05 +0400 Subject: Filter insertion and ordering In-Reply-To: <8067EFB7-D199-41A1-BB5E-1B854C7CF4DE@apache.org> References: <8067EFB7-D199-41A1-BB5E-1B854C7CF4DE@apache.org> Message-ID: <20130108135205.GE73378@mdounin.ru> Hello! On Tue, Jan 08, 2013 at 12:34:43PM +0000, Nick Kew wrote: > OK, so my module needs to insert filters. > I put the usual in a post_config function: > > /* Insert headers_out filter */ > ngx_http_next_header_filter = ngx_http_top_header_filter; > ngx_http_top_header_filter = my_header_filter; > > /* Insert body_out filter */ > ngx_http_next_body_filter = ngx_http_top_body_filter; > ngx_http_top_body_filter = my_body_filter; > > Only they never get called! > > With the aid of gdb I put a breakpoint in my post_config, > and I see ngx_http_top_[header|body]_filters are NULL > at the point where the post_config is being called. > Must mean my module is getting called first, right? > Guess that figures if it was linked in last and works > in Apache 1.x-style order. > > Setting a breakpoint in another module's filter, I get to: > > 613 return ngx_http_write_filter(r, &out); > (gdb) bt > #0 ngx_http_header_filter (r=0x7f7f728b6000) at ngx_http_header_filter_module.c:613 > #1 0x0000000109020834 in ngx_http_chunked_header_filter (r=0x7f7f728b6000) at ngx_http_chunked_filter_module.c:68 > ? [more header filters] ... > #9 0x0000000108fe2945 in ngx_http_send_header (r=0x7f7f728b6000) at ngx_http_core_module.c:1941 > #10 0x0000000109013def in ngx_http_upstream_send_response (r=0x7f7f728b6000, u=0x7f7f72838450) at ngx_http_upstream.c:2063 > > I infer ngx_http_header_filter must've inserted itself after my module, > and is not calling a "next" filter. Clearly not useful! > > Apart from the obvious WTF, this raises another issue: how does my > module determine its order in the chain? 
Its body filter needs to come > before anything that might encode it (like compression or chunking), > but the header filter should ideally come after those operations to > see exactly what will be sent to the Client! > > How do I take control? Filter ordering is determinded during configure - config script of your module should place it into appropriate list, usually HTTP_AUX_FILTER_MODULES. Your symptoms suggests you've added your filter module into HTTP_MODULES list instead. You may want to refer here for basic instructions: http://www.evanmiller.org/nginx-modules-guide.html#compiling And to auto/modules code for all the details. -- Maxim Dounin http://nginx.com/support.html From vbart at nginx.com Tue Jan 8 14:01:58 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Tue, 8 Jan 2013 14:01:58 +0000 Subject: [nginx] svn commit: r5002 - trunk/src/core Message-ID: <20130108140158.DDCBC3F9C48@mail.nginx.com> Author: vbart Date: 2013-01-08 14:01:57 +0000 (Tue, 08 Jan 2013) New Revision: 5002 URL: http://trac.nginx.org/nginx/changeset/5002/nginx Log: The data pointer in ngx_open_file_t objects must be initialized. Uninitialized pointer may result in arbitrary segfaults if access_log is used without buffer and without variables in file path. Patch by Tatsuhiko Kubo (ticket #268). 
Modified: trunk/src/core/ngx_conf_file.c Modified: trunk/src/core/ngx_conf_file.c =================================================================== --- trunk/src/core/ngx_conf_file.c 2012-12-31 22:08:19 UTC (rev 5001) +++ trunk/src/core/ngx_conf_file.c 2013-01-08 14:01:57 UTC (rev 5002) @@ -946,6 +946,7 @@ } file->flush = NULL; + file->data = NULL; return file; } From vbart at nginx.com Tue Jan 8 14:03:37 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Tue, 8 Jan 2013 14:03:37 +0000 Subject: [nginx] svn commit: r5003 - trunk/src/event Message-ID: <20130108140337.CA81E3F9C48@mail.nginx.com> Author: vbart Date: 2013-01-08 14:03:37 +0000 (Tue, 08 Jan 2013) New Revision: 5003 URL: http://trac.nginx.org/nginx/changeset/5003/nginx Log: Events: added check for duplicate "events" directive. Modified: trunk/src/event/ngx_event.c Modified: trunk/src/event/ngx_event.c =================================================================== --- trunk/src/event/ngx_event.c 2013-01-08 14:01:57 UTC (rev 5002) +++ trunk/src/event/ngx_event.c 2013-01-08 14:03:37 UTC (rev 5003) @@ -892,6 +892,10 @@ ngx_conf_t pcf; ngx_event_module_t *m; + if (*(void **) conf) { + return "is duplicate"; + } + /* count the number of the event modules and set up their indices */ ngx_event_max_module = 0; From mdounin at mdounin.ru Wed Jan 9 14:11:49 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Wed, 9 Jan 2013 14:11:49 +0000 Subject: [nginx] svn commit: r5004 - trunk/src/event Message-ID: <20130109141149.33C6F3F9F8A@mail.nginx.com> Author: mdounin Date: 2013-01-09 14:11:48 +0000 (Wed, 09 Jan 2013) New Revision: 5004 URL: http://trac.nginx.org/nginx/changeset/5004/nginx Log: SSL: speedup loading of configs with many ssl servers. The patch saves one EC_KEY_generate_key() call per server{} block by informing OpenSSL about SSL_OP_SINGLE_ECDH_USE we are going to use before the SSL_CTX_set_tmp_ecdh() call. 
For a configuration file with 10k simple server{} blocks with SSL enabled this change reduces startup time from 18s to 5s on a slow test box here. Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2013-01-08 14:03:37 UTC (rev 5003) +++ trunk/src/event/ngx_event_openssl.c 2013-01-09 14:11:48 UTC (rev 5004) @@ -643,10 +643,10 @@ return NGX_ERROR; } + SSL_CTX_set_options(ssl->ctx, SSL_OP_SINGLE_ECDH_USE); + SSL_CTX_set_tmp_ecdh(ssl->ctx, ecdh); - SSL_CTX_set_options(ssl->ctx, SSL_OP_SINGLE_ECDH_USE); - EC_KEY_free(ecdh); #endif #endif From greearb at candelatech.com Wed Jan 9 23:19:57 2013 From: greearb at candelatech.com (Ben Greear) Date: Wed, 09 Jan 2013 15:19:57 -0800 Subject: Patch to enable SO_BINDTODEVICE Message-ID: <50EDFB1D.1040104@candelatech.com> In order to play some tricks and send requests to myself over external network interfaces, I need to enable the SO_BINDTODEVICE socket option. Here's the patch that I'm trying out. If there is a better patch format or place to send this, please let me know. Thanks, Ben From fe7226036848bd1f4f74dd8186a176b17f974614 Mon Sep 17 00:00:00 2001 From: Ben Greear Date: Wed, 9 Jan 2013 15:01:35 -0800 Subject: [PATCH] Support bind_dev=[ifname] in accept configuration clause. If this option is enabled, nginx will call SO_BINDTODEVICE on the particular device name. This can be helpful for some types of firewalling, and routing setups. 
Signed-off-by: Ben Greear --- src/core/ngx_connection.c | 21 +++++++++++++++++++++ src/core/ngx_connection.h | 1 + src/http/ngx_http.c | 13 +++++++++++++ src/http/ngx_http_core_module.c | 6 ++++++ src/http/ngx_http_core_module.h | 1 + 5 files changed, 42 insertions(+) diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c index c818114..a3a8101 100644 --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -397,6 +397,27 @@ ngx_open_listening_sockets(ngx_cycle_t *cycle) continue; } + if (ls[i].dev_name[0]) { +#ifdef SO_BINDTODEVICE + if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE, + ls[i].dev_name, strlen(ls[i].dev_name))) { + ngx_log_error(NGX_LOG_EMERG, log, errno, "setsockopt (%i, BINDTODEVICE, %s) failed", + s, ls[i].dev_name); + return NGX_ERROR; + } + else { + ngx_log_error(NGX_LOG_EMERG, log, 0, "setsockopt (%i, BINDTODEVICE, %s) succeeded!", + s, ls[i].dev_name); + } +#else + ngx_log_error(NGX_LOG_EMERG, log, 0, + "setsockopt (%i, BINDTODEVICE, %s) not supported on this platform. 
Please remove the bind_dev= option for 'listen' directive.", + s, ls[i].dev_name); + return NGX_ERROR; +#endif + } + + #if (NGX_HAVE_UNIX_DOMAIN) if (ls[i].sockaddr->sa_family == AF_UNIX) { diff --git a/src/core/ngx_connection.h b/src/core/ngx_connection.h index 87fd087..e5c030c 100644 --- a/src/core/ngx_connection.h +++ b/src/core/ngx_connection.h @@ -18,6 +18,7 @@ typedef struct ngx_listening_s ngx_listening_t; struct ngx_listening_s { ngx_socket_t fd; + char dev_name[32]; /* Use SO_BINDTODEVICE if this is not zero-length */ struct sockaddr *sockaddr; socklen_t socklen; /* size of sockaddr */ size_t addr_text_max_len; diff --git a/src/http/ngx_http.c b/src/http/ngx_http.c index f1f8a48..9aa732b 100644 --- a/src/http/ngx_http.c +++ b/src/http/ngx_http.c @@ -1618,6 +1618,16 @@ ngx_http_cmp_conf_addrs(const void *one, const void *two) return -1; } + if (first->opt.dev_name[0] && !second->opt.dev_name[0]) { + /* shift explicit bind_dev()ed addresses to the start */ + return -1; + } + + if (!first->opt.dev_name[0] && second->opt.dev_name[0]) { + /* shift explicit bind_dev()ed addresses to the start */ + return 1; + } + if (first->opt.bind && !second->opt.bind) { /* shift explicit bind()ed addresses to the start */ return -1; @@ -1768,6 +1778,9 @@ ngx_http_add_listening(ngx_conf_t *cf, ngx_http_conf_addr_t *addr) ls->rcvbuf = addr->opt.rcvbuf; ls->sndbuf = addr->opt.sndbuf; + strncpy(ls->dev_name, addr->opt.dev_name, sizeof(ls->dev_name)); + ls->dev_name[sizeof(ls->dev_name) - 1] = 0; + ls->keepalive = addr->opt.so_keepalive; #if (NGX_HAVE_KEEPALIVE_TUNABLE) ls->keepidle = addr->opt.tcp_keepidle; diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c index 27f082e..664f9b7 100644 --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -3937,6 +3937,12 @@ ngx_http_core_listen(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) continue; } + if (ngx_strncmp(value[n].data, "bind_dev=", 9) == 0) { + strncpy(lsopt.dev_name, 
(char*)(value[n].data + 9), sizeof(lsopt.dev_name)); + lsopt.dev_name[sizeof(lsopt.dev_name) - 1] = 0; + continue; + } + #if (NGX_HAVE_SETFIB) if (ngx_strncmp(value[n].data, "setfib=", 7) == 0) { lsopt.setfib = ngx_atoi(value[n].data + 7, value[n].len - 7); diff --git a/src/http/ngx_http_core_module.h b/src/http/ngx_http_core_module.h index ff1c2df..8ac0a7a 100644 --- a/src/http/ngx_http_core_module.h +++ b/src/http/ngx_http_core_module.h @@ -67,6 +67,7 @@ typedef struct { } u; socklen_t socklen; + char dev_name[32]; /* for use with bind_dev */ unsigned set:1; unsigned default_server:1; -- 1.7.11.7 -- Ben Greear Candela Technologies Inc http://www.candelatech.com From zls.sogou at gmail.com Thu Jan 10 11:31:07 2013 From: zls.sogou at gmail.com (lanshun zhou) Date: Thu, 10 Jan 2013 19:31:07 +0800 Subject: [PATCH] Honor the redirect response overwritten in error_page for absolute urls Message-ID: Absolute url in error_page is always treated by an internal redirect, even if the response code is specified with 301/302. This patch honors the response code specified and makes it possible to redirect with absolute url in error_page easily. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: absolute_url_redirect.patch Type: application/octet-stream Size: 1739 bytes Desc: not available URL: From mdounin at mdounin.ru Thu Jan 10 11:38:15 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Thu, 10 Jan 2013 11:38:15 +0000 Subject: [nginx] svn commit: r5005 - trunk/misc Message-ID: <20130110113815.598203FAB39@mail.nginx.com> Author: mdounin Date: 2013-01-10 11:38:14 +0000 (Thu, 10 Jan 2013) New Revision: 5005 URL: http://trac.nginx.org/nginx/changeset/5005/nginx Log: Updated PCRE used for win32 builds. 
Modified: trunk/misc/GNUmakefile Modified: trunk/misc/GNUmakefile =================================================================== --- trunk/misc/GNUmakefile 2013-01-09 14:11:48 UTC (rev 5004) +++ trunk/misc/GNUmakefile 2013-01-10 11:38:14 UTC (rev 5005) @@ -8,7 +8,7 @@ OBJS = objs.msvc8 OPENSSL = openssl-1.0.1c ZLIB = zlib-1.2.7 -PCRE = pcre-8.31 +PCRE = pcre-8.32 release: From mdounin at mdounin.ru Thu Jan 10 12:32:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Jan 2013 16:32:43 +0400 Subject: [PATCH] Honor the redirect response overwritten in error_page for absolute urls In-Reply-To: References: Message-ID: <20130110123243.GH80623@mdounin.ru> Hello! On Thu, Jan 10, 2013 at 07:31:07PM +0800, lanshun zhou wrote: > Absolute url in error_page is always treated by an internal redirect, even > if the response > code is specified with 301/302. This patch honors the response code > specified and > makes it possible to redirect with absolute url in error_page easily. I don't like this change. Current behaviour is well-defined and allows to do anything (with an additional step in some case). On the other hand, introducing logic which takes new response code into account will complicate things. The difference between these two lines will be counterintuitive: error_page 301 /foo.html; error_page 301 =302 /foo.html; Additionally, it will break configs like this: error_page 301 =302 /foo.html; location = /foo.html { ... } (I.e. change the response code and return content from the "/foo.html".) 
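The two behaviours contrasted above can be written out as configuration; a sketch (the URIs and example.com are placeholders, not from the thread):

```nginx
# Internal redirect: serve the content generated for /foo.html,
# but overwrite the response code to 302.
error_page 301 =302 /foo.html;

# External redirect: an absolute URL makes nginx itself answer
# with a 302 and a Location header.
error_page 404 http://example.com/error.html;
```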
-- Maxim Dounin http://nginx.com/support.html From ru at nginx.com Thu Jan 10 12:58:55 2013 From: ru at nginx.com (ru at nginx.com) Date: Thu, 10 Jan 2013 12:58:55 +0000 Subject: [nginx] svn commit: r5006 - in trunk/src: core http Message-ID: <20130110125855.C43003FAA79@mail.nginx.com> Author: ru Date: 2013-01-10 12:58:55 +0000 (Thu, 10 Jan 2013) New Revision: 5006 URL: http://trac.nginx.org/nginx/changeset/5006/nginx Log: Fixed "proxy_pass" with IP address and no port (ticket #276). Upstreams created by "proxy_pass" with IP address and no port were broken in 1.3.10, by not initializing port in u->sockaddr. API change: ngx_parse_url() was modified to always initialize port (in u->sockaddr and in u->port), even for the u->no_resolve case; ngx_http_upstream() and ngx_http_upstream_add() were adopted. Modified: trunk/src/core/ngx_inet.c trunk/src/http/ngx_http_upstream.c trunk/src/http/ngx_http_upstream.h trunk/src/http/ngx_http_upstream_round_robin.c Modified: trunk/src/core/ngx_inet.c =================================================================== --- trunk/src/core/ngx_inet.c 2013-01-10 11:38:14 UTC (rev 5005) +++ trunk/src/core/ngx_inet.c 2013-01-10 12:58:55 UTC (rev 5006) @@ -707,11 +707,8 @@ } u->no_port = 1; - - if (!u->no_resolve) { - u->port = u->default_port; - sin->sin_port = htons(u->default_port); - } + u->port = u->default_port; + sin->sin_port = htons(u->default_port); } len = last - host; @@ -868,11 +865,8 @@ } else { u->no_port = 1; - - if (!u->no_resolve) { - u->port = u->default_port; - sin6->sin6_port = htons(u->default_port); - } + u->port = u->default_port; + sin6->sin6_port = htons(u->default_port); } } Modified: trunk/src/http/ngx_http_upstream.c =================================================================== --- trunk/src/http/ngx_http_upstream.c 2013-01-10 11:38:14 UTC (rev 5005) +++ trunk/src/http/ngx_http_upstream.c 2013-01-10 12:58:55 UTC (rev 5006) @@ -4121,6 +4121,7 @@ value = cf->args->elts; u.host = value[1]; u.no_resolve = 1; 
+ u.no_port = 1; uscf = ngx_http_upstream_add(cf, &u, NGX_HTTP_UPSTREAM_CREATE |NGX_HTTP_UPSTREAM_WEIGHT @@ -4391,14 +4392,14 @@ return NULL; } - if ((uscfp[i]->flags & NGX_HTTP_UPSTREAM_CREATE) && u->port) { + if ((uscfp[i]->flags & NGX_HTTP_UPSTREAM_CREATE) && !u->no_port) { ngx_conf_log_error(NGX_LOG_WARN, cf, 0, "upstream \"%V\" may not have port %d", &u->host, u->port); return NULL; } - if ((flags & NGX_HTTP_UPSTREAM_CREATE) && uscfp[i]->port) { + if ((flags & NGX_HTTP_UPSTREAM_CREATE) && !uscfp[i]->no_port) { ngx_log_error(NGX_LOG_WARN, cf->log, 0, "upstream \"%V\" may not have port %d in %s:%ui", &u->host, uscfp[i]->port, @@ -4406,7 +4407,9 @@ return NULL; } - if (uscfp[i]->port != u->port) { + if (uscfp[i]->port && u->port + && uscfp[i]->port != u->port) + { continue; } @@ -4434,6 +4437,7 @@ uscf->line = cf->conf_file->line; uscf->port = u->port; uscf->default_port = u->default_port; + uscf->no_port = u->no_port; if (u->naddrs == 1) { uscf->servers = ngx_array_create(cf->pool, 1, Modified: trunk/src/http/ngx_http_upstream.h =================================================================== --- trunk/src/http/ngx_http_upstream.h 2013-01-10 11:38:14 UTC (rev 5005) +++ trunk/src/http/ngx_http_upstream.h 2013-01-10 12:58:55 UTC (rev 5006) @@ -116,6 +116,7 @@ ngx_uint_t line; in_port_t port; in_port_t default_port; + ngx_uint_t no_port; /* unsigned no_port:1 */ }; Modified: trunk/src/http/ngx_http_upstream_round_robin.c =================================================================== --- trunk/src/http/ngx_http_upstream_round_robin.c 2013-01-10 11:38:14 UTC (rev 5005) +++ trunk/src/http/ngx_http_upstream_round_robin.c 2013-01-10 12:58:55 UTC (rev 5006) @@ -161,7 +161,7 @@ /* an upstream implicitly defined by proxy_pass, etc. 
*/ - if (us->port == 0 && us->default_port == 0) { + if (us->port == 0) { ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "no port in upstream \"%V\" in %s:%ui", &us->host, us->file_name, us->line); @@ -171,7 +171,7 @@ ngx_memzero(&u, sizeof(ngx_url_t)); u.host = us->host; - u.port = (in_port_t) (us->port ? us->port : us->default_port); + u.port = us->port; if (ngx_inet_resolve_host(cf->pool, &u) != NGX_OK) { if (u.err) { From mdounin at mdounin.ru Thu Jan 10 13:17:04 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Thu, 10 Jan 2013 13:17:04 +0000 Subject: [nginx] svn commit: r5007 - trunk/docs/xml/nginx Message-ID: <20130110131708.3CDE43FA116@mail.nginx.com> Author: mdounin Date: 2013-01-10 13:17:04 +0000 (Thu, 10 Jan 2013) New Revision: 5007 URL: http://trac.nginx.org/nginx/changeset/5007/nginx Log: nginx-1.3.11-RELEASE Modified: trunk/docs/xml/nginx/changes.xml Modified: trunk/docs/xml/nginx/changes.xml =================================================================== --- trunk/docs/xml/nginx/changes.xml 2013-01-10 12:58:55 UTC (rev 5006) +++ trunk/docs/xml/nginx/changes.xml 2013-01-10 13:17:04 UTC (rev 5007) @@ -4,6 +4,59 @@ + + + + +??? ?????? ? ??? ??? ??????????? segmentation fault; +?????? ????????? ? 1.3.10. + + +a segmentation fault might occur if logging was used; +the bug had appeared in 1.3.10. + + + + + +????????? proxy_pass ?? ???????? ? IP-???????? +??? ?????? ???????? ?????; +?????? ????????? ? 1.3.10. + + +the "proxy_pass" directive did not work with IP addresses +without port specified; +the bug had appeared in 1.3.10. + + + + + +?? ?????? ??? ?? ????? ???????????????? ?????????? segmentation fault, +???? ????????? keepalive ???? ??????? ????????? ??? +? ????? ????? upstream. + + +a segmentation fault occurred on start or during reconfiguration +if the "keepalive" directive was specified more than once +in a single upstream block. + + + + + +???????? default ????????? geo ?? ????????? ???????? ?? ????????? +??? IPv6-???????. 
+ + +parameter "default" of the "geo" directive did not set default value +for IPv6 addresses. + + + + + + From mdounin at mdounin.ru Thu Jan 10 13:17:29 2013 From: mdounin at mdounin.ru (mdounin at mdounin.ru) Date: Thu, 10 Jan 2013 13:17:29 +0000 Subject: [nginx] svn commit: r5008 - tags Message-ID: <20130110131729.A6FF53FA11D@mail.nginx.com> Author: mdounin Date: 2013-01-10 13:17:29 +0000 (Thu, 10 Jan 2013) New Revision: 5008 URL: http://trac.nginx.org/nginx/changeset/5008/nginx Log: release-1.3.11 tag Added: tags/release-1.3.11/ From zls.sogou at gmail.com Thu Jan 10 15:57:58 2013 From: zls.sogou at gmail.com (lanshun zhou) Date: Thu, 10 Jan 2013 23:57:58 +0800 Subject: [PATCH] Honor the redirect response overwritten in error_page for absolute urls In-Reply-To: <20130110123243.GH80623@mdounin.ru> References: <20130110123243.GH80623@mdounin.ru> Message-ID: Yeah, this will break current supported configs like the one you provided. But only the ones with 30x response code overwritten and without named location are affected. I think =30x in error_page is something like the "redirect" flag in "rewrite", which means "I want the redirect" by default. I could think of these kinds of configs broken before the patch, and just not sure why introducing a location for 30x response. Is the content for 30x response valuable? or to generate the "Location" header dynamically which can not be expressed by the url param with variables supported? But this patch does break supported configs, although named locations can be used instead as before to do anything. I just don't l like introducing a location to redirect for 404 error_page :) error_page 404 @notfound; location @notfound { return 302 /error.html; # or rewrite ^ /error.html redirect; } 2013/1/10 Maxim Dounin > Hello! > > On Thu, Jan 10, 2013 at 07:31:07PM +0800, lanshun zhou wrote: > > > Absolute url in error_page is always treated by an internal redirect, > even > > if the response > > code is specified with 301/302. 
This patch honors the response code > > specified and > > makes it possible to redirect with absolute url in error_page easily. > > I don't like this change. Current behaviour is well-defined and > allows to do anything (with an additional step in some case). On > the other hand, introducing logic which takes new response code > into account will complicate things. The difference between these > two lines will be counterintuitive: > > error_page 301 /foo.html; > error_page 301 =302 /foo.html; > > Additionally, it will break configs like this: > > error_page 301 =302 /foo.html; > > location = /foo.html { > ... > } > > (I.e. change the response code and return content from the > "/foo.html".) > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jan 10 16:55:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Jan 2013 20:55:16 +0400 Subject: [PATCH] Honor the redirect response overwritten in error_page for absolute urls In-Reply-To: References: <20130110123243.GH80623@mdounin.ru> Message-ID: <20130110165515.GR80623@mdounin.ru> Hello! On Thu, Jan 10, 2013 at 11:57:58PM +0800, lanshun zhou wrote: > Yeah, this will break current supported configs like the one you provided. > But only > the ones with 30x response code overwritten and without named location are > affected. I think =30x in error_page is something like the "redirect" flag > in "rewrite", > which means "I want the redirect" by default. No, =NNN means "I want NNN code to be returned", nothing more. It allows to overwrite code returned if redirect is requested (via something like "http://..." set in uri parameter), but it doesn't request redirect by itself. 
> I could think of these kinds of configs broken before the patch, and > just not sure > why introducing a location for 30x response. Is the content > for 30x response > valuable? or to generate the "Location" header dynamically which can not be > expressed by the url param with variables supported? Some time ago returning correct content (more specifically, correct type of content) in redirections was required for WAP phones to handle them. > But this patch does break supported configs, although named locations > can be used > instead as before to do anything. I just don't l like introducing a > location to redirect > for 404 error_page :) > > error_page 404 @notfound; > > location @notfound { > return 302 /error.html; # or rewrite ^ /error.html redirect; > } Use something like this instead: error_page 404 http://example.com/error.html; -- Maxim Dounin http://nginx.com/support.html From zls.sogou at gmail.com Fri Jan 11 06:18:19 2013 From: zls.sogou at gmail.com (lanshun zhou) Date: Fri, 11 Jan 2013 14:18:19 +0800 Subject: [PATCH] Honor the redirect response overwritten in error_page for absolute urls In-Reply-To: <20130110165515.GR80623@mdounin.ru> References: <20130110123243.GH80623@mdounin.ru> <20130110165515.GR80623@mdounin.ru> Message-ID: All right. We'll use $scheme://$host/error.html in http block, because we have lots of server blocks in the deploy. Thanks. 2013/1/11 Maxim Dounin > Hello! > > On Thu, Jan 10, 2013 at 11:57:58PM +0800, lanshun zhou wrote: > > > Yeah, this will break current supported configs like the one you > provided. > > But only > > the ones with 30x response code overwritten and without named location > are > > affected. I think =30x in error_page is something like the "redirect" > flag > > in "rewrite", > > which means "I want the redirect" by default. > > No, =NNN means "I want NNN code to be returned", nothing more. It > allows to overwrite code returned if redirect is requested (via > something like "http://..." 
set in uri parameter), but it doesn't > request redirect by itself. > > > I could think of these kinds of configs broken before the patch, and > > just not sure > > why introducing a location for 30x response. Is the content > > for 30x response > > valuable? or to generate the "Location" header dynamically which can not > be > > expressed by the url param with variables supported? > > Some time ago returning correct content (more specifically, > correct type of content) in redirections was required for WAP > phones to handle them. > > > But this patch does break supported configs, although named locations > > can be used > > instead as before to do anything. I just don't l like introducing a > > location to redirect > > for 404 error_page :) > > > > error_page 404 @notfound; > > > > location @notfound { > > return 302 /error.html; # or rewrite ^ /error.html redirect; > > } > > Use something like this instead: > > error_page 404 http://example.com/error.html; > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toli at webforge.bg Fri Jan 11 09:56:15 2013 From: toli at webforge.bg (Anatoli Marinov) Date: Fri, 11 Jan 2013 11:56:15 +0200 Subject: rewrite phases order Message-ID: <50EFE1BF.9080909@webforge.bg> Hello colleagues, I am wondering is there an way to order function calls for rewrite phase. For example I have 3 modules that may reject request and all they register a function for request rewrite phase. The modules are A, B and C. From log messages I can see that the function are called in "unordered" way for example B, C, A. How can I order the calls to A, B, C ? 
Thanks in advance Anatoli Marinov From mdounin at mdounin.ru Fri Jan 11 10:51:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Jan 2013 14:51:15 +0400 Subject: rewrite phases order In-Reply-To: <50EFE1BF.9080909@webforge.bg> References: <50EFE1BF.9080909@webforge.bg> Message-ID: <20130111105115.GZ80623@mdounin.ru> Hello! On Fri, Jan 11, 2013 at 11:56:15AM +0200, Anatoli Marinov wrote: > Hello colleagues, > I am wondering is there an way to order function calls for rewrite phase. > For example I have 3 modules that may reject request and all they > register a function for request rewrite phase. > The modules are A, B and C. From log messages I can see that the > function are called in "unordered" way for example B, C, A. How can > I order the calls to A, B, C ? Yes, last added module is always called first. If module order is important in your case, trivial solution would be to reorder --add-module configure arguments for the correct order. -- Maxim Dounin http://nginx.com/support.html From toli at webforge.bg Fri Jan 11 11:37:07 2013 From: toli at webforge.bg (Anatoli Marinov) Date: Fri, 11 Jan 2013 13:37:07 +0200 Subject: rewrite phases order In-Reply-To: <20130111105115.GZ80623@mdounin.ru> References: <50EFE1BF.9080909@webforge.bg> <20130111105115.GZ80623@mdounin.ru> Message-ID: <50EFF963.9080404@webforge.bg> Thanks :) On 01/11/2013 12:51 PM, Maxim Dounin wrote: > Hello! > > On Fri, Jan 11, 2013 at 11:56:15AM +0200, Anatoli Marinov wrote: > >> Hello colleagues, >> I am wondering is there an way to order function calls for rewrite phase. >> For example I have 3 modules that may reject request and all they >> register a function for request rewrite phase. >> The modules are A, B and C. From log messages I can see that the >> function are called in "unordered" way for example B, C, A. How can >> I order the calls to A, B, C ? > Yes, last added module is always called first. 
If module order is
> important in your case, a trivial solution would be to reorder --add-module
> configure arguments for the correct order.

From goelvivek2011 at gmail.com  Sun Jan 13 07:51:22 2013
From: goelvivek2011 at gmail.com (Vivek Goel)
Date: Sun, 13 Jan 2013 13:21:22 +0530
Subject: Nginx memory usage issue
Message-ID: 

Hi,

I have one question. Do I need to call ngx_http_finalize_request for every request after calling ngx_http_send_response(r, NGX_HTTP_OK, &ngx_http_json_type, cv); ? If yes, can I successfully call it without checking the return code or any variable, like this?

ngx_http_send_response(r, NGX_HTTP_OK, &ngx_http_json_type, cv);
ngx_http_finalize_request

regards
Vivek Goel

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sb at waeme.net  Tue Jan 15 09:36:17 2013
From: sb at waeme.net (Sergey Budnevitch)
Date: Tue, 15 Jan 2013 13:36:17 +0400
Subject: nginx.org repository has been migrated
Message-ID: <89390C95-D9A5-4E5A-8AD8-F25815386019@waeme.net>

Hi,

The nginx.org site and documentation repository has been migrated from subversion to mercurial. The mercurial repo is available at http://hg.nginx.org/nginx.org

From niq at apache.org  Tue Jan 15 10:35:15 2013
From: niq at apache.org (Nick Kew)
Date: Tue, 15 Jan 2013 10:35:15 +0000
Subject: Filter insertion and ordering
In-Reply-To: <20130108135205.GE73378@mdounin.ru>
References: <8067EFB7-D199-41A1-BB5E-1B854C7CF4DE@apache.org> <20130108135205.GE73378@mdounin.ru>
Message-ID: 

On 8 Jan 2013, at 13:52, Maxim Dounin wrote:

> Filter ordering is determined during configure - the config script of
> your module should place it into the appropriate list, usually
> HTTP_AUX_FILTER_MODULES. Your symptoms suggest you've added your
> filter module into the HTTP_MODULES list instead.

Thanks. Yes, that was of course the problem.

I guess control of my ordering relative to other modules isn't available?
It seems my module's handlers for request phases are perfectly fine compiled with HTTP_AUX_FILTER_MODULES, too. Is that right, or am I setting myself up for trouble that has just yet to bite? -- Nick Kew From brian at akins.org Tue Jan 15 11:54:23 2013 From: brian at akins.org (Brian Akins) Date: Tue, 15 Jan 2013 06:54:23 -0500 Subject: Filter insertion and ordering In-Reply-To: References: <8067EFB7-D199-41A1-BB5E-1B854C7CF4DE@apache.org> <20130108135205.GE73378@mdounin.ru> Message-ID: <4557D047-07D0-4244-8570-82F0ADA710DF@akins.org> On Jan 15, 2013, at 5:35 AM, Nick Kew wrote: > > Thanks. Yes, that was of course the problem. > > I guess control of my ordering relative to other modules isn't available? > Nick, FWIW, we write most of our nginx modules as Lua modules now. Sometimes almost all of the module is actually in C. We can control the exact order per request using Lua. Contrived example: local foo = require 'nginx.foo_filter' local bar = require 'nginx.bar_filter' if something then return run_filters(ngx, { foo, bar }) else return run_filters(ngx, { bar, foo}) end --Brian From ru at nginx.com Wed Jan 16 09:42:58 2013 From: ru at nginx.com (ru at nginx.com) Date: Wed, 16 Jan 2013 09:42:58 +0000 Subject: [nginx] svn commit: r5009 - in trunk/src/http: . modules Message-ID: <20130116094258.C879F3F9F41@mail.nginx.com> Author: ru Date: 2013-01-16 09:42:57 +0000 (Wed, 16 Jan 2013) New Revision: 5009 URL: http://trac.nginx.org/nginx/changeset/5009/nginx Log: Fixed and improved the "*_bind" directives of proxying modules. The "proxy_bind", "fastcgi_bind", "uwsgi_bind", "scgi_bind" and "memcached_bind" directives are now inherited; inherited value can be reset by the "off" parameter. Duplicate directives are now detected. Parameter value can now contain variables. 
Modified: trunk/src/http/modules/ngx_http_fastcgi_module.c trunk/src/http/modules/ngx_http_memcached_module.c trunk/src/http/modules/ngx_http_proxy_module.c trunk/src/http/modules/ngx_http_scgi_module.c trunk/src/http/modules/ngx_http_uwsgi_module.c trunk/src/http/ngx_http.h trunk/src/http/ngx_http_upstream.c trunk/src/http/ngx_http_upstream.h Modified: trunk/src/http/modules/ngx_http_fastcgi_module.c =================================================================== --- trunk/src/http/modules/ngx_http_fastcgi_module.c 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/modules/ngx_http_fastcgi_module.c 2013-01-16 09:42:57 UTC (rev 5009) @@ -2106,6 +2106,8 @@ conf->upstream.buffering = NGX_CONF_UNSET; conf->upstream.ignore_client_abort = NGX_CONF_UNSET; + conf->upstream.local = NGX_CONF_UNSET_PTR; + conf->upstream.connect_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.send_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.read_timeout = NGX_CONF_UNSET_MSEC; @@ -2177,6 +2179,9 @@ ngx_conf_merge_value(conf->upstream.ignore_client_abort, prev->upstream.ignore_client_abort, 0); + ngx_conf_merge_ptr_value(conf->upstream.local, + prev->upstream.local, NULL); + ngx_conf_merge_msec_value(conf->upstream.connect_timeout, prev->upstream.connect_timeout, 60000); Modified: trunk/src/http/modules/ngx_http_memcached_module.c =================================================================== --- trunk/src/http/modules/ngx_http_memcached_module.c 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/modules/ngx_http_memcached_module.c 2013-01-16 09:42:57 UTC (rev 5009) @@ -574,6 +574,7 @@ * conf->upstream.location = NULL; */ + conf->upstream.local = NGX_CONF_UNSET_PTR; conf->upstream.connect_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.send_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.read_timeout = NGX_CONF_UNSET_MSEC; @@ -607,6 +608,9 @@ ngx_http_memcached_loc_conf_t *prev = parent; ngx_http_memcached_loc_conf_t *conf = child; + ngx_conf_merge_ptr_value(conf->upstream.local, + 
prev->upstream.local, NULL); + ngx_conf_merge_msec_value(conf->upstream.connect_timeout, prev->upstream.connect_timeout, 60000); Modified: trunk/src/http/modules/ngx_http_proxy_module.c =================================================================== --- trunk/src/http/modules/ngx_http_proxy_module.c 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/modules/ngx_http_proxy_module.c 2013-01-16 09:42:57 UTC (rev 5009) @@ -2369,6 +2369,8 @@ conf->upstream.buffering = NGX_CONF_UNSET; conf->upstream.ignore_client_abort = NGX_CONF_UNSET; + conf->upstream.local = NGX_CONF_UNSET_PTR; + conf->upstream.connect_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.send_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.read_timeout = NGX_CONF_UNSET_MSEC; @@ -2453,6 +2455,9 @@ ngx_conf_merge_value(conf->upstream.ignore_client_abort, prev->upstream.ignore_client_abort, 0); + ngx_conf_merge_ptr_value(conf->upstream.local, + prev->upstream.local, NULL); + ngx_conf_merge_msec_value(conf->upstream.connect_timeout, prev->upstream.connect_timeout, 60000); Modified: trunk/src/http/modules/ngx_http_scgi_module.c =================================================================== --- trunk/src/http/modules/ngx_http_scgi_module.c 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/modules/ngx_http_scgi_module.c 2013-01-16 09:42:57 UTC (rev 5009) @@ -1067,6 +1067,8 @@ conf->upstream.buffering = NGX_CONF_UNSET; conf->upstream.ignore_client_abort = NGX_CONF_UNSET; + conf->upstream.local = NGX_CONF_UNSET_PTR; + conf->upstream.connect_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.send_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.read_timeout = NGX_CONF_UNSET_MSEC; @@ -1135,6 +1137,9 @@ ngx_conf_merge_value(conf->upstream.ignore_client_abort, prev->upstream.ignore_client_abort, 0); + ngx_conf_merge_ptr_value(conf->upstream.local, + prev->upstream.local, NULL); + ngx_conf_merge_msec_value(conf->upstream.connect_timeout, prev->upstream.connect_timeout, 60000); Modified: 
trunk/src/http/modules/ngx_http_uwsgi_module.c =================================================================== --- trunk/src/http/modules/ngx_http_uwsgi_module.c 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/modules/ngx_http_uwsgi_module.c 2013-01-16 09:42:57 UTC (rev 5009) @@ -1104,6 +1104,8 @@ conf->upstream.buffering = NGX_CONF_UNSET; conf->upstream.ignore_client_abort = NGX_CONF_UNSET; + conf->upstream.local = NGX_CONF_UNSET_PTR; + conf->upstream.connect_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.send_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.read_timeout = NGX_CONF_UNSET_MSEC; @@ -1172,6 +1174,9 @@ ngx_conf_merge_value(conf->upstream.ignore_client_abort, prev->upstream.ignore_client_abort, 0); + ngx_conf_merge_ptr_value(conf->upstream.local, + prev->upstream.local, NULL); + ngx_conf_merge_msec_value(conf->upstream.connect_timeout, prev->upstream.connect_timeout, 60000); Modified: trunk/src/http/ngx_http.h =================================================================== --- trunk/src/http/ngx_http.h 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/ngx_http.h 2013-01-16 09:42:57 UTC (rev 5009) @@ -28,11 +28,11 @@ #include #include +#include #include #include #include #include -#include #include #if (NGX_HTTP_CACHE) Modified: trunk/src/http/ngx_http_upstream.c =================================================================== --- trunk/src/http/ngx_http_upstream.c 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/ngx_http_upstream.c 2013-01-16 09:42:57 UTC (rev 5009) @@ -134,6 +134,9 @@ static char *ngx_http_upstream_server(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static ngx_addr_t *ngx_http_upstream_get_local(ngx_http_request_t *r, + ngx_http_upstream_local_t *local); + static void *ngx_http_upstream_create_main_conf(ngx_conf_t *cf); static char *ngx_http_upstream_init_main_conf(ngx_conf_t *cf, void *conf); @@ -507,7 +510,7 @@ return; } - u->peer.local = u->conf->local; + u->peer.local = ngx_http_upstream_get_local(r, 
u->conf->local); clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); @@ -4474,24 +4477,63 @@ { char *p = conf; - ngx_int_t rc; - ngx_str_t *value; - ngx_addr_t **paddr; + ngx_int_t rc; + ngx_str_t *value; + ngx_http_complex_value_t cv; + ngx_http_upstream_local_t **plocal, *local; + ngx_http_compile_complex_value_t ccv; - paddr = (ngx_addr_t **) (p + cmd->offset); + plocal = (ngx_http_upstream_local_t **) (p + cmd->offset); - *paddr = ngx_palloc(cf->pool, sizeof(ngx_addr_t)); - if (*paddr == NULL) { - return NGX_CONF_ERROR; + if (*plocal != NGX_CONF_UNSET_PTR) { + return "is duplicate"; } value = cf->args->elts; - rc = ngx_parse_addr(cf->pool, *paddr, value[1].data, value[1].len); + if (ngx_strcmp(value[1].data, "off") == 0) { + *plocal = NULL; + return NGX_CONF_OK; + } + ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t)); + + ccv.cf = cf; + ccv.value = &value[1]; + ccv.complex_value = &cv; + + if (ngx_http_compile_complex_value(&ccv) != NGX_OK) { + return NGX_CONF_ERROR; + } + + local = ngx_pcalloc(cf->pool, sizeof(ngx_http_upstream_local_t)); + if (local == NULL) { + return NGX_CONF_ERROR; + } + + *plocal = local; + + if (cv.lengths) { + local->value = ngx_palloc(cf->pool, sizeof(ngx_http_complex_value_t)); + if (local->value == NULL) { + return NGX_CONF_ERROR; + } + + *local->value = cv; + + return NGX_CONF_OK; + } + + local->addr = ngx_palloc(cf->pool, sizeof(ngx_addr_t)); + if (local->addr == NULL) { + return NGX_CONF_ERROR; + } + + rc = ngx_parse_addr(cf->pool, local->addr, value[1].data, value[1].len); + switch (rc) { case NGX_OK: - (*paddr)->name = value[1]; + local->addr->name = value[1]; return NGX_CONF_OK; case NGX_DECLINED: @@ -4505,6 +4547,53 @@ } +static ngx_addr_t * +ngx_http_upstream_get_local(ngx_http_request_t *r, + ngx_http_upstream_local_t *local) +{ + ngx_int_t rc; + ngx_str_t val; + ngx_addr_t *addr; + + if (local == NULL) { + return NULL; + } + + if (local->value == NULL) { + return local->addr; + } + + if 
(ngx_http_complex_value(r, local->value, &val) != NGX_OK) { + return NULL; + } + + if (val.len == 0) { + return NULL; + } + + addr = ngx_palloc(r->pool, sizeof(ngx_addr_t)); + if (addr == NULL) { + return NULL; + } + + rc = ngx_parse_addr(r->pool, addr, val.data, val.len); + + switch (rc) { + case NGX_OK: + addr->name = val; + return addr; + + case NGX_DECLINED: + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "invalid local address \"%V\"", &val); + /* fall through */ + + default: + return NULL; + } +} + + char * ngx_http_upstream_param_set_slot(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) Modified: trunk/src/http/ngx_http_upstream.h =================================================================== --- trunk/src/http/ngx_http_upstream.h 2013-01-10 13:17:29 UTC (rev 5008) +++ trunk/src/http/ngx_http_upstream.h 2013-01-16 09:42:57 UTC (rev 5009) @@ -121,6 +121,12 @@ typedef struct { + ngx_addr_t *addr; + ngx_http_complex_value_t *value; +} ngx_http_upstream_local_t; + + +typedef struct { ngx_http_upstream_srv_conf_t *upstream; ngx_msec_t connect_timeout; @@ -158,7 +164,7 @@ ngx_array_t *hide_headers; ngx_array_t *pass_headers; - ngx_addr_t *local; + ngx_http_upstream_local_t *local; #if (NGX_HTTP_CACHE) ngx_shm_zone_t *cache; From ru at nginx.com Thu Jan 17 09:55:37 2013 From: ru at nginx.com (ru at nginx.com) Date: Thu, 17 Jan 2013 09:55:37 +0000 Subject: [nginx] svn commit: r5010 - in trunk/src: core http/modules/perl Message-ID: <20130117095538.738A33FA023@mail.nginx.com> Author: ru Date: 2013-01-17 09:55:36 +0000 (Thu, 17 Jan 2013) New Revision: 5010 URL: http://trac.nginx.org/nginx/changeset/5010/nginx Log: Version bump. 
Modified: trunk/src/core/nginx.h trunk/src/http/modules/perl/nginx.pm Modified: trunk/src/core/nginx.h =================================================================== --- trunk/src/core/nginx.h 2013-01-16 09:42:57 UTC (rev 5009) +++ trunk/src/core/nginx.h 2013-01-17 09:55:36 UTC (rev 5010) @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1003011 -#define NGINX_VERSION "1.3.11" +#define nginx_version 1003012 +#define NGINX_VERSION "1.3.12" #define NGINX_VER "nginx/" NGINX_VERSION #define NGINX_VAR "NGINX" Modified: trunk/src/http/modules/perl/nginx.pm =================================================================== --- trunk/src/http/modules/perl/nginx.pm 2013-01-16 09:42:57 UTC (rev 5009) +++ trunk/src/http/modules/perl/nginx.pm 2013-01-17 09:55:36 UTC (rev 5010) @@ -50,7 +50,7 @@ HTTP_INSUFFICIENT_STORAGE ); -our $VERSION = '1.3.11'; +our $VERSION = '1.3.12'; require XSLoader; XSLoader::load('nginx', $VERSION); From d.chekaliuk at invisilabs.com Thu Jan 17 10:07:51 2013 From: d.chekaliuk at invisilabs.com (Dmitrii Chekaliuk) Date: Thu, 17 Jan 2013 12:07:51 +0200 Subject: [feature] Add skip_location directive to ngx_http_rewrite Message-ID: <50F7CD77.5000702@invisilabs.com> Hi guys, I'm new to nginx-devel, so, please, take my apologies in case this question was already discussed and rejected. Nginx is awesome, but I think it lacks the flexibility of configuration selection, e.g. it can't throw away the selected config on demand within the rewrite phase to select another one without a need to actually rewrite URI. Think of Apache mod_rewrite's RewriteCond+RewriteRule, it would be great to have the ability to write something like this: location ~ ^/some/path$ { if ($request_method !~* ^GET|HEAD$) { skip_location; } # ... } location ~ ^/some/path$ { # ... } to replicate mod_rewrite's: RewriteCond %{REQUEST_URI} ^/some/path$ RewriteCond %{REQUEST_METHOD} ^GET|HEAD$ [NC] RewriteRule ... RewriteCond %{REQUEST_URI} ^/some/path$ RewriteRule ... 
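(As an aside: for the specific method-based case in the example above, current nginx can already restrict a location by request method without any rewriting. A hedged config sketch — limit_except GET also permits HEAD, and this rejects other methods outright rather than falling through to another location, so it is not a full substitute for the proposed skip_location:)

```nginx
location ~ ^/some/path$ {
    # Allow only GET/HEAD here; anything else gets 403.
    # (GET in limit_except implies HEAD.)
    limit_except GET {
        deny all;
    }
    # ...
}
```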
I know we could rewrite original URI with method added, like $request_method/$uri, and make our location regexes aware of this, but what if such a logic is needed only for some locations? So, I suggest adding the before mentioned skip_location directive to the http_rewrite module. There is a commit on GitHub with the results of my research: https://github.com/lazyhammer/nginx/commit/88637c563f861a50cc8f6d029b1d2d8cf0c27e36 What do you think? -- Yours faithfully, Dmitrii From niq at apache.org Thu Jan 17 11:32:55 2013 From: niq at apache.org (Nick Kew) Date: Thu, 17 Jan 2013 11:32:55 +0000 Subject: [feature] Add skip_location directive to ngx_http_rewrite In-Reply-To: <50F7CD77.5000702@invisilabs.com> References: <50F7CD77.5000702@invisilabs.com> Message-ID: On 17 Jan 2013, at 10:07, Dmitrii Chekaliuk wrote: > Hi guys, > > I'm new to nginx-devel, so, please, take my apologies in case this question was already discussed and rejected. Nginx is awesome, but I think it lacks the flexibility of configuration selection, e.g. it can't throw away the selected config on demand within the rewrite phase to select another one without a need to actually rewrite URI. Think of Apache mod_rewrite's RewriteCond+RewriteRule, it would be great to have the ability to write something like this: I'm not sufficiently familiar with nginx config to comment on how it might work. But with Apache we implemented to do (among other things) exactly what you're asking for, in a saner manner than pseudo-programming with mod_rewrite. Your suggestion looks like "IF foo GOTO bar", but wouldn't you prefer IF/ELSE/ENDIF with a similar structure to a Location block? -- Nick Kew From ccmbrulak at gmail.com Thu Jan 17 15:19:21 2013 From: ccmbrulak at gmail.com (Chris Brulak) Date: Thu, 17 Jan 2013 08:19:21 -0700 Subject: Serving rails assets with rails3 Message-ID: Is this the right list to ask about issues with the latest SPDY patch? I'm having a hard time getting rails 3 assets served up over SPDY. 
Will post more details if this is the right list Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.chekaliuk at invisilabs.com Thu Jan 17 15:32:35 2013 From: d.chekaliuk at invisilabs.com (Dmitrii Chekaliuk) Date: Thu, 17 Jan 2013 17:32:35 +0200 Subject: [feature] Add skip_location directive to ngx_http_rewrite In-Reply-To: References: <50F7CD77.5000702@invisilabs.com> Message-ID: <50F81993.50701@invisilabs.com> Hmm, you pushed me to rethink this. Actually, the logic needed for my particular task could be done with IFs and BREAKs like this: location / { if ($uri ~ ^/some/path) { if ($request_method ~ ^GET|HEAD$) { break; # config 1 } if ($request_method = "PUT") { break; # config 2 } } if ($uri ~ ^/some/path/subpath$) { # config 3 } } But nested IFs need to be allowed for that purpose. I can't see any obstacles for this, as It seems like adding NGX_HTTP_SIF_CONF and NGX_HTTP_LIF_CONF to IF's type would be enough, though I'm not very familiar with nginx codebase to know for sure. Or there was some objective reason to disallow nesting? -- Yours faithfully, Dmitrii On 17.01.2013 13:32, Nick Kew wrote: > On 17 Jan 2013, at 10:07, Dmitrii Chekaliuk wrote: > >> Hi guys, >> >> I'm new to nginx-devel, so, please, take my apologies in case this question was already discussed and rejected. Nginx is awesome, but I think it lacks the flexibility of configuration selection, e.g. it can't throw away the selected config on demand within the rewrite phase to select another one without a need to actually rewrite URI. Think of Apache mod_rewrite's RewriteCond+RewriteRule, it would be great to have the ability to write something like this: > I'm not sufficiently familiar with nginx config to comment on how it might work. > > But with Apache we implemented to do (among other things) > exactly what you're asking for, in a saner manner than pseudo-programming > with mod_rewrite. 
Your suggestion looks like "IF foo GOTO bar", but wouldn't > you prefer IF/ELSE/ENDIF with a similar structure to a Location block? > From brian at akins.org Thu Jan 17 23:18:05 2013 From: brian at akins.org (Brian Akins) Date: Thu, 17 Jan 2013 18:18:05 -0500 Subject: [feature] Add skip_location directive to ngx_http_rewrite In-Reply-To: <50F81993.50701@invisilabs.com> References: <50F7CD77.5000702@invisilabs.com> <50F81993.50701@invisilabs.com> Message-ID: If you need that type of logic look into using Lua or Perl, IMO. From niq at apache.org Sun Jan 20 13:04:44 2013 From: niq at apache.org (Nick Kew) Date: Sun, 20 Jan 2013 13:04:44 +0000 Subject: No handler of last resort? Message-ID: <4B544C14-4C70-4687-B570-5DDEBE27902A@apache.org> In test-driving my plugin, I tried provoking a gateway timeout, whereupon I find nginx unexpectedly eats the system before crashing. Executive summary: it goes into an internal redirect to serve an errordocument. But the errordocument fails, leading into an infinite loop as it tries to invoke the error for that failure. It seems nginx has no handler of last resort! The error log shows the original timeout and error: 2013/01/20 11:17:07 [debug] 11635#0: kevent timer: 60000, changes: 3 2013/01/20 11:28:13 [debug] 11635#0: kevent events: 1 2013/01/20 11:28:13 [debug] 11635#0: kevent: 3: ft:-2 fl:0025 ff:00000000 d:146988 ud:00007FC64188B6E1 2013/01/20 11:28:13 [debug] 11635#0: *7 http run request: "/?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http upstream check client, write event:1, "/" 2013/01/20 11:28:13 [debug] 11635#0: timer delta: 666881 2013/01/20 11:28:13 [debug] 11635#0: *7 event timer del: 11: 1358680687001 2013/01/20 11:28:13 [debug] 11635#0: *7 http upstream request: "/?" 
2013/01/20 11:28:13 [debug] 11635#0: *7 http upstream send request handler 2013/01/20 11:28:13 [debug] 11635#0: *7 http next upstream, 4 2013/01/20 11:28:13 [debug] 11635#0: *7 free rr peer 1 4 2013/01/20 11:28:13 [error] 11635#0: *7 upstream timed out (60: Operation timed out) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "POST / HTTP/1.1", upstream: "http://xx.xx.xx.xx:80/", host: "127.0.0.1" 2013/01/20 11:28:13 [debug] 11635#0: *7 finalize http upstream request: 504 2013/01/20 11:28:13 [debug] 11635#0: *7 finalize http proxy request 2013/01/20 11:28:13 [debug] 11635#0: *7 free rr peer 0 0 2013/01/20 11:28:13 [debug] 11635#0: *7 close http upstream connection: 11 2013/01/20 11:28:13 [debug] 11635#0: *7 free: 00007FC64142D510, unused: 48 2013/01/20 11:28:13 [debug] 11635#0: *7 reusable connection: 0 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: 504, "/?" a:1, c:2 2013/01/20 11:28:13 [debug] 11635#0: *7 http special response: 504, "/?" 2013/01/20 11:28:13 [debug] 11635#0: *7 internal redirect: "/50x.html?" 
2013/01/20 11:28:13 [debug] 11635#0: *7 rewrite phase: 1 2013/01/20 11:28:13 [debug] 11635#0: *7 test location: "/" 2013/01/20 11:28:13 [debug] 11635#0: *7 test location: "50x.html" 2013/01/20 11:28:13 [debug] 11635#0: *7 using configuration "=/50x.html" 2013/01/20 11:28:13 [debug] 11635#0: *7 http cl:1034 max:1048576 2013/01/20 11:28:13 [debug] 11635#0: *7 rewrite phase: 3 2013/01/20 11:28:13 [debug] 11635#0: *7 post rewrite phase: 4 2013/01/20 11:28:13 [debug] 11635#0: *7 generic phase: 5 2013/01/20 11:28:13 [debug] 11635#0: *7 generic phase: 6 2013/01/20 11:28:13 [debug] 11635#0: *7 generic phase: 7 2013/01/20 11:28:13 [debug] 11635#0: *7 access phase: 8 2013/01/20 11:28:13 [debug] 11635#0: *7 access phase: 9 2013/01/20 11:28:13 [debug] 11635#0: *7 post access phase: 10 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 11 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 12 leading into a loop: 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 13 2013/01/20 11:28:13 [debug] 11635#0: *7 http filename: "/usr/local/nginx/html/50x.html" 2013/01/20 11:28:13 [debug] 11635#0: *7 add cleanup: 00007FC641837E08 2013/01/20 11:28:13 [debug] 11635#0: *7 http static fd: 11 2013/01/20 11:28:13 [debug] 11635#0: *7 http output filter "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: -5 "/50x.html?" 2013/01/20 11:28:13 [error] 11635#0: *7 no handler found while sending response to client, client: 127.0.0.1, server: localhost, request: "POST / HTTP/1.1", upstream: "http://**.**.**.**:80/", host: "127.0.0.1" 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: 404, "/50x.html?" a:1, c:3 2013/01/20 11:28:13 [debug] 11635#0: *7 http special response: 404, "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http output filter "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: "/50x.html?" 
2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: -5 "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: -5, "/50x.html?" a:1, c:3 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 13 which repeats with http static fd incremented each time until file descriptors run out and it reaches a steady state: 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 13 2013/01/20 11:28:13 [debug] 11635#0: *7 http filename: "/usr/local/nginx/html/50x.html" 2013/01/20 11:28:13 [debug] 11635#0: *7 add cleanup: 00007FC6418DDDB0 2013/01/20 11:28:13 [crit] 11635#0: *7 open() "/usr/local/nginx/html/50x.html" failed (24: Too many open files) while sending response to client, client: 127.0.0.1, server: localhost, request: "POST / HTTP/1.1", upstream: "http://**.**.**.**:80/", host: "127.0.0.1" 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: 500, "/50x.html?" a:1, c:3 2013/01/20 11:28:13 [debug] 11635#0: *7 http special response: 500, "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http output filter "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: -5 "/50x.html?" 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: -5, "/50x.html?" a:1, c:3 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 13 I expect I can deal with this in my module, but how come there's no handler of last resort that would prevent this happening? 
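(For reference, the /50x.html in this log comes from the error page setup shipped in the default nginx.conf, roughly the following sketch — the stock file, not necessarily the exact configuration used here:)

```nginx
error_page  500 502 503 504  /50x.html;

location = /50x.html {
    root  html;
}
```

Note that recursive_error_pages defaults to off, which prevents error_page from being applied a second time; the loop in this log appears to start below that level, with the content phase retrying the error document itself.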
-- Nick Kew From pgnet.dev at gmail.com Sun Jan 20 19:45:56 2013 From: pgnet.dev at gmail.com (pgndev) Date: Sun, 20 Jan 2013 11:45:56 -0800 Subject: SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module Message-ID: SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module summary: (1) nginx 1.3.11 + spdy 54 `make -j10` OK (2) nginx 1.3.11 + spdy 58_1.3.11 `make -j10` OK (3) nginx 1.3.11 + spdy 54 + lua-nginx-module/git `make -j10` OK (4) nginx 1.3.11 + spdy 58_1.3.11 + lua-nginx-module/git `make -j10` FAIL `make -j1` FAIL (5) nginx 1.3.11 + spdy 58_1.3.11 `make -j10` OK details: cd /usr/local/src/ rm -rf nginx* patch.spdy* wget http://nginx.org/download/nginx-1.3.11.tar.gz wget http://nginx.org/patches/spdy/patch.spdy-54.txt wget http://nginx.org/patches/spdy/patch.spdy-58_1.3.11.txt tar zxvf nginx-1.3.11.tar.gz cp -af nginx-1.3.11 nginx-1.3.11.ORIG export LUAJIT_LIB="/usr/local/lib64/libluajit-5.1.so" export LUAJIT_INC="/usr/local/include/luajit-2.0" (1) nginx 1.3.11 + spdy 54 cd /usr/local/src/ rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 cd /usr/local/src/nginx-1.3.11 patch -p0 < ../patch.spdy-54.txt ./configure \ --with-debug --user=nobody --group=nobody --prefix=/usr/local/nginx-test --with-pcre=/usr/local/src/pcre --with-pcre-jit --with-ipv6 --with-md5-asm --with-sha1-asm --with-http_ssl_module --with-cc=/usr/bin/gcc-4.7 --with-cpp=/usr/bin/cpp-4.7 --with-cc-opt='-O2 -mtune=amdfam10 -fPIC -DPIC -D_GNU_SOURCE -fno-strict-aliasing -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4' --with-ld-opt='-L/usr/local/ssl/lib64 -Wl,-rpath,/usr/local/ssl/lib64 -lssl -lcrypto -ldl -lz' make -j10 objs/nginx -v nginx version: nginx/1.3.11 (2) nginx 1.3.11 + spdy 58_1.3.11 cd /usr/local/src/ rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 cd /usr/local/src/nginx-1.3.11 patch -p1 < ../patch.spdy-58_1.3.11.txt ./configure $( ... CONFIGURE OPTIONS as above ... 
) make -j10 objs/nginx -v nginx version: nginx/1.3.11 (3) nginx 1.3.11 + spdy 54 + lua-nginx-module cd /usr/local/src/ rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 cd /usr/local/src/nginx-1.3.11 patch -p0 < ../patch.spdy-54.txt ./configure $( ... CONFIGURE OPTIONS as above ... ) \ --add-module=/usr/local/src/lua-nginx-module make -j10 objs/nginx -v nginx version: nginx/1.3.11 (4) nginx 1.3.11 + spdy 58_1.3.11 + lua-nginx-module cd /usr/local/src/ rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 cd /usr/local/src/nginx-1.3.11 patch -p1 < ../patch.spdy-58_1.3.11.txt ./configure $( ... CONFIGURE OPTIONS as above ... ) \ --add-module=/usr/local/src/lua-nginx-module make -j10 ... /usr/bin/gcc-4.7 -c -fmessage-length=0 -O2 -march=amdfam10 -mtune=amdfam10 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -O2 -mtune=amdfam10 -fPIC -DPIC -D_GNU_SOURCE -fno-strict-aliasing -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -DNDK_SET_VAR -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/local/include/luajit-2.0 -I /usr/local/src/lua-nginx-module/src/api -I /usr/local/src/pcre -I objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/src/ngx_http_lua_bodyfilterby.o \ /usr/local/src/lua-nginx-module/src/ngx_http_lua_bodyfilterby.c make[1]: *** [objs/addon/src/ngx_http_lua_socket_tcp.o] Error 1 make[1]: *** Waiting for unfinished jobs.... make[1]: Leaving directory `/usr/local/src/nginx-1.3.11' make: *** [build] Error 2 make clean ./configure $( ... CONFIGURE OPTIONS as above ... ) \ --add-module=/usr/local/src/lua-nginx-module make -j1 ... 
/usr/bin/gcc-4.7 -c -fmessage-length=0 -O2 -march=amdfam10 -mtune=amdfam10 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -O2 -mtune=amdfam10 -fPIC -DPIC -D_GNU_SOURCE -fno-strict-aliasing -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -DNDK_SET_VAR -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/local/include/luajit-2.0 -I /usr/local/src/lua-nginx-module/src/api -I /usr/local/src/pcre -I objs -I src/http -I src/http/modules -I src/mail \
    -o objs/addon/src/ngx_http_lua_socket_tcp.o \
    /usr/local/src/lua-nginx-module/src/ngx_http_lua_socket_tcp.c
/usr/local/src/lua-nginx-module/src/ngx_http_lua_socket_tcp.c: In function 'ngx_http_lua_socket_tcp_connect':
/usr/local/src/lua-nginx-module/src/ngx_http_lua_socket_tcp.c:444:18: error: 'ngx_connection_t' has no member named 'single_connection'
make[1]: *** [objs/addon/src/ngx_http_lua_socket_tcp.o] Error 1
make[1]: Leaving directory `/usr/local/src/nginx-1.3.11'
make: *** [build] Error 2

(5) nginx 1.3.11 + spdy 58_1.3.11

cd /usr/local/src/
rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11
cd /usr/local/src/nginx-1.3.11
patch -p1 < ../patch.spdy-58_1.3.11.txt
./configure $( ... CONFIGURE OPTIONS as above ... )
make -j10
objs/nginx -v
nginx version: nginx/1.3.11

From pgnet.dev at gmail.com  Sun Jan 20 21:02:37 2013
From: pgnet.dev at gmail.com (pgndev)
Date: Sun, 20 Jan 2013 13:02:37 -0800
Subject: SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module
In-Reply-To: 
References: 
Message-ID: 

from the nginx team on this ...

it is clear from the compilation error:

/usr/local/src/lua-nginx-module/src/ngx_http_lua_socket_tcp.c:444:18: error: 'ngx_connection_t' has no member named 'single_connection'

The "single_connection" flag was removed by the patch. The obvious fix
is to remove it from line 444 of ngx_http_lua_socket_tcp.c too.
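In diff form, the suggested one-line removal would look something like this (a sketch only: the `pc->` receiver and the hunk context are assumptions, since only the `single_connection` member name is known from the compiler error):

```diff
--- a/src/ngx_http_lua_socket_tcp.c
+++ b/src/ngx_http_lua_socket_tcp.c
@@ ngx_http_lua_socket_tcp_connect (around line 444) @@
-    pc->single_connection = 0;
```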
But I can't guarantee that this is the only fix required. ... main difference from the stable versions of nginx. So, some modules can be broken until the authors have updated their code. On Sun, Jan 20, 2013 at 11:45 AM, pgndev wrote: > SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module > > summary: > > (1) nginx 1.3.11 + spdy 54 `make -j10` OK > (2) nginx 1.3.11 + spdy 58_1.3.11 `make -j10` OK > (3) nginx 1.3.11 + spdy 54 + lua-nginx-module/git `make -j10` OK > (4) nginx 1.3.11 + spdy 58_1.3.11 + lua-nginx-module/git `make -j10` FAIL > `make -j1` FAIL > (5) nginx 1.3.11 + spdy 58_1.3.11 `make -j10` OK > > > details: > > cd /usr/local/src/ > rm -rf nginx* patch.spdy* > > wget http://nginx.org/download/nginx-1.3.11.tar.gz > wget http://nginx.org/patches/spdy/patch.spdy-54.txt > wget http://nginx.org/patches/spdy/patch.spdy-58_1.3.11.txt > > tar zxvf nginx-1.3.11.tar.gz > cp -af nginx-1.3.11 nginx-1.3.11.ORIG > > export LUAJIT_LIB="/usr/local/lib64/libluajit-5.1.so" > export LUAJIT_INC="/usr/local/include/luajit-2.0" > > > (1) nginx 1.3.11 + spdy 54 > > cd /usr/local/src/ > rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 > cd /usr/local/src/nginx-1.3.11 > patch -p0 < ../patch.spdy-54.txt > ./configure \ > --with-debug --user=nobody --group=nobody --prefix=/usr/local/nginx-test > --with-pcre=/usr/local/src/pcre --with-pcre-jit --with-ipv6 --with-md5-asm > --with-sha1-asm --with-http_ssl_module --with-cc=/usr/bin/gcc-4.7 > --with-cpp=/usr/bin/cpp-4.7 --with-cc-opt='-O2 -mtune=amdfam10 -fPIC -DPIC > -D_GNU_SOURCE -fno-strict-aliasing -Wall -Wp,-D_FORTIFY_SOURCE=2 > -fexceptions -fstack-protector --param=ssp-buffer-size=4' > --with-ld-opt='-L/usr/local/ssl/lib64 -Wl,-rpath,/usr/local/ssl/lib64 -lssl > -lcrypto -ldl -lz' > make -j10 > objs/nginx -v > nginx version: nginx/1.3.11 > > > (2) nginx 1.3.11 + spdy 58_1.3.11 > > cd /usr/local/src/ > rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 > cd /usr/local/src/nginx-1.3.11 > 
patch -p1 < ../patch.spdy-58_1.3.11.txt > ./configure $( ... CONFIGURE OPTIONS as above ... ) > make -j10 > objs/nginx -v > nginx version: nginx/1.3.11 > > (3) nginx 1.3.11 + spdy 54 + lua-nginx-module > > cd /usr/local/src/ > rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 > cd /usr/local/src/nginx-1.3.11 > patch -p0 < ../patch.spdy-54.txt > ./configure $( ... CONFIGURE OPTIONS as above ... ) \ > --add-module=/usr/local/src/lua-nginx-module > make -j10 > objs/nginx -v > nginx version: nginx/1.3.11 > > (4) nginx 1.3.11 + spdy 58_1.3.11 + lua-nginx-module > > cd /usr/local/src/ > rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 > cd /usr/local/src/nginx-1.3.11 > patch -p1 < ../patch.spdy-58_1.3.11.txt > ./configure $( ... CONFIGURE OPTIONS as above ... ) \ > --add-module=/usr/local/src/lua-nginx-module > make -j10 > ... > /usr/bin/gcc-4.7 -c -fmessage-length=0 -O2 -march=amdfam10 > -mtune=amdfam10 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables > -fasynchronous-unwind-tables -O2 -mtune=amdfam10 -fPIC -DPIC -D_GNU_SOURCE > -fno-strict-aliasing -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions > -fstack-protector --param=ssp-buffer-size=4 -DNDK_SET_VAR -I src/core -I > src/event -I src/event/modules -I src/os/unix -I > /usr/local/include/luajit-2.0 -I /usr/local/src/lua-nginx-module/src/api -I > /usr/local/src/pcre -I objs -I src/http -I src/http/modules -I src/mail \ > -o objs/addon/src/ngx_http_lua_bodyfilterby.o \ > > /usr/local/src/lua-nginx-module/src/ngx_http_lua_bodyfilterby.c > make[1]: *** [objs/addon/src/ngx_http_lua_socket_tcp.o] Error 1 > make[1]: *** Waiting for unfinished jobs.... > make[1]: Leaving directory `/usr/local/src/nginx-1.3.11' > make: *** [build] Error 2 > > make clean > ./configure $( ... CONFIGURE OPTIONS as above ... ) \ > --add-module=/usr/local/src/lua-nginx-module > make -j1 > ... 
> /usr/bin/gcc-4.7 -c -fmessage-length=0 -O2 -march=amdfam10 > -mtune=amdfam10 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables > -fasynchronous-unwind-tables -O2 -mtune=amdfam10 -fPIC -DPIC -D_GNU_SOURCE > -fno-strict-aliasing -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions > -fstack-protector --param=ssp-buffer-size=4 -DNDK_SET_VAR -I src/core -I > src/event -I src/event/modules -I src/os/unix -I > /usr/local/include/luajit-2.0 -I /usr/local/src/lua-nginx-module/src/api -I > /usr/local/src/pcre -I objs -I src/http -I src/http/modules -I src/mail \ > -o objs/addon/src/ngx_http_lua_socket_tcp.o \ > > /usr/local/src/lua-nginx-module/src/ngx_http_lua_socket_tcp.c > /usr/local/src/lua-nginx-module/src/ngx_http_lua_socket_tcp.c: In > function ?ngx_http_lua_socket_tcp_connect?: > > /usr/local/src/lua-nginx-module/src/ngx_http_lua_socket_tcp.c:444:18: error: > ?ngx_connection_t? has no member named ?single_connection? > make[1]: *** [objs/addon/src/ngx_http_lua_socket_tcp.o] Error 1 > make[1]: Leaving directory `/usr/local/src/nginx-1.3.11' > make: *** [build] Error 2 > > (5) nginx 1.3.11 + spdy 58_1.3.11 > > cd /usr/local/src/ > rm -rf nginx-1.3.11 && cp -af nginx-1.3.11.ORIG nginx-1.3.11 > cd /usr/local/src/nginx-1.3.11 > patch -p1 < ../patch.spdy-58_1.3.11 > ./configure $( ... CONFIGURE OPTIONS as above ... ) > make -j10 > objs/nginx -v > nginx version: nginx/1.3.11 > From agentzh at gmail.com Sun Jan 20 21:45:43 2013 From: agentzh at gmail.com (agentzh) Date: Sun, 20 Jan 2013 13:45:43 -0800 Subject: SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module In-Reply-To: References: Message-ID: Hello! On Sun, Jan 20, 2013 at 11:45 AM, pgndev wrote: > SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module > It is known that ngx_lua does not really work with the SPDY patch and ngx_lua is not compatible with Nginx 1.3.9+. I'll update ngx_lua for both the SPDY patch and Nginx 1.3.9+ in the next few weeks or so. 
Please stay tuned :) Best regards, -agentzh From mdounin at mdounin.ru Sun Jan 20 22:06:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jan 2013 02:06:19 +0400 Subject: No handler of last resort? In-Reply-To: <4B544C14-4C70-4687-B570-5DDEBE27902A@apache.org> References: <4B544C14-4C70-4687-B570-5DDEBE27902A@apache.org> Message-ID: <20130120220619.GA99404@mdounin.ru> Hello! On Sun, Jan 20, 2013 at 01:04:44PM +0000, Nick Kew wrote: > In test-driving my plugin, I tried provoking a gateway timeout, > whereupon I find nginx unexpectedly eats the system before > crashing. > > Executive summary: it goes into an internal redirect to serve an > errordocument. But the errordocument fails, leading into an > infinite loop as it tries to invoke the error for that failure. > It seems nginx has no handler of last resort! There is, but it looks like output chain was broken due to some incorrect output filter module installed. See below. > The error log shows the original timeout and error: > > 2013/01/20 11:17:07 [debug] 11635#0: kevent timer: 60000, changes: 3 > 2013/01/20 11:28:13 [debug] 11635#0: kevent events: 1 > 2013/01/20 11:28:13 [debug] 11635#0: kevent: 3: ft:-2 fl:0025 ff:00000000 d:146988 ud:00007FC64188B6E1 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http run request: "/?" > 2013/01/20 11:28:13 [debug] 11635#0: *7 http upstream check client, write event:1, "/" > 2013/01/20 11:28:13 [debug] 11635#0: timer delta: 666881 > 2013/01/20 11:28:13 [debug] 11635#0: *7 event timer del: 11: 1358680687001 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http upstream request: "/?" 
> 2013/01/20 11:28:13 [debug] 11635#0: *7 http upstream send request handler > 2013/01/20 11:28:13 [debug] 11635#0: *7 http next upstream, 4 > 2013/01/20 11:28:13 [debug] 11635#0: *7 free rr peer 1 4 > 2013/01/20 11:28:13 [error] 11635#0: *7 upstream timed out (60: Operation timed out) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "POST / HTTP/1.1", upstream: "http://xx.xx.xx.xx:80/", host: "127.0.0.1" > 2013/01/20 11:28:13 [debug] 11635#0: *7 finalize http upstream request: 504 > 2013/01/20 11:28:13 [debug] 11635#0: *7 finalize http proxy request > 2013/01/20 11:28:13 [debug] 11635#0: *7 free rr peer 0 0 > 2013/01/20 11:28:13 [debug] 11635#0: *7 close http upstream connection: 11 > 2013/01/20 11:28:13 [debug] 11635#0: *7 free: 00007FC64142D510, unused: 48 > 2013/01/20 11:28:13 [debug] 11635#0: *7 reusable connection: 0 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: 504, "/?" a:1, c:2 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http special response: 504, "/?" > 2013/01/20 11:28:13 [debug] 11635#0: *7 internal redirect: "/50x.html?" 
> 2013/01/20 11:28:13 [debug] 11635#0: *7 rewrite phase: 1 > 2013/01/20 11:28:13 [debug] 11635#0: *7 test location: "/" > 2013/01/20 11:28:13 [debug] 11635#0: *7 test location: "50x.html" > 2013/01/20 11:28:13 [debug] 11635#0: *7 using configuration "=/50x.html" > 2013/01/20 11:28:13 [debug] 11635#0: *7 http cl:1034 max:1048576 > 2013/01/20 11:28:13 [debug] 11635#0: *7 rewrite phase: 3 > 2013/01/20 11:28:13 [debug] 11635#0: *7 post rewrite phase: 4 > 2013/01/20 11:28:13 [debug] 11635#0: *7 generic phase: 5 > 2013/01/20 11:28:13 [debug] 11635#0: *7 generic phase: 6 > 2013/01/20 11:28:13 [debug] 11635#0: *7 generic phase: 7 > 2013/01/20 11:28:13 [debug] 11635#0: *7 access phase: 8 > 2013/01/20 11:28:13 [debug] 11635#0: *7 access phase: 9 > 2013/01/20 11:28:13 [debug] 11635#0: *7 post access phase: 10 > 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 11 > 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 12 So far so good. > leading into a loop: > > 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 13 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http filename: "/usr/local/nginx/html/50x.html" > 2013/01/20 11:28:13 [debug] 11635#0: *7 add cleanup: 00007FC641837E08 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http static fd: 11 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http output filter "/50x.html?" > 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: "/50x.html?" > 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: -5 "/50x.html?" > 2013/01/20 11:28:13 [error] 11635#0: *7 no handler found while sending response to client, client: 127.0.0.1, server: localhost, request: "POST / HTTP/1.1", upstream: "http://**.**.**.**:80/", host: "127.0.0.1" > 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: 404, "/50x.html?" a:1, c:3 > 2013/01/20 11:28:13 [debug] 11635#0: *7 http special response: 404, "/50x.html?" > 2013/01/20 11:28:13 [debug] 11635#0: *7 http output filter "/50x.html?" 
> 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: "/50x.html?"
> 2013/01/20 11:28:13 [debug] 11635#0: *7 http copy filter: -5 "/50x.html?"
> 2013/01/20 11:28:13 [debug] 11635#0: *7 http finalize request: -5, "/50x.html?" a:1, c:3
> 2013/01/20 11:28:13 [debug] 11635#0: *7 content phase: 13

A strange thing happens here: the output filter chain returned -5,
NGX_DECLINED, which should never happen and results in undefined
behaviour. I would suggest it's an incorrect filter in the chain which
breaks things. The "no handler found" error logged is already a result
of this (it should never happen unless the static module isn't compiled
in, which requires manual intervention into the build process).

As the output filter chain is broken, even returning a predefined
response from memory doesn't work - it still results in NGX_DECLINED,
which again leads to undefined behaviour.

Please note that it's very easy to break things by writing incorrect
code in your module. If you see something weird, it's usually a good
idea to try to reproduce what you see without any of your or 3rd-party
modules compiled in, even if you are sure your module isn't guilty.

-- 
Maxim Dounin
http://nginx.com/support.html

From appa at perusio.net  Sun Jan 20 22:26:22 2013
From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida)
Date: Sun, 20 Jan 2013 23:26:22 +0100
Subject: SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module
In-Reply-To: 
References: 
Message-ID: <878v7n3301.wl%appa@perusio.net>

On 20 Jan 2013 22h45 CET, agentzh at gmail.com wrote:

> Hello!
>
> On Sun, Jan 20, 2013 at 11:45 AM, pgndev wrote:
>> SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 +
>> lua-nginx-module
>>
>
> It is known that ngx_lua does not really work with the SPDY patch
> and ngx_lua is not compatible with Nginx 1.3.9+.

Well, I'm using it with LuaJIT and have no issues so far. I haven't
applied the SPDY patch though.
> I'll update ngx_lua for both the SPDY patch and Nginx 1.3.9+ in the > next few weeks or so. Please stay tuned :) It works for me, so far. --- appa From mdounin at mdounin.ru Sun Jan 20 22:45:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jan 2013 02:45:35 +0400 Subject: Serving rails assets with rails3 In-Reply-To: References: Message-ID: <20130120224534.GD99404@mdounin.ru> Hello! On Thu, Jan 17, 2013 at 08:19:21AM -0700, Chris Brulak wrote: > Is this the right list to ask about issues with the latest SPDY patch? I'm > having a hard time getting rails 3 assets served up over SPDY. > > Will post more details if this is the right list Yes, it's right list (and, BTW, it is explicitly specified at http://nginx.org/patches/spdy/README.txt). -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Mon Jan 21 02:20:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jan 2013 06:20:15 +0400 Subject: [feature] Add skip_location directive to ngx_http_rewrite In-Reply-To: <50F81993.50701@invisilabs.com> References: <50F7CD77.5000702@invisilabs.com> <50F81993.50701@invisilabs.com> Message-ID: <20130121022014.GL99404@mdounin.ru> Hello! On Thu, Jan 17, 2013 at 05:32:35PM +0200, Dmitrii Chekaliuk wrote: > Hmm, you pushed me to rethink this. 
> Actually, the logic needed for my particular task could be done with
> IFs and BREAKs like this:
>
> location / {
>     if ($uri ~ ^/some/path) {
>         if ($request_method ~ ^(GET|HEAD)$) {
>             break;
>             # config 1
>         }
>
>         if ($request_method = "PUT") {
>             break;
>             # config 2
>         }
>     }
>
>     if ($uri ~ ^/some/path/subpath$) {
>         # config 3
>     }
> }

What you need is something like this:

    location / {
        # not marked in your config
    }

    location /some/path {
        # config 1, config 2 based on $request_method - either
        # using if, or limit_except
    }

    location = /some/path/subpath {
        # config 3
    }

Note that nginx configuration is declarative in general, and you may
have better luck not trying to do imperative programming in it with
rewrite module directives.

See also this article with some more examples of how to convert Apache
rewrite rules into nginx configuration properly:

http://nginx.org/en/docs/http/converting_rewrite_rules.html

[...]

-- 
Maxim Dounin
http://nginx.com/support.html

From ru at nginx.com  Mon Jan 21 13:15:30 2013
From: ru at nginx.com (ru at nginx.com)
Date: Mon, 21 Jan 2013 13:15:30 +0000
Subject: [nginx] svn commit: r5011 - trunk/src/http
Message-ID: <20130121131531.20E4D3F9FD5@mail.nginx.com>

Author: ru
Date: 2013-01-21 13:15:29 +0000 (Mon, 21 Jan 2013)
New Revision: 5011

URL: http://trac.nginx.org/nginx/changeset/5011/nginx
Log:
Variables $pipe, $request_length, $time_iso8601, and $time_local.

Log module counterparts are preserved for efficiency.

Based on patch by Kiril Kalchev.
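For context on the commit above: these four variables were previously available only through the log module's own format handling; after this change they behave as ordinary variables and can be used anywhere variables are accepted. A hypothetical configuration sketch (not part of the commit):

```nginx
server {
    listen 8080;

    location /stamp {
        # Illustrative use outside access_log formats:
        add_header X-Time-ISO8601   $time_iso8601;    # current time in ISO 8601 format
        add_header X-Request-Length $request_length;  # bytes received from the client
        add_header X-Pipelined      $pipe;            # "p" if the request was pipelined, "." otherwise
        return 204;
    }
}
```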
Modified: trunk/src/http/ngx_http_variables.c Modified: trunk/src/http/ngx_http_variables.c =================================================================== --- trunk/src/http/ngx_http_variables.c 2013-01-17 09:55:36 UTC (rev 5010) +++ trunk/src/http/ngx_http_variables.c 2013-01-21 13:15:29 UTC (rev 5011) @@ -75,12 +75,16 @@ ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_body_bytes_sent(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_pipe(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_request_completion(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_request_body(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_request_body_file(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_request_length(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_request_time(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_status(ngx_http_request_t *r, @@ -114,6 +118,10 @@ ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_msec(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_time_iso8601(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_time_local(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); /* * TODO: @@ -231,6 +239,9 @@ { ngx_string("body_bytes_sent"), NULL, ngx_http_variable_body_bytes_sent, 0, 0, 0 }, + { ngx_string("pipe"), NULL, ngx_http_variable_pipe, + 0, 0, 0 }, + { ngx_string("request_completion"), NULL, ngx_http_variable_request_completion, 0, 0, 0 }, @@ -243,6 +254,9 @@ 
ngx_http_variable_request_body_file, 0, 0, 0 }, + { ngx_string("request_length"), NULL, ngx_http_variable_request_length, + 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("request_time"), NULL, ngx_http_variable_request_time, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, @@ -297,6 +311,12 @@ { ngx_string("msec"), NULL, ngx_http_variable_msec, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("time_iso8601"), NULL, ngx_http_variable_time_iso8601, + 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + + { ngx_string("time_local"), NULL, ngx_http_variable_time_local, + 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + #if (NGX_HAVE_TCP_INFO) { ngx_string("tcpinfo_rtt"), NULL, ngx_http_variable_tcpinfo, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, @@ -1556,6 +1576,20 @@ static ngx_int_t +ngx_http_variable_pipe(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + v->data = (u_char *) (r->pipeline ? "p" : "."); + v->len = 1; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + + return NGX_OK; +} + + +static ngx_int_t ngx_http_variable_status(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { @@ -1890,6 +1924,27 @@ static ngx_int_t +ngx_http_variable_request_length(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + u_char *p; + + p = ngx_pnalloc(r->pool, NGX_OFF_T_LEN); + if (p == NULL) { + return NGX_ERROR; + } + + v->len = ngx_sprintf(p, "%O", r->request_length) - p; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = p; + + return NGX_OK; +} + + +static ngx_int_t ngx_http_variable_request_time(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { @@ -2033,6 +2088,53 @@ } +static ngx_int_t +ngx_http_variable_time_iso8601(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + u_char *p; + + p = ngx_pnalloc(r->pool, ngx_cached_http_log_iso8601.len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, ngx_cached_http_log_iso8601.data, + ngx_cached_http_log_iso8601.len); + + v->len = 
ngx_cached_http_log_iso8601.len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = p; + + return NGX_OK; +} + + +static ngx_int_t +ngx_http_variable_time_local(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + u_char *p; + + p = ngx_pnalloc(r->pool, ngx_cached_http_log_time.len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, ngx_cached_http_log_time.data, ngx_cached_http_log_time.len); + + v->len = ngx_cached_http_log_time.len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = p; + + return NGX_OK; +} + + void * ngx_http_map_find(ngx_http_request_t *r, ngx_http_map_t *map, ngx_str_t *match) { From sb at waeme.net Mon Jan 21 15:05:54 2013 From: sb at waeme.net (sb at waeme.net) Date: Mon, 21 Jan 2013 15:05:54 +0000 Subject: [nginx] svn commit: r5012 - trunk/auto/cc Message-ID: <20130121150554.DF3BB3F9F35@mail.nginx.com> Author: fabler Date: 2013-01-21 15:05:54 +0000 (Mon, 21 Jan 2013) New Revision: 5012 URL: http://trac.nginx.org/nginx/changeset/5012/nginx Log: Removed redundant variable assignment. Modified: trunk/auto/cc/msvc Modified: trunk/auto/cc/msvc =================================================================== --- trunk/auto/cc/msvc 2013-01-21 13:15:29 UTC (rev 5011) +++ trunk/auto/cc/msvc 2013-01-21 15:05:54 UTC (rev 5012) @@ -73,9 +73,6 @@ # disable logo CFLAGS="$CFLAGS -nologo" - -LINK="\$(CC)" - # the link flags CORE_LINK="$CORE_LINK -link -verbose:lib" From appa at perusio.net Tue Jan 22 10:21:44 2013 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Tue, 22 Jan 2013 11:21:44 +0100 Subject: Transforming SSL server cert and private key in variables. 
Message-ID: <87mww14ix3.wl%appa@perusio.net>

Hello,

I've not yet ventured into Nginx C module coding, but I would like to
know whether changing the current SSL module directives,
ssl_certificate and ssl_certificate_key, so that instead of strings
they can be variables (complex values), is feasible, or whether, due to
the fact that SSL happens below the protocol layer, it is much more
difficult than, for instance, the recent transformation into variables
of the auth_basic module directives?

Thank you,
--- appa

From mdounin at mdounin.ru  Tue Jan 22 11:21:04 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 Jan 2013 15:21:04 +0400
Subject: Transforming SSL server cert and private key in variables.
In-Reply-To: <87mww14ix3.wl%appa@perusio.net>
References: <87mww14ix3.wl%appa@perusio.net>
Message-ID: <20130122112104.GB9787@mdounin.ru>

Hello!

On Tue, Jan 22, 2013 at 11:21:44AM +0100, António P. P. Almeida wrote:

> Hello,
>
> I've not yet ventured into Nginx C module coding, but I would like to
> know whether changing the current SSL module directives,
> ssl_certificate and ssl_certificate_key, so that instead of strings
> they can be variables (complex values), is feasible, or whether, due
> to the fact that SSL happens below the protocol layer, it is much
> more difficult than, for instance, the recent transformation into
> variables of the auth_basic module directives?

It is going to be much more difficult, as you have to reload
certificates and keys into the SSL context before asking OpenSSL to
establish the connection, and you'll likely need at least some caching
layer in place to make things at least somewhat reasonable from a
performance point of view.

Besides that, the only connection-specific info available when
establishing an SSL connection is the remote address (in all cases) and
the server name indicated by the client (in case of SNI).
Which makes it mostly useless, as remote address distinction is mostly
useless (and/or should be done at layer 3), and server{} blocks are
here to handle server name distinction.

-- 
Maxim Dounin
http://nginx.com/support.html

From mdounin at mdounin.ru  Tue Jan 22 12:36:01 2013
From: mdounin at mdounin.ru (mdounin at mdounin.ru)
Date: Tue, 22 Jan 2013 12:36:01 +0000
Subject: [nginx] svn commit: r5013 - trunk/src/http/modules
Message-ID: <20130122123601.B53983F9FD5@mail.nginx.com>

Author: mdounin
Date: 2013-01-22 12:36:00 +0000 (Tue, 22 Jan 2013)
New Revision: 5013

URL: http://trac.nginx.org/nginx/changeset/5013/nginx
Log:
Proxy: fixed proxy_method to always add space.

Before the patch, if proxy_method was specified at http{} level, the
code to add a trailing space wasn't executed, resulting in incorrect
requests to upstream.

Modified:
   trunk/src/http/modules/ngx_http_proxy_module.c

Modified: trunk/src/http/modules/ngx_http_proxy_module.c
===================================================================
--- trunk/src/http/modules/ngx_http_proxy_module.c	2013-01-21 15:05:54 UTC (rev 5012)
+++ trunk/src/http/modules/ngx_http_proxy_module.c	2013-01-22 12:36:00 UTC (rev 5013)
@@ -2353,7 +2353,7 @@
      *     conf->upstream.store_lengths = NULL;
      *     conf->upstream.store_values = NULL;
      *
-     *     conf->method = NULL;
+     *     conf->method = { 0, NULL };
      *     conf->headers_source = NULL;
      *     conf->headers_set_len = NULL;
      *     conf->headers_set = NULL;
@@ -2657,10 +2657,11 @@
 #endif
 
-    if (conf->method.len == 0) {
-        conf->method = prev->method;
+    ngx_conf_merge_str_value(conf->method, prev->method, "");
 
-    } else {
+    if (conf->method.len
+        && conf->method.data[conf->method.len - 1] != ' ')
+    {
         conf->method.data[conf->method.len] = ' ';
         conf->method.len++;
     }

From appa at perusio.net  Tue Jan 22 13:11:59 2013
From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida)
Date: Tue, 22 Jan 2013 14:11:59 +0100
Subject: Transforming SSL server cert and private key in variables.
In-Reply-To: <20130122112104.GB9787@mdounin.ru>
References: <87mww14ix3.wl%appa@perusio.net>
	<20130122112104.GB9787@mdounin.ru>
Message-ID: <87libl4b1c.wl%appa@perusio.net>

On 22 Jan 2013 12h21 CET, mdounin at mdounin.ru wrote:

> Hello!

Hello Maxim,

Thank you for your reply.

> On Tue, Jan 22, 2013 at 11:21:44AM +0100, António P. P. Almeida
> wrote:
>
>> Hello,
>>
>> I've not yet ventured into Nginx C module coding, but I would like
>> to know whether changing the current SSL module directives,
>> ssl_certificate and ssl_certificate_key, so that instead of strings
>> they can be variables (complex values), is feasible, or whether, due
>> to the fact that SSL happens below the protocol layer, it is much
>> more difficult than, for instance, the recent transformation into
>> variables of the auth_basic module directives?
>
> It is going to be much more difficult, as you have to reload
> certificates and keys into the SSL context before asking OpenSSL to
> establish the connection, and you'll likely need at least some
> caching layer in place to make things at least somewhat reasonable
> from a performance point of view.
>
> Besides that, the only connection-specific info available when
> establishing an SSL connection is the remote address (in all cases)
> and the server name indicated by the client (in case of SNI). Which
> makes it mostly useless, as remote address distinction is mostly
> useless (and/or should be done at layer 3), and server{} blocks are
> here to handle server name distinction.

It's precisely for SNI in mass SSL hosting. Wouldn't it be much more
efficient if there was a callback that returned the host (SNI) and
that would select a proper (cert, key) pair, so that instead of
reloading we could proceed without having to reload the config?

I would like something of the kind:

map $sni_host $cert {
    example.net    example.net.pem;
    example.com    example.com.pem;
    ...
}

map $sni_host $privkey {
    example.net    key.example.net.pem;
    example.com    key.example.com.pem;
    ...
}

Then in the server block:

server {
    listen 80;
    listen 443 ssl;
    server_name *.example.*;

    ssl_certificate      $cert;
    ssl_certificate_key  $privkey;

    ...
}

Also this will avoid having a plethora of server {}.

Thank you,
--- appa

From mdounin at mdounin.ru  Tue Jan 22 13:34:40 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 Jan 2013 17:34:40 +0400
Subject: Transforming SSL server cert and private key in variables.
In-Reply-To: <87libl4b1c.wl%appa@perusio.net>
References: <87mww14ix3.wl%appa@perusio.net>
	<20130122112104.GB9787@mdounin.ru>
	<87libl4b1c.wl%appa@perusio.net>
Message-ID: <20130122133440.GH9787@mdounin.ru>

Hello!

On Tue, Jan 22, 2013 at 02:11:59PM +0100, António P. P. Almeida wrote:

> On 22 Jan 2013 12h21 CET, mdounin at mdounin.ru wrote:
>
> > Hello!
>
> Hello Maxim,
>
> Thank you for your reply.
>
> > On Tue, Jan 22, 2013 at 11:21:44AM +0100, António P. P. Almeida
> > wrote:
> >
> >> Hello,
> >>
> >> I've not yet ventured into Nginx C module coding, but I would like
> >> to know whether changing the current SSL module directives,
> >> ssl_certificate and ssl_certificate_key, so that instead of
> >> strings they can be variables (complex values), is feasible, or
> >> whether, due to the fact that SSL happens below the protocol
> >> layer, it is much more difficult than, for instance, the recent
> >> transformation into variables of the auth_basic module directives?
> >
> > It is going to be much more difficult, as you have to reload
> > certificates and keys into the SSL context before asking OpenSSL to
> > establish the connection, and you'll likely need at least some
> > caching layer in place to make things at least somewhat reasonable
> > from a performance point of view.
> >
> > Besides that, the only connection-specific info available when
> > establishing an SSL connection is the remote address (in all cases)
> > and the server name indicated by the client (in case of SNI).
> > Which makes it mostly useless, as remote address distinction is
> > mostly useless (and/or should be done at layer 3), and server{}
> > blocks are here to handle server name distinction.
>
> It's precisely for SNI in mass SSL hosting. Wouldn't it be much more
> efficient if there was a callback that returned the host (SNI) and
> selected the proper (cert, key) pair, so that we could proceed
> without having to reload the config?
>
> I would like something of the kind:
>
> map $sni_host $cert {
>     example.net example.net.pem;
>     example.com example.com.pem;
>     ...
> }
>
> map $sni_host $privkey {
>     example.net key.example.net.pem;
>     example.com key.example.com.pem;
>     ...
> }
>
> Then in the server block:
>
> server {
>     listen 80;
>     listen 443 ssl;
>     server_name *.example.*;
>
>     ssl_certificate $cert;
>     ssl_certificate_key $privkey;
>
>     ...
> }
>
> Also, this will avoid having a plethora of server{} blocks.

As long as there is a cache layer to avoid re-reading certs, it might
be efficient enough to be usable. It will require much more code than
just adding variables support though, and things like OCSP stapling
won't be available. Overall, I would recommend using server{} blocks
instead.

--
Maxim Dounin
http://nginx.com/support.html

From valyala at gmail.com Tue Jan 22 16:15:08 2013
From: valyala at gmail.com (Aliaksandr Valialkin)
Date: Tue, 22 Jan 2013 18:15:08 +0200
Subject: Proposal: new caching backend for nginx
Message-ID: 

Hi all,

I'm the author of ybc - a fast in-process key-value caching library -
https://github.com/valyala/ybc . This library has the following
features, which may be essential to nginx:

- compact source code written in C;
- optimized for speed: avoids syscalls, dynamic memory allocations and
  memory copying in hot paths;
- has no external dependencies;
- cross-platform design: all platform-specific code is located in one
  place - https://github.com/valyala/ybc/tree/master/platform .
  Currently only Linux is supported, but other platforms can easily
  be added by implementing the corresponding platform-specific
  functions;
- designed for efficient caching of millions of big objects on both
  SSDs and HDDs; easily deals with huge caches exceeding available
  RAM by multiple orders of magnitude;
- persists cached objects across application restarts;
- provides functionality similar to proxy_cache_lock and
  proxy_cache_use_stale from
  http://nginx.org/en/docs/http/ngx_http_proxy_module.html out of the
  box;
- unlike nginx's file-based cache, is free of additional "cache
  manager" tasks;
- unlike nginx's file-based cache, never exceeds the maximum cache
  size set at startup;
- unlike nginx's file-based cache, stores all cached data in two
  files: one file is used for the index, while the other file is used
  for the cached data. These files aren't vulnerable to
  fragmentation, since their sizes never change after the application
  starts.

Other ybc features can be found at
https://github.com/valyala/ybc/blob/master/README .

The ybc API is located at https://github.com/valyala/ybc/blob/master/ybc.h .

It would be great if somebody familiar with nginx internals, who is
interested in a faster and cleaner caching backend for nginx, could
help me substitute the file-based backend with ybc.

--
Best Regards,
Aliaksandr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Tue Jan 22 18:17:34 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 22 Jan 2013 22:17:34 +0400
Subject: Proposal: new caching backend for nginx
In-Reply-To: 
References: 
Message-ID: <20130122181734.GM9787@mdounin.ru>

Hello!

On Tue, Jan 22, 2013 at 06:15:08PM +0200, Aliaksandr Valialkin wrote:

> Hi all,
>
> I'm the author of ybc - a fast in-process key-value caching library -
> https://github.com/valyala/ybc . This library has the following
> features, which may be essential to nginx:
> - compact source code written in C;
> - optimized for speed.
>   Avoids using syscalls, dynamic memory allocations and memory
>   copying in hot paths;
> - has no external dependencies;
> - cross-platform design: all platform-specific code is located in
>   one place -
>   https://github.com/valyala/ybc/tree/master/platform . Currently
>   only Linux is supported, but other platforms can easily be added
>   by implementing the corresponding platform-specific functions;

Cross-platform with only Linux supported sounds cool. :)

> - designed for efficient caching of millions of big objects on both
>   SSDs and HDDs; easily deals with huge caches exceeding available
>   RAM by multiple orders of magnitude;
> - persists cached objects across application restarts;
> - provides functionality similar to proxy_cache_lock and
>   proxy_cache_use_stale from
>   http://nginx.org/en/docs/http/ngx_http_proxy_module.html out of
>   the box;
> - unlike nginx's file-based cache, is free of additional "cache
>   manager" tasks;
> - unlike nginx's file-based cache, never exceeds the maximum cache
>   size set at startup;
> - unlike nginx's file-based cache, stores all cached data in two
>   files: one file is used for the index, while the other file is
>   used for the cached data. These files aren't vulnerable to
>   fragmentation, since their sizes never change after the
>   application starts.
>
> Other ybc features can be found at
> https://github.com/valyala/ybc/blob/master/README .
>
> The ybc API is located at https://github.com/valyala/ybc/blob/master/ybc.h .
>
> It would be great if somebody familiar with nginx internals, who is
> interested in a faster and cleaner caching backend for nginx, could
> help me substitute the file-based backend with ybc.

I don't think that it will fit as a cache store for nginx. In
particular, with a quick look through the sources I don't see any
interface for storing data whose size is not known in advance, which
happens often in the HTTP world. Additionally, it looks like it
doesn't provide async disk IO support.
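A generic sketch of the "spool to a temporary location, then store"
workaround for a size-first cache API. Note the assumptions: the
cache_store() stub below is a placeholder standing in for any store
call that needs the full size up front (such as ybc's set operation);
it is not a real ybc function.

```c
#include <stdio.h>
#include <stdlib.h>

/* Placeholder for a size-first store: the full object size must be
 * known before the call. Not a real ybc function. */
static int cache_store(const char *key, const void *data, size_t size)
{
    (void) key; (void) data; (void) size;
    return 0;
}

/* Spool a stream of unknown length into a temporary file, then store
 * it with the now-known size. Returns the stored size, or -1 on error. */
long spool_and_store(const char *key, FILE *in)
{
    FILE   *tmp;
    char    buf[4096];
    size_t  n;
    long    size;
    void   *data;

    tmp = tmpfile();
    if (tmp == NULL) {
        return -1;
    }

    while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
        if (fwrite(buf, 1, n, tmp) != n) {
            fclose(tmp);
            return -1;
        }
    }

    size = ftell(tmp);          /* total length is known only now */
    rewind(tmp);

    /* second pass: read the spooled copy back and store it */
    data = malloc(size > 0 ? (size_t) size : 1);
    if (data == NULL || fread(data, 1, (size_t) size, tmp) != (size_t) size) {
        free(data);
        fclose(tmp);
        return -1;
    }

    if (cache_store(key, data, (size_t) size) != 0) {
        size = -1;
    }

    free(data);
    fclose(tmp);
    return size;
}
```

The second pass over the spooled data is exactly the extra write/read
work that a size-first interface forces for responses of unknown
length, which is the cost discussed later in this thread.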
--
Maxim Dounin
http://nginx.com/support.html

From agentzh at gmail.com Tue Jan 22 22:56:34 2013
From: agentzh at gmail.com (agentzh)
Date: Tue, 22 Jan 2013 14:56:34 -0800
Subject: [PATCH] Fixing a segmentation fault when resolver and poll are used together
Message-ID: 

Hello!

I've noticed a segmentation fault caught by Valgrind/Memcheck when
using ngx_resolver and ngx_poll_module together with Nginx 1.2.6 on
Linux x86_64 (and i386 too):

==25191== Jump to the invalid address stated on the next line
==25191==    at 0x0: ???
==25191==    by 0x42CC67: ngx_event_process_posted (ngx_event_posted.c:40)
==25191==    by 0x42C7E7: ngx_process_events_and_timers (ngx_event.c:274)
==25191==    by 0x434BCB: ngx_single_process_cycle (ngx_process_cycle.c:315)
==25191==    by 0x416981: main (nginx.c:409)
==25191==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==25191==

It crashes on the following source line in src/event/ngx_event_posted.c:

    ev->handler(ev);

The minimized config sample is as follows:

    events {
        use poll;
    }

    http {
        server {
            listen 8080;

            location /t {
                set $myserver nginx.org;
                proxy_pass http://$myserver/;
                resolver 127.0.0.1;
            }
        }
    }

assuming that there's nothing listening on local port 53.

Basically, when POLLERR happens, ngx_poll_module will generate both a
write event and a read event for the current connection, but there is
no write event handler registered for the UDP connection created in
ngx_resolver.c, thus leading to a call through a NULL function
pointer.

The simplest fix (though maybe not the most reasonable) is to register
an empty write event handler in ngx_resolver.c, as shown in the patch
attached to this mail (already tested on my side).
Best regards,
-agentzh

--- nginx-1.2.6/src/core/ngx_resolver.c	2012-11-12 10:47:07.000000000 -0800
+++ nginx-1.2.6-patched/src/core/ngx_resolver.c	2013-01-22 14:52:42.716434183 -0800
@@ -91,6 +91,7 @@ static void *ngx_resolver_dup(ngx_resolv
 static in_addr_t *ngx_resolver_rotate(ngx_resolver_t *r, in_addr_t *src,
     ngx_uint_t n);
 static u_char *ngx_resolver_log_error(ngx_log_t *log, u_char *buf, size_t len);
+static void ngx_resolver_empty_handler(ngx_event_t *ev);


 ngx_resolver_t *
@@ -2258,6 +2259,8 @@ ngx_udp_connect(ngx_udp_connection_t *uc
     rev->log = &uc->log;
     wev->log = &uc->log;

+    wev->handler = ngx_resolver_empty_handler;
+
     uc->connection = c;

     c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1);
@@ -2311,3 +2314,9 @@ ngx_udp_connect(ngx_udp_connection_t *uc

     return NGX_OK;
 }
+
+
+static void
+ngx_resolver_empty_handler(ngx_event_t *ev)
+{
+}
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx-1.2.6-resolver_wev_handler_segfault_with_poll.patch
Type: application/octet-stream
Size: 890 bytes
Desc: not available
URL: 

From yaoweibin at gmail.com Wed Jan 23 03:46:56 2013
From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=)
Date: Wed, 23 Jan 2013 11:46:56 +0800
Subject: A patch to force the gunzip filter module work
Message-ID: 

Hi,

I have used the gunzip filter module to inflate compressed responses.
This module is very efficient and helps us a lot. But it only works
when the request doesn't send the "Accept-Encoding: gzip" header; if
the client can accept compressed responses, it will not work at all.
I have changed this module and added a gunzip_force directive. Then it
will always inflate the compressed response when the directive is
turned on.

This patch could be helpful for other filter modules, like the ssi
module and the substitute module: it can save bandwidth to backend
servers. In our company, the intranet bandwidth really kills us.

This patch is against the separately distributed gunzip module.
It should be similar to the official Nginx source code. Hope this is
helpful.

--
Weibin Yao
Developer @ Server Platform Team of Taobao
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gunzip_force.patch
Type: application/octet-stream
Size: 1651 bytes
Desc: not available
URL: 

From igor at sysoev.ru Wed Jan 23 08:00:40 2013
From: igor at sysoev.ru (Igor Sysoev)
Date: Wed, 23 Jan 2013 12:00:40 +0400
Subject: Proposal: new caching backend for nginx
In-Reply-To: 
References: 
Message-ID: <6AB81D2B-3181-402B-A9E8-87F97B220AE3@sysoev.ru>

On Jan 22, 2013, at 20:15 , Aliaksandr Valialkin wrote:

> - unlike nginx's file-based cache, stores all cached data in two
> files - one file is used for the index, while the other file is used
> for the cached data. These files aren't vulnerable to fragmentation,
> since their sizes never change after the application starts.

Well, it eliminates cache file fragmentation on the file system, but
how do you eliminate fragmentation of cached objects inside the cache
file?

--
Igor Sysoev
http://nginx.com/support.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ru at nginx.com Wed Jan 23 08:13:41 2013
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 23 Jan 2013 12:13:41 +0400
Subject: [PATCH] Fixing a segmentation fault when resolver and poll are used together
In-Reply-To: 
References: 
Message-ID: <20130123081341.GA9413@lo0.su>

On Tue, Jan 22, 2013 at 02:56:34PM -0800, agentzh wrote:

> I've noticed a segmentation fault caught by Valgrind/Memcheck when
> using ngx_resolver and ngx_poll_module together with Nginx 1.2.6 on
> Linux x86_64 (and i386 too):
>
> ==25191== Jump to the invalid address stated on the next line
> ==25191==    at 0x0: ???
> ==25191==    by 0x42CC67: ngx_event_process_posted (ngx_event_posted.c:40)
> ==25191==    by 0x42C7E7: ngx_process_events_and_timers (ngx_event.c:274)
> ==25191==    by 0x434BCB: ngx_single_process_cycle (ngx_process_cycle.c:315)
> ==25191==    by 0x416981: main (nginx.c:409)
> ==25191==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
> ==25191==
>
> It crashes on the following source line in src/event/ngx_event_posted.c:
>
>     ev->handler(ev);
>
> The minimized config sample is as follows:
>
>     events {
>         use poll;
>     }
>
>     http {
>         server {
>             listen 8080;
>
>             location /t {
>                 set $myserver nginx.org;
>                 proxy_pass http://$myserver/;
>                 resolver 127.0.0.1;
>             }
>         }
>     }
>
> assuming that there's nothing listening on the local port 53.
>
> Basically, when POLLERR happens ngx_poll_module will generate both a
> write event and a read event for the current connection but there is
> no write event handler registered for the UDP connection created in
> ngx_resolver.c, thus leading to calling a NULL function pointer.
>
> The simplest fix (though maybe not the most reasonable) is to register
> an empty write event handler in ngx_resolver.c, as shown in the patch
> attached to this mail (already tested on my side).
>
> Best regards,
> -agentzh

First of all, thanks for catching the bug!
I propose one of these patches instead:

%%%
Index: src/event/modules/ngx_poll_module.c
===================================================================
--- src/event/modules/ngx_poll_module.c	(revision 5013)
+++ src/event/modules/ngx_poll_module.c	(working copy)
@@ -366,7 +366,13 @@ ngx_poll_process_events(ngx_cycle_t *cycle, ngx_ms
              * active handler
              */

-            revents |= POLLIN|POLLOUT;
+            if (c->read->active) {
+                revents |= POLLIN;
+            }
+
+            if (c->write->active) {
+                revents |= POLLOUT;
+            }
         }

         found = 0;
%%%

%%%
Index: src/event/modules/ngx_poll_module.c
===================================================================
--- src/event/modules/ngx_poll_module.c	(revision 5013)
+++ src/event/modules/ngx_poll_module.c	(working copy)
@@ -371,7 +371,7 @@ ngx_poll_process_events(ngx_cycle_t *cycle, ngx_ms

         found = 0;

-        if (revents & POLLIN) {
+        if ((revents & POLLIN) && c->read->active) {
             found = 1;

             ev = c->read;
@@ -388,7 +388,7 @@ ngx_poll_process_events(ngx_cycle_t *cycle, ngx_ms
             ngx_locked_post_event(ev, queue);
         }

-        if (revents & POLLOUT) {
+        if ((revents & POLLOUT) && c->write->active) {
             found = 1;
             ev = c->write;
%%%

While the first patch looks more natural to me, the second patch is in
line with the ngx_epoll_process_events() code.

From appa at perusio.net Wed Jan 23 11:26:13 2013
From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida)
Date: Wed, 23 Jan 2013 12:26:13 +0100
Subject: Transforming SSL server cert and private key in variables.
In-Reply-To: <20130122133440.GH9787@mdounin.ru>
References: <87mww14ix3.wl%appa@perusio.net> <20130122112104.GB9787@mdounin.ru> <87libl4b1c.wl%appa@perusio.net> <20130122133440.GH9787@mdounin.ru>
Message-ID: <87ip6o3zu2.wl%appa@perusio.net>

On 22 Jan 2013 14h34 CET, mdounin at mdounin.ru wrote:

Hello again Maxim,

> Hello!
>
> On Tue, Jan 22, 2013 at 02:11:59PM +0100, António P. P. Almeida
> wrote:
>
>> On 22 Jan 2013 12h21 CET, mdounin at mdounin.ru wrote:
>>
>>> Hello!
>>
>> Hello Maxim,
>>
>> Thank you for your reply.
>>
>>> On Tue, Jan 22, 2013 at 11:21:44AM +0100, António P. P. Almeida
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> I've not yet ventured into Nginx C module coding, but I would
>>>> like to know if changing the current SSL module directives,
>>>> ssl_certificate and ssl_certificate_key, so that instead of
>>>> strings they can be variables (complex values), is feasible, or
>>>> whether, due to the fact that SSL happens below the protocol
>>>> layer, it is much more difficult than, for instance, the recent
>>>> transformation into variables of the auth_basic module
>>>> directives?
>>>
>>> It is going to be much more difficult, as you have to reload
>>> certificates and keys into the SSL context before asking OpenSSL
>>> to establish a connection, and you'll likely need at least some
>>> caching layer in place to make things at least somewhat
>>> reasonable from a performance point of view.
>>>
>>> Besides that, the only connection-specific info available when
>>> establishing an SSL connection is the remote address (in all
>>> cases) and the server name indicated by the client (in case of
>>> SNI). Which makes it mostly useless, as remote address
>>> distinction is mostly useless (and/or should be done at layer 3),
>>> and server{} blocks are here to handle server name distinction.
>>
>> It's precisely for SNI in mass SSL hosting. Wouldn't it be much
>> more efficient if there was a callback that returned the host
>> (SNI) and selected the proper (cert, key) pair, so that we could
>> proceed without having to reload the config?
>>
>> I would like something of the kind:
>>
>> map $sni_host $cert {
>>     example.net example.net.pem;
>>     example.com example.com.pem;
>>     ...
>> }
>>
>> map $sni_host $privkey {
>>     example.net key.example.net.pem;
>>     example.com key.example.com.pem;
>>     ...
>> }
>>
>> Then in the server block:
>>
>> server {
>>     listen 80;
>>     listen 443 ssl;
>>     server_name *.example.*;
>>
>>     ssl_certificate $cert;
>>     ssl_certificate_key $privkey;
>>
>>     ...
>> }
>>
>> Also, this will avoid having a plethora of server{} blocks.
>
> As long as there is a cache layer to avoid re-reading certs, it
> might be efficient enough to be usable. It will require much more
> code than just adding variables support though, and things like
> OCSP stapling won't be available. Overall, I would recommend using
> server{} blocks instead.

Thank you Maxim. So the thing to consider here is to correctly tune
the SSL session cache from the Nginx configuration point of view,
besides the cipher suite and OS tweaking (TCP stack and buffers);
using an http-level shared SSL session cache, of course. Is that about
it, or is there more?

Thank you,
--- appa

From brian at akins.org Wed Jan 23 14:42:22 2013
From: brian at akins.org (Brian Akins)
Date: Wed, 23 Jan 2013 09:42:22 -0500
Subject: Proposal: new caching backend for nginx
In-Reply-To: <20130122181734.GM9787@mdounin.ru>
References: <20130122181734.GM9787@mdounin.ru>
Message-ID: 

On Tue, Jan 22, 2013 at 1:17 PM, Maxim Dounin wrote:

> In particular, with a quick look through the sources I don't see any
> interface to store data with size not known in advance, which
> happens often in the HTTP world.

Would this help? From the docs:

"'Add transaction' support allows constructing an object's value on
the fly without serializing it into a temporary buffer (aka
'zero-copy'). An uncommitted transaction can be rolled back at any
time. Think of videos streamed directly into the cache without the use
of temporary buffers, or complex objects requiring serialization from
multiple distinct places before storing them into the cache."

I haven't looked at the code yet.

From mdounin at mdounin.ru Wed Jan 23 15:14:36 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 Jan 2013 19:14:36 +0400
Subject: Proposal: new caching backend for nginx
In-Reply-To: 
References: <20130122181734.GM9787@mdounin.ru>
Message-ID: <20130123151436.GF27423@mdounin.ru>

Hello!
On Wed, Jan 23, 2013 at 09:42:22AM -0500, Brian Akins wrote:

> On Tue, Jan 22, 2013 at 1:17 PM, Maxim Dounin wrote:
>
> > In particular, with a quick look through the sources I don't see
> > any interface to store data with size not known in advance, which
> > happens often in the HTTP world.
>
> Would this help? From the docs:
>
> "'Add transaction' support allows constructing an object's value on
> the fly without serializing it into a temporary buffer (aka
> 'zero-copy'). An uncommitted transaction can be rolled back at any
> time. Think of videos streamed directly into the cache without the
> use of temporary buffers, or complex objects requiring serialization
> from multiple distinct places before storing them into the cache."
>
> I haven't looked at the code yet.

I have looked, and the interface (including the transaction one)
requires the size to be provided in advance.

--
Maxim Dounin
http://nginx.com/support.html

From mdounin at mdounin.ru Wed Jan 23 15:55:28 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 Jan 2013 19:55:28 +0400
Subject: [PATCH] Fixing a segmentation fault when resolver and poll are used together
In-Reply-To: <20130123081341.GA9413@lo0.su>
References: <20130123081341.GA9413@lo0.su>
Message-ID: <20130123155528.GG27423@mdounin.ru>

Hello!

On Wed, Jan 23, 2013 at 12:13:41PM +0400, Ruslan Ermilov wrote:

> On Tue, Jan 22, 2013 at 02:56:34PM -0800, agentzh wrote:
>
> > I've noticed a segmentation fault caught by Valgrind/Memcheck when
> > using ngx_resolver and ngx_poll_module together with Nginx 1.2.6 on
> > Linux x86_64 (and i386 too):
> >
> > ==25191== Jump to the invalid address stated on the next line
> > ==25191==    at 0x0: ???
> > ==25191==    by 0x42CC67: ngx_event_process_posted (ngx_event_posted.c:40)
> > ==25191==    by 0x42C7E7: ngx_process_events_and_timers (ngx_event.c:274)
> > ==25191==    by 0x434BCB: ngx_single_process_cycle (ngx_process_cycle.c:315)
> > ==25191==    by 0x416981: main (nginx.c:409)
> > ==25191==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
> > ==25191==
> >
> > It crashes on the following source line in src/event/ngx_event_posted.c:
> >
> >     ev->handler(ev);
> >
> > The minimized config sample is as follows:
> >
> >     events {
> >         use poll;
> >     }
> >
> >     http {
> >         server {
> >             listen 8080;
> >
> >             location /t {
> >                 set $myserver nginx.org;
> >                 proxy_pass http://$myserver/;
> >                 resolver 127.0.0.1;
> >             }
> >         }
> >     }
> >
> > assuming that there's nothing listening on the local port 53.
> >
> > Basically, when POLLERR happens ngx_poll_module will generate both a
> > write event and a read event for the current connection but there is
> > no write event handler registered for the UDP connection created in
> > ngx_resolver.c, thus leading to calling a NULL function pointer.
> >
> > The simplest fix (though maybe not the most reasonable) is to register
> > an empty write event handler in ngx_resolver.c, as shown in the patch
> > attached to this mail (already tested on my side).
> >
> > Best regards,
> > -agentzh
>
> First of all, thanks for catching the bug!
> I propose one of these patches instead:
>
> %%%
> Index: src/event/modules/ngx_poll_module.c
> ===================================================================
> --- src/event/modules/ngx_poll_module.c	(revision 5013)
> +++ src/event/modules/ngx_poll_module.c	(working copy)
> @@ -366,7 +366,13 @@ ngx_poll_process_events(ngx_cycle_t *cycle, ngx_ms
>               * active handler
>               */
>
> -            revents |= POLLIN|POLLOUT;
> +            if (c->read->active) {
> +                revents |= POLLIN;
> +            }
> +
> +            if (c->write->active) {
> +                revents |= POLLOUT;
> +            }
>          }
>
>          found = 0;
> %%%
>
> %%%
> Index: src/event/modules/ngx_poll_module.c
> ===================================================================
> --- src/event/modules/ngx_poll_module.c	(revision 5013)
> +++ src/event/modules/ngx_poll_module.c	(working copy)
> @@ -371,7 +371,7 @@ ngx_poll_process_events(ngx_cycle_t *cycle, ngx_ms
>
>          found = 0;
>
> -        if (revents & POLLIN) {
> +        if ((revents & POLLIN) && c->read->active) {
>              found = 1;
>
>              ev = c->read;
> @@ -388,7 +388,7 @@ ngx_poll_process_events(ngx_cycle_t *cycle, ngx_ms
>              ngx_locked_post_event(ev, queue);
>          }
>
> -        if (revents & POLLOUT) {
> +        if ((revents & POLLOUT) && c->write->active) {
>              found = 1;
>              ev = c->write;
>
> %%%
>
> While the first patch looks more natural to me, the second patch is
> in line with the ngx_epoll_process_events() code.

The second one is also consistent with the rtsig code (and might also
catch other strange cases of unexpected events), so I vote for it.
Please go ahead and commit it.

--
Maxim Dounin
http://nginx.com/support.html

From info at tvdw.eu Wed Jan 23 16:05:53 2013
From: info at tvdw.eu (Tom van der Woerdt)
Date: Wed, 23 Jan 2013 17:05:53 +0100
Subject: Request: upstream via a SOCKS proxy
Message-ID: <51000A61.3080805@tvdw.eu>

Hi,

A project I'm working on has a backend server that, for security
reasons, can only be accessed via a SOCKS4a/SOCKS5 proxy.
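For context, the SOCKS5 CONNECT request for a hostname target really
is small. A client-side sketch following the RFC 1928 byte layout;
this is an illustration only, not proposed nginx code, and the
function name is hypothetical:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Build a SOCKS5 CONNECT request for a hostname target (RFC 1928,
 * ATYP 0x03: domain name). Returns the request length, or -1 if the
 * name is too long or the buffer is too small. A real client must
 * also send the method-selection greeting 0x05 0x01 0x00 first and
 * check the server's replies before streaming data. */
int socks5_connect_request(unsigned char *buf, size_t buflen,
    const char *host, uint16_t port)
{
    size_t hlen = strlen(host);

    if (hlen > 255 || buflen < hlen + 7) {
        return -1;
    }

    buf[0] = 0x05;                                 /* protocol version */
    buf[1] = 0x01;                                 /* CMD: CONNECT */
    buf[2] = 0x00;                                 /* reserved */
    buf[3] = 0x03;                                 /* ATYP: domain name */
    buf[4] = (unsigned char) hlen;                 /* name length */
    memcpy(buf + 5, host, hlen);                   /* the hostname itself */
    buf[5 + hlen] = (unsigned char) (port >> 8);   /* port, network order */
    buf[6 + hlen] = (unsigned char) (port & 0xff);

    return (int) (hlen + 7);
}
```

Because the hostname travels inside the request, the proxy performs
the DNS lookup, which is the "skip the DNS lookup" property discussed
in this thread.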
A frontend server for this project (nginx) has one simple task: to
proxy all incoming connections to the backend server. Right now, nginx
cannot do this, because it has no support for proxying upstream
connections via a SOCKS proxy.

The current temporary workaround is to run another service on the
frontend machine that acts like an HTTP server but proxies the data to
the backend - basically everything I'd like nginx to do. I cannot use
this service as my main frontend, because there are a few other files
that also need to be served.

SOCKS4a and SOCKS5 are really simple protocols: they are basically
just sockets, but with an alternate handshake (skip the DNS lookup and
send the hostname to the socket instead). Since they should be so easy
to implement, I'm requesting support for them on this mailing list. I
was thinking of a config that would look something like this:

upstream backend {
    server hidden_dns.local socks4=127.0.0.1:1234;
}

server {
    location / {
        proxy_pass http://backend;
    }
}

As far as I'm aware, this feature wouldn't break anything, since a
SOCKS connection behaves just like any other normal socket.

Thanks for considering,
Tom van der Woerdt

From mdounin at mdounin.ru Wed Jan 23 16:11:15 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 Jan 2013 20:11:15 +0400
Subject: A patch to force the gunzip filter module work
In-Reply-To: 
References: 
Message-ID: <20130123161115.GH27423@mdounin.ru>

Hello!

On Wed, Jan 23, 2013 at 11:46:56AM +0800, Weibin Yao wrote:

> Hi,
>
> I have used the gunzip filter module to inflate compressed responses.
> This module is very efficient and helps us a lot. But it only works
> when the request doesn't send the "Accept-Encoding: gzip" header; if
> the client can accept compressed responses, it will not work at all.
> I have changed this module and added a gunzip_force directive. Then
> it will always inflate the compressed response when the directive is
> turned on.
> This patch could be helpful for other filter modules, like the ssi
> module and the substitute module: it can save bandwidth to backend
> servers. In our company, the intranet bandwidth really kills us.
>
> This patch is against the separately distributed gunzip module. It
> should be similar to the official Nginx source code. Hope this is
> helpful.

You have probably seen this comment at the top of the gunzip handler:

    /* TODO always gunzip - due to configuration or module request */

While a configuration directive will certainly work, it looks like a
hack (and that's why it wasn't implemented). I tend to think it would
be much better to make things just happen automatically on module
request, much like it's currently done with reading the response body
into memory once r->filter_need_in_memory is set by any filter.

--
Maxim Dounin
http://nginx.com/support.html

From valyala at gmail.com Wed Jan 23 17:27:42 2013
From: valyala at gmail.com (Aliaksandr Valialkin)
Date: Wed, 23 Jan 2013 19:27:42 +0200
Subject: Proposal: new caching backend for nginx
Message-ID: 

On Wed, Jan 23, 2013 at 5:47 AM, Maxim Dounin wrote:

> I don't think that it will fit as a cache store for nginx. In
> particular, with a quick look through the sources I don't see any
> interface to store data with size not known in advance, which
> happens often in the HTTP world.

Yes, ybc doesn't allow storing data with size not known in advance,
due to performance and architectural reasons. There are several
workarounds for this problem:

- objects with unknown sizes may be streamed into a temporary location
  before storing them into ybc;
- objects with unknown sizes may be cached in ybc using fixed-size
  chunks, except for the last chunk, which may have a smaller size.
Here is pseudo-code for the chunked approach:

store_stream_to_ybc(stream, key, chunk_size) {
    for (n = 0; ; n++) {
        key_for_chunk = get_key_for_chunk(key, n);
        chunk_txn = start_ybc_set_txn(key_for_chunk, chunk_size);
        bytes_copied = copy_stream_to_set_txn(stream, chunk_txn);
        if (bytes_copied == -1) {
            // error occurred when copying data to chunk_txn.
            rollback_set_txn(chunk_txn);
            return ERROR;
        }
        if (bytes_copied < chunk_size) {
            // the last chunk reached. Copy it again, since we know its size now.
            last_chunk_txn = start_ybc_set_txn(key_for_chunk, bytes_copied);
            copy_data(last_chunk_txn, chunk_txn, bytes_copied);
            rollback_set_txn(chunk_txn);
            commit_set_txn(last_chunk_txn);
            return SUCCESS;
        }
        // there is more data in the stream.
        commit_set_txn(chunk_txn);
    }
}

> Additionally, it looks like it doesn't provide async disk IO
> support.

Ybc works with memory-mapped files; it doesn't use disk I/O directly.
Disk I/O may be triggered if a given memory page is missing from RAM.
It's possible to determine whether a given virtual memory location is
cached in RAM or not - OSes provide special syscalls for this case,
for example mincore(2) on Linux. But I think it's better to rely on
the caching mechanisms the OS provides for memory-mapped files than to
use such syscalls directly. Ybc may block an nginx worker when reading
swapped-out memory pages, but this should be a rare event if
frequently accessed cached objects fit in RAM.

Also, as I understood from http://www.aosabook.org/en/nginx.html ,
nginx currently may block on disk I/O too:

> One major problem that the developers of nginx will be solving in
> upcoming versions is how to avoid most of the blocking on disk I/O.
> At the moment, if there's not enough storage performance to serve
> disk operations generated by a particular worker, that worker may
> still block on reading/writing from disk.

--
Best Regards,
Aliaksandr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From valyala at gmail.com Wed Jan 23 17:40:29 2013
From: valyala at gmail.com (Aliaksandr Valialkin)
Date: Wed, 23 Jan 2013 19:40:29 +0200
Subject: nginx-devel Digest, Vol 39, Issue 27
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jan 23, 2013 at 2:00 PM, Igor Sysoev wrote:

> > - unlike nginx's file-based cache, stores all cached data in two
> > files - one file is used for the index, while the other file is
> > used for the cached data. These files aren't vulnerable to
> > fragmentation, since their sizes never change after the
> > application starts.
>
> Well, it eliminates cache file fragmentation on the file system, but
> how do you eliminate fragmentation of cached objects inside the
> cache file?

Cached object fragmentation is eliminated by the following mechanisms
built into ybc:

- cached object data always occupies a single contiguous memory
  region;
- cached objects are stored in a circular log, so newly added objects
  may only overwrite the oldest objects stored in the log. This way
  the log cannot contain holes (except for stale, rolled back or
  overwritten objects);
- frequently requested objects with sizes smaller than a given
  threshold are periodically moved to the log's head. This guarantees
  that small hot objects will always be packed into the smallest
  possible memory region located near the log's head. Big objects
  with sizes exceeding multiple memory pages aren't moved, because
  there is not much sense in defragmenting them.

--
Best Regards,
Aliaksandr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Wed Jan 23 18:19:49 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 Jan 2013 22:19:49 +0400
Subject: Proposal: new caching backend for nginx
In-Reply-To: 
References: 
Message-ID: <20130123181949.GJ27423@mdounin.ru>

Hello!
On Wed, Jan 23, 2013 at 07:27:42PM +0200, Aliaksandr Valialkin wrote:

> On Wed, Jan 23, 2013 at 5:47 AM, Maxim Dounin wrote:
>
> > I don't think that it will fit as a cache store for nginx. In
> > particular, with a quick look through the sources I don't see any
> > interface to store data with size not known in advance, which
> > happens often in the HTTP world.
>
> Yes, ybc doesn't allow storing data with size not known in advance
> due to performance and architectural reasons. There are several
> workarounds for this problem:
> - objects with unknown sizes may be streamed into a temporary
> location before storing them into ybc.

This will effectively double the work needed to store a response, and
writing to the cache is already one of the major performance
bottlenecks on many setups.

> - objects with unknown sizes may be cached in ybc using fixed-size
> chunks, except for the last chunk, which may have a smaller size.
> Here is a

Thus effectively reinventing what is known as blocks in the filesystem
world, doing all the maintenance work by hand.

Actually, this is the basic problem with phk@'s approach (with all
respect to phk@): reinventing the filesystem.

[...]

> > Additionally, it looks like it doesn't provide async disk IO
> > support.
>
> Ybc works with memory-mapped files; it doesn't use disk I/O
> directly. Disk I/O may be triggered if a given memory page is
> missing from RAM. It's possible to determine whether a given virtual
> memory location is cached in RAM or not - OSes provide special
> syscalls for this case, for example mincore(2) on Linux. But I think
> it's better to rely on the caching mechanisms the OS provides for
> memory-mapped files than to use such syscalls directly. Ybc may
> block an nginx worker when reading swapped-out memory pages, but
> this should be a rare event if frequently accessed cached objects
> fit in RAM.

The key words are "if ... cached objects fit RAM".
But bad things happen if they aren't, and that's why support for AIO was introduced in nginx 0.8.11 several years ago. Surprisingly enough, it just works for cache without any special code - because it's just files. (Well, not exactly, there is special code to handle async reading of a response header. But it's rather an addition to the normal AIO code.) And another major problem with mmap is that it doesn't tolerate IO errors, and if e.g. the disk is unable to read a particular block - you'll end up with SIGBUS to the whole process, leaving it no possibility to recover, instead of just an error returned for a single read() operation. While this may be acceptable in many use cases, it is really bad for an event-based server with thousands of clients served within a single process. > Also as I understood from the http://www.aosabook.org/en/nginx.html , nginx > currently may block on disk I/O too: > > > One major problem that the developers of nginx will be solving in > upcoming versions is > > how to avoid most of the blocking on disk I/O. At the moment, if there's > not enough > > storage performance to serve disk operations generated by a particular > worker, that > > worker may still block on reading/writing from disk. Yep, not all OSes have AIO support (and some, like Linux, require you to choose just one option, VM cache or AIO), and even on systems with a good one (FreeBSD, notably) there are still operations like open() and stat(), which don't have async counterparts at all but still may block. That's why we are experimenting with various ways to make things better. But it's all about improving async IO support, not about dropping it.
-- Maxim Dounin http://nginx.com/support.html From andrew at nginx.com Wed Jan 23 19:33:03 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 23 Jan 2013 23:33:03 +0400 Subject: Proposal: new caching backend for nginx In-Reply-To: <20130123181949.GJ27423@mdounin.ru> References: <20130123181949.GJ27423@mdounin.ru> Message-ID: <967B9AE4-6D93-4D66-99EC-C5E2AB06A7C4@nginx.com> On Jan 23, 2013, at 10:19 PM, Maxim Dounin wrote: > Hello! > > On Wed, Jan 23, 2013 at 07:27:42PM +0200, Aliaksandr Valialkin wrote: > >> On Wed, Jan 23, 2013 at 5:47 AM, Maxim Dounin wrote: >> >> >>> I don't think that it will fit as a cache store for nginx. In >>> particular, with quick look through sources I don't see any >>> interface to store data with size not known in advance, which >>> happens often in HTTP world. >> >> >> Yes, ybc doesn't allow storing data with size not known in advance due to >> performance and architectural reasons. There are several workarounds for >> this problem: >> - objects with unknown sizes may be streamed into a temporary location >> before storing them into ybc. > > This will effectively double the work needed to store a response, and > writing to the cache is already one of the major performance bottlenecks > on many setups. > >> - objects with unknown sizes may be cached in ybc using fixed-sized chunks, >> except for the last chunk, which may have smaller size. Here is a > > Thus effectively reinventing what is known as blocks in the > filesystem world, doing all the maintenance work by hand. > > Actually, this is the basic problem with phk@'s approach (with all > respect to phk@): reinventing the filesystem. I personally don't think reinventing a filesystem is always bad :) Generic filesystems might not be that suitable for high-load (caching/streaming) scenarios. A better question might be - what filesystem architecture/concept fits best. One option would be to bypass the VM layer completely and work with raw block devices, right? > [...]
> >>> Additionally, it looks like it >>> doesn't provide async disk IO support. >> >> Ybc works with memory mapped files. It doesn't use disk I/O directly. Disk >> I/O may be triggered if the given memory page is missing in RAM. It's >> possible to determine whether the given virtual memory location is cached >> in RAM or not - OSes provide special syscalls designed for this case - for >> example, mincore(2) in linux. But I think it's better relying on caching >> mechanisms provided by OS for memory mapped files than using such syscalls >> directly. Ybc may block nginx worker when reading swapped out memory pages, >> but this should be rare event if frequently accessed cached objects fit RAM. > > The key words are "if ... cached objects fit RAM". > > But bad things happen if they aren't, and that's why support for > AIO was introduced in nginx 0.8.11 several years ago. > Surprisingly enough, it just works for cache without any special > code - because it's just files. > > (Well, not exactly, there is special code to handle async reading > of a response header. But it's rather addition to normal AIO > code.) > > And another major problem with mmap is that it doesn't tolerate IO > errors, and if e.g. disk is unable to read a particular block - > you'll end up with SIGBUS to the whole process leaving it no > possibilities to recover instead of just an error returned for a > single read() operation. While this may be acceptable in many use > cases, it is really bad for an event-based server with thousands > of clients served within a single process. > >> Also as I understood from the http://www.aosabook.org/en/nginx.html , nginx >> currently may block on disk I/O too: >> >>> One major problem that the developers of nginx will be solving in >>> upcoming versions is how to avoid most of the blocking on disk I/O. 
>>> At the moment, if there's not enough >>> storage performance to serve disk operations generated by a particular >>> worker, that >>> worker may still block on reading/writing from disk. As the author of that chapter I can't help mentioning that this is an excerpt :) Next phrase is actually: "A number of mechanisms and configuration file directives exist to mitigate such disk I/O blocking scenarios. Most notably, combinations of options like sendfile and AIO typically produce a lot of headroom for disk performance. An nginx installation should be planned based on the data set, the amount of memory available for nginx, and the underlying storage architecture." > Yep, not all OSes have AIO support (and some, like Linux, > require you to choose just one option, VM cache or AIO), and even on > systems with good one (FreeBSD, notably) there are still > operations like open() and stat(), which doesn't have async > counterparts at all, but still may block. That's why we are > experimenting with various ways to make things better. But it's > all about improving async IO support, not about dropping it. > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From agentzh at gmail.com Wed Jan 23 20:16:59 2013 From: agentzh at gmail.com (agentzh) Date: Wed, 23 Jan 2013 12:16:59 -0800 Subject: [PATCH] Fixing a segmentation fault when resolver and poll are used together In-Reply-To: <20130123081341.GA9413@lo0.su> References: <20130123081341.GA9413@lo0.su> Message-ID: Hello! On Wed, Jan 23, 2013 at 12:13 AM, Ruslan Ermilov wrote: > > While the first patch looks more natural to me, the second patch is > in line with the ngx_epoll_process_events() code. > Yes, your patch is more reasonable :) Thanks! 
-agentzh From igor at sysoev.ru Thu Jan 24 07:50:49 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 24 Jan 2013 11:50:49 +0400 Subject: ybc (was: nginx-devel Digest, Vol 39, Issue 27) In-Reply-To: References: Message-ID: <4FF7F144-068D-461B-A380-D06A0B0DB381@sysoev.ru> On Jan 23, 2013, at 21:40 , Aliaksandr Valialkin wrote: > On Wed, Jan 23, 2013 at 2:00 PM, Igor Sysoev wrote: > > - unlike nginx's file-based cache, stores all cached data in two files - one file is used for index, while the other file is used for cached data. These files aren't vulnerable to fragmentation, since their sizes never change after the application start. > > Well it eliminates the cache file fragmentation on file system, but how do you eliminate > cached objects fragmentation inside the cache file? > > Cached object fragmentation is eliminated by the following mechanisms built into ybc: > - cached object data always occupies a single contiguous memory region. > - cached objects are stored in a circular log, so new objects may only overwrite the oldest objects stored in the log. This way the log cannot contain holes (except for stale, rolled back or overwritten objects). These holes mean that the algorithm has sources of fragmentation. > - frequently requested objects with sizes smaller than the given threshold are periodically moved into the log's head. This guarantees that small hot objects will always be packed into the smallest possible memory region located near the log's head. Big objects with sizes exceeding multiple memory pages aren't moved, because there is not much sense in defragmenting them. What will happen with hot ~100K size objects? Will they eventually be overwritten with some new objects, probably requested only once? -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed...
URL: From yaoweibin at gmail.com Thu Jan 24 10:47:03 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Thu, 24 Jan 2013 18:47:03 +0800 Subject: A patch to force the gunzip filter module work In-Reply-To: <20130123161115.GH27423@mdounin.ru> References: <20130123161115.GH27423@mdounin.ru> Message-ID: Thanks, Maxim. I know what you mean. But there may be some problems with the order of filter modules. If a filter module's header filter function sets the filter_need_in_memory flag, it should be run before the gunzip module. And this filter module indeed needs to parse the response after the gunzip module in its body filter function. Could this be a problem? 2013/1/24 Maxim Dounin > Hello! > > On Wed, Jan 23, 2013 at 11:46:56AM +0800, Weibin Yao wrote: > > > Hi, > > > > I have used the gunzip filter module to inflate the compressed response. > > This module is very efficient and helps us a lot. But it just works when > > the request doesn't send the Accept-Encoding: gzip header. If the client > can > > accept compressed response, it will not work at all. I have changed this > > module and added a gunzip_force directive. Then it will always inflate > the > > compressed response when the directive is turned on. > > > > This patch could be helpful for other filter modules, like ssi module and > > substitute module etc. It can save the bandwidth with backend servers. In > > our company, the intranet bandwidth really kills us. > > > > This patch is from the separated gunzip module. It should be similar to > > the Nginx official source code. Hope this is helpful. > > You have probably seen this comment at the top of the gunzip handler: > > /* TODO always gunzip - due to configuration or module request */ > > While a configuration directive certainly will work, it looks > like a hack (and that's why it wasn't implemented).
I tend to > think it would be much better to make things just happen > automatically on module request, much like it's currently done > with reading response body into memory once > r->filter_need_in_memory is set by any filter. > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jan 24 11:23:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Jan 2013 15:23:18 +0400 Subject: A patch to force the gunzip filter module work In-Reply-To: References: <20130123161115.GH27423@mdounin.ru> Message-ID: <20130124112318.GA40753@mdounin.ru> Hello! On Thu, Jan 24, 2013 at 06:47:03PM +0800, Weibin Yao wrote: > Thanks, Maxim. I know what you mean. But there may be some problems with > the order of filter modules. If a filter module's header filter function > sets the filter_need_in_memory flag, it should be run before the gunzip > module. And this filter module indeed needs to parse the response after the > gunzip module in its body filter function. Could this be a problem? Yes, as some headers have to be modified by gunzip - and this in turn should happen before (at least some of) other filters. This needs detailed investigation; probably an approach similar to the range filter will work here (i.e. separate header and body filters). > > 2013/1/24 Maxim Dounin > > > Hello! > > > > On Wed, Jan 23, 2013 at 11:46:56AM +0800, Weibin Yao wrote: > > > > > Hi, > > > > > > I have used the gunzip filter module to inflate the compressed response. > > > This module is very efficient and helps us a lot. But it just works when > > > the request doesn't send the Accept-Encoding: gzip header. If the client > can > > > accept compressed response, it will not work at all.
I have changed this > > > module and added a gunzip_force directive. Then it will always inflate > > the > > > compressed response when the directive is turned on. > > > > > > This patch could be helpful for other filter modules, like ssi module and > > > substitute module etc. It can save the bandwidth with backend servers. In > > > our company, the intranet bandwidth really kills us. > > > > > > This patch is from the separated gunzip module. It should be similar with > > > the Nginx official source code. Hope this be helpful. > > > > You probably seen this comment at the top of gunzip handler: > > > > /* TODO always gunzip - due to configuration or module request */ > > > > While configuration directive certainly will work, but it looks > > like a hack (and that's why it wasn't implemented). I tend to > > think it would be much better to make things just happen > > automatically on module request, much like it's currently done > > with reading response body into memory once > > r->filter_need_in_memory is set by any filter. > > > > -- > > Maxim Dounin > > http://nginx.com/support.html > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.com/support.html From ru at nginx.com Thu Jan 24 16:14:12 2013 From: ru at nginx.com (ru at nginx.com) Date: Thu, 24 Jan 2013 16:14:12 +0000 Subject: [nginx] svn commit: r5014 - in trunk/auto/lib: geoip libgd Message-ID: <20130124161412.A928F3F9C44@mail.nginx.com> Author: ru Date: 2013-01-24 16:14:12 +0000 (Thu, 24 Jan 2013) New Revision: 5014 URL: http://trac.nginx.org/nginx/changeset/5014/nginx Log: Configure: fixed style of include directories. 
Modified: trunk/auto/lib/geoip/conf trunk/auto/lib/libgd/conf Modified: trunk/auto/lib/geoip/conf =================================================================== --- trunk/auto/lib/geoip/conf 2013-01-22 12:36:00 UTC (rev 5013) +++ trunk/auto/lib/geoip/conf 2013-01-24 16:14:12 UTC (rev 5014) @@ -34,7 +34,7 @@ # NetBSD port ngx_feature="GeoIP library in /usr/pkg/" - ngx_feature_path="/usr/pkg/include/" + ngx_feature_path="/usr/pkg/include" if [ $NGX_RPATH = YES ]; then ngx_feature_libs="-R/usr/pkg/lib -L/usr/pkg/lib -lGeoIP" Modified: trunk/auto/lib/libgd/conf =================================================================== --- trunk/auto/lib/libgd/conf 2013-01-22 12:36:00 UTC (rev 5013) +++ trunk/auto/lib/libgd/conf 2013-01-24 16:14:12 UTC (rev 5014) @@ -35,7 +35,7 @@ # NetBSD port ngx_feature="GD library in /usr/pkg/" - ngx_feature_path="/usr/pkg/include/" + ngx_feature_path="/usr/pkg/include" if [ $NGX_RPATH = YES ]; then ngx_feature_libs="-R/usr/pkg/lib -L/usr/pkg/lib -lgd" From ru at nginx.com Thu Jan 24 16:15:07 2013 From: ru at nginx.com (ru at nginx.com) Date: Thu, 24 Jan 2013 16:15:07 +0000 Subject: [nginx] svn commit: r5015 - trunk/auto/lib/geoip Message-ID: <20130124161507.759013F9C4E@mail.nginx.com> Author: ru Date: 2013-01-24 16:15:07 +0000 (Thu, 24 Jan 2013) New Revision: 5015 URL: http://trac.nginx.org/nginx/changeset/5015/nginx Log: Configure: fixed GeoIP library detection. 
Modified: trunk/auto/lib/geoip/conf Modified: trunk/auto/lib/geoip/conf =================================================================== --- trunk/auto/lib/geoip/conf 2013-01-24 16:14:12 UTC (rev 5014) +++ trunk/auto/lib/geoip/conf 2013-01-24 16:15:07 UTC (rev 5015) @@ -6,7 +6,7 @@ ngx_feature="GeoIP library" ngx_feature_name= ngx_feature_run=no - ngx_feature_incs= + ngx_feature_incs="#include " ngx_feature_path= ngx_feature_libs="-lGeoIP" ngx_feature_test="GeoIP_open(NULL, 0)" @@ -18,6 +18,7 @@ # FreeBSD port ngx_feature="GeoIP library in /usr/local/" + ngx_feature_path="/usr/local/include" if [ $NGX_RPATH = YES ]; then ngx_feature_libs="-R/usr/local/lib -L/usr/local/lib -lGeoIP" @@ -64,6 +65,8 @@ if [ $ngx_found = yes ]; then + + CORE_INCS="$CORE_INCS $ngx_feature_path" CORE_LIBS="$CORE_LIBS $ngx_feature_libs" else From ru at nginx.com Thu Jan 24 16:15:52 2013 From: ru at nginx.com (ru at nginx.com) Date: Thu, 24 Jan 2013 16:15:52 +0000 Subject: [nginx] svn commit: r5016 - in trunk: auto/lib/geoip src/http/modules Message-ID: <20130124161552.849F23F9F9F@mail.nginx.com> Author: ru Date: 2013-01-24 16:15:51 +0000 (Thu, 24 Jan 2013) New Revision: 5016 URL: http://trac.nginx.org/nginx/changeset/5016/nginx Log: GeoIP: IPv6 support. When using IPv6 databases, IPv4 addresses are looked up as IPv4-mapped IPv6 addresses. Mostly based on a patch by Gregor Kali?\197?\161nik (ticket #250). 
Modified: trunk/auto/lib/geoip/conf trunk/src/http/modules/ngx_http_geoip_module.c Modified: trunk/auto/lib/geoip/conf =================================================================== --- trunk/auto/lib/geoip/conf 2013-01-24 16:15:07 UTC (rev 5015) +++ trunk/auto/lib/geoip/conf 2013-01-24 16:15:51 UTC (rev 5016) @@ -69,6 +69,18 @@ CORE_INCS="$CORE_INCS $ngx_feature_path" CORE_LIBS="$CORE_LIBS $ngx_feature_libs" + if [ $NGX_IPV6 = YES ]; then + ngx_feature="GeoIP IPv6 support" + ngx_feature_name="NGX_HAVE_GEOIP_V6" + ngx_feature_run=no + ngx_feature_incs="#include + #include " + #ngx_feature_path= + #ngx_feature_libs= + ngx_feature_test="printf(\"%d\", GEOIP_CITY_EDITION_REV0_V6);" + . auto/feature + fi + else cat << END Modified: trunk/src/http/modules/ngx_http_geoip_module.c =================================================================== --- trunk/src/http/modules/ngx_http_geoip_module.c 2013-01-24 16:15:07 UTC (rev 5015) +++ trunk/src/http/modules/ngx_http_geoip_module.c 2013-01-24 16:15:51 UTC (rev 5016) @@ -13,12 +13,22 @@ #include +#define NGX_GEOIP_COUNTRY_CODE 0 +#define NGX_GEOIP_COUNTRY_CODE3 1 +#define NGX_GEOIP_COUNTRY_NAME 2 + + typedef struct { GeoIP *country; GeoIP *org; GeoIP *city; ngx_array_t *proxies; /* array of ngx_cidr_t */ ngx_flag_t proxy_recursive; +#if (NGX_HAVE_GEOIP_V6) + unsigned country_v6:1; + unsigned org_v6:1; + unsigned city_v6:1; +#endif } ngx_http_geoip_conf_t; @@ -28,10 +38,32 @@ } ngx_http_geoip_var_t; -typedef char *(*ngx_http_geoip_variable_handler_pt)(GeoIP *, u_long addr); +typedef const char *(*ngx_http_geoip_variable_handler_pt)(GeoIP *, + u_long addr); -static u_long ngx_http_geoip_addr(ngx_http_request_t *r, - ngx_http_geoip_conf_t *gcf); + +ngx_http_geoip_variable_handler_pt ngx_http_geoip_country_functions[] = { + GeoIP_country_code_by_ipnum, + GeoIP_country_code3_by_ipnum, + GeoIP_country_name_by_ipnum, +}; + + +#if (NGX_HAVE_GEOIP_V6) + +typedef const char *(*ngx_http_geoip_variable_handler_v6_pt)(GeoIP *, + 
geoipv6_t addr); + + +ngx_http_geoip_variable_handler_v6_pt ngx_http_geoip_country_v6_functions[] = { + GeoIP_country_code_by_ipnum_v6, + GeoIP_country_code3_by_ipnum_v6, + GeoIP_country_name_by_ipnum_v6, +}; + +#endif + + static ngx_int_t ngx_http_geoip_country_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_geoip_org_variable(ngx_http_request_t *r, @@ -138,19 +170,19 @@ { ngx_string("geoip_country_code"), NULL, ngx_http_geoip_country_variable, - (uintptr_t) GeoIP_country_code_by_ipnum, 0, 0 }, + NGX_GEOIP_COUNTRY_CODE, 0, 0 }, { ngx_string("geoip_country_code3"), NULL, ngx_http_geoip_country_variable, - (uintptr_t) GeoIP_country_code3_by_ipnum, 0, 0 }, + NGX_GEOIP_COUNTRY_CODE3, 0, 0 }, { ngx_string("geoip_country_name"), NULL, ngx_http_geoip_country_variable, - (uintptr_t) GeoIP_country_name_by_ipnum, 0, 0 }, + NGX_GEOIP_COUNTRY_NAME, 0, 0 }, { ngx_string("geoip_org"), NULL, ngx_http_geoip_org_variable, - (uintptr_t) GeoIP_name_by_ipnum, 0, 0 }, + 0, 0, 0 }, { ngx_string("geoip_city_continent_code"), NULL, ngx_http_geoip_city_variable, @@ -255,12 +287,68 @@ } +#if (NGX_HAVE_GEOIP_V6) + +static geoipv6_t +ngx_http_geoip_addr_v6(ngx_http_request_t *r, ngx_http_geoip_conf_t *gcf) +{ + ngx_addr_t addr; + ngx_table_elt_t *xfwd; + in_addr_t addr4; + struct in6_addr addr6; + struct sockaddr_in *sin; + struct sockaddr_in6 *sin6; + + addr.sockaddr = r->connection->sockaddr; + addr.socklen = r->connection->socklen; + /* addr.name = r->connection->addr_text; */ + + xfwd = r->headers_in.x_forwarded_for; + + if (xfwd != NULL && gcf->proxies != NULL) { + (void) ngx_http_get_forwarded_addr(r, &addr, xfwd->value.data, + xfwd->value.len, gcf->proxies, + gcf->proxy_recursive); + } + + switch (addr.sockaddr->sa_family) { + + case AF_INET: + /* Produce IPv4-mapped IPv6 address. 
*/ + sin = (struct sockaddr_in *) addr.sockaddr; + addr4 = ntohl(sin->sin_addr.s_addr); + + ngx_memzero(&addr6, sizeof(struct in6_addr)); + addr6.s6_addr[10] = 0xff; + addr6.s6_addr[11] = 0xff; + addr6.s6_addr[12] = addr4 >> 24; + addr6.s6_addr[13] = addr4 >> 16; + addr6.s6_addr[14] = addr4 >> 8; + addr6.s6_addr[15] = addr4; + return addr6; + + case AF_INET6: + sin6 = (struct sockaddr_in6 *) addr.sockaddr; + return sin6->sin6_addr; + + default: + return in6addr_any; + } +} + +#endif + + static ngx_int_t ngx_http_geoip_country_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { - ngx_http_geoip_variable_handler_pt handler = - (ngx_http_geoip_variable_handler_pt) data; + ngx_http_geoip_variable_handler_pt handler = + ngx_http_geoip_country_functions[data]; +#if (NGX_HAVE_GEOIP_V6) + ngx_http_geoip_variable_handler_v6_pt handler_v6 = + ngx_http_geoip_country_v6_functions[data]; +#endif const char *val; ngx_http_geoip_conf_t *gcf; @@ -271,7 +359,13 @@ goto not_found; } +#if (NGX_HAVE_GEOIP_V6) + val = gcf->country_v6 + ? handler_v6(gcf->country, ngx_http_geoip_addr_v6(r, gcf)) + : handler(gcf->country, ngx_http_geoip_addr(r, gcf)); +#else val = handler(gcf->country, ngx_http_geoip_addr(r, gcf)); +#endif if (val == NULL) { goto not_found; @@ -297,9 +391,6 @@ ngx_http_geoip_org_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { - ngx_http_geoip_variable_handler_pt handler = - (ngx_http_geoip_variable_handler_pt) data; - size_t len; char *val; ngx_http_geoip_conf_t *gcf; @@ -310,7 +401,15 @@ goto not_found; } - val = handler(gcf->org, ngx_http_geoip_addr(r, gcf)); +#if (NGX_HAVE_GEOIP_V6) + val = gcf->org_v6 + ? 
GeoIP_name_by_ipnum_v6(gcf->org, + ngx_http_geoip_addr_v6(r, gcf)) + : GeoIP_name_by_ipnum(gcf->org, + ngx_http_geoip_addr(r, gcf)); +#else + val = GeoIP_name_by_ipnum(gcf->org, ngx_http_geoip_addr(r, gcf)); +#endif if (val == NULL) { goto not_found; @@ -500,7 +599,15 @@ gcf = ngx_http_get_module_main_conf(r, ngx_http_geoip_module); if (gcf->city) { +#if (NGX_HAVE_GEOIP_V6) + return gcf->city_v6 + ? GeoIP_record_by_ipnum_v6(gcf->city, + ngx_http_geoip_addr_v6(r, gcf)) + : GeoIP_record_by_ipnum(gcf->city, + ngx_http_geoip_addr(r, gcf)); +#else return GeoIP_record_by_ipnum(gcf->city, ngx_http_geoip_addr(r, gcf)); +#endif } return NULL; @@ -603,6 +710,13 @@ return NGX_CONF_OK; +#if (NGX_HAVE_GEOIP_V6) + case GEOIP_COUNTRY_EDITION_V6: + + gcf->country_v6 = 1; + return NGX_CONF_OK; +#endif + default: ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid GeoIP database \"%V\" type:%d", @@ -654,6 +768,16 @@ return NGX_CONF_OK; +#if (NGX_HAVE_GEOIP_V6) + case GEOIP_ISP_EDITION_V6: + case GEOIP_ORG_EDITION_V6: + case GEOIP_DOMAIN_EDITION_V6: + case GEOIP_ASNUM_EDITION_V6: + + gcf->org_v6 = 1; + return NGX_CONF_OK; +#endif + default: ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid GeoIP database \"%V\" type:%d", @@ -703,6 +827,14 @@ return NGX_CONF_OK; +#if (NGX_HAVE_GEOIP_V6) + case GEOIP_CITY_EDITION_REV0_V6: + case GEOIP_CITY_EDITION_REV1_V6: + + gcf->city_v6 = 1; + return NGX_CONF_OK; +#endif + default: ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid GeoIP City database \"%V\" type:%d", From yaoweibin at gmail.com Fri Jan 25 03:45:32 2013 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Fri, 25 Jan 2013 11:45:32 +0800 Subject: Request: upstream via a SOCKS proxy In-Reply-To: <51000A61.3080805@tvdw.eu> References: <51000A61.3080805@tvdw.eu> Message-ID: I have no idea about the SOCK4a/SOCK5 protocol. Is it similar with the tcp proxy module? 
https://github.com/yaoweibin/nginx_tcp_proxy_module 2013/1/24 Tom van der Woerdt > Hi, > > A project I'm working on has a backend server that, for security reasons, > can only be accessed via a SOCKS4a/SOCKS5 proxy. A frontend server for this > project (nginx) has one simple task: to proxy all incoming connections to > the backend server. > > Right now, nginx cannot do this, because it has no support for proxying > upstream connections via a SOCKS proxy. The current temporary workaround is > to run another service on the frontend machine that acts like a HTTP server > but proxies the data to the backend - basically everything I'd like nginx > to do. I cannot use this service as my main frontend, because there are a > few other files that also need to be served. > > SOCKS4a and SOCKS5 are really easy protocols and are basically just > sockets but with an alternate handshake (skip the DNS lookup, send the > hostname to the socket instead). Since they should be so easy to implement, > I'm requesting that on this mailing list. > > I was thinking of a config file that would look something like this : > > upstream backend { > server hidden_dns.local socks4=127.0.0.1:1234; > } > > server { > location / { > proxy_pass http://backend; > } > } > > As far as I'm aware, this feature wouldn't break anything, since a SOCKS > connections behaves just like any other normal socket. > > Thanks for considering, > Tom van der Woerdt > > > ______________________________**_________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/**mailman/listinfo/nginx-devel > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ru at nginx.com Fri Jan 25 09:59:29 2013 From: ru at nginx.com (ru at nginx.com) Date: Fri, 25 Jan 2013 09:59:29 +0000 Subject: [nginx] svn commit: r5017 - trunk/src/event/modules Message-ID: <20130125095929.1632A3F9F44@mail.nginx.com> Author: ru Date: 2013-01-25 09:59:28 +0000 (Fri, 25 Jan 2013) New Revision: 5017 URL: http://trac.nginx.org/nginx/changeset/5017/nginx Log: Events: fixed null pointer dereference with resolver and poll. A POLLERR signalled by poll() without POLLIN/POLLOUT, as seen on Linux, would generate both read and write events, but there's no write event handler for resolver events. A fix is to only call event handler of an active event. Modified: trunk/src/event/modules/ngx_poll_module.c Modified: trunk/src/event/modules/ngx_poll_module.c =================================================================== --- trunk/src/event/modules/ngx_poll_module.c 2013-01-24 16:15:51 UTC (rev 5016) +++ trunk/src/event/modules/ngx_poll_module.c 2013-01-25 09:59:28 UTC (rev 5017) @@ -371,7 +371,7 @@ found = 0; - if (revents & POLLIN) { + if ((revents & POLLIN) && c->read->active) { found = 1; ev = c->read; @@ -388,7 +388,7 @@ ngx_locked_post_event(ev, queue); } - if (revents & POLLOUT) { + if ((revents & POLLOUT) && c->write->active) { found = 1; ev = c->write; From ru at nginx.com Fri Jan 25 10:01:42 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 25 Jan 2013 14:01:42 +0400 Subject: [PATCH] Fixing a segmentation fault when resolver and poll are used together In-Reply-To: References: <20130123081341.GA9413@lo0.su> Message-ID: <20130125100142.GF52101@lo0.su> On Wed, Jan 23, 2013 at 12:16:59PM -0800, agentzh wrote: > On Wed, Jan 23, 2013 at 12:13 AM, Ruslan Ermilov wrote: > > > > While the first patch looks more natural to me, the second patch is > > in line with the ngx_epoll_process_events() code. 
> > > > Yes, your patch is more reasonable :) http://trac.nginx.org/nginx/changeset/5017/nginx From info at tvdw.eu Fri Jan 25 11:37:42 2013 From: info at tvdw.eu (Tom van der Woerdt) Date: Fri, 25 Jan 2013 12:37:42 +0100 Subject: Request: upstream via a SOCKS proxy In-Reply-To: References: <51000A61.3080805@tvdw.eu> Message-ID: <51026E86.7000008@tvdw.eu> As far as I know, the tcp proxy module is intended to be a reverse proxy for any tcp connection, while my SOCKS suggestion would be to support forward proxies in proxy_pass, uwsgi_pass, fastcgi_pass, etc. Tom Op 1/25/13 4:45 AM, Weibin Yao schreef: > I have no idea about the SOCKS4a/SOCKS5 protocol. Is it similar to the > tcp proxy module? https://github.com/yaoweibin/nginx_tcp_proxy_module > > 2013/1/24 Tom van der Woerdt > > > Hi, > > A project I'm working on has a backend server that, for security > reasons, can only be accessed via a SOCKS4a/SOCKS5 proxy. A > frontend server for this project (nginx) has one simple task: to > proxy all incoming connections to the backend server. > > Right now, nginx cannot do this, because it has no support for > proxying upstream connections via a SOCKS proxy. The current > temporary workaround is to run another service on the frontend > machine that acts like a HTTP server but proxies the data to the > backend - basically everything I'd like nginx to do. I cannot use > this service as my main > frontend, because there are a few other > files that also need to be served. > > SOCKS4a and SOCKS5 are really easy protocols and are basically > just sockets but with an alternate handshake (skip the DNS lookup, > send the hostname to the socket instead). Since they should be so > easy to implement, I'm requesting that on this mailing list.
> > I was thinking of a config file that would look something like this : > > upstream backend { > server hidden_dns.local socks4=127.0.0.1:1234 > ; > } > > server { > location / { > proxy_pass http://backend; > } > } > > As far as I'm aware, this feature wouldn't break anything, since a > SOCKS connections behaves just like any other normal socket. > > Thanks for considering, > Tom van der Woerdt > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3729 bytes Desc: S/MIME-cryptografische ondertekening URL: From al-nginx at none.at Fri Jan 25 11:57:35 2013 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 25 Jan 2013 12:57:35 +0100 Subject: Request: upstream via a SOCKS proxy In-Reply-To: <51000A61.3080805@tvdw.eu> References: <51000A61.3080805@tvdw.eu> Message-ID: Hi, There are some http2socks proxies out there. http://www.privoxy.org/ http://www.privoxy.org/user-manual/config.html#SOCKS http://www.delegate.org/delegate/ http://www.delegate.org/delegate/Manual.htm#SOCKS http://en.wikipedia.org/wiki/SOCKS#Translating_proxies The setup could look like client -> nginx -> http-proxylistener -> socks-proxyrequester -> socks-server OT: SOCKS5 is not so easy if you want to implement the full protocol, imho. I agree with you that this would be a nice upstream module, even though I don't need it at the moment.
Cheers Aleks Am 23-01-2013 17:05, schrieb Tom van der Woerdt: > Hi, > > A project I'm working on has a backend server that, for security > reasons, can only be accessed via a SOCKS4a/SOCKS5 proxy. A frontend > server for this project (nginx) has one simple task: to proxy all > incoming connections to the backend server. > > Right now, nginx cannot do this, because it has no support for > proxying upstream connections via a SOCKS proxy. The current temporary > workaround is to run another service on the frontend machine that acts > like a HTTP server but proxies the data to the backend - basically > everything I'd like nginx to do. I cannot use this service as my main > frontend, because there are a few other files that also need to be > served. > > SOCKS4a and SOCKS5 are really easy protocols and are basically just > sockets but with an alternate handshake (skip the DNS lookup, send the > hostname to the socket instead). Since they should be so easy to > implement, I'm requesting that on this mailing list. > > I was thinking of a config file that would look something like this : > > upstream backend { > server hidden_dns.local socks4=127.0.0.1:1234; > } > > server { > location / { > proxy_pass http://backend; > } > } > > As far as I'm aware, this feature wouldn't break anything, since a > SOCKS connections behaves just like any other normal socket. > > Thanks for considering, > Tom van der Woerdt > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From info at tvdw.eu Fri Jan 25 12:13:43 2013 From: info at tvdw.eu (Tom van der Woerdt) Date: Fri, 25 Jan 2013 13:13:43 +0100 Subject: Request: upstream via a SOCKS proxy In-Reply-To: References: <51000A61.3080805@tvdw.eu> Message-ID: <510276F7.3020809@tvdw.eu> Yes, I currently use a proxy like that, but it feels like a performance killer to do it like that. If implemented in nginx it could be so much faster. 
About SOCKS implementations: as long as authentication isn't required, the handshake is really, really easy, especially version 4. The lack of a framing protocol makes it behave like any normal socket once the handshake is done. Tom Op 1/25/13 12:57 PM, Aleksandar Lazic schreef: > Hi, > > There are some http2socks proxy out there. > > http://www.privoxy.org/ > http://www.privoxy.org/user-manual/config.html#SOCKS > > http://www.delegate.org/delegate/ > http://www.delegate.org/delegate/Manual.htm#SOCKS > > http://en.wikipedia.org/wiki/SOCKS#Translating_proxies > > The setup coul looks like > > client -> nginx -> http-proxylistener -> socks-proxyrequester -> > socks-server > > OT: Sock5 is not so easy if you want to implement the full protocol, > imho. > > I Agree with you that this would be a nice upsteam module, even that I > don't > need it at the moment. > > Cheers > Aleks > Am 23-01-2013 17:05, schrieb Tom van der Woerdt: >> Hi, >> >> A project I'm working on has a backend server that, for security >> reasons, can only be accessed via a SOCKS4a/SOCKS5 proxy. A frontend >> server for this project (nginx) has one simple task: to proxy all >> incoming connections to the backend server. >> >> Right now, nginx cannot do this, because it has no support for >> proxying upstream connections via a SOCKS proxy. The current temporary >> workaround is to run another service on the frontend machine that acts >> like a HTTP server but proxies the data to the backend - basically >> everything I'd like nginx to do. I cannot use this service as my main >> frontend, because there are a few other files that also need to be >> served. >> >> SOCKS4a and SOCKS5 are really easy protocols and are basically just >> sockets but with an alternate handshake (skip the DNS lookup, send the >> hostname to the socket instead). Since they should be so easy to >> implement, I'm requesting that on this mailing list. 
>> >> I was thinking of a config file that would look something like this : >> >> upstream backend { >> server hidden_dns.local socks4=127.0.0.1:1234; >> } >> >> server { >> location / { >> proxy_pass http://backend; >> } >> } >> >> As far as I'm aware, this feature wouldn't break anything, since a >> SOCKS connections behaves just like any other normal socket. >> >> Thanks for considering, >> Tom van der Woerdt >> >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From info at tvdw.eu Fri Jan 25 21:46:48 2013 From: info at tvdw.eu (Tom van der Woerdt) Date: Fri, 25 Jan 2013 22:46:48 +0100 Subject: Request: upstream via a SOCKS proxy In-Reply-To: <510276F7.3020809@tvdw.eu> References: <51000A61.3080805@tvdw.eu> <510276F7.3020809@tvdw.eu> Message-ID: <5102FD48.40507@tvdw.eu> So I invested some time in this today, and came up with this patch that implements SOCKS5 support. Can anyone suggest improvements? Sample config : upstream backend { server 127.0.0.1:9050 socks=ip4.me; } server { listen 1234; server_name localhost; location / { proxy_pass http://backend; proxy_connect_timeout 5s; proxy_set_header Host ip4.me; } } No DNS lookups are done at nginx, which is proper behavior. 
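[Editorial note: for readers unfamiliar with the wire format, the two client-side messages the patch sends during the handshake can be sketched as plain byte builders. This is an illustration only - the helper names are hypothetical, and the reply parsing done by the patch's read handler is omitted:]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Greeting: version 5, one auth method offered, method 0 (no auth). */
size_t
socks5_greeting(uint8_t *buf)
{
    buf[0] = 0x05;
    buf[1] = 0x01;
    buf[2] = 0x00;
    return 3;
}

/* CONNECT request with ATYP=3 (domain name): the proxy performs the
 * DNS lookup, so nginx never resolves the upstream hostname itself. */
size_t
socks5_connect(uint8_t *buf, size_t buflen, const char *host,
    uint16_t port)
{
    size_t  hlen;

    hlen = strlen(host);

    if (hlen > 255 || buflen < 7 + hlen) {
        return 0;                        /* hostname length is one byte */
    }

    buf[0] = 0x05;                       /* version */
    buf[1] = 0x01;                       /* CONNECT */
    buf[2] = 0x00;                       /* reserved */
    buf[3] = 0x03;                       /* ATYP: domain name */
    buf[4] = (uint8_t) hlen;             /* hostname length */
    memcpy(buf + 5, host, hlen);
    buf[5 + hlen] = (uint8_t) (port >> 8);    /* port, network order */
    buf[6 + hlen] = (uint8_t) (port & 0xff);

    return 7 + hlen;
}
```

[The patch below writes the same bytes directly with c->send(); the proxy's reply to the CONNECT request then has to be read and checked before the buffered HTTP request is forwarded upstream.]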
Index: src/http/ngx_http_upstream_round_robin.c =================================================================== --- src/http/ngx_http_upstream_round_robin.c (revision 5017) +++ src/http/ngx_http_upstream_round_robin.c (working copy) @@ -87,6 +87,12 @@ peers->peer[n].weight = server[i].weight; peers->peer[n].effective_weight = server[i].weight; peers->peer[n].current_weight = 0; + +#if (NGX_HTTP_UPSTREAM_SOCKS) + peers->peer[n].socks = server[i].socks; + peers->peer[n].socks_port = server[i].socks_port; + peers->peer[n].socks_hostname = server[i].socks_hostname; +#endif n++; } } @@ -145,6 +151,12 @@ backup->peer[n].max_fails = server[i].max_fails; backup->peer[n].fail_timeout = server[i].fail_timeout; backup->peer[n].down = server[i].down; + +#if (NGX_HTTP_UPSTREAM_SOCKS) + backup->peer[n].socks = server[i].socks; + backup->peer[n].socks_port = server[i].socks_port; + backup->peer[n].socks_hostname = server[i].socks_hostname; +#endif n++; } } @@ -453,6 +465,12 @@ pc->socklen = peer->socklen; pc->name = &peer->name; +#if (NGX_HTTP_UPSTREAM_SOCKS) + pc->socks = peer->socks; + pc->socks_port = peer->socks_port; + pc->socks_hostname = peer->socks_hostname; +#endif + /* ngx_unlock_mutex(rrp->peers->mutex); */ if (pc->tries == 1 && rrp->peers->next) { Index: src/http/ngx_http_upstream.c =================================================================== --- src/http/ngx_http_upstream.c (revision 5017) +++ src/http/ngx_http_upstream.c (working copy) @@ -146,7 +146,12 @@ static void ngx_http_upstream_ssl_handshake(ngx_connection_t *c); #endif +#if (NGX_HTTP_UPSTREAM_SOCKS) +void ngx_upstream_socks_init_handshake(ngx_http_request_t *, + ngx_http_upstream_t *); +#endif + ngx_http_upstream_header_t ngx_http_upstream_headers_in[] = { { ngx_string("Status"), @@ -1228,6 +1233,15 @@ u->request_sent = 0; +#if (NGX_HTTP_UPSTREAM_SOCKS) + + if (u->peer.socks) { + ngx_upstream_socks_init_handshake(r, u); + return; + } + +#endif + if (rc == NGX_AGAIN) { ngx_add_timer(c->write, 
u->conf->connect_timeout); return; @@ -4131,7 +4145,8 @@ |NGX_HTTP_UPSTREAM_MAX_FAILS |NGX_HTTP_UPSTREAM_FAIL_TIMEOUT |NGX_HTTP_UPSTREAM_DOWN - |NGX_HTTP_UPSTREAM_BACKUP); + |NGX_HTTP_UPSTREAM_BACKUP + |NGX_HTTP_UPSTREAM_SOCKS_FLAG); if (uscf == NULL) { return NGX_CONF_ERROR; } @@ -4334,6 +4349,29 @@ continue; } +#if (NGX_HTTP_UPSTREAM_SOCKS) + if (ngx_strncmp(value[i].data, "socks=", 6) == 0) { + + if (!(uscf->flags & NGX_HTTP_UPSTREAM_SOCKS_FLAG)) { + goto invalid; + } + + if (us->socks) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "duplicate socks proxy"); + + goto invalid; + } + + us->socks = 1; + us->socks_port = 80; + us->socks_hostname.len = value[i].len - 6; + us->socks_hostname.data = value[i].data + 6; + + continue; + } +#endif + goto invalid; } Index: src/http/ngx_http_upstream.h =================================================================== --- src/http/ngx_http_upstream.h (revision 5017) +++ src/http/ngx_http_upstream.h (working copy) @@ -93,6 +93,12 @@ unsigned down:1; unsigned backup:1; + +#if (NGX_HTTP_UPSTREAM_SOCKS) + unsigned socks:1; + in_port_t socks_port; + ngx_str_t socks_hostname; +#endif } ngx_http_upstream_server_t; @@ -102,6 +108,7 @@ #define NGX_HTTP_UPSTREAM_FAIL_TIMEOUT 0x0008 #define NGX_HTTP_UPSTREAM_DOWN 0x0010 #define NGX_HTTP_UPSTREAM_BACKUP 0x0020 +#define NGX_HTTP_UPSTREAM_SOCKS_FLAG 0x0040 struct ngx_http_upstream_srv_conf_s { @@ -332,6 +339,11 @@ unsigned request_sent:1; unsigned header_sent:1; + +#if (NGX_HTTP_UPSTREAM_SOCKS) + ngx_http_upstream_handler_pt next_read_event_handler; + ngx_http_upstream_handler_pt next_write_event_handler; +#endif }; Index: src/http/ngx_http_upstream_round_robin.h =================================================================== --- src/http/ngx_http_upstream_round_robin.h (revision 5017) +++ src/http/ngx_http_upstream_round_robin.h (working copy) @@ -35,6 +35,12 @@ #if (NGX_HTTP_SSL) ngx_ssl_session_t *ssl_session; /* local to a process */ #endif + +#if (NGX_HTTP_UPSTREAM_SOCKS) + 
unsigned socks:1; + in_port_t socks_port; + ngx_str_t socks_hostname; +#endif } ngx_http_upstream_rr_peer_t; Index: src/http/modules/ngx_http_upstream_socks_module.c =================================================================== --- src/http/modules/ngx_http_upstream_socks_module.c (revision 0) +++ src/http/modules/ngx_http_upstream_socks_module.c (revision 0) @@ -0,0 +1,133 @@ + +#include +#include +#include + +static void ngx_upstream_socks_write_handler(ngx_http_request_t *r, + ngx_http_upstream_t *u); +static void ngx_upstream_socks_read_handler(ngx_http_request_t *r, + ngx_http_upstream_t *u); + +void +ngx_upstream_socks_init_handshake(ngx_http_request_t *r, + ngx_http_upstream_t *u) +{ + ngx_connection_t *c; + + c = u->peer.connection; + + u->next_write_event_handler = u->write_event_handler; + u->next_read_event_handler = u->read_event_handler; + + u->write_event_handler = ngx_upstream_socks_write_handler; + u->read_event_handler = ngx_upstream_socks_read_handler; +} + +static void +ngx_upstream_socks_write_handler(ngx_http_request_t *r, + ngx_http_upstream_t *u) +{ + ngx_connection_t *c; + + c = u->peer.connection; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http socks upstream handshake handler"); + + if (c->write->timedout) { + ngx_http_finalize_request(r, NGX_HTTP_GATEWAY_TIME_OUT); + return; + } + + if (!u->peer.socks_handshake1_sent) { + u->peer.socks_handshake1_sent = 1; + + // TODO, this is ugly + u_char handshake[] = {0x05, 0x01, 0x00}; + c->send(c, handshake, 3); + + } else if (u->peer.socks_handshake2_recv && !u->peer.socks_handshake3_sent) { + u_char *handshake; + ngx_uint_t len; + + len = 7 + u->peer.socks_hostname.len; + + handshake = ngx_pnalloc(r->pool, len); + + handshake[0] = 5; // version + handshake[1] = 1; // connect + handshake[2] = 0; + handshake[3] = 3; // specify dns + handshake[4] = (u_char)u->peer.socks_hostname.len; + + // port + *(u_short*)(handshake+len-2) = ntohs(u->peer.socks_port); + + 
ngx_memcpy(handshake+5, u->peer.socks_hostname.data, u->peer.socks_hostname.len); + + c->send(c, handshake, len); + + ngx_pfree(r->pool, handshake); + + u->peer.socks_handshake3_sent = 1; + } +} + +static void +ngx_upstream_socks_read_handler(ngx_http_request_t *r, + ngx_http_upstream_t *u) +{ + ngx_connection_t *c; + + c = u->peer.connection; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http socks upstream handshake recv handler"); + + if (c->read->timedout) { + ngx_http_finalize_request(r, NGX_HTTP_GATEWAY_TIME_OUT); + return; + } + + if (!u->peer.socks_handshake2_recv) { + u_char buf[2]; + + u->peer.socks_handshake2_recv = 1; + c->recv(c, buf, 2); + + if (buf[0] != 5 || buf[1] != 0) { + ngx_http_finalize_request(r, NGX_HTTP_BAD_GATEWAY); + return; + } + + } else if (u->peer.socks_handshake3_sent && !u->peer.socks_handshake4_recv) { + u_char buf[22]; + + c->recv(c, buf, 5); + + if (buf[0] != 5 || buf[1] != 0 || buf[2] != 0) { + ngx_http_finalize_request(r, NGX_HTTP_BAD_GATEWAY); + return; + } + + if (buf[3] == 1) { + c->recv(c, buf+5, 5); + + } else if (buf[3] == 4) { + c->recv(c, buf+5, 17); + + } else if (buf[3] == 3) { + u_char *hostname_and_port = ngx_pnalloc(r->pool, ((size_t)buf[4]) + 2); + c->recv(c, hostname_and_port, ((size_t)buf[4]) + 2); + ngx_pfree(r->pool, hostname_and_port); + } + + u->peer.socks_handshake4_recv = 1; + + // restore previous handlers + u->write_event_handler = u->next_write_event_handler; + u->read_event_handler = u->next_read_event_handler; + + u->write_event_handler(r, u); + } +} Index: src/event/ngx_event_connect.h =================================================================== --- src/event/ngx_event_connect.h (revision 5017) +++ src/event/ngx_event_connect.h (working copy) @@ -52,6 +52,16 @@ ngx_event_save_peer_session_pt save_session; #endif +#if (NGX_HTTP_UPSTREAM_SOCKS) + unsigned socks:1; + unsigned socks_handshake1_sent:1; + unsigned socks_handshake2_recv:1; + unsigned socks_handshake3_sent:1; + 
unsigned socks_handshake4_recv:1; + in_port_t socks_port; + ngx_str_t socks_hostname; +#endif + #if (NGX_THREADS) ngx_atomic_t *lock; #endif Index: auto/options =================================================================== --- auto/options (revision 5017) +++ auto/options (working copy) @@ -99,6 +99,7 @@ HTTP_UPSTREAM_IP_HASH=YES HTTP_UPSTREAM_LEAST_CONN=YES HTTP_UPSTREAM_KEEPALIVE=YES +HTTP_UPSTREAM_SOCKS=NO # STUB HTTP_STUB_STATUS=NO @@ -216,6 +217,8 @@ --with-http_random_index_module) HTTP_RANDOM_INDEX=YES ;; --with-http_secure_link_module) HTTP_SECURE_LINK=YES ;; --with-http_degradation_module) HTTP_DEGRADATION=YES ;; + --with-http_upstream_socks_module) + HTTP_UPSTREAM_SOCKS=YES ;; --without-http_charset_module) HTTP_CHARSET=NO ;; --without-http_gzip_module) HTTP_GZIP=NO ;; @@ -364,6 +367,7 @@ --with-http_secure_link_module enable ngx_http_secure_link_module --with-http_degradation_module enable ngx_http_degradation_module --with-http_stub_status_module enable ngx_http_stub_status_module + --with-http_upstream_socks_module enable SOCKS5 support for upstreams --without-http_charset_module disable ngx_http_charset_module --without-http_gzip_module disable ngx_http_gzip_module Index: auto/modules =================================================================== --- auto/modules (revision 5017) +++ auto/modules (working copy) @@ -365,6 +365,11 @@ HTTP_SRCS="$HTTP_SRCS $HTTP_UPSTREAM_KEEPALIVE_SRCS" fi +if [ $HTTP_UPSTREAM_SOCKS = YES ]; then + have=NGX_HTTP_UPSTREAM_SOCKS . auto/have + HTTP_SRCS="$HTTP_SRCS src/http/modules/ngx_http_upstream_socks_module.c" +fi + if [ $HTTP_STUB_STATUS = YES ]; then have=NGX_STAT_STUB . auto/have HTTP_MODULES="$HTTP_MODULES ngx_http_stub_status_module" Tom Op 1/25/13 1:13 PM, Tom van der Woerdt schreef: > Yes, I currently use a proxy like that, but it feels like a > performance killer to do it like that. If implemented in nginx it > could be so much faster. 
> > About SOCKS implementations: as long as authentication isn't required, > the handshake is really, really easy, especially version 4. The lack > of a framing protocol makes it behave like any normal socket once the > handshake is done. > > Tom > > > Op 1/25/13 12:57 PM, Aleksandar Lazic schreef: >> Hi, >> >> There are some http2socks proxy out there. >> >> http://www.privoxy.org/ >> http://www.privoxy.org/user-manual/config.html#SOCKS >> >> http://www.delegate.org/delegate/ >> http://www.delegate.org/delegate/Manual.htm#SOCKS >> >> http://en.wikipedia.org/wiki/SOCKS#Translating_proxies >> >> The setup coul looks like >> >> client -> nginx -> http-proxylistener -> socks-proxyrequester -> >> socks-server >> >> OT: Sock5 is not so easy if you want to implement the full protocol, >> imho. >> >> I Agree with you that this would be a nice upsteam module, even that >> I don't >> need it at the moment. >> >> Cheers >> Aleks >> Am 23-01-2013 17:05, schrieb Tom van der Woerdt: >>> Hi, >>> >>> A project I'm working on has a backend server that, for security >>> reasons, can only be accessed via a SOCKS4a/SOCKS5 proxy. A frontend >>> server for this project (nginx) has one simple task: to proxy all >>> incoming connections to the backend server. >>> >>> Right now, nginx cannot do this, because it has no support for >>> proxying upstream connections via a SOCKS proxy. The current temporary >>> workaround is to run another service on the frontend machine that acts >>> like a HTTP server but proxies the data to the backend - basically >>> everything I'd like nginx to do. I cannot use this service as my main >>> frontend, because there are a few other files that also need to be >>> served. >>> >>> SOCKS4a and SOCKS5 are really easy protocols and are basically just >>> sockets but with an alternate handshake (skip the DNS lookup, send the >>> hostname to the socket instead). Since they should be so easy to >>> implement, I'm requesting that on this mailing list. 
>>> >>> I was thinking of a config file that would look something like this : >>> >>> upstream backend { >>> server hidden_dns.local socks4=127.0.0.1:1234; >>> } >>> >>> server { >>> location / { >>> proxy_pass http://backend; >>> } >>> } >>> >>> As far as I'm aware, this feature wouldn't break anything, since a >>> SOCKS connections behaves just like any other normal socket. >>> >>> Thanks for considering, >>> Tom van der Woerdt >>> >>> >>> _______________________________________________ >>> nginx-devel mailing list >>> nginx-devel at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From agentzh at gmail.com Sun Jan 27 08:07:37 2013 From: agentzh at gmail.com (agentzh) Date: Sun, 27 Jan 2013 00:07:37 -0800 Subject: [BUG] "client_body_in_file_only on" no longer works with upstream modules in nginx 1.3.9+ Message-ID: Hello! I've noticed that for nginx 1.3.9+, "client_body_in_file_only on" no longer always sets r->request_body->bufs, which makes upstream modules like ngx_proxy send empty request bodies to the backend server. Here's a minimal example that demonstrates the issue: location = /t { client_body_in_file_only on; proxy_pass http://127.0.0.1:1234; } And run nc to listen on the local port 1234: $ nc -l 1234 Then issue a simple POST request to location = /t: $ curl -d 'hello world' localhost/t When using nginx 1.3.9+, we get the raw HTTP request sent by ngx_proxy: $ nc -l 1234 POST /t HTTP/1.0 Host: 127.0.0.1:1234 Connection: close Content-Length: 11 User-Agent: curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0 ... 
Accept: */* Content-Type: application/x-www-form-urlencoded That is, when the request body is completely preread into the client header buffer, the request body will only be hooked into r->request_body->temp_file but not r->request_body->bufs. But when the request body is big enough that it is not completely preread into the client header buffer, then an in-file buf will still be properly inserted into r->request_body->bufs and we can get the expected request body sent from ngx_proxy. And with nginx 1.3.8 (or any earlier versions), we always get the expected request: $ nc -l 1234 POST /t HTTP/1.0 Host: 127.0.0.1:1234 Connection: close User-Agent: curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0 ... Accept: */* Content-Length: 11 Content-Type: application/x-www-form-urlencoded hello world Best regards, -agentzh From mdounin at mdounin.ru Mon Jan 28 11:30:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Jan 2013 15:30:35 +0400 Subject: [BUG] "client_body_in_file_only on" no longer works with upstream modules in nginx 1.3.9+ In-Reply-To: References: Message-ID: <20130128113035.GQ40753@mdounin.ru> Hello! On Sun, Jan 27, 2013 at 12:07:37AM -0800, agentzh wrote: > Hello! > > I've noticed that for nginx 1.3.9+, "client_body_in_file_only on" no > longer always sets r->request_body->bufs, which makes upstream modules > like ngx_proxy send empty request bodies to the backend server. > > Here's a minimal example that demonstrates the issue: > > location = /t { > client_body_in_file_only on; > proxy_pass http://127.0.0.1:1234; > } > > And run nc to listen on the local port 1234: > > $ nc -l 1234 > > Then issue a simple POST request to location = /t: > > $ curl -d 'hello world' localhost/t > > When using nginx 1.3.9+, we get the raw HTTP request sent by ngx_proxy: > > $ nc -l 1234 > POST /t HTTP/1.0 > Host: 127.0.0.1:1234 > Connection: close > Content-Length: 11 > User-Agent: curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0 ... 
> Accept: */* > Content-Type: application/x-www-form-urlencoded > > That is, when the request body is completely preread into the client > header buffer, the request body will only be hooked into > r->request_body->temp_file but not r->request_body->bufs. > > But when the request body is big enough that it is not completely > preread into the client header buffer, then the a in-file buf will > still be properly inserted into r->request_body->bufs and we can get > the expected request body sent from ngx_proxy. Thnx, the test for client_body_in_file_only actually checked file, so it wasn't noticed. :) The following patch should fix this: # HG changeset patch # User Maxim Dounin # Date 1359372279 -14400 # Node ID 1f6b73a7b7c9992d2e6853413a8f2c599c1e8ea8 # Parent 153d131f0b7aa28fde12a94fd6a7e78a20743a3a Request body: fixed client_body_in_file_only. Broken while introducing chunked request body reading support in 1.3.9. diff --git a/src/http/ngx_http_request_body.c b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c +++ b/src/http/ngx_http_request_body.c @@ -35,7 +35,8 @@ ngx_http_read_client_request_body(ngx_ht size_t preread; ssize_t size; ngx_int_t rc; - ngx_chain_t out; + ngx_buf_t *b; + ngx_chain_t out, *cl; ngx_http_request_body_t *rb; ngx_http_core_loc_conf_t *clcf; @@ -128,6 +129,21 @@ ngx_http_read_client_request_body(ngx_ht rc = NGX_HTTP_INTERNAL_SERVER_ERROR; goto done; } + + cl = ngx_chain_get_free_buf(r->pool, &rb->free); + if (cl == NULL) { + return NGX_HTTP_INTERNAL_SERVER_ERROR; + } + + b = cl->buf; + + ngx_memzero(b, sizeof(ngx_buf_t)); + + b->in_file = 1; + b->file_last = rb->temp_file->file.offset; + b->file = &rb->temp_file->file; + + rb->bufs = cl; } post_handler(r); -- Maxim Dounin http://nginx.com/support.html From ru at nginx.com Mon Jan 28 14:42:07 2013 From: ru at nginx.com (ru at nginx.com) Date: Mon, 28 Jan 2013 14:42:07 +0000 Subject: [nginx] svn commit: r5018 - trunk/src/http/modules Message-ID: 
<20130128144208.0601F3F9C5F@mail.nginx.com> Author: ru Date: 2013-01-28 14:42:07 +0000 (Mon, 28 Jan 2013) New Revision: 5018 URL: http://trac.nginx.org/nginx/changeset/5018/nginx Log: Secure_link: fixed configuration inheritance. The "secure_link_secret" directive was always inherited from the outer configuration level even when "secure_link" and "secure_link_md5" were specified on the inner level. Modified: trunk/src/http/modules/ngx_http_secure_link_module.c Modified: trunk/src/http/modules/ngx_http_secure_link_module.c =================================================================== --- trunk/src/http/modules/ngx_http_secure_link_module.c 2013-01-25 09:59:28 UTC (rev 5017) +++ trunk/src/http/modules/ngx_http_secure_link_module.c 2013-01-28 14:42:07 UTC (rev 5018) @@ -111,7 +111,7 @@ conf = ngx_http_get_module_loc_conf(r, ngx_http_secure_link_module); - if (conf->secret.len) { + if (conf->secret.data) { return ngx_http_secure_link_old_variable(r, conf, v, data); } @@ -318,8 +318,17 @@ ngx_http_secure_link_conf_t *prev = parent; ngx_http_secure_link_conf_t *conf = child; - ngx_conf_merge_str_value(conf->secret, prev->secret, ""); + if (conf->secret.data) { + if (conf->variable || conf->md5) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"secure_link_secret\" cannot be mixed with " + "\"secure_link\" and \"secure_link_md5\""); + return NGX_CONF_ERROR; + } + return NGX_CONF_OK; + } + if (conf->variable == NULL) { conf->variable = prev->variable; } @@ -328,6 +337,10 @@ conf->md5 = prev->md5; } + if (conf->variable == NULL && conf->md5 == NULL) { + conf->secret = prev->secret; + } + return NGX_CONF_OK; } From vbart at nginx.com Mon Jan 28 15:34:09 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Mon, 28 Jan 2013 15:34:09 +0000 Subject: [nginx] svn commit: r5019 - trunk/src/event Message-ID: <20130128153409.BEEAD3FAA2B@mail.nginx.com> Author: vbart Date: 2013-01-28 15:34:09 +0000 (Mon, 28 Jan 2013) New Revision: 5019 URL: 
http://trac.nginx.org/nginx/changeset/5019/nginx Log: SSL: removed conditions that always hold true. Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2013-01-28 14:42:07 UTC (rev 5018) +++ trunk/src/event/ngx_event_openssl.c 2013-01-28 15:34:09 UTC (rev 5019) @@ -1210,7 +1210,7 @@ size = buf->last - buf->pos; - if (!flush && buf->last < buf->end && c->ssl->buffer) { + if (!flush && buf->last < buf->end) { break; } @@ -1232,10 +1232,8 @@ break; } - if (buf->pos == buf->last) { - buf->pos = buf->start; - buf->last = buf->start; - } + buf->pos = buf->start; + buf->last = buf->start; if (in == NULL || send == limit) { break; From vbart at nginx.com Mon Jan 28 15:35:13 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Mon, 28 Jan 2013 15:35:13 +0000 Subject: [nginx] svn commit: r5020 - trunk/src/event Message-ID: <20130128153513.6B3643F9F78@mail.nginx.com> Author: vbart Date: 2013-01-28 15:35:12 +0000 (Mon, 28 Jan 2013) New Revision: 5020 URL: http://trac.nginx.org/nginx/changeset/5020/nginx Log: SSL: resetting of flush flag after the data was written. There is no need to flush next chunk of data if it does not contain a buffer with the flush or last_buf flags set. 
Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2013-01-28 15:34:09 UTC (rev 5019) +++ trunk/src/event/ngx_event_openssl.c 2013-01-28 15:35:12 UTC (rev 5020) @@ -1232,6 +1232,8 @@ break; } + flush = 0; + buf->pos = buf->start; buf->last = buf->start; From vbart at nginx.com Mon Jan 28 15:37:11 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Mon, 28 Jan 2013 15:37:11 +0000 Subject: [nginx] svn commit: r5021 - trunk/src/event Message-ID: <20130128153711.F0AD93F9F2F@mail.nginx.com> Author: vbart Date: 2013-01-28 15:37:11 +0000 (Mon, 28 Jan 2013) New Revision: 5021 URL: http://trac.nginx.org/nginx/changeset/5021/nginx Log: SSL: preservation of flush flag for buffered data. Previously, if SSL buffer was not sent we lost information that the data must be flushed. Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2013-01-28 15:35:12 UTC (rev 5020) +++ trunk/src/event/ngx_event_openssl.c 2013-01-28 15:37:11 UTC (rev 5021) @@ -1169,7 +1169,7 @@ } send = 0; - flush = (in == NULL) ? 1 : 0; + flush = (in == NULL) ? 
1 : buf->flush; for ( ;; ) { @@ -1191,7 +1191,6 @@ if (send + size > limit) { size = (ssize_t) (limit - send); - flush = 1; } ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, @@ -1210,7 +1209,7 @@ size = buf->last - buf->pos; - if (!flush && buf->last < buf->end) { + if (!flush && send < limit && buf->last < buf->end) { break; } @@ -1221,8 +1220,7 @@ } if (n == NGX_AGAIN) { - c->buffered |= NGX_SSL_BUFFERED; - return in; + break; } buf->pos += n; @@ -1242,6 +1240,8 @@ } } + buf->flush = flush; + if (buf->pos < buf->last) { c->buffered |= NGX_SSL_BUFFERED; From vbart at nginx.com Mon Jan 28 15:38:36 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Mon, 28 Jan 2013 15:38:36 +0000 Subject: [nginx] svn commit: r5022 - trunk/src/event Message-ID: <20130128153836.A94533F9F2F@mail.nginx.com> Author: vbart Date: 2013-01-28 15:38:36 +0000 (Mon, 28 Jan 2013) New Revision: 5022 URL: http://trac.nginx.org/nginx/changeset/5022/nginx Log: SSL: calculation of buffer size moved closer to its usage. No functional changes. Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2013-01-28 15:37:11 UTC (rev 5021) +++ trunk/src/event/ngx_event_openssl.c 2013-01-28 15:38:36 UTC (rev 5022) @@ -1207,12 +1207,12 @@ } } - size = buf->last - buf->pos; - if (!flush && send < limit && buf->last < buf->end) { break; } + size = buf->last - buf->pos; + n = ngx_ssl_write(c, buf->pos, size); if (n == NGX_ERROR) { From vbart at nginx.com Mon Jan 28 15:40:25 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Mon, 28 Jan 2013 15:40:25 +0000 Subject: [nginx] svn commit: r5023 - trunk/src/event Message-ID: <20130128154025.CD2D53FA05E@mail.nginx.com> Author: vbart Date: 2013-01-28 15:40:25 +0000 (Mon, 28 Jan 2013) New Revision: 5023 URL: http://trac.nginx.org/nginx/changeset/5023/nginx Log: SSL: avoid calling SSL_write() with zero data size. 
According to documentation, calling SSL_write() with num=0 bytes to be sent results in undefined behavior. We don't currently call ngx_ssl_send_chain() with empty chain and buffer. This check handles the case of a chain with total data size that is a multiple of NGX_SSL_BUFSIZE, and with the special buffer at the end. In practice such cases resulted in premature connection close and critical error "SSL_write() failed (SSL:)" in the error log. Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2013-01-28 15:38:36 UTC (rev 5022) +++ trunk/src/event/ngx_event_openssl.c 2013-01-28 15:40:25 UTC (rev 5023) @@ -1213,6 +1213,12 @@ size = buf->last - buf->pos; + if (size == 0) { + buf->flush = 0; + c->buffered &= ~NGX_SSL_BUFFERED; + return in; + } + n = ngx_ssl_write(c, buf->pos, size); if (n == NGX_ERROR) { From vbart at nginx.com Mon Jan 28 15:41:12 2013 From: vbart at nginx.com (vbart at nginx.com) Date: Mon, 28 Jan 2013 15:41:12 +0000 Subject: [nginx] svn commit: r5024 - trunk/src/event Message-ID: <20130128154112.768843FA050@mail.nginx.com> Author: vbart Date: 2013-01-28 15:41:12 +0000 (Mon, 28 Jan 2013) New Revision: 5024 URL: http://trac.nginx.org/nginx/changeset/5024/nginx Log: SSL: take into account data in the buffer while limiting output. In some rare cases this can result in a more smooth sending rate. Modified: trunk/src/event/ngx_event_openssl.c Modified: trunk/src/event/ngx_event_openssl.c =================================================================== --- trunk/src/event/ngx_event_openssl.c 2013-01-28 15:40:25 UTC (rev 5023) +++ trunk/src/event/ngx_event_openssl.c 2013-01-28 15:41:12 UTC (rev 5024) @@ -1168,7 +1168,7 @@ buf->end = buf->start + NGX_SSL_BUFSIZE; } - send = 0; + send = buf->last - buf->pos; flush = (in == NULL) ? 
1 : buf->flush; for ( ;; ) { From vbart at nginx.com Mon Jan 28 17:06:43 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 28 Jan 2013 21:06:43 +0400 Subject: [BUG] "client_body_in_file_only on" no longer works with upstream modules in nginx 1.3.9+ In-Reply-To: <20130128113035.GQ40753@mdounin.ru> References: <20130128113035.GQ40753@mdounin.ru> Message-ID: <201301282106.43785.vbart@nginx.com> On Monday 28 January 2013 15:30:35 Maxim Dounin wrote: > Hello! > > On Sun, Jan 27, 2013 at 12:07:37AM -0800, agentzh wrote: > > Hello! > > > > I've noticed that for nginx 1.3.9+, "client_body_in_file_only on" no > > longer always sets r->request_body->bufs, which makes upstream modules > > like ngx_proxy send empty request bodies to the backend server. > > > > Here's a minimal example that demonstrates the issue: > > location = /t { > > > > client_body_in_file_only on; > > proxy_pass http://127.0.0.1:1234; > > > > } > > > > And run nc to listen on the local port 1234: > > $ nc -l 1234 > > > > Then issue a simple POST request to location = /t: > > $ curl -d 'hello world' localhost/t > > > > When using nginx 1.3.9+, we get the raw HTTP request sent by ngx_proxy: > > $ nc -l 1234 > > POST /t HTTP/1.0 > > Host: 127.0.0.1:1234 > > Connection: close > > Content-Length: 11 > > User-Agent: curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0 ... > > Accept: */* > > Content-Type: application/x-www-form-urlencoded > > > > That is, when the request body is completely preread into the client > > header buffer, the request body will only be hooked into > > r->request_body->temp_file but not r->request_body->bufs. > > > > But when the request body is big enough that it is not completely > > preread into the client header buffer, then the a in-file buf will > > still be properly inserted into r->request_body->bufs and we can get > > the expected request body sent from ngx_proxy. > > Thnx, the test for client_body_in_file_only actually checked file, > so it wasn't noticed. 
:) > > The following patch should fix this: > > # HG changeset patch > # User Maxim Dounin > # Date 1359372279 -14400 > # Node ID 1f6b73a7b7c9992d2e6853413a8f2c599c1e8ea8 > # Parent 153d131f0b7aa28fde12a94fd6a7e78a20743a3a > Request body: fixed client_body_in_file_only. > > Broken while introducing chunked request body reading support in 1.3.9. > IMHO it's worth to specify that it was broken by r4931. > diff --git a/src/http/ngx_http_request_body.c > b/src/http/ngx_http_request_body.c --- a/src/http/ngx_http_request_body.c > +++ b/src/http/ngx_http_request_body.c > @@ -35,7 +35,8 @@ ngx_http_read_client_request_body(ngx_ht > size_t preread; > ssize_t size; > ngx_int_t rc; > - ngx_chain_t out; > + ngx_buf_t *b; > + ngx_chain_t out, *cl; > ngx_http_request_body_t *rb; > ngx_http_core_loc_conf_t *clcf; > > @@ -128,6 +129,21 @@ ngx_http_read_client_request_body(ngx_ht > rc = NGX_HTTP_INTERNAL_SERVER_ERROR; > goto done; > } > + > + cl = ngx_chain_get_free_buf(r->pool, &rb->free); > + if (cl == NULL) { > + return NGX_HTTP_INTERNAL_SERVER_ERROR; > + } > + > + b = cl->buf; > + > + ngx_memzero(b, sizeof(ngx_buf_t)); > + > + b->in_file = 1; > + b->file_last = rb->temp_file->file.offset; > + b->file = &rb->temp_file->file; > + > + rb->bufs = cl; > } > > post_handler(r); Looks good and works fine (according to your test). wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From agentzh at gmail.com Mon Jan 28 20:18:49 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 28 Jan 2013 12:18:49 -0800 Subject: SPDY patch >= v55_1.3.11 breaks building nginx 1.3.11 + lua-nginx-module In-Reply-To: <878v7n3301.wl%appa@perusio.net> References: <878v7n3301.wl%appa@perusio.net> Message-ID: Hello! On Sun, Jan 20, 2013 at 2:26 PM, Ant?nio P. P. Almeida wrote: > On 20 Jan 2013 22h45 CET, agentzh at gmail.com wrote: >> >> It is known that ngx_lua does not really work with the SPDY patch >> and ngx_lua is not compatible with Nginx 1.3.9+. 
> > Well I'm using it with LuaJIT and have no issues so far. I haven't > applied the SPDY patch though. > Well, you're lucky or you were just not watching closely enough :) I've fixed several Nginx 1.3.9+ compatibility issues in ngx_lua git master: https://github.com/chaoslawful/lua-nginx-module/commits/master Now all tests are passing in all testing modes with Nginx 1.3.11 (without the SPDY patch) on my EC2 test cluster (both Linux x86_64 and i386): http://qa.openresty.org/#linux_x86_64_nginx_1_3_11__no_pool_ http://qa.openresty.org/#linux_i386_nginx_1_3_11__no_pool_ I'll look into the SPDY patch in the near future :) Best regards, -agentzh From mat999 at gmail.com Tue Jan 29 11:24:57 2013 From: mat999 at gmail.com (SplitIce) Date: Tue, 29 Jan 2013 21:54:57 +1030 Subject: Segfault in ngx_http_syslog Message-ID: Found a segfault in the ngx_http_syslog module. I dont have the contact details for any of the ngsru team that developed it so I am posting it in the mailing list in the hope a few are members. Occurs when error_page is used, tested on a 504 error with config similar to @e504 { proxy_pass ...; } error_page 504 @e504; Backtrace: #0 0x080f2ac7 in ngx_http_syslog_error_handler (log=0x8cf4608, p=0xbfbae8b8 "\b\353\272\277L<\265\b\346", p_len=1960) at /var/x4b/nginx-syslog-module/ngx_http_syslog_module.c:489 489 ngx_http_syslog_common_handler( (gdb) backtrace full #0 0x080f2ac7 in ngx_http_syslog_error_handler (log=0x8cf4608, p=0xbfbae8b8 "\b\353\272\277L<\265\b\346", p_len=1960) at /var/x4b/nginx-syslog-module/ngx_http_syslog_module.c:489 No locals. #1 0x080f2aea in ngx_http_syslog_error_handler (log=0x0, p=0xa
, p_len=3206221928) at /var/x4b/nginx-syslog-module/ngx_http_syslog_module.c:494 No locals. #2 0x080f2aea in ngx_http_syslog_error_handler (log=0xb7bc7380, p=0xb7bc6ff4 "|-\024", p_len=3206222780) at /var/x4b/nginx-syslog-module/ngx_http_syslog_module.c:494 No locals. #3 0x00000000 in ?? () -------------- next part -------------- An HTML attachment was scrubbed... URL: From lekensteyn at gmail.com Tue Jan 29 17:49:46 2013 From: lekensteyn at gmail.com (Peter Wu) Date: Tue, 29 Jan 2013 18:49:46 +0100 Subject: [RFC] [PATCH] Autoindex: support sorting using URL parameters Message-ID: <2685449.KCDleNYGX9@al> Based on Apache HTTPD autoindex docs[1]. Supported: - C=N sorts the directory by file name - C=M sorts the directory by last-modified date, then file name - C=S sorts the directory by size, then file name - O=A sorts the listing in Ascending Order - O=D sorts the listing in Descending Order Not supported (does not make sense for nginx): - C=D sorts the directory by description, then file name - All F= (FancyIndex) related arguments - Version sorting for file names (V=) - Pattern filter (P=) Argument processing stops when the query string does not exactly match the options allowed by nginx, invalid values (like "C=m", "C=x" or "C=foo") are ignored and cause further processing to stop. C and O are the most useful options and can commonly be found in my old Apache logs (also outputted in header links). This patch is made for these cases, not for some exotic use of Apache-specific properties. [1]: http://httpd.apache.org/docs/2.4/mod/mod_autoindex.html --- (this patch is also available at http://lekensteyn.nl/files/nginx/) RFC: this patch only adds support for URL parameters. Before implementing the actual clickable headers I would like to have some feedback. Do you actually want to add such a sorting feature to the standard autoindex module? 
Sorting has also been requested on http://forum.nginx.org/read.php?10,211728 I am aware of FancyIndex[2] (which does not support sorting either), but having two separate modules to do the exact same thing seems a bit useless. About the implementation, I considered using the non-standard qsort_r (different prototypes on BSD and Linux, qsort_s on Windows) for adding the sort options. Because these prototypes and their compare functions are incompatible I dropped that idea. Next was augmenting the directory entries struct to make the options available, this is currently the way how it is implemented. Parsing URL parameters is done in a separate function which is currently non-void. Since the return value is not used at the moment, it can also be make void. Thoughts? [2]: http://wiki.nginx.org/NgxFancyIndex --- src/http/modules/ngx_http_autoindex_module.c | 93 +++++++++++++++++++++++++++- 1 file changed, 92 insertions(+), 1 deletion(-) diff --git a/src/http/modules/ngx_http_autoindex_module.c b/src/http/modules/ngx_http_autoindex_module.c index 450a48e..aa1be25 100644 --- a/src/http/modules/ngx_http_autoindex_module.c +++ b/src/http/modules/ngx_http_autoindex_module.c @@ -24,6 +24,16 @@ typedef struct { typedef struct { + u_char sort_key; + unsigned order_desc:1; +} ngx_http_autoindex_opts_t; + +#define NGX_AUTOINDEX_SORT_NAME 'N' +#define NGX_AUTOINDEX_SORT_MTIME 'M' +#define NGX_AUTOINDEX_SORT_SIZE 'S' + + +typedef struct { ngx_str_t name; size_t utf_len; size_t escape; @@ -33,6 +43,8 @@ typedef struct { time_t mtime; off_t size; + + ngx_http_autoindex_opts_t *sort_opts; } ngx_http_autoindex_entry_t; @@ -50,6 +62,8 @@ typedef struct { static int ngx_libc_cdecl ngx_http_autoindex_cmp_entries(const void *one, const void *two); +static ngx_int_t ngx_http_autoindex_get_opts(ngx_http_request_t *r, + ngx_http_autoindex_opts_t *opts); static ngx_int_t ngx_http_autoindex_error(ngx_http_request_t *r, ngx_dir_t *dir, ngx_str_t *name); static ngx_int_t 
ngx_http_autoindex_init(ngx_conf_t *cf); @@ -153,6 +167,7 @@ ngx_http_autoindex_handler(ngx_http_request_t *r) ngx_array_t entries; ngx_http_autoindex_entry_t *entry; ngx_http_autoindex_loc_conf_t *alcf; + ngx_http_autoindex_opts_t opts; static char *months[] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" }; @@ -214,6 +229,8 @@ ngx_http_autoindex_handler(ngx_http_request_t *r) return rc; } + ngx_http_autoindex_get_opts(r, &opts); + #if (NGX_SUPPRESS_WARN) /* MSVC thinks 'entries' may be used without having been initialized */ @@ -353,6 +370,8 @@ ngx_http_autoindex_handler(ngx_http_request_t *r) entry->dir = ngx_de_is_dir(&dir); entry->mtime = ngx_de_mtime(&dir); entry->size = ngx_de_size(&dir); + + entry->sort_opts = &opts; } if (ngx_close_dir(&dir) == NGX_ERROR) { @@ -580,11 +599,67 @@ ngx_http_autoindex_handler(ngx_http_request_t *r) } +static ngx_int_t +ngx_http_autoindex_get_opts(ngx_http_request_t *r, + ngx_http_autoindex_opts_t *opts) +{ + u_char *p, *last, key, val; + + opts->sort_key = NGX_AUTOINDEX_SORT_NAME; + opts->order_desc = 0; + + p = r->args.data; + last = p + r->args.len; + + for ( /* void */; p < last; p++) { + + key = *p++; + + if (*p++ != '=') { + return NGX_DECLINED; + } + + val = *p++; + + /* assume one-letter value and expect separator */ + if (p < last && *p != '&' && *p != ';') { + return NGX_DECLINED; + } + + if (key == 'C') { + switch (val) { + case NGX_AUTOINDEX_SORT_NAME: + case NGX_AUTOINDEX_SORT_MTIME: + case NGX_AUTOINDEX_SORT_SIZE: + opts->sort_key = val; + break; + default: + return NGX_DECLINED; + } + } else if (key == 'O') { + if (val == 'D') { + opts->order_desc = 1; + } else if (val == 'A') { + opts->order_desc = 0; + } else { + return NGX_DECLINED; + } + } else { + return NGX_DECLINED; + } + } + + return NGX_OK; +} + + static int ngx_libc_cdecl ngx_http_autoindex_cmp_entries(const void *one, const void *two) { ngx_http_autoindex_entry_t *first = (ngx_http_autoindex_entry_t *) one; 
ngx_http_autoindex_entry_t *second = (ngx_http_autoindex_entry_t *) two; + ngx_http_autoindex_opts_t *opts = first->sort_opts; + int r; if (first->dir && !second->dir) { /* move the directories to the start */ @@ -596,7 +671,23 @@ ngx_http_autoindex_cmp_entries(const void *one, const void *two) return 1; } - return (int) ngx_strcmp(first->name.data, second->name.data); + switch (opts->sort_key) { + case NGX_AUTOINDEX_SORT_MTIME: + r = first->mtime - second->mtime; + break; + case NGX_AUTOINDEX_SORT_SIZE: + r = first->size - second->size; + break; + default: /* includes NGX_AUTOINDEX_SORT_NAME */ + r = 0; + break; + } + + if (r == 0) { + r = ngx_strcmp(first->name.data, second->name.data); + } + + return opts->order_desc ? -r : r; } -- 1.8.1.1 From pengqian at ruijie.com.cn Wed Jan 30 03:50:39 2013 From: pengqian at ruijie.com.cn (=?gb2312?B?xe3HqyjR0MH5ILij1t0p?=) Date: Wed, 30 Jan 2013 03:50:39 +0000 Subject: CPU 100% Upstream SSL handshake Message-ID: <0E2381B3873AD44F9C4B3D0EC0BC9A14548592C3@fzex.ruijie.com.cn> Hello all! Recently I came across a problem that CPU 100% when nginx upstream were trying to SSL handshake to the web sever. the pass_proxy url is https://www.salesforce.com/export/login-messages/common/css/images/login/bk_promo_overlay3.png; maybe the slow https web site has the same problem. 
The debug log: 2013/01/30 11:06:23 [debug] 19604#0: posted event 100232FC 2013/01/30 11:06:23 [debug] 19604#0: *1 delete posted event 100232FC 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL handshake handler: 1 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_do_handshake: -1 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_get_error: 2 2013/01/30 11:06:23 [debug] 19604#0: posted event 00000000 2013/01/30 11:06:23 [debug] 19604#0: worker cycle 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:6 wr:0 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:3 wr:0 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:0 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:1 2013/01/30 11:06:23 [debug] 19604#0: max_fd: 7 2013/01/30 11:06:23 [debug] 19604#0: select timer: 46801 2013/01/30 11:06:23 [debug] 19604#0: select ready 1 2013/01/30 11:06:23 [debug] 19604#0: select write 7 2013/01/30 11:06:23 [debug] 19604#0: *1 post event 100232FC 2013/01/30 11:06:23 [debug] 19604#0: timer delta: 0 2013/01/30 11:06:23 [debug] 19604#0: posted events 100232FC 2013/01/30 11:06:23 [debug] 19604#0: posted event 100232FC 2013/01/30 11:06:23 [debug] 19604#0: *1 delete posted event 100232FC 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL handshake handler: 1 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_do_handshake: -1 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_get_error: 2 2013/01/30 11:06:23 [debug] 19604#0: posted event 00000000 2013/01/30 11:06:23 [debug] 19604#0: worker cycle 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:6 wr:0 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:3 wr:0 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:0 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:1 2013/01/30 11:06:23 [debug] 19604#0: max_fd: 7 we can see the write handler will repeat again and again until the SSL_do_handshake return 1. i just repear the problem in select I/O multiplexing. Can you help me to fix this bug? 
Thanks From mdounin at mdounin.ru Wed Jan 30 17:50:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Jan 2013 21:50:57 +0400 Subject: [RFC] [PATCH] Autoindex: support sorting using URL parameters In-Reply-To: <2685449.KCDleNYGX9@al> References: <2685449.KCDleNYGX9@al> Message-ID: <20130130175056.GJ40753@mdounin.ru> Hello! On Tue, Jan 29, 2013 at 06:49:46PM +0100, Peter Wu wrote: [...] > RFC: this patch only adds support for URL parameters. Before implementing the > actual clickable headers I would like to have some feedback. Do you actually > want to add such a sorting feature to the standard autoindex module? Sorting > has also been requested on http://forum.nginx.org/read.php?10,211728 Overall, I tend to think that it's a bad idea to make autoindex more customizable. I would rather like to see it producing XML, with any customization then being possible via the XSLT filter. [...] > Parsing URL parameters is done in a separate function which is currently > non-void. Since the return value is not used at the moment, it can also be made > void. Thoughts? You may want to use the ngx_http_arg() function instead of reinventing the wheel. -- Maxim Dounin http://nginx.com/support.html From lekensteyn at gmail.com Wed Jan 30 18:08:25 2013 From: lekensteyn at gmail.com (Peter Wu) Date: Wed, 30 Jan 2013 19:08:25 +0100 Subject: [RFC] [PATCH] Autoindex: support sorting using URL parameters In-Reply-To: <20130130175056.GJ40753@mdounin.ru> References: <2685449.KCDleNYGX9@al> <20130130175056.GJ40753@mdounin.ru> Message-ID: <1922171.WKllyH8LsR@al> Hi, Thank you for your feedback. On Wednesday 30 January 2013 21:50:57 Maxim Dounin wrote: > Overall, I tend to think that it's a bad idea to make autoindex more > customizable. I would rather like to see it producing XML, with > any customization then being possible via the XSLT filter. Do you want nginx to spit out XML? 
I am in favor of passing parameters this way to nginx so that the client does not have to do anything to see a sorted directory index. Use case: curl/wget. Yes, I know I can pipe it to sort and stuff, but that takes more commands. I have also considered using Javascript for sorting, but that is UGLY just to get the directory index correct and won't work with NoScript enabled. If you prefer not touching this autoindex module, but building an XML thing, you may also consider outputting JSON (there is also request for that). So, you want to provide the current "autoindex" module and create a new "fancy_autoindex" module with these XML/JSON/sorting features? > > Parsing URL parameters is done in a separate function which is currently > > non-void. Since the return value is not used at the moment, it can also be > > make void. Thoughts? > > You may want to use ngx_http_arg() function instead of reinventing > the wheel. I know that function, but that function does not work with the ";" separator. I just watched how Apache acted on my requests and build the nginx compatibility stuff with that in mind. Regards, Peter From vbart at nginx.com Wed Jan 30 18:54:29 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 30 Jan 2013 22:54:29 +0400 Subject: [RFC] [PATCH] Autoindex: support sorting using URL parameters In-Reply-To: <2685449.KCDleNYGX9@al> References: <2685449.KCDleNYGX9@al> Message-ID: <201301302254.29505.vbart@nginx.com> On Tuesday 29 January 2013 21:49:46 Peter Wu wrote: > Based on Apache HTTPD autoindex docs[1]. 
Supported: > - C=N sorts the directory by file name > - C=M sorts the directory by last-modified date, then file name > - C=S sorts the directory by size, then file name > - O=A sorts the listing in Ascending Order > - O=D sorts the listing in Descending Order > > Not supported (does not make sense for nginx): > - C=D sorts the directory by description, then file name > - All F= (FancyIndex) related arguments > - Version sorting for file names (V=) > - Pattern filter (P=) > > Argument processing stops when the query string does not exactly match the > options allowed by nginx, invalid values (like "C=m", "C=x" or "C=foo") are > ignored and cause further processing to stop. > > C and O are the most useful options and can commonly be found in my old > Apache logs (also outputted in header links). This patch is made for these > cases, not for some exotic use of Apache-specific properties. > > [1]: http://httpd.apache.org/docs/2.4/mod/mod_autoindex.html [...] Personally I do not like these Apache hard-coded arguments. I would prefer something like "autoindex_sort" directive autoindex_sort /criterion/ [ /order/ ]; with variables support. And if user want Apache-like behavior then he will be able to configure it like this: map $arg_C $criteria { default name; M modified; S size; } map $arg_O $ord { default asc; D desc; } autoindex_sort $criteria $ord; wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Jan 30 18:58:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Jan 2013 22:58:01 +0400 Subject: CPU 100% Upstream SSL handshake In-Reply-To: <0E2381B3873AD44F9C4B3D0EC0BC9A14548592C3@fzex.ruijie.com.cn> References: <0E2381B3873AD44F9C4B3D0EC0BC9A14548592C3@fzex.ruijie.com.cn> Message-ID: <20130130185800.GK40753@mdounin.ru> Hello! On Wed, Jan 30, 2013 at 03:50:39AM +0000, ??(?? ??) wrote: > Hello all! > Recently I came across a problem that CPU 100% when nginx upstream were trying to SSL handshake to the web sever. 
> the pass_proxy url is https://www.salesforce.com/export/login-messages/common/css/images/login/bk_promo_overlay3.png; > maybe the slow https web site has the same problem. > > The debug log: > 2013/01/30 11:06:23 [debug] 19604#0: posted event 100232FC > 2013/01/30 11:06:23 [debug] 19604#0: *1 delete posted event 100232FC > 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL handshake handler: 1 > 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_do_handshake: -1 > 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_get_error: 2 > 2013/01/30 11:06:23 [debug] 19604#0: posted event 00000000 > 2013/01/30 11:06:23 [debug] 19604#0: worker cycle > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:6 wr:0 > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:3 wr:0 > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:0 > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:1 > 2013/01/30 11:06:23 [debug] 19604#0: max_fd: 7 > 2013/01/30 11:06:23 [debug] 19604#0: select timer: 46801 > 2013/01/30 11:06:23 [debug] 19604#0: select ready 1 > 2013/01/30 11:06:23 [debug] 19604#0: select write 7 > 2013/01/30 11:06:23 [debug] 19604#0: *1 post event 100232FC > 2013/01/30 11:06:23 [debug] 19604#0: timer delta: 0 > 2013/01/30 11:06:23 [debug] 19604#0: posted events 100232FC > 2013/01/30 11:06:23 [debug] 19604#0: posted event 100232FC > 2013/01/30 11:06:23 [debug] 19604#0: *1 delete posted event 100232FC > 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL handshake handler: 1 > 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_do_handshake: -1 > 2013/01/30 11:06:23 [debug] 19604#0: *1 SSL_get_error: 2 > 2013/01/30 11:06:23 [debug] 19604#0: posted event 00000000 > 2013/01/30 11:06:23 [debug] 19604#0: worker cycle > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:6 wr:0 > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:3 wr:0 > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:0 > 2013/01/30 11:06:23 [debug] 19604#0: select event: fd:7 wr:1 > 2013/01/30 11:06:23 [debug] 19604#0: 
max_fd: 7 > > we can see the write handler will repeat again and again until the SSL_do_handshake return 1. > i just repear the problem in select I/O multiplexing. > Can you help me to fix this bug? Thank you for your report, it looks like generic problem in ssl handshake handling with level-triggered event methods. The following patch should fix the problem: diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -808,6 +808,10 @@ ngx_ssl_handshake(ngx_connection_t *c) return NGX_ERROR; } + if (ngx_handle_write_event(c->write, 0) != NGX_OK) { + return NGX_ERROR; + } + return NGX_AGAIN; } @@ -816,6 +820,10 @@ ngx_ssl_handshake(ngx_connection_t *c) c->read->handler = ngx_ssl_handshake_handler; c->write->handler = ngx_ssl_handshake_handler; + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + return NGX_ERROR; + } + if (ngx_handle_write_event(c->write, 0) != NGX_OK) { return NGX_ERROR; } -- Maxim Dounin http://nginx.com/support.html From lekensteyn at gmail.com Wed Jan 30 21:02:28 2013 From: lekensteyn at gmail.com (Peter Wu) Date: Wed, 30 Jan 2013 22:02:28 +0100 Subject: [RFC] [PATCH] Autoindex: support sorting using URL parameters In-Reply-To: <201301302254.29505.vbart@nginx.com> References: <2685449.KCDleNYGX9@al> <201301302254.29505.vbart@nginx.com> Message-ID: <1756172.nDVlF0ZDQF@al> On Wednesday 30 January 2013 22:54:29 Valentin V. Bartenev wrote: > I would prefer something like "autoindex_sort" directive > > autoindex_sort criterion [ order ]; > > with variables support. > > And if user want Apache-like behavior then he will be able to configure > it like this: > > map $arg_C $criteria { > default name; > M modified; > S size; > } > > map $arg_O $ord { > default asc; > D desc; > } > > autoindex_sort $criteria $ord; Nice idea, definitively worth implementing. And for the ";" thing, that could be done with a rewrite right? 
What is your opinion on extending the autoindex module with support for adding a header with clickable links or something fancy like that? Regards, Peter From mat999 at gmail.com Thu Jan 31 01:26:12 2013 From: mat999 at gmail.com (SplitIce) Date: Thu, 31 Jan 2013 11:56:12 +1030 Subject: UDP Listener Message-ID: I've been working on my first nginx module (well, more specifically, a fork of an existing module with permission). It is pretty close to done; however, I found one thing that's lacking in the nginx core (as far as I can tell): UDP listening. There is support for UDP sockets, so sending UDP data is fine. However, binding and listening seem to be full of hardcoded TCP-related code. Is this correct, or am I just doing it wrong? From ru at nginx.com Thu Jan 31 12:31:04 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 31 Jan 2013 16:31:04 +0400 Subject: Export all log specific variables In-Reply-To: References: Message-ID: <20130131123104.GA44578@lo0.su> On Fri, Jan 04, 2013 at 03:41:18PM +0200, Kiril Kalchev wrote: > Hi Guys, > > I made a patch for exporting all log-only variables as common nginx variables. > I am wondering how I can submit the patch to the official nginx devs. I am > attaching the patch in this email. If I have to do something more, please > write me back. The patch was made for nginx 1.3.10, which is the newest > development nginx release. Thanks. I've committed a slightly modified version of your patch: http://trac.nginx.org/nginx/changeset/5011/nginx It will be available in the upcoming 1.3.12 release. From mdounin at mdounin.ru Thu Jan 31 14:37:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 31 Jan 2013 18:37:50 +0400 Subject: UDP Listener In-Reply-To: References: Message-ID: <20130131143750.GP40753@mdounin.ru> Hello! 
On Thu, Jan 31, 2013 at 11:56:12AM +1030, SplitIce wrote: > I've been working on my first nginx module (well, more specifically, a fork > of an existing module with permission). > > It is pretty close to done; however, I found one thing that's lacking in the nginx > core (as far as I can tell): UDP listening. > > There is support for UDP sockets, so sending UDP data is fine. However, > binding and listening seem to be full of hardcoded TCP-related code. > > Is this correct, or am I just doing it wrong? This is correct - there is no UDP listening in nginx. On the other hand, it shouldn't be hard to implement it even directly in your module, much like the UDP client connection code (ngx_udp_connect) is currently implemented in the resolver. -- Maxim Dounin http://nginx.com/support.html