[ANN] ngx_lua v0.1.5: ability to capture multiple parallel subrequests

Roman Vasilyev roman at anchorfree.com
Fri Feb 11 23:39:43 MSK 2011

Also let me show my fastcgi parameters:
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffers 4 256k;
fastcgi_buffer_size 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;

Actually I don't know which extra buffers are used here — is it some lua_buffers?
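For reference, the parallel-subrequest capture announced in the subject line is exposed in ngx_lua as ngx.location.capture_multi. A minimal sketch (the internal locations /a and /b are made-up placeholders):

```
location /sum {
    content_by_lua '
        -- issue two subrequests in parallel and block until both return;
        -- results come back in the same order as the request table
        local res1, res2 = ngx.location.capture_multi{
            { "/a" },
            { "/b", { args = "key=value" } },
        }
        ngx.say(res1.status, " ", res2.status)
    ';
}
```

Each res object carries status, header, and body fields, so the captured responses can be merged or compared in Lua before anything is sent to the client.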
On 02/10/2011 07:55 PM, agentzh wrote:
> On Fri, Feb 11, 2011 at 2:12 AM, Akins, Brian <Brian.Akins at turner.com> wrote:
>>   agentzh,
>> I was wondering if you had considered having a capture mode that used a
>> callback rather than coroutines?  In really high traffic servers, the
>> coroutines seem to eat a good bit of memory.
> Have you tried LuaJIT 2.0? Compared to the standard Lua 5.1
> interpreter, it saves 25+% of the total RAM used by our nginx worker
> processes in our business. The standard interpreter's coroutine
> implementation is also suboptimal.
> Another issue is that most of the upstream modules do not release
> their output bufs as early as possible in the context of subrequests.
> They usually rely on the nginx memory pool to release all those bufs
> when the pool is destroyed at the end of the main request, which is
> quite unacceptable. We'll fix our upstream modules and possibly other
> standard modules (via patches) to release buffers at the end of the
> subrequest, rather than the main request.
> Technically speaking, callbacks won't save memory: we'd still need to
> save all your Lua context so that you can access data from the
> outer context in your Lua callback, or it'll be useless :)
> Cheers,
> -agentzh
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://nginx.org/mailman/listinfo/nginx
