[Q]: Why does nginx stop feeding content to a custom filter module after processing 64K of data?

Maxim Dounin mdounin at mdounin.ru
Fri Feb 3 12:34:10 UTC 2017


Hello!

On Thu, Feb 02, 2017 at 06:51:18PM +0300, Andrey Kulikov wrote:

> Hello,
> 
> I've implemented a custom filter module for nginx.
> In fact, it does nothing but copy the input chain to the output.
> The aim is to have a placeholder for filter modules that do slightly
> more intelligent processing.
> I hope it will be useful to new nginx module developers.
> 
> The sources can be found here: https://github.com/amdei/dummy-filter-module
> It is a stripped-down, simplified version of the ngx_http_sub_module module.
> 
> Currently I can observe only one issue with this module: it can't
> process replies longer than 65536 bytes.
> It doesn't matter whether the reply is proxied or served from a disk file.
> I can't identify the source of this issue, so I'm appealing to the
> collective intelligence.

You are not freeing the buffers passed to you, but rather holding 
them indefinitely in your filter.  As a result, once output_buffers 
are exhausted, processing stalls waiting for you to free some 
buffers.
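
For illustration, a minimal sketch of a body filter that copies each 
incoming buffer into its own buffer and then marks the input buffer 
as consumed.  The names (ngx_http_dummy_body_filter, 
ngx_http_next_body_filter) are placeholders, not necessarily what 
your module uses; the important line is advancing cl->buf->pos to 
cl->buf->last, which tells the buffer's owner that the data has been 
processed and the buffer can be reused:

static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
ngx_http_dummy_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    size_t        size;
    ngx_buf_t    *b;
    ngx_chain_t  *cl, *tl, *out, **ll;

    if (in == NULL) {
        return ngx_http_next_body_filter(r, in);
    }

    out = NULL;
    ll = &out;

    for (cl = in; cl; cl = cl->next) {

        /* this sketch assumes the data is in memory, e.g. because the
           module's header filter sets r->filter_need_in_memory = 1,
           as the sub filter does */
        size = ngx_buf_size(cl->buf);

        b = ngx_create_temp_buf(r->pool, size ? size : 1);
        if (b == NULL) {
            return NGX_ERROR;
        }

        if (size) {
            b->last = ngx_cpymem(b->pos, cl->buf->pos, size);
        }

        b->flush = cl->buf->flush;
        b->last_buf = cl->buf->last_buf;

        /* the crucial part: mark the incoming buffer as fully consumed
           so that its owner can reuse it; without this the buffers are
           held forever and sending stalls once output_buffers (or
           proxy_buffers) are exhausted */
        cl->buf->pos = cl->buf->last;

        tl = ngx_alloc_chain_link(r->pool);
        if (tl == NULL) {
            return NGX_ERROR;
        }

        tl->buf = b;
        tl->next = NULL;

        *ll = tl;
        ll = &tl->next;
    }

    return ngx_http_next_body_filter(r, out);
}

A production filter would normally also recycle its own buffers via 
free/busy chains and ngx_chain_update_chains(), as ngx_http_sub_module 
does, instead of allocating a new buffer from r->pool on every call.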

(Note well that your filter misuses the NGX_HTTP_SUB_BUFFERED flag.  
This will cause undefined behaviour if the filter is used together 
with the sub filter.)
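
On that point, each NGX_HTTP_*_BUFFERED bit in r->buffered belongs to 
one particular module, so, under the same assumptions as the sketch 
above, this is what to avoid:

    /* wrong in a third-party filter: this bit belongs to
       ngx_http_sub_module, so the sub filter will also treat it as
       "I still hold buffered data" */
    r->buffered |= NGX_HTTP_SUB_BUFFERED;

A filter that forwards everything it receives on each call buffers 
nothing and should not set any r->buffered bit at all; a filter that 
really does hold data between calls needs its own bit and must clear 
it once the held data has been passed downstream.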

-- 
Maxim Dounin
http://nginx.org/

