[Q]: Why does nginx stop feeding content to a custom filter module after processing 64K of data?

Andrey Kulikov amdeich at gmail.com
Thu Feb 2 15:51:18 UTC 2017


Hello,

I've implemented a custom filter module for nginx.
In fact, it does nothing but copy the input chain to the output.
The aim is to have a placeholder for filter modules that do somewhat
more intelligent processing.
I hope it will be useful for new nginx module developers.

The sources can be found here: https://github.com/amdei/dummy-filter-module
It is a kind of stripped-down, simplified version of the ngx_http_sub_module module.
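
For reference, the skeleton of a body filter that simply forwards every
incoming chain to the next filter looks roughly like the following. This is
only a sketch with hypothetical names, not the actual module code (which
copies the chain into its own buffers before passing it on):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* Saved pointer to the next body filter in the chain. */
    static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

    /* Hand the incoming chain to the next filter unchanged. */
    static ngx_int_t
    ngx_http_dummy_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
    {
        return ngx_http_next_body_filter(r, in);
    }

    /* Registered as the module's postconfiguration handler:
     * insert the filter at the top of the body filter chain. */
    static ngx_int_t
    ngx_http_dummy_filter_init(ngx_conf_t *cf)
    {
        ngx_http_next_body_filter = ngx_http_top_body_filter;
        ngx_http_top_body_filter = ngx_http_dummy_body_filter;

        return NGX_OK;
    }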

Currently I can observe only one issue with this module: it can't
process replies longer than 65536 bytes.
It doesn't matter whether it is a proxied reply or content read from a disk file.
I can't identify the source of this issue, so I'm appealing to the collective
intelligence.

With the following location configuration:

        location / {
            dummy_filter on;
            root /root/devel/ngx_module/dummy-filter-module/t/servroot/html;
            index index.html;
        }

if index.html contains more than 64K of data, a request to the / location
hangs after exactly 64K has been processed,
and after some time the connection is closed with a timeout.
(I use an index.html containing 70000 letters 'A'.)

Debugging shows that the input file is fed to my module in chains 32K
long (in one buffer each).
But after the second input chain, it looks like feeding data to the module just stops.

In the logs we can see the following:

...
2017/02/02 13:55:57 [debug] 16389#0: *4 dummy out
2017/02/02 13:55:57 [debug] 16389#0: *4 dummy out: 0000000000750DB8
0000000000769860-0000000000771860 (32768)
2017/02/02 13:55:57 [debug] 16389#0: *4 http postpone filter
"/index.html?" 0000000000750D38
2017/02/02 13:55:57 [debug] 16389#0: *4 http chunk: 32768
2017/02/02 13:55:57 [debug] 16389#0: *4 write new buf t:1 f:0
0000000000750E18, pos 0000000000750E18, size: 6 file: 0, size: 0
2017/02/02 13:55:57 [debug] 16389#0: *4 write new buf t:0 f:0
0000000000000000, pos 0000000000769860, size: 32768 file: 0, size: 0
2017/02/02 13:55:57 [debug] 16389#0: *4 write new buf t:0 f:0
0000000000750CB0, pos 00000000004DE2DD, size: 2 file: 0, size: 0
2017/02/02 13:55:57 [debug] 16389#0: *4 http write filter: l:0 f:1 s:32776
2017/02/02 13:55:57 [debug] 16389#0: *4 http write filter limit 0
2017/02/02 13:55:57 [debug] 16389#0: *4 writev: 32776 of 32776
2017/02/02 13:55:57 [debug] 16389#0: *4 http write filter 0000000000000000
2017/02/02 13:55:57 [debug] 16389#0: *4 http copy filter: -2 "/index.html?"
2017/02/02 13:55:57 [debug] 16389#0: *4 http finalize request: -2,
"/index.html?" a:1, c:2
2017/02/02 13:55:57 [debug] 16389#0: *4 event timer add: 3: 60000:1486033017701
2017/02/02 13:55:57 [debug] 16389#0: *4 http finalize request: -4,
"/index.html?" a:1, c:2
2017/02/02 13:55:57 [debug] 16389#0: *4 http request count:2 blk:0
2017/02/02 13:55:57 [debug] 16389#0: worker cycle
...

At the same time, if the input chain is divided into several buffers that in
total contain more than 64K of data, it is processed normally.
(As can be seen in the test
https://github.com/amdei/dummy-filter-module/blob/master/t/07-long-reply.t
)

What could be done in order to identify the source of this issue (apart from
damaged DNA) and eliminate it?

Any help and advice will be appreciated.

--
WBR,
Andrey

