High memory consumption when proxying to a Comet server

Maxim Dounin mdounin at mdounin.ru
Mon Apr 12 23:52:18 MSD 2010


Hello!

On Mon, Apr 12, 2010 at 12:43:23AM -0300, Rogério Schneider wrote:

[...]

> > You may switch on debug log, it will be possible to trace big
> > allocations there.  See here:
> >
> > http://nginx.org/en/docs/debugging_log.html
> 
> I posted a log of some tests; if you can take a look at it, I
> would appreciate it.
> 
> It is separated into sections: the start, the small msg, the big
> msg, the close.
> Then another run, the small msg after many big msgs, and the
> close (with lots of "strange" free()s):
> 
> http://pastebin.com/Fggn4Ui7

Yep, I see what's going on.  It's the chunked filter that eats 
memory: it allocates small buffers for the chunk-size and CRLF 
markers.  They aren't big, but they are allocated for each buffer 
sent, and they aren't reused.

Here is a snippet from the log where it causes another 4k 
allocation from the system (note the "malloc" line):

2010/04/12 00:06:04 [debug] 32748#0: *55 recv: fd:16 1024 of 1024
2010/04/12 00:06:04 [debug] 32748#0: *55 http output filter "/push/user1/xhrinteractive/canal2.b1?"
2010/04/12 00:06:04 [debug] 32748#0: *55 copy filter: "/push/user1/xhrinteractive/canal2.b1?"
2010/04/12 00:06:04 [debug] 32748#0: *55 http chunk: 1024
2010/04/12 00:06:04 [debug] 32748#0: *55 malloc: 096E9878:4096
2010/04/12 00:06:04 [debug] 32748#0: *55 write new buf t:1 f:0 00000000, pos 096E985C, size: 5 file: 0, size: 0
2010/04/12 00:06:04 [debug] 32748#0: *55 write new buf t:0 f:0 00000000, pos 096E8140, size: 1024 file: 0, size: 0
2010/04/12 00:06:04 [debug] 32748#0: *55 write new buf t:0 f:0 00000000, pos 080A6EE5, size: 2 file: 0, size: 0
2010/04/12 00:06:04 [debug] 32748#0: *55 http write filter: l:0 f:1 s:1031
2010/04/12 00:06:04 [debug] 32748#0: *55 http write filter limit 0
2010/04/12 00:06:04 [debug] 32748#0: *55 writev: 1031
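
Reading the three "write new buf" lines above: the 5-byte buffer is 
the chunk-size line ("400" in hex plus CRLF for a 1024-byte chunk), 
the 1024-byte buffer is the proxied data itself, and the 2-byte 
buffer is the trailing CRLF; together they account for the writev 
of 1031 bytes.  Roughly, the framing looks like this (an 
illustrative sketch only, not the actual nginx chunked filter 
code):

/* Sketch of HTTP/1.1 chunked transfer encoding framing: every
 * buffer pushed to the client needs a small hex chunk-size line
 * before it and a CRLF after it. */
#include <stdio.h>
#include <string.h>

static void emit_chunk(const char *data, size_t len)
{
    char header[16];

    /* chunk-size line: length in hex followed by CRLF
     * (for a 1024-byte buffer this is "400\r\n" -- 5 bytes) */
    int hlen = snprintf(header, sizeof(header), "%zx\r\n", len);
    fwrite(header, 1, (size_t) hlen, stdout);

    /* the data itself */
    fwrite(data, 1, len, stdout);

    /* trailing CRLF after the chunk data -- 2 bytes */
    fwrite("\r\n", 1, 2, stdout);
}

int main(void)
{
    const char *msg = "hello";

    emit_chunk(msg, strlen(msg));

    /* last-chunk marker terminating the body */
    fwrite("0\r\n\r\n", 1, 5, stdout);
    return 0;
}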

As a workaround you may want to increase proxy_buffer_size to 
reduce the number of such allocations (and/or just drop 
connections periodically).  The correct fix would be to make these 
buffers reusable once they have been sent to the client.
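
For example, something along these lines (the values and the 
upstream name are illustrative only, tune them for your message 
sizes):

# debug-level error log produces the allocation trace shown above
# (requires nginx built with --with-debug)
error_log /var/log/nginx/error.log debug;

location /push/ {
    # "comet_backend" is a placeholder for your upstream
    proxy_pass http://comet_backend;

    # a larger buffer means fewer reads from the upstream per
    # message, hence fewer per-buffer chunk-size/CRLF allocations
    # on long-lived connections
    proxy_buffer_size 32k;
}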

Maxim Dounin
