How to disable buffering when using FastCGI?

Nicolas Grilly nicolas at
Tue Oct 13 22:24:53 MSD 2009

Hello again Maxim,

2009/10/13 Maxim Dounin <mdounin at>:
> On Tue, Oct 13, 2009 at 04:20:00PM +0200, Nicolas Grilly wrote:
>> Is there no such option just because nobody implemented it? Or is it
>> because of some kind of technical constraint?
> Something like this.  FastCGI requires buffer processing which
> isn't compatible with current code for unbuffered connections.


>> Do you recommend to people developing Comet style application to use
>> HTTP proxying instead of FastCGI?
> For now you should either close & reopen connections, or use HTTP
> proxy instead.

So, for now, I guess my best bet is to use HTTP proxying :-)
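For anyone landing on this thread later, the HTTP-proxying workaround looks roughly like this (a minimal sketch; `proxy_pass`, `proxy_buffering`, and `proxy_read_timeout` are real nginx directives, but the location and backend address are placeholders):

```nginx
location /comet/ {
    # Talk to the Comet backend over HTTP instead of FastCGI
    proxy_pass http://127.0.0.1:8080;

    # Disable response buffering so each chunk the backend writes
    # is forwarded to the client as soon as it arrives
    proxy_buffering off;

    # Comet connections are long-lived; raise the read timeout
    # so nginx does not drop idle streams prematurely
    proxy_read_timeout 300s;
}
```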

>> Is it difficult to implement the option "fastcgi_buffering off", using
>> the same technique as in the source code of module HTTP proxy?
> Current "proxy_buffering off" implementation is something weird
> and should be nuked, IMHO.  The same implementation for FastCGI
> just won't work.
> I believe buffering control in upstream module (which includes
> fastcgi, proxy and memcached) should be changed to something more
> flexible.  In particular, fastcgi should be aware of FastCGI
> record boundaries, and shouldn't try to buffer too much once
> it has got a full record.
> I've posted some preliminary patches for this as a part of backend
> keepalive support work, but they are a bit stale now.

That would be a perfect solution! If the fastcgi module is aware of
FastCGI record boundaries and stops buffering after having received a
full record, then the problem is solved. This gives the FastCGI
backend complete control over the amount of buffering: it can send
short records to limit buffering, or long records (around 8 KB) for
normal buffering. Is that your plan for the future of the upstream
module?
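To make the record-boundary idea concrete, here is a small sketch of FastCGI record framing (the 8-byte header layout and the FCGI_STDOUT type come from the FastCGI specification; the helper names are mine). Each record carries its own content length, so a buffering-aware upstream module can forward a record as soon as it is complete, and a backend controls flushing granularity simply by choosing how much data to put in each record:

```python
import struct

FCGI_STDOUT = 6  # record type carrying response body data

def pack_record(req_id, content):
    """Frame one FastCGI record: 8-byte header + content + padding.

    Header layout (big-endian): version, type, requestId,
    contentLength, paddingLength, reserved byte.
    """
    padding = (8 - len(content) % 8) % 8  # pad body to an 8-byte multiple
    header = struct.pack(">BBHHBx", 1, FCGI_STDOUT, req_id,
                         len(content), padding)
    return header + content + b"\x00" * padding

def read_record(buf):
    """Parse one record off the front of buf.

    Returns (type, request id, content, remaining bytes) -- the
    header tells us exactly where this record ends, which is what
    lets a proxy flush it without waiting for more data.
    """
    version, rtype, req_id, clen, plen = struct.unpack(">BBHHBx", buf[:8])
    content = buf[8:8 + clen]
    rest = buf[8 + clen + plen:]
    return rtype, req_id, content, rest
```

A Comet backend would emit many small records (one per event) to get immediate delivery, while a conventional backend would fill records close to the 64 KB limit for throughput.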



More information about the nginx mailing list