Multiplexing FastCGI module

Maxim Dounin mdounin at
Mon Dec 14 04:12:34 MSK 2009


On Sun, Dec 13, 2009 at 06:31:06PM +0100, Rasmus Andersson wrote:

> Hi.
> I've written a multiplexing (concurrent requests through a single
> backend) version of the 0.8 fastcgi module along with a FastCGI server
> reference implementation:
> Please let me know what you think and if you're interested in merging
> in this code upstream.

[Just a side note: there is a lot of whitespace damage and there are 
style violations in your code.  It's believed that following the 
original nginx style is a good idea: it makes reading diffs easier 
and improves readability.]

I wasn't able to find enough code there to keep backend 
connections alive and share them between requests.  It looks like 
the code in question just assigns unique ids to requests sent to 
the fastcgi backend, while each request is still sent over a 
separate connection.
On the other hand, it sets the FCGI_KEEP_CONN flag and thus breaks 
things, as nginx relies on the fastcgi application to close the 
connection.

So basically it's not something useful right now.  And it breaks 
3 out of 4 fastcgi subtests in the test suite.

As to the idea in general - I'm not really sure that request 
multiplexing is a great feature of the fastcgi protocol.  It 
complicates things and shouldn't be noticeably faster than just 
using multiple connections to the same fastcgi app (assuming the 
fastcgi app is able to handle connections asynchronously).  It 
would be interesting to compare, though.

Maxim Dounin

p.s. As keeping fastcgi connections alive appears to be a 
prerequisite for fastcgi multiplexing, here are some notes:

Keeping fastcgi connections alive isn't really hard, but it 
requires a lot more than setting the KEEP_CONN flag in the fastcgi 
request.  Most notably, the ngx_event_pipe code and the upstream 
module code should be modified to make sure buffering is somewhat 
limited and does not rely on connection close as the end-of-response 
signal.  I've posted some preview patches a while ago which make 
the fastcgi module able to keep connections alive (with the 
ngx_http_upstream_keepalive module); you may want to take a look 
if you are going to continue your multiplexing work.
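A configuration sketch of how the pieces fit together, assuming the 
preview patches are applied and the ngx_http_upstream_keepalive 
module is compiled in (stock 0.8 fastcgi cannot do this):

```nginx
# cache idle connections to the fastcgi backend per worker
upstream backend {
    server 127.0.0.1:9000;
    keepalive 8;            # number of idle connections to keep open
}

server {
    listen 80;

    location / {
        fastcgi_pass backend;
        include fastcgi_params;
    }
}
```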
