Multiplexing FastCGI module

Rasmus Andersson rasmus at notion.se
Mon Dec 14 14:42:41 MSK 2009


On Mon, Dec 14, 2009 at 02:12, Maxim Dounin <mdounin at mdounin.ru> wrote:
> Hello!
>
> On Sun, Dec 13, 2009 at 06:31:06PM +0100, Rasmus Andersson wrote:
>
>> Hi.
>>
>> I've written a multiplexing (concurrent requests through a single
>> backend) version of the 0.8 fastcgi module along with a FastCGI server
>> reference implementation:
>>
>> http://github.com/rsms/afcgi
>>
>> Please let me know what you think and if you're interested in merging
>> in this code upstream.
>
> [Just a side note: there is a lot of whitespace damage and there
> are style violations in your code.  It's believed that following
> the original nginx style is a good idea; it makes reading diffs
> easier and improves karma.]

I'm well aware of this. As you might have noticed, the code is in its
early stages. Cleanup, style fixes, etc. will be done once I've got
the general ideas straight.

>
> I wasn't able to find enough code there to keep backend
> connections alive and share them between requests.  Looks like the
> code in question just assigns unique ids to requests sent to the
> fastcgi backend - while each request is still sent in a separate
> connection...

I'm far from experienced with the nginx codebase, so I'm largely
employing trial and error. In my tests nginx does keep the upstream
peer connection open and shares it across HTTP requests. In this
screenshot http://hunch.se/s/cx/e1rg8i8bkgwks.png the upper right
terminal shows the nginx log in debug mode -- no "disconnect" or
"connect" messages (which nginx does log when connecting to or
disconnecting from upstream peers). On the left hand side of the
screen you see two FastCGI servers running. Nginx evenly distributes
incoming requests to the two backends over two persistent upstream
peer connections (the FastCGI server instances log "connect"/
"disconnect" when nginx creates or drops a connection).

I just re-ran the tests to confirm I wasn't too tired yesterday.
Here's the log from one FastCGI server:

fcgi client 127.0.0.1 connected on fd 5
...
app_handle_beginrequest 0x100530
...
app_handle_beginrequest 0x100530
...

Multiple requests were received and handled over a period of time on
one persistent connection from nginx.

But I guess I'm missing something?
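
For reference, this is what makes multiplexing possible at the wire
level: every FastCGI record starts with an 8-byte header carrying a
16-bit request id, so records belonging to different requests can
interleave on a single connection. The struct below is taken from the
FastCGI 1.0 spec:

    typedef struct {
        unsigned char version;
        unsigned char type;            /* FCGI_BEGIN_REQUEST, FCGI_STDIN, ... */
        unsigned char requestIdB1;     /* request id, high byte */
        unsigned char requestIdB0;     /* request id, low byte */
        unsigned char contentLengthB1;
        unsigned char contentLengthB0;
        unsigned char paddingLength;
        unsigned char reserved;
    } FCGI_Header;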

>
> On the other hand, it sets the FASTCGI_KEEP_CONN flag and thus
> breaks things, as nginx relies on the fastcgi application to close
> the connection.

The FastCGI server will still be responsible for closing the connection.
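
At the wire level the flag lives in the begin-request body (again per
the FastCGI 1.0 spec). Here's a minimal sketch of setting it --
fill_begin_request() is a made-up name for illustration, not a
function from nginx or afcgi:

    #include <string.h>

    #define FCGI_RESPONDER 1
    #define FCGI_KEEP_CONN 1   /* flags bit: app should not close the connection */

    typedef struct {
        unsigned char roleB1;
        unsigned char roleB0;
        unsigned char flags;
        unsigned char reserved[5];
    } FCGI_BeginRequestBody;

    /* illustrative helper, not from nginx or afcgi */
    static void fill_begin_request(FCGI_BeginRequestBody *b)
    {
        memset(b, 0, sizeof(*b));
        b->roleB0 = FCGI_RESPONDER;
        b->flags  = FCGI_KEEP_CONN;  /* keep the connection open after this request */
    }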

>
> So basically it's not something useful right now.  And it breaks
> 3 out of 4 fastcgi subtests in the test suite
> (http://mdounin.ru/hg/nginx-tests/).

I wasn't aware of those tests. In what way does my code break them?
Would you please help me solve the issues in my code that break these
tests?

>
> As to the idea in general - I'm not really sure that request
> multiplexing is a great feature of the fastcgi protocol.  It
> complicates things and shouldn't be noticeably faster than just
> using multiple connections to the same fastcgi app (assuming the
> fastcgi app is able to handle connections asynchronously).  It
> would be interesting to compare though.

Multiplexing in FastCGI is a HUGE DEAL. Imagine you run a website
(there are quite a few of those around) and you want to do something
fancy (for instance run some Python or Ruby app). Now, let's say your
website becomes very popular and a lot of your visitors have slow
connections. You also do things in your app which take some time (be
it long-polling for a chat message or waiting for a slow I/O
operation).

This is a very common scenario nowadays (and the reason for nginx in
the first place -- the c10k problem), and it is very tough to satisfy
with non-multiplexing fastcgi setups.

Instead of this: http://hunch.se/s/ag/3h5dmvibcwk8k.png
You can have this: http://hunch.se/s/6b/jddgr2qk8wgg0.png

This saves gigabytes of memory and tens of thousands of file
descriptors. Today, the only alternative is to build purpose-made
nginx modules or whole HTTP servers running separately from nginx.
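
To make the point concrete, here is a minimal sketch (illustrative
only -- not the afcgi code, and all names are made up) of how a
multiplexing backend can demultiplex records: per-request state is
looked up by the request id from the record header, so one connection
and one file descriptor can carry many in-flight requests:

    #include <stdint.h>

    #define FCGI_BEGIN_REQUEST 1
    #define FCGI_ABORT_REQUEST 2
    #define FCGI_PARAMS        4
    #define FCGI_STDIN         5

    typedef struct {
        int active;
        /* per-request state: params, body buffers, app context, ... */
    } request_slot;

    static request_slot slots[65536];   /* request ids are 16 bits */

    /* h points at the 8-byte FCGI_Header shown earlier */
    static void on_record(const unsigned char *h, const unsigned char *content)
    {
        uint16_t id = (uint16_t) ((h[2] << 8) | h[3]);  /* requestIdB1/B0 */
        request_slot *r = &slots[id];

        switch (h[1]) {                 /* record type */
        case FCGI_BEGIN_REQUEST:
            r->active = 1;              /* a new request begins on the shared connection */
            break;
        case FCGI_PARAMS:
        case FCGI_STDIN:
            /* append `content` to request `id`; records belonging to
               different requests interleave freely on the same fd */
            break;
        case FCGI_ABORT_REQUEST:
            r->active = 0;
            break;
        }
        (void) content;
    }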

I say this with real-world experience. It would be awesome to
implement the complete FastCGI 1.0 spec in nginx and be the first web
server to support long-lived and slow connections with rich web apps!

>
> Maxim Dounin
>
> p.s. As keeping fastcgi connections alive appears to be a
> prerequisite for fastcgi multiplexing, here are some notes:
>
> Keeping fastcgi connections alive isn't really hard, but it requires
> a lot more than setting the KEEP_CONN flag in the fastcgi request.
> Most notably, the ngx_event_pipe code and the upstream module code
> should be modified to make sure buffering is somewhat limited and
> does not rely on connection close as the end signal.  A while ago I
> posted some preview patches which make the fastcgi module able to
> keep connections alive (with the ngx_http_upstream_keepalive
> module); you may want to take a look if you are going to continue
> your multiplexing work.

Thanks. Do you know where I can find those? Any hint as to where I
should start googling to find them in the archives?
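
For anyone following along: with those patches plus the
ngx_http_upstream_keepalive module, I'd guess the configuration would
look something like the sketch below -- treat the exact directives as
an assumption on my part, not a documented setup:

    upstream fastcgi_backend {
        server 127.0.0.1:9000;
        keepalive 8;    # from ngx_http_upstream_keepalive: cache up to 8 idle connections
    }

    server {
        location / {
            include fastcgi_params;
            fastcgi_pass fastcgi_backend;
        }
    }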




-- 
Rasmus Andersson


