Multiplexing FastCGI module

Rasmus Andersson rasmus at notion.se
Tue Dec 15 19:43:22 MSK 2009


2009/12/14 Maxim Dounin <mdounin at mdounin.ru>:
<snip>
>> > On the other hand, it sets the FASTCGI_KEEP_CONN flag and thus breaks
>> > things, as nginx relies on the fastcgi application to close the connection.
>>
>> The FastCGI server will still be responsible for closing the connection.
>
> No.  Once nginx sets the FASTCGI_KEEP_CONN flag, it takes over
> this responsibility, according to the fastcgi spec.

Correct. My bad: when setting KEEP_CONN the app is relieved of the
close() responsibility, but at the same time KEEP_CONN implies the
connection is _not_ closed (it is kept open).
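
For reference, the flag in question is bit 0 of the flags field in the
begin-request body. Straight from the FastCGI 1.0 spec (not my code):

    #define FCGI_KEEP_CONN  1

    typedef struct {
        unsigned char roleB1;
        unsigned char roleB0;
        unsigned char flags;       /* bit 0: FCGI_KEEP_CONN */
        unsigned char reserved[5];
    } FCGI_BeginRequestBody;

If (flags & FCGI_KEEP_CONN) == 0 the application closes the connection
after responding; otherwise the web server owns the connection's
lifetime.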

>
> But it looks like you missed the actual problem: nginx needs the
> fastcgi application to close the connection after it has finished
> sending the response.  It uses connection close as a flush signal.
> Without this, requests will hang.

Then why does it work flawlessly for me? Are we maybe testing this
with different versions of nginx? My module is forked from the fastcgi
module in 0.8.29 (and I'm testing on OS X 10.6 with that same version).

Sounds "broken" to rely on close as an implicit buffer flush signal.

In ngx_http_fastcgi_input_filter, upstream_done is set to 1 on
NGX_HTTP_FASTCGI_END_REQUEST and any buffers are recycled. But looking
at ngx_event_pipe.c, it doesn't seem like those buffers are flushed at
that point.
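
For context, this is roughly the code path I mean (paraphrased from
ngx_http_fastcgi_module.c in 0.8.x, from memory -- not verbatim):

    /* in ngx_http_fastcgi_input_filter(), after the record header
       has been parsed into f->type: */
    if (f->type == NGX_HTTP_FASTCGI_END_REQUEST) {
        /* the upstream is marked as done, but nothing here forces
           the event pipe to flush its pending buffers downstream */
        p->upstream_done = 1;
        break;
    }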

I found your patches (linked further down in your message), which seem
to include features similar to mine. Did this work ever get finished,
or at least reach a functional state?

Or are you saying that nginx, at a very low level, does not support
persistent connections to upstream peers?
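
For what it's worth, with your keepalive module (and the fastcgi
patches you mention below) I would expect the configuration to look
roughly like this -- a sketch from memory of the module's README, so
the exact directive syntax may be off:

    upstream backend {
        server 127.0.0.1:9000;
        # provided by ngx_http_upstream_keepalive: keep up to 8
        # idle connections to the backend open between requests
        keepalive 8;
    }

    server {
        location / {
            include       fastcgi_params;
            fastcgi_pass  backend;
        }
    }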

<snip>
>
> The difference between fastcgi multiplexing and multiple tcp
> connections to the same fastcgi server isn't that huge.  Note
> well: the same fastcgi server, not another one.
>
> You may save two file descriptors per request (one in nginx, one
> in the fastcgi app), and the associated tcp buffers.  But all this
> isn't likely to be noticeable given the amount of resources you've
> already spent on the request in question.

This solution would require an nginx configuration which on startup
creates M connections to the backend(s), where M is the maximum number
of concurrent requests you will be able to handle. I would like to set
this to 10 000 (or even higher for some applications), but that just
seems like a REALLY ugly solution. Also, at >10K the extra resources
used are not negligible -- for each fcgi connection between nginx and
the app there will be:

• Buffers (as you mentioned)
• Metadata
• FD pair

So you would basically end up with a configuration with a fixed upper
limit on concurrent requests, as well as a fairly high baseline of
system resources used (see the rough numbers below).
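
Back-of-envelope, assuming ~8 KB of kernel buffer per socket per
direction (a guess; actual defaults vary by OS and tuning):

    10 000 connections * 2 FDs              = 20 000 descriptors
    10 000 connections * 2 FDs * 2 * 8 KB   = ~320 MB of TCP buffers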

I would like to have a "pretty" and D.R.Y. setup where I run just one
FastCGI server process per CPU, connect each of those servers to the
HTTP front-end (nginx), and pass data and virtual requests around.
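
The FastCGI framing already supports this: every record carries a
16-bit request id, so multiple in-flight requests can be interleaved
over a single connection. Again straight from the spec:

    typedef struct {
        unsigned char version;
        unsigned char type;
        unsigned char requestIdB1;     /* the 16-bit request id is   */
        unsigned char requestIdB0;     /* what keeps multiplexed     */
        unsigned char contentLengthB1; /* requests apart on a single */
        unsigned char contentLengthB0; /* connection                 */
        unsigned char paddingLength;
        unsigned char reserved;
    } FCGI_Header;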

>
> The only use-case which appears to be somewhat valid is
> long-polling apps which consume almost no resources.  But even
> here you aren't likely to save more than half of the resources.

That is an increasingly common case today, often solved by running a
separate server in parallel with the regular HTTP server -- a solution
that makes maintenance, development and hardware more expensive.

>
>> > p.s. As keeping fastcgi connections alive appears to be a
>> > prerequisite for fastcgi multiplexing, here are some notes:
>> >
>> > Keeping fastcgi connections alive isn't really hard, but requires
>> > a lot more than setting the KEEP_CONN flag in the fastcgi request.
>> > Most notably, the ngx_event_pipe code and the upstream module code
>> > should be modified to make sure buffering is somewhat limited and
>> > does not rely on connection close as an end signal.  I posted some
>> > preview patches a while ago which make the fastcgi module able to
>> > keep connections alive (with the ngx_http_upstream_keepalive
>> > module); you may want to take a look if you are going to continue
>> > your multiplexing work.
>>
>> Thanks. Do you know where I can find those? Any hint as to where I
>> should start googling to find them in the archives?
>
> Here is the initial post in the Russian mailing list:
>
> http://nginx.org/pipermail/nginx-ru/2009-April/024101.html
>
> Here is the update for the last patch:
>
> http://nginx.org/pipermail/nginx-ru/2009-April/024379.html
>
> Not sure the patches still apply cleanly; I haven't touched this
> for a while.

Thanks.

-- 
Rasmus Andersson


