Denis F. Latypoff
denis at gostats.ru
Mon Jan 14 20:14:59 MSK 2008
Monday, January 14, 2008, 10:54:53 PM, you wrote:
> Igor Clark wrote:
>> On 14 Jan 2008, at 13:46, Manlio Perillo wrote:
>>> Igor Clark wrote:
>>>> Hi Igor and everyone,
>>>> I'm toying with some ideas for a multithreaded FastCGI application,
>>>> intended to work with nginx.
>>>> Seems that FCGI request/connection multiplexing would be an extremely
>>>> valuable feature.
>>>> I can't find any reference to this on the wiki, in the mailing list
>>>> or on Google - I assume this is because nginx, like Apache, doesn't
>>>> support FastCGI multiplexing. Is this correct? If so, are there any
>>>> plans to implement it at some point in the future?
>>> As far as I know, the multiplexing support in FastCGI is broken "by
>>> design".
>>> Read this response that one of the authors of Twisted gave me about
>>> FastCGI (among other questions):
>> Thanks Manlio, that's very interesting. Lack of flow control in the
>> protocol is obviously an issue for multiplexing; now that it's been
>> pointed out, it seems bizarre that it should have been missed out. One
>> wonders if the intention was for the application to send an HTTP 503
>> over the FCGI connection in the event of overloading?
> Maybe they just thought that overflow is not possible, who knows.
> TCP, as an example, has some form of flow control (but FastCGI usually
> runs over a Unix domain socket connection).
>> I guess this would
>> require a web server module to back off from overloaded application
>> instances based on their HTTP status code - which seems like trying to
>> patch up the shortcomings of the transport in the application.
>> It's a shame; it seemed that removing all the TCP overhead between the
>> web server and the application server would be a good thing, but perhaps
>> FCGI just isn't the way.
> But with FCGI you can just execute one request at a time, using a
> persistent connection.
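That pattern can be sketched in a few lines — a purely illustrative class (the names and the `send` callable are my assumptions, not any real FastCGI library API) that serializes requests over one persistent backend connection:

```python
import threading

class SerializedBackend:
    """Hypothetical sketch: one persistent backend connection that
    handles exactly one request at a time, enforced with a lock."""

    def __init__(self, send):
        self._lock = threading.Lock()
        self._send = send  # callable standing in for I/O on the persistent socket

    def request(self, payload):
        # Only one caller at a time uses the connection; the others
        # wait on the lock instead of opening new connections.
        with self._lock:
            return self._send(payload)
```

This avoids multiplexing entirely: concurrency comes from having a pool of such connections, not from interleaving requests on one.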
> The problems, with nginx, are:
> 1) the upstream module does not support persistent connections;
> a new connection is created for every request.
> In fact, the http proxy module only speaks HTTP 1.0 to the backend.
> 2) nginx does not support any form of connection queueing to the
> upstream.
> This means that if nginx is handling 500 connections, it will
> make 500 concurrent connections to the upstream server, and the
> upstream server (usually "a toy") will likely not be able to handle
> this.
> Fixing this will require a redesign of the upstream module.
Create a special layer (say, a multiplexing proxy) which holds all 500
concurrent connections and passes on only as many connections as the fcgi
backlog allows.
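Roughly like this — a sketch under my own assumptions (the class name, the callable backend, and the default limit are all made up for illustration), where a bounded semaphore plays the role of the fcgi backlog:

```python
import threading

class MultiplexingProxy:
    """Sketch: hold many client connections, but let only `backlog`
    of them talk to the FastCGI backend at any one moment."""

    def __init__(self, backend, backlog=8):
        self._backend = backend  # callable standing in for the FCGI upstream
        self._slots = threading.BoundedSemaphore(backlog)

    def forward(self, request):
        # Once `backlog` requests are in flight, further callers simply
        # block here -- the client connection waits in the proxy instead
        # of overwhelming the backend.
        with self._slots:
            return self._backend(request)
```

The 500 client connections stay parked in the proxy layer; the backend only ever sees `backlog` of them at once.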
Or create a second fastcgi library which is non-blocking and threaded: the
main thread accepts and dispatches requests, while all the other threads run
the application (similar to mod_wsgi, isn't it?)
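Sketched very roughly (hypothetical names, a plain list standing in for accepted FCGI requests, no real protocol parsing), the "main thread accepts, worker threads run the app" shape looks like:

```python
import queue
import threading

def run_threaded_server(app, requests, workers=4):
    """Sketch: the main thread feeds accepted requests to a queue,
    worker threads run the application callable on each one.
    Returns the collected responses (order not guaranteed)."""
    q, results = queue.Queue(), []
    lock = threading.Lock()

    def worker():
        while True:
            req = q.get()
            if req is None:        # sentinel: shut this worker down
                return
            resp = app(req)        # the app runs in a worker thread
            with lock:
                results.append(resp)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for req in requests:           # the "main thread accepts" loop
        q.put(req)
    for _ in threads:              # one sentinel per worker
        q.put(None)
    for t in threads:
        t.join()
    return results
```

This is essentially mod_wsgi's daemon-mode shape: the accept loop never blocks on the application, and concurrency is bounded by the worker count rather than by multiplexing the protocol.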
>> I'm still just researching the area at the
>> moment though so any further thoughts or experiences would be very welcome.
>> Is there any plan to implement HTTP/1.1 & keepalive connections in
>> nginx's conversations with upstream servers? Can't see anything in the
>> wiki or feature request list.
> Igor Sysoev has expressed his intentions to add support for persistent
> upstream connections, try searching the mailing list archives.
>> Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749
>> 5355 // www.pokelondon.com
Denis mailto:denis at gostats.ru