FastCGI multiplexing?

Manlio Perillo manlio_perillo at libero.it
Mon Jan 14 20:52:13 MSK 2008


Denis F. Latypoff wrote:
> Hello Manlio,
> 

Hi.

> [...] 
>> But with FCGI you can only execute one request at a time, using a
>> persistent connection.
> 
>> The problems with nginx are:
>> 1) the upstream module does not support persistent connections.
>>     A new connection is created for every request.
>>     In fact, the http proxy only supports HTTP 1.0 with the backend.
>> 2) nginx does not support any form of connection queue with upstream
>>     servers.
>>     This means that if nginx is handling 500 connections, it will
>>     make 500 concurrent connections to the upstream server, and the
>>     upstream server (usually "a toy") will likely not be able to
>>     handle this.
> 
> 
>> Fixing this will require a redesign of the upstream module.
> 
> OR
> 
> create a special layer (say, a multiplexing proxy) which holds all 500
> concurrent connections and passes on only as many connections as the
> fcgi backlog allows.
> 

Not sure if this is possible without having to modify the fastcgi module.
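
If I understand the idea, such a layer would be a small standalone relay
sitting between nginx and the backend: it accepts all the connections
coming from nginx, but lets only a limited number of them talk to the
backend at a time. A rough sketch in Python follows (it is not
FastCGI-aware, it only relays bytes; the addresses, the limit and the
thread-per-connection handling are just placeholders):

# Rough sketch: accept any number of connections from nginx, but let only
# MAX_BACKEND of them reach the backend at once; the rest wait on the
# semaphore until a slot frees up.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 9001)   # nginx would fastcgi_pass to this address
BACKEND_ADDR = ("127.0.0.1", 9000)  # the real FastCGI application
MAX_BACKEND = 16                    # roughly what the backend backlog allows

backend_slots = threading.BoundedSemaphore(MAX_BACKEND)

def pump(src, dst):
    # Copy bytes in one direction until the source side closes.
    while True:
        data = src.recv(8192)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def handle(client):
    with backend_slots:              # block here while the backend is full
        backend = socket.create_connection(BACKEND_ADDR)
        t = threading.Thread(target=pump, args=(backend, client))
        t.start()
        pump(client, backend)
        t.join()
        backend.close()
    client.close()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen(512)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,)).start()

if __name__ == "__main__":
    main()

nginx would then point fastcgi_pass at 127.0.0.1:9001 instead of the
real backend.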


> OR
> 
> create a second fastcgi library which is non-blocking and threaded: the
> main process accepts and dispatches requests, all the other threads run
> the app


The problem is the same: threads are not cheap, so creating 500 threads 
is likely to destabilize the server and the system (nowadays good 
multithreaded servers make use of thread pools).

Moreover, some applications are not thread-safe.


With threads you usually just save memory, but you add overhead caused
by synchronization.
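
For comparison, the thread pool pattern mentioned above looks roughly
like this (the pool size and the echo handler are only placeholders,
this is not a real FastCGI server):

# Rough sketch of the thread-pool pattern: a fixed number of worker
# threads drain a queue of accepted connections, so a burst of 500
# clients queues up instead of creating 500 threads.
import socket
import threading
import queue

POOL_SIZE = 32
jobs = queue.Queue()

def handle(conn):
    # Placeholder application logic: echo one read back to the client.
    data = conn.recv(8192)
    if data:
        conn.sendall(data)
    conn.close()

def worker():
    while True:
        handle(jobs.get())

def serve(addr=("127.0.0.1", 9000)):
    for _ in range(POOL_SIZE):
        threading.Thread(target=worker, daemon=True).start()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(addr)
    srv.listen(512)
    while True:
        conn, _ = srv.accept()
        jobs.put(conn)              # queued until a pool thread is free

if __name__ == "__main__":
    serve()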



> (similar to
> mod_wsgi, isn't it?)
> 

No.
The WSGI module for nginx is embedded in nginx, and nginx just uses a
fixed number of worker processes (and all the processes accept new
requests on the same inherited socket).
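
Stripped of all the nginx details, the shape is the classic pre-fork
pattern. This is only a generic sketch, not the actual module code; the
worker count and the canned response are placeholders:

# Classic pre-fork sketch (POSIX only): the parent opens one listening
# socket, forks a fixed number of workers, and every worker blocks in
# accept() on the same inherited socket.
import os
import socket

WORKERS = 4

def worker(srv):
    while True:
        conn, _ = srv.accept()      # all workers accept on the same socket
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(128)
    for _ in range(WORKERS):
        if os.fork() == 0:          # child inherits the listening socket
            worker(srv)             # never returns
            os._exit(0)
    for _ in range(WORKERS):
        os.wait()                   # parent just waits for the children

if __name__ == "__main__":
    main()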


 > [...]



Manlio Perillo




