Interest in extending FastCGI / SCGI support to allow TLS encrypted connections to back end?

Peter Vereshagin peter at
Mon Jan 21 14:20:09 UTC 2013


2013/01/21 11:15:51 +0000 Some Developer <someukdeveloper at> => To nginx at :
SD> On 21/01/13 07:31, Peter Vereshagin wrote:
SD> > 2013/01/21 07:07:46 +0000 Some Developer <someukdeveloper at> => To nginx at :
SD> > SD> On 20/01/13 15:10, Peter Vereshagin wrote:
SD> > SD> > 2013/01/18 17:45:13 +0000 Some Developer <someukdeveloper at> => To nginx at :
SD> > SD> > What's messy with your 'stunnel'? Why shouldn't you use 'nginx' on the
SD> > SD> > backend side with https as an uplink protocol? Your 'fastcgi client' nginx
SD> > SD> > would then use the 'nginx on the backend side' as an https upstream.
SD> > SD>
SD> > SD> I'm not sure I completely understand your point here. Are you suggesting
SD> > SD> that you just run a simple Nginx server on the application so that the
SD> > SD> front end Nginx server can just pass the requests to the Nginx on the
SD> > SD> application server via HTTPS and then the local Nginx server just passes
SD> > SD> the requests on to the application server on
SD> >
SD> > Short answer: yes.
SD> >
SD> > or local socket or DMZ neighbor (the whatever).
SD> >
SD> > What's wrong with stunnel then?
SD> Nothing is wrong with stunnel other than it adds extra complexity to 
SD> your deployment. It would be nice if Nginx could handle this on its own. 
SD> It clearly already can due to its support of HTTPS on the browser side 
SD> so I can't imagine it would be very hard to add support on the FastCGI 
SD> or SCGI side.

That's fine only for the minority of cases where the backend has just one
application (fcgi or scgi) server per host.
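For that single-application case, the two-tier setup discussed above can be
sketched roughly like this (hostnames, certificate paths, and the socket name
are invented for illustration):

```nginx
# Frontend host: terminate the client connection and relay to the
# backend host over HTTPS instead of plain FastCGI.
server {
    listen 443 ssl;
    server_name www.example.com;               # hypothetical name
    ssl_certificate     /etc/nginx/front.crt;
    ssl_certificate_key /etc/nginx/front.key;

    location / {
        proxy_pass https://backend.internal.example.com;  # encrypted hop
    }
}

# Backend host: a local nginx terminates TLS and speaks plain FastCGI
# to the application server over a local socket.
server {
    listen 443 ssl;
    server_name backend.internal.example.com;
    ssl_certificate     /etc/nginx/back.crt;
    ssl_certificate_key /etc/nginx/back.key;

    location / {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/app.sock;   # local, unencrypted hop
    }
}
```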

Back when one application server handled several applications, this made more
sense. But that no longer seems to be the trend in web application
architecture.

Adding another application to the typical nginx user's backend means adding
another application server, and therefore more ports/sockets to listen on.

More ports listening on the outer network means more complications, e.g.
firewall rules and encryption for each of them, on both the frontend and the
backend.
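To make that concrete: with, say, three application servers on one backend
host, a stunnel-based setup needs a separate TLS wrapper (and a separate
firewall opening) per port. A minimal sketch, with invented ports and paths:

```ini
; stunnel.conf on the backend host: every application server port needs
; its own service definition, exposed port, and firewall rule.
cert = /etc/stunnel/backend.pem

[app1]
accept  = 0.0.0.0:9443
connect = 127.0.0.1:9001

[app2]
accept  = 0.0.0.0:9444
connect = 127.0.0.1:9002

[app3]
accept  = 0.0.0.0:9445
connect = 127.0.0.1:9003
```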

At the same time, being backed by nginx (or backing nginx), those daemons
cope better with outer-network instabilities: e.g. they avoid the 'slow
client problem' that can happen between the frontend and backend hosts and
that keeps the application servers from being used to their full potential.
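This is where nginx's response buffering helps: the frontend nginx can absorb
the whole FastCGI response so the application server is released as soon as it
finishes, even if the client drains the response slowly. A sketch using the
standard fastcgi buffering directives (the backend address is invented):

```nginx
# On the frontend: buffer the FastCGI response in memory so a slow
# client ties up nginx, not the application server.
location / {
    include fastcgi_params;
    fastcgi_pass backend.internal.example.com:9000;  # hypothetical backend
    fastcgi_buffers 16 16k;    # in-memory buffers for the response body
    fastcgi_buffer_size 32k;   # buffer for the first part of the response
}
```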

I believe it's not hard to implement encryption in the nginx fcgi/scgi
client; I just think it isn't forward-looking and could slow the growth in
the number of installations, on backends particularly. ;-)
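For clarity, what is being proposed might look something like the following.
This is purely hypothetical syntax: nginx does not support it, and the
"fcgis://" scheme is invented here by analogy with https:// upstreams, only to
show the intent of a TLS-encrypted FastCGI connection to a remote backend.

```nginx
# Hypothetical, unsupported sketch of the proposed feature.
location / {
    include fastcgi_params;
    # imagined scheme: FastCGI over TLS, no stunnel or backend nginx needed
    fastcgi_pass fcgis://backend.internal.example.com:9000;
}
```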

Thank you.

Peter Vereshagin <peter at> (pgp: 1754B9C1)

More information about the nginx mailing list