keepalive connection to fastcgi backend hangs
Nicolas.Franck at UGent.be
Mon Dec 20 20:53:48 UTC 2021
I kind of agree: keepalive connections are not strictly necessary in this scenario.
But there is a reason why I started looking into this: I started noticing a lot
of closed tcp connections with status TIME_WAIT. That happens when you
close the connection on your end: the os keeps the socket around for a
while (typically 60 seconds on Linux), to make sure that stray packets from
the old connection die off, and that the final "ACK" of the close handshake
can be retransmitted if the other end never got it.
During that time the client port for that connection cannot be reused:
$ netstat -an | grep :5000 | grep -c TIME_WAIT
(if the fcgi app is listening on port 5000)
If requests come in quick succession, and new connections are opened
faster than the os releases the TIME_WAIT ones, then you'll run out
of ephemeral client ports, and see seemingly unrelated errors like "dns lookup failure".
This happens even when the response of the upstream server is fast, because
the TIME_WAIT timeout dwarfs the request time itself.
Reuse of tcp connections is one way to tackle this problem.
Playing around with sysctl is another:
$ sysctl -w net.ipv4.tcp_tw_recycle=1
(though note that tcp_tw_recycle is known to break clients behind NAT,
and it was removed entirely in Linux 4.12; net.ipv4.tcp_tw_reuse is the
safer knob)
But I am not well versed in this, and I do not know a lot
about the possible side effects.
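For the connection-reuse route, a minimal nginx sketch (assuming the FastCGI app listens on 127.0.0.1:5000 as in the netstat example above; the upstream name is made up):

```nginx
upstream fcgi_backend {
    server 127.0.0.1:5000;
    keepalive 8;    # idle connections cached per nginx worker process
}

server {
    listen 80;
    location / {
        include fastcgi_params;
        fastcgi_pass fcgi_backend;
        # without this, nginx closes the connection after every request
        fastcgi_keep_conn on;
    }
}
```

Note that fastcgi_keep_conn on is required in addition to the keepalive directive, otherwise nginx tells the backend to close the connection after each request.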
On 20 Dec 2021, at 20:35, Maxim Dounin <mdounin at mdounin.ru> wrote:
On Mon, Dec 20, 2021 at 04:00:59PM +0000, Nicolas Franck wrote:
It looks like there is nothing managing the incoming connections
for the fcgi workers. Every fcgi worker needs to do this on its own, right?
So if there are more clients (i.e. nginx workers) than fcgi workers,
then it becomes unresponsive after a few requests, because all
the fcgi workers are holding on to a connection to an nginx worker,
and there seems to be no queue handling this.
Is this correct? Just guessing here
More or less. The FastCGI code in your example implies very
simple connection management, based on the process-per-connection
model. As long as all FastCGI processes are busy, all additional
connections will be queued in the listen queue of the FastCGI
listening socket (till a connection is closed). Certainly that's
not the only model possible with FastCGI, but the easiest to use.
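The process-per-connection model described above can be sketched with Python's stdlib (not real FastCGI, just an illustration; the class and handler names are made up). While every forked worker is busy with a connection, additional clients wait in the kernel listen queue, sized by the listen() backlog:

```python
import socketserver

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # one forked process serves this connection until it is closed
        line = self.rfile.readline()
        self.wfile.write(b"echo: " + line)

class ForkingEchoServer(socketserver.ForkingTCPServer):
    allow_reuse_address = True
    # the listen() backlog: where extra connections queue while all
    # worker processes are occupied
    request_queue_size = 128

# to run it:
#   srv = ForkingEchoServer(("127.0.0.1", 5000), Handler)
#   srv.serve_forever()
```

A keepalive connection from nginx would pin one of these forked processes for its whole lifetime, which is exactly why the model combines poorly with keepalive.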
The process-per-connection model doesn't combine well with
keepalive connections, since each keepalive connection occupies
the whole process. And you have to create enough processes to
handle all keepalive connections you want to be able to keep
alive. In case of nginx as a client, this means at least (<number
of connections in the keepalive directive> * <number of worker
processes>) FastCGI processes. For example, "keepalive 8;" with 4
nginx worker processes already calls for at least 32 of them.
Alternatively, you can avoid using keepalive connections. These
are not really needed for local upstream servers, since connection
establishment costs are usually negligible compared to the total
request processing costs. And this is what nginx does by default.
nginx mailing list
nginx at nginx.org