keepalive connection to fastcgi backend hangs
Nicolas Franck
Nicolas.Franck at UGent.be
Mon Dec 20 20:53:48 UTC 2021
I kind of agree: keepalive connections are not strictly necessary in this scenario.
But there is a reason why I started looking into this: I started noticing a lot
of closed tcp connections in the TIME_WAIT state. That happens when you
close the connection on your end: the OS keeps the socket around for a while
(on Linux typically 60 seconds) to make sure the final part of the tcp shutdown
reached the other end. During that time the client port of that connection
cannot be reused for a new connection to the same destination:
$ netstat -an | grep :5000
(if the fcgi app is listening on port 5000)
If you receive a lot of requests in quick succession, and new connections pile up
faster than the OS frees the ones in TIME_WAIT, then you'll run out of client
ports and see seemingly unrelated errors like "dns lookup failure".
This happens even when the upstream server responds quickly, because it still
takes a "long" time before the TIME_WAIT sockets are freed.
Reuse of tcp connections is one way to tackle this problem.
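Concretely, the kind of configuration I have in mind is roughly this (just a sketch;
the upstream name "fcgi_backend" and the port are examples):

upstream fcgi_backend {
    server 127.0.0.1:5000;
    keepalive 8;              # keep up to 8 idle connections per nginx worker
}

# inside the server block:
location / {
    include fastcgi_params;
    fastcgi_keep_conn on;     # tell the fcgi app to keep the connection open
    fastcgi_pass fcgi_backend;
}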
Playing around with sysctl is another:
$ sysctl -w net.ipv4.tcp_tw_recycle=1
But I am not well versed in this, and I do not know a lot
about the possible side effects.
cf. https://web3us.com/drupal6/how-guides/what-timewait-state
cf. https://onlinehelp.opswat.com/centralmgmt/What_you_need_to_do_if_you_see_too_many_TIME_WAIT_sockets.html
On 20 Dec 2021, at 20:35, Maxim Dounin <mdounin at mdounin.ru> wrote:
Hello!
On Mon, Dec 20, 2021 at 04:00:59PM +0000, Nicolas Franck wrote:
Interesting!
It looks like there is nothing managing the incoming connections
for the fcgi workers. Every fcgi worker needs to do this on its own, right?
So if there are more clients (i.e. nginx workers) than fcgi workers,
then it becomes unresponsive after a few requests, because all
the fcgi workers are holding on to a connection to an nginx worker,
and there seems to be no queue handling this.
Is this correct? Just guessing here
More or less. The FastCGI code in your example implies very
simple connection management, based on the process-per-connection
model. As long as all FastCGI processes are busy, all additional
connections will be queued in the listen queue of the FastCGI
listening socket (until a connection is closed). Certainly that's
not the only model possible with FastCGI, but the easiest to use.
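For illustration, here is a bare-TCP sketch of that model (not FastCGI itself,
and not your code; just the same connection handling structure): a fixed pool
of processes shares one listening socket, each process serves one connection at
a time, and everything beyond that waits in the listen queue.

import os
import socket

NUM_WORKERS = 4     # stands in for the number of FastCGI processes
BACKLOG = 16        # size of the listen queue where extra connections wait

def worker(lsock):
    while True:
        conn, _ = lsock.accept()        # pick up one queued connection
        with conn:
            while True:
                data = conn.recv(4096)  # with keepalive, the worker can sit
                if not data:            # here for a long time, fully occupied
                    break               # by a single, mostly idle connection
                conn.sendall(data)      # trivial echo instead of real work

if __name__ == "__main__":
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", 5000))
    lsock.listen(BACKLOG)
    for _ in range(NUM_WORKERS):
        if os.fork() == 0:              # child processes become the workers
            worker(lsock)
    os.wait()                           # parent just sits and waits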
The process-per-connection model doesn't combine well with
keepalive connections, since each keepalive connection occupies
the whole process. And you have to create enough processes to
handle all keepalive connections you want to be able to keep
alive. In case of nginx as a client, this means at least (<number
of connections in the keepalive directive> * <number of worker
processes>) processes.
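For example (the numbers are made up purely for illustration): with

worker_processes 4;

upstream fcgi_backend {
    server 127.0.0.1:5000;
    keepalive 8;
}

each of the 4 nginx worker processes may keep up to 8 idle connections open,
so the backend has to be able to hold 4 * 8 = 32 connections at the same time,
i.e. at least 32 processes under the process-per-connection model.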
Alternatively, you can avoid using keepalive connections. These
are not really needed for local upstream servers, since connection
establishment costs are usually negligible compared to the total
request processing costs. And this is what nginx does by default.
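That is, with a plain configuration along these lines (sketch), nginx opens a
new connection to the FastCGI server for each request and closes it when the
request is done:

location / {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:5000;
}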
--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx