Why set keepalive_timeout to a short period when Nginx is great at handling them?
Valentin V. Bartenev
vbart at nginx.com
Sun Jun 19 08:53:12 UTC 2016
On Saturday 18 June 2016 14:12:31 B.R. wrote:
> There is no downside on the server application I suppose, especially since,
> as you recalled, nginx got no trouble for it.
> One big problem is, there might be socket exhaustion on the TCP stack of
> your front-end machine(s). Remember a socket is defined by a triple
> <protocol, address, port> and the number of available ports is 65535 (layer
> 4) for every IP (layer 3) double <protocol, address>.
> The baseline is, for TCP connections underlying your HTTP communication,
> you have 65535 ports for each IP version your server handles.
Each TCP connection is identified by 4 parameters: source IP, source port,
destination IP, destination port. Since clients usually have different
public IPs, there's no limitation imposed by the number of ports.
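The point above can be sketched in a few lines of Python (the IPs and port ranges below are hypothetical, purely for illustration): because a connection is keyed by the full 4-tuple, many clients can all target the same server IP and port without colliding.

```python
# Each TCP connection is keyed by (src_ip, src_port, dst_ip, dst_port),
# so the server-side port 80 is shared by every client connection.
connections = set()

server = ("203.0.113.10", 80)  # hypothetical server IP and port
for client_ip in ("198.51.100.1", "198.51.100.2"):       # two clients
    for client_port in range(49152, 49162):              # 10 ephemeral ports each
        connections.add((client_ip, client_port) + server)

# 2 client IPs x 10 source ports = 20 distinct connections,
# all to the same <server IP, port 80> endpoint.
print(len(connections))  # -> 20
```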
> Now, you have to consider the relation between new clients (thus new
> connections) and the existing/open ones.
> If you have very low traffic, you could set an almost infinite timeout on
> your keepalive capability, which would greatly help people who never sever
> the connection to your website because they are so addicted to it (and never
> close the browser tab).
> On the contrary, if you are very intensively seeing new clients, with the
> same parameters, you would quickly exhaust your available sockets and be
> unable to accept client connections.
No, keep-alive connections shouldn't exhaust available sockets, because
nginx has the "worker_connections" directive, which limits the number of
open connections and must be set according to the other limits in your system.
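A minimal configuration sketch of the two directives mentioned so far (the numbers are illustrative, not recommendations, and must be tuned to your system's file-descriptor limits):

```nginx
events {
    # Hard cap on simultaneous connections per worker process;
    # keep-alive connections count against this limit, so they
    # cannot exhaust sockets beyond what you configure here.
    worker_connections  4096;
}

http {
    # Idle keep-alive connections are closed after this timeout.
    keepalive_timeout   65s;
}
```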
> And finally, nginx provides the ability to recycle connections based on a
> number of requests made (default 100).
> I guess that is a way of mitigating clients with different behaviors: a
> client having made 100 requests is probably considered to have had its share
> of time on the server, and it is time to put it back in the pool to give
> others access in case of congestion.
No, it's there to mitigate possible memory leaks in long-lived connections in
nginx, because some modules may allocate memory from the connection pool on
each request. It's usually safe to increase this value to 1000-10000.
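A sketch of raising that per-connection request cap as suggested above (1000 is an example value within the range given, not a universal recommendation):

```nginx
http {
    # Default is 100; each request may allocate from the connection's
    # memory pool, which is only freed when the connection closes, so
    # this cap bounds per-connection memory growth.
    keepalive_requests  1000;
}
```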
wbr, Valentin V. Bartenev