Trying to Understand Upstream Keepalive
Maxim Dounin
mdounin at mdounin.ru
Fri May 9 02:07:46 UTC 2014
Hello!
On Thu, May 08, 2014 at 03:12:44AM -0400, abstein2 wrote:
> I'm trying to better wrap my head around the keepalive functionality in the
> upstream module, as when enabling keepalive I'm seeing little to no
> performance benefit using the FOSS version of nginx.
>
> My upstream block is:
>
> upstream upstream_test_1 { server 1.1.1.1 max_fails=0; keepalive 50; }
>
> With a proxy block of:
>
> proxy_set_header X-Forwarded-For $IP;
> proxy_set_header Host $http_host;
> proxy_http_version 1.1;
> proxy_set_header Connection "";
> proxy_pass http://upstream_test_1;
>
> 1) How can I tell whether there are any connections currently in the
> keepalive pool for the upstream block? My origin server has keepalive
> enabled and I see that there are some connections in a keepalive state,
> however not the 50 defined and all seem to close much quicker than the
> keepalive timeout for the backend server. (I am using the Apache server
> status module to view this, which is likely part of the problem.)
As long as the load is even enough, don't expect to see many keepalive
connections on the backend - new connections will only be opened if
there are no idle connections in the cache of a worker process.  For
example, if a worker rarely has more than a few requests to this
upstream in flight at once, its cache will rarely hold more than a few
idle connections, regardless of the "keepalive 50" setting.
> 2) Are upstream blocks shared across workers? So in this situation, would
> all 4 workers I have share the same upstream keepalive pool or would each
> worker have its own block of 50?
It's per worker, see http://nginx.org/r/keepalive.
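To put numbers on it, a minimal sketch based on your settings (only
the relevant directives are shown):

    worker_processes 4;

    http {
        upstream upstream_test_1 {
            server 1.1.1.1 max_fails=0;
            # each worker caches up to 50 idle connections,
            # so with 4 workers there can be up to 4 * 50 = 200
            # idle connections to the backend in total
            keepalive 50;
        }
    }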
> 3) How is the length of the keepalive determined? The origin server's
> keepalive settings? Do the origin server's keepalive settings factor in at
> all?
Connections are kept in the cache till the origin server closes them.
> 4) If no traffic comes across this upstream for an extended period of time,
> will the connections be closed automatically or will they stay open
> infinitely?
See above.
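Since the lifetime of cached connections is dictated by the backend,
the relevant knobs are on the origin server.  A minimal sketch,
assuming an Apache httpd backend (the values are illustrative, not
recommendations):

    # Apache httpd configuration
    KeepAlive On
    # Apache closes an idle connection after this many seconds,
    # regardless of the size of nginx's keepalive cache
    KeepAliveTimeout 5
    # ... or after this many requests on a single connection
    MaxKeepAliveRequests 100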
> 5) Are the connections in keepalive shared across visitors to the proxy? For
> example, if I have three visitors to the proxy one after the other, would
> the expectation be that they use the same connection via keepalive or would
> a new connection be opened for each of them?
Connections in the cache are shared for all uses of the upstream.
As long as a connection is idle (and hence in the cache), it can
be used for any request by any visitor.  So in your example, if the
three visitors arrive one after another and the connection is idle
in between, all three requests can be served over the same connection.
> 6) Is there any common level of performance benefit I should be seeing from
> enabling keepalive compared to just performing a proxy_pass directly to the
> origin server with no upstream block?
No.
There are two basic cases when keeping connections alive is
really beneficial:
- Fast backends, which produce responses in a very short time,
comparable to a TCP handshake.
- Distant backends, when a TCP handshake takes a long time,
comparable to a backend response time.
There are also some bonus side effects (fewer sockets in the
TIME-WAIT state, less work for the OS to establish new connections,
fewer packets on the network), but these are unlikely to result in
measurable performance benefits in a typical setup.
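To make the first case concrete, a minimal sketch (the backend name
and address are hypothetical):

    upstream fast_local_backend {
        # a backend on the same host that answers in a time
        # comparable to a TCP handshake
        server 127.0.0.1:8080;
        keepalive 16;
    }

    server {
        listen 80;

        location / {
            # needed for keepalive connections to HTTP backends
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://fast_local_backend;
        }
    }

With such a backend, the TCP handshake is a noticeable fraction of
the total request time, so reusing connections is where keepalive
actually pays off.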
--
Maxim Dounin
http://nginx.org/