TIME_WAITs and sustained connections was: Re: Memcached module -- unix domain socket support? (too man

Kon Wilms konfoo at gmail.com
Mon Aug 11 21:49:57 MSD 2008


On Sun, Aug 10, 2008 at 12:09 PM, Tyler Kovacs <tyler.kovacs at zvents.com> wrote:
>> what is the operating system?
>> what do you mean by 'tweaked' (details pls.)?

> 2008/01/31 20:49:30 [crit] 24806#0: *70538 connect() to 127.0.0.1:11211
> failed (99: Cannot assign requested address) while connecting to
> upstream, client: 192.168.200.10, server: www.xxxxxxxxxx.com, request:
> "GET /geocode?address=94131 HTTP/1.0", upstream: "memcached://127.0.0.1:11211",
> host: "www.xxxxxxxxxx.com"
>
> I believe this is a consequence of having too many sockets in TIME_WAIT.
>
> Finally, I think the original poster was referring to the following kernel
> parameter.
>
> net.ipv4.ip_local_port_range = 1024  65000

Indeed. Also:
net.ipv4.ip_local_port_range = 10000 65535 (since I have memcached and
other services on ports 9000-10000)
net.ipv4.tcp_tw_recycle = 1 (fast recycling of TIME_WAIT sockets)
net.ipv4.tcp_tw_reuse = 1 (reuse TIME_WAIT sockets when it is 'safe' to do so)

I would advise anyone to read http://www.ietf.org/rfc/rfc1337.txt
(TIME-WAIT Assassination Hazards in TCP) before committing these
settings to production.
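
If you want to watch the TIME_WAIT buildup while tuning, here is a
quick Linux-only sketch (untested as pasted; in /proc/net/tcp the
state field value "06" is TCP_TIME_WAIT):

def count_time_wait(path="/proc/net/tcp"):
    # /proc/net/tcp covers IPv4 only; the 4th column is the socket state.
    count = 0
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) > 3 and fields[3] == "06":  # 06 == TIME_WAIT
                count += 1
    return count

print("sockets in TIME_WAIT: %d" % count_time_wait())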

It would seem that a persistent backend connection or domain socket
support for memcached would resolve this issue.
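
For what it's worth, memcached can already listen on a unix domain
socket if you start it with -s (e.g. memcached -s /tmp/memcached.sock).
A minimal sketch of talking to it that way over one persistent
connection, assuming that socket path and an illustrative key, using
the plain text protocol:

import socket

# Assumes memcached was started with: memcached -s /tmp/memcached.sock
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/memcached.sock")

# Store a 5-byte value: set <key> <flags> <exptime> <bytes>
s.sendall(b"set geocode:94131 0 0 5\r\nhello\r\n")
print(s.recv(1024))  # expect STORED

# Fetch it back on the same connection. AF_UNIX sockets use no TCP
# ports, so nothing is left sitting in TIME_WAIT afterwards.
s.sendall(b"get geocode:94131\r\n")
print(s.recv(1024))  # expect VALUE ... END

s.close()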

In my tests I have also noticed that connecting to and disconnecting
from memcached on every request has a measurable performance impact on
serving content. When my application backend opens a new connection
per request, my video serving maxes out at 2-3 frames per second; with
a persistent connection I can get upwards of 10 fps. It seems likely
that the non-persistent nginx<->memcached connections would likewise
become a bottleneck under high connection rates.
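
If anyone wants to reproduce this, here is a rough sketch (assuming
memcached on 127.0.0.1:11211 and a key that already exists) that times
N gets each way:

import socket, time

N = 1000
ADDR = ("127.0.0.1", 11211)
REQ = b"get geocode:94131\r\n"

def get_once(sock):
    sock.sendall(REQ)
    sock.recv(4096)  # fine for a small value; real code would read until END

# One fresh TCP connection per request. The client initiates the
# close, so each of these sockets ends up in TIME_WAIT locally.
start = time.time()
for _ in range(N):
    s = socket.create_connection(ADDR)
    get_once(s)
    s.close()
print("connect-per-request: %.2fs" % (time.time() - start))

# One persistent connection reused for every request.
start = time.time()
s = socket.create_connection(ADDR)
for _ in range(N):
    get_once(s)
s.close()
print("persistent:          %.2fs" % (time.time() - start))

The per-request loop also leaves up to N sockets behind in TIME_WAIT,
which is the same ephemeral port exhaustion that produces the "Cannot
assign requested address" error quoted above.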

Cheers
Kon




