lots of connections in TIME_WAIT state

Anıl Çetin anil at saog.net
Wed Apr 8 20:51:57 MSD 2009


Thanks for the answer, Maxim, but it really is "out of sockets": OpenVZ 
has a resource limit for open TCP sockets (numtcpsock), and at some 
point there are thousands (roughly ten thousand) of open connections 
while there are only 2000-3000 clients. Apache keepalive is off as 
well. I have read about the TIME_WAIT state and I know the connection 
is reusable, but why are so many sockets open? As far as I know, while 
nginx works as a proxy it opens 2 connections for one client (one from 
the client, one to Apache), isn't that right? So there should be 
4000-6000 TCP sockets, not 10000-15000.
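
For reference, a quick way to watch the numtcpsock counter from inside 
the container (assuming a standard OpenVZ kernel, where the 
beancounters are exposed under /proc) is:

    grep numtcpsock /proc/user_beancounters

The "held" column is the current count, "barrier"/"limit" are the 
configured maximums, and "failcnt" shows how many times the limit has 
already been hit.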

Isn't "/proc/sys/net/ipv4/tcp_tw_recycle"  turned on by default in 
linux? This can be my problem I will check and try it. May be it isnt 
using sockets in TIME_WAIT states and timeout for connections in this 
state is very big, so that it is opening new connections again and again 
without dropping any of before opened sockets.
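
If anyone wants to check the same thing, the current values (0 means 
off; tcp_tw_reuse is the related and reportedly safer knob) and the 
number of sockets actually sitting in TIME_WAIT can be seen with 
something like:

    cat /proc/sys/net/ipv4/tcp_tw_reuse /proc/sys/net/ipv4/tcp_tw_recycle
    netstat -ant | grep -c TIME_WAIT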


Maxim Dounin wrote:
> Hello!
>
> On Wed, Apr 08, 2009 at 05:56:51PM +0300, Anıl Çetin wrote:
>
>> So, what is the solution? I have exactly the same problem: my nginx
>> is in a virtual server (OpenVZ), working as a proxy server in front
>> of Apache, and often (after 2k-3k requests) the server runs "out of
>> sockets" even though I raise the allowed number of sockets to a very
>> big number.
>
> You are probably "out of ports", not out of sockets.  The solution
> is to configure TIME_WAIT reuse (tw_reuse, tw_recycle or something
> similar, depending on your OS).  You may also allow your system to
> use more ports for outgoing connections.
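>
> On Linux, for example, the corresponding knob is
> net.ipv4.ip_local_port_range (the values below are only an
> illustration, not a recommendation):
>
>     sysctl -w net.ipv4.ip_local_port_range="1024 65535"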
>
> Under FreeBSD, reusing TIME_WAIT sockets is the default, and the
> port range for outgoing connections may be tuned via the
> net.inet.ip.portrange.hifirst and net.inet.ip.portrange.hilast
> sysctls.
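>
> For example (illustrative values only):
>
>     sysctl net.inet.ip.portrange.hifirst=10000
>     sysctl net.inet.ip.portrange.hilast=65535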
>
> Not sure about Linux, but Google suggests reuse of TIME_WAIT
> sockets may be turned on via /proc/sys/net/ipv4/tcp_tw_recycle.
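>
> If so, something like this should enable it (untested on my side;
> there is also a tcp_tw_reuse knob next to it):
>
>     echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle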
>
> Maxim Dounin
>
>> Igor Sysoev wrote:
>>> On Wed, Apr 08, 2009 at 10:47:16AM +0300, Artis Caune wrote:
>>>
>>>> 2009/4/7 Deepan Chakravarthy <codeshepherd at gmail.com>:
>>>>> Hi,
>>>>>   I am using nginx with FastCGI.  When I run
>>>>> $ netstat -np | grep 127.0.0.1:9000
>>>>> I find a lot of connections in the TIME_WAIT state. Is this
>>>>> because of a high keepalive_timeout value? When a lot of people
>>>>> use it (5 requests per second), nginx takes more time to respond.
>>>>> The system load goes above 10 during peak hours.
>>>>
>>>> This is because of how TCP works.
>>>>
>>>>> debian:~# netstat -np | grep 127.0.0.1:9000
>>>>> tcp        0      0 127.0.0.1:9000          127.0.0.1:45603         TIME_WAIT  -
>>>>> tcp        0      0 127.0.0.1:9000          127.0.0.1:45601         TIME_WAIT  -
>>>>
>>>> If you were on FreeBSD, you could disable TIME_WAIT on loopback
>>>> completely by setting:
>>>>
>>>>     sysctl net.inet.tcp.nolocaltimewait=1
>>>
>>> Due to the incorrect implementation this remedy is worse than the
>>> disease. net.inet.tcp.nolocaltimewait relies on unlimited RST
>>> delivery; therefore, if there are too many RSTs, they will be
>>> limited by net.inet.icmp.icmplim and you will end up with a lot of
>>> sockets in the LAST_ACK state on the server side instead of a lot
>>> of sockets in TIME_WAIT on the client side.
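>>>
>>> Whether that limit is being hit can be checked with, for example:
>>>
>>>     sysctl net.inet.icmp.icmplim
>>>     netstat -an | grep -c LAST_ACK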



More information about the nginx mailing list