Weird Memleak problem
Paul
paul at gtcomm.net
Tue Sep 1 02:10:16 MSD 2009
Oh, I also noticed another issue: when RAM usage is high like this and I
reload nginx, it does this:
root 5776 0.0 0.1 19528 4876 ? Ss Aug30 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c conf/ng1.conf
www 5777 2.3 2.3 184060 94364 ? S Aug30 28:53 nginx: worker process is shutting down
www 5778 2.3 2.3 315500 95496 ? S Aug30 29:23 nginx: worker process is shutting down
www 5781 2.2 3.8 342904 154144 ? S Aug30 28:08 nginx: worker process is shutting down
www 5784 2.4 4.9 508552 199036 ? S Aug30 30:17 nginx: worker process is shutting down
www 5785 2.4 3.0 349364 122764 ? S Aug30 30:18 nginx: worker process is shutting down
www 5786 2.3 5.8 511264 234928 ? R Aug30 29:27 nginx: worker process is shutting down
www 5787 2.3 2.7 209608 111840 ? S Aug30 29:29 nginx: worker process is shutting down
www 5788 2.3 6.5 406672 263352 ? S Aug30 29:24 nginx: worker process is shutting down
www 5789 2.3 2.4 275580 98532 ? S Aug30 29:25 nginx: worker process is shutting down
www 5790 2.3 3.5 316852 145448 ? S Aug30 29:10 nginx: worker process is shutting down
www 5791 2.3 6.5 505276 265980 ? S Aug30 29:13 nginx: worker process is shutting down
www 5792 2.4 1.5 255980 62228 ? S Aug30 30:05 nginx: worker process is shutting down
www 5793 2.3 5.4 546796 221996 ? S Aug30 29:34 nginx: worker process is shutting down
www 5794 2.3 2.8 250940 116300 ? S Aug30 28:39 nginx: worker process is shutting down
www 5795 2.3 2.1 246528 86908 ? S Aug30 29:08 nginx: worker process is shutting down
www 5796 2.4 3.9 340724 161644 ? S Aug30 30:20 nginx: worker process is shutting down
www 6347 2.8 0.7 43356 28640 ? S 17:10 0:13 nginx: worker process
www 6348 2.7 0.6 42472 27636 ? S 17:10 0:13 nginx: worker process
www 6349 2.4 0.6 40776 25672 ? S 17:10 0:11 nginx: worker process
www 6350 2.3 0.6 41864 27104 ? S 17:10 0:11 nginx: worker process
www 6351 2.3 0.6 42728 28028 ? S 17:10 0:11 nginx: worker process
www 6352 2.3 0.6 43040 27828 ? S 17:10 0:11 nginx: worker process
www 6353 2.6 0.7 43972 29360 ? S 17:10 0:12 nginx: worker process
www 6354 2.4 0.7 46104 30892 ? S 17:10 0:11 nginx: worker process
www 6355 2.0 0.6 40252 25776 ? S 17:10 0:09 nginx: worker process
www 6356 2.9 0.7 46408 31396 ? S 17:10 0:14 nginx: worker process
www 6357 2.2 0.6 41600 26816 ? S 17:10 0:10 nginx: worker process
www 6358 2.3 0.6 42536 27960 ? S 17:10 0:11 nginx: worker process
www 6359 2.4 0.6 40564 26120 ? S 17:10 0:11 nginx: worker process
www 6360 2.4 0.6 42272 27656 ? S 17:10 0:11 nginx: worker process
www 6361 2.8 0.7 46764 30336 ? S 17:10 0:14 nginx: worker process
www 6362 2.1 0.6 40224 25636 ? S 17:10 0:10 nginx: worker process
and the old processes stay in 'worker process is shutting down' for a LONG
time. I've waited quite a while before doing a restart instead of a reload
to clear them out. I'm not sure what would cause this.
Most of my connection timeouts are set between 60 and 120 seconds. The only
exception is the SSL session timeout at 10m, but I've waited hours for the
old processes to shut down.
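My best guess, and it is only a guess: on a reload the old workers exit only
once every connection they were handling has closed, and the read/send
timeouts below apply between two successive read or write operations rather
than to a whole transfer, so a slow but still-active client or upstream could
hold an old worker open for hours. The directives involved, with the values
from my config:

  proxy_connect_timeout 60;   # time allowed to establish the upstream connection
  proxy_read_timeout    120;  # maximum gap between two successive reads from the upstream
  proxy_send_timeout    120;  # maximum gap between two successive writes to the upstream
  ssl_session_timeout   10m;  # lifetime of entries in the SSL session cache only;
                              # it does not keep connections or old workers alive

So the 10m SSL session timeout shouldn't be what keeps the old workers around.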
Thanks again
Cliff Wells wrote:
> On Mon, 2009-08-31 at 16:52 -0400, Paul wrote:
>
>> Had some issues with people uploading/downloading files before on 0.6,
>> and the buffers fixed it.
>>
>
> What sort of issues? Timeouts or "client body too large"? I regularly
> upload 10MB+ CSV files to a proxied application with the following:
>
> client_max_body_size 20m;
> client_body_buffer_size 128k;
> proxy_connect_timeout 90;
> proxy_send_timeout 90;
> proxy_read_timeout 90;
> proxy_buffer_size 4k;
> proxy_buffers 4 32k;
> proxy_busy_buffers_size 64k;
> proxy_temp_file_write_size 64k;
>
>
> Cliff
>
>
>> What would you suggest I change the config to? I will try and let you
>> know if I still see the problem.
>> Thank you.
>>
>>
>> Igor Sysoev wrote:
>>
>>> On Mon, Aug 31, 2009 at 04:17:03PM -0400, Paul wrote:
>>>
>>>
>>>
>>>> I know, but this problem never occurred until recently. Once the
>>>> request is done, it should free the allocated memory, but it looks
>>>> like maybe it isn't?
>>>>
>>>>
>>> No. These workers have been running since Aug 7 and handle up to 3,800 r/s:
>>>
>>> ps ax -o pid,ppid,%cpu,vsz,wchan,start,command|egrep '(nginx|PID)'
>>> PID PPID %CPU VSZ WCHAN STARTED COMMAND
>>> 42412 51429 16.7 372452 kqread 7Aug09 nginx: worker process (nginx)
>>> 42413 51429 18.2 372452 kqread 7Aug09 nginx: worker process (nginx)
>>> 42414 51429 0.0 291556 kqread 7Aug09 nginx: cache manager process (nginx)
>>> 51429 1 0.0 291556 pause 28Jul09 nginx: master process /usr/local/nginx/ng
>>>
>>>
>>>
>>>> The only difference between a month ago and now is that we have more
>>>> server entries and more requests per second. It used to not even use a
>>>> gig of RAM doing 200 requests/sec, and now it keeps using more and more
>>>> RAM until it fills all of the RAM and swap (8GB+) and errors out.
>>>> What would you suggest?
>>>>
>>>>
>>> Just 250 simultaneous proxied connections take 8G, and 250 connections
>>> is not much in the modern world. Why have you set up such huge buffers?
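Spelling out the arithmetic behind Igor's 8G figure, using the buffer sizes
from my config below (the per-request numbers are my own back-of-the-envelope
math, not something measured):

  proxy_buffer_size 32m;     # at least one 32 MB buffer is allocated per active proxied request
  proxy_buffers     16 32m;  # up to 16 further 32 MB buffers if the response fills them
  # ~250 in-flight requests x 32 MB for the first buffer alone ~= 8 GB
  # worst case for a single request: 32 MB + 16 x 32 MB = 544 MB

So a few hundred concurrent proxied requests account for the 8 GB on their
own, without any leak.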
>>>
>>>
>>>
>>>> Igor Sysoev wrote:
>>>>
>>>>
>>>>> On Sun, Aug 30, 2009 at 09:45:55PM -0400, Paul wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> I had nginx 0.6.32 and just upgraded to 0.7.61, and I still have the
>>>>>> same memleak issue.
>>>>>> As soon as I start nginx, it starts using RAM and keeps using more and
>>>>>> more; over a 16 hour period it consumes 8GB of RAM plus all of the swap
>>>>>> and then errors out because it can't allocate any more.
>>>>>> This never used to happen until recently, and the only difference at
>>>>>> all in the config is more server entries.
>>>>>>
>>>>>> here's the config:
>>>>>>
>>>>>> user www www;
>>>>>>
>>>>>> worker_processes 16;
>>>>>> error_log logs/error.log;
>>>>>> worker_rlimit_nofile 65000;
>>>>>>
>>>>>> events
>>>>>> {
>>>>>>
>>>>>> worker_connections 40000;
>>>>>> }
>>>>>>
>>>>>> ####### HTTP SETTING
>>>>>> http
>>>>>> {
>>>>>> access_log off;
>>>>>> log_format alot '$remote_addr - $remote_user [$time_local] '
>>>>>> '"$request" $status $body_bytes_sent '
>>>>>> '"$http_referer" "$http_user_agent" "$http_accept" $connection';
>>>>>> sendfile on;
>>>>>> tcp_nopush on;
>>>>>> tcp_nodelay on;
>>>>>> keepalive_timeout 0;
>>>>>> output_buffers 16 128k;
>>>>>> server_tokens off;
>>>>>> ssl_verify_client off;
>>>>>> ssl_session_timeout 10m;
>>>>>> # ssl_session_cache shared:SSL:500000;
>>>>>> include /usr/local/nginx/conf/mime.types;
>>>>>> default_type application/octet-stream;
>>>>>>
>>>>>> # cache_max_size 24;
>>>>>>
>>>>>> gzip on;
>>>>>> gzip_min_length 512;
>>>>>> gzip_buffers 64 32k;
>>>>>> gzip_types text/plain text/html text/xhtml text/css text/js;
>>>>>>
>>>>>> proxy_buffering on;
>>>>>> proxy_buffer_size 32m;
>>>>>> proxy_buffers 16 32m;
>>>>>>
>>>>>>
>>>>>>
>>>>> These settings allocate 32M buffer for each proxied request.
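For comparison, a sketch using the much smaller sizes Cliff posted above;
nobody in this thread has benchmarked them for my traffic, so they would
need tuning:

  proxy_buffering            on;
  proxy_buffer_size          4k;    # buffer for the response header; a few KB normally suffices
  proxy_buffers              4 32k; # ~128 KB of in-memory response buffering per request
  proxy_busy_buffers_size    64k;
  proxy_temp_file_write_size 64k;

With buffering on, a response that does not fit in these buffers is spooled
to a temp file on disk instead of being held in RAM, so large downloads
still work.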
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> proxy_busy_buffers_size 64m;
>>>>>> proxy_temp_file_write_size 2048m;
>>>>>> proxy_intercept_errors on;
>>>>>> proxy_ssl_session_reuse off;
>>>>>> proxy_read_timeout 120;
>>>>>> proxy_connect_timeout 60;
>>>>>> proxy_send_timeout 120;
>>>>>> client_body_buffer_size 32m;
>>>>>>
>>>>>>
>>>>>>
>>>>> This setting allocates 32M buffer for each request with body.
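Same idea for the body buffer, again only a sketch: keep the existing 16m
upload cap but shrink the in-memory buffer, so any body larger than the
buffer is written to a client-body temp file instead of sitting in RAM.
128k is what Cliff uses for his 10MB+ CSV uploads:

  client_max_body_size    16m;   # already in my config; caps the size of uploads
  client_body_buffer_size 128k;  # per-request in-memory buffer; larger bodies go to a temp file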
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> client_header_buffer_size 64k;
>>>>>> large_client_header_buffers 16 64k;
>>>>>> client_max_body_size 16m;
>>>>>>
>>>>>>
>>>>>> server
>>>>>> {
>>>>>> listen 1.2.3.4:80;
>>>>>> location /
>>>>>> {
>>>>>>
>>>>>> proxy_pass http://3.4.5.6;
>>>>>> proxy_redirect http://3.4.5.6/ http://$http_host/;
>>>>>> proxy_redirect default;
>>>>>> proxy_set_header Host $host; ##Forwards host along
>>>>>> proxy_set_header X-Real-IP $remote_addr; ##Sends realip to customer svr
>>>>>> proxy_set_header X-Forwarded-For $remote_addr; ##Sends realip to customer svr
>>>>>> }
>>>>>> }
>>>>>> server
>>>>>> {
>>>>>> listen 1.2.3.4:443;
>>>>>>
>>>>>> ssl on;
>>>>>> ssl_certificate /usr/local/nginx/conf/whatever.com.crt;
>>>>>> ssl_certificate_key /usr/local/nginx/conf/whatever.com.key;
>>>>>> ssl_protocols SSLv3;
>>>>>> ssl_ciphers HIGH:!ADH;
>>>>>> ssl_prefer_server_ciphers on;
>>>>>> location /
>>>>>> {
>>>>>> proxy_pass https://3.4.5.6;
>>>>>> proxy_redirect https://3.4.5.6/ http://$http_host/;
>>>>>> proxy_redirect default;
>>>>>> proxy_set_header Host $host; ##Forwards host along
>>>>>> proxy_set_header X-Real-IP $remote_addr; ##Sends realip to customer svr
>>>>>> proxy_set_header X-Forwarded-For $remote_addr; ##Sends realip to customer svr
>>>>>> proxy_set_header X-FORWARDED_PROTO https;
>>>>>> }
>>>>>> }
>>>>>>
>>>>>> And these server entries are repeated about 60 or so times, and that's it.
>>>>>> When it was around 40, we never had a memleak issue.
>>>>>>
>>>>>> This is on Linux, kernel is 2.6.25
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>
>>>
--
GloboTech Communications
Phone: 1-514-907-0050 x 215
Toll Free: 1-(888)-GTCOMM1
Fax: 1-(514)-907-0750
paul at gtcomm.net
http://www.gtcomm.net