Weird Memleak problem
Cliff Wells
cliff at develix.com
Tue Sep 1 01:02:08 MSD 2009
On Mon, 2009-08-31 at 16:17 -0400, Paul wrote:
> I know, but this problem has never occurred until recently. Once the
> request is done, it should free that memory allocation, but it looks
> like maybe it isn't?
> The only difference between a month ago and now is that we have more
> server entries and more requests per second. It used to not even use a
> gig of RAM doing 200 requests/sec, and now it keeps using more and more
> RAM until it fills all of the RAM and swap (8 GB+) and errors out.
> What would you suggest?
There are many more factors at play than simple requests per second and
the number of server entries:
1) the speed of the proxied backends: if a backend is slow, that memory
will be held onto for much longer.
2) the speed of clients: slow clients stretch out the request-processing
cycle, which also causes memory to be held for longer.
3) resource contention: more backends mean slower overall performance,
as the OS can't cache as efficiently (both CPU and disk) and must handle
more context switches, more disk seeks, etc.
In short, one resource shortage (say CPU or I/O) can cause systemic
resource shortages as other resources can't be freed in a timely fashion
(in turn causing even more resource shortages), so the problem quickly
spirals out of control.
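
To put some rough numbers on this (purely a back-of-envelope sketch; the
concurrency figure below is an assumption for illustration, not something
measured on your server): with proxy_buffer_size 32m, every proxied
response holds at least one 32 MB buffer for as long as the request is in
flight, and proxy_buffers 16 32m allows far more than that per request in
the worst case. If slow backends or slow clients keep, say, 250 requests
in flight at once, that alone is roughly:

    250 in-flight requests x 32 MB per request = 8000 MB (about 8 GB)

which is right around the point where your box exhausts RAM and swap,
with no actual leak in nginx required.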
These problems are pretty much unavoidable on a single server; the only
question is at what point you will encounter them. Basically, there's a
breaking point in the scalability of any particular system, and tuning
is the art of pushing that breaking point out as far as possible. Because
you've got excessively large buffers, you've almost certainly brought
that breaking point upon yourself much earlier than necessary.
As Igor mentions, 32MB seems really, really excessive (an application
that generates responses of that size calls for one or more dedicated
servers and brain surgery for the developer). Maybe you should try
something closer to 8 *kilobytes* and see if that addresses your issues.
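
As a rough illustration only (the values below are placeholders I'm
assuming for the sake of example, not tested recommendations; size them
to your real responses), the proxy buffers could look more like this:

    proxy_buffering         on;
    proxy_buffer_size       8k;    # first part of the response (usually just headers)
    proxy_buffers           8 8k;  # 8 buffers of 8k each, so ~64k of body in RAM per request
    proxy_busy_buffers_size 16k;
    client_body_buffer_size 128k;

With proxy_buffering on, any response that doesn't fit in those buffers
is spooled to a temp file on disk instead of being held in RAM, which is
exactly the behaviour you want when memory is the scarce resource.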
Regards,
Cliff
>
>
> Igor Sysoev wrote:
> > On Sun, Aug 30, 2009 at 09:45:55PM -0400, Paul wrote:
> >
> >
> >> I had nginx 0.6.32 and just upgraded to 0.7.61 and still have the same
> >> memleak issue.
> >> What happens is that as soon as I start nginx, it starts using RAM
> >> and continues to use more and more; over a 16-hour period it
> >> consumes 8 GB of RAM and all the swap, and then errors out because
> >> it can't use any more.
> >> This never used to happen until recently and the only difference at all
> >> in the config is more server entries.
> >>
> >> here's the config:
> >>
> >> user www www;
> >>
> >> worker_processes 16;
> >> error_log logs/error.log;
> >> worker_rlimit_nofile 65000;
> >>
> >> events
> >> {
> >>
> >> worker_connections 40000;
> >> }
> >>
> >> ####### HTTP SETTING
> >> http
> >> {
> >> access_log off;
> >> log_format alot '$remote_addr - $remote_user [$time_local] '
> >> '"$request" $status $body_bytes_sent '
> >> '"$http_referer" "$http_user_agent" "$http_accept"
> >> $connection';
> >> sendfile on;
> >> tcp_nopush on;
> >> tcp_nodelay on;
> >> keepalive_timeout 0;
> >> output_buffers 16 128k;
> >> server_tokens off;
> >> ssl_verify_client off;
> >> ssl_session_timeout 10m;
> >> # ssl_session_cache shared:SSL:500000;
> >> include /usr/local/nginx/conf/mime.types;
> >> default_type application/octet-stream;
> >>
> >> # cache_max_size 24;
> >>
> >> gzip on;
> >> gzip_min_length 512;
> >> gzip_buffers 64 32k;
> >> gzip_types text/plain text/html text/xhtml text/css text/js;
> >>
> >> proxy_buffering on;
> >> proxy_buffer_size 32m;
> >> proxy_buffers 16 32m;
> >>
> >
> > These settings allocate 32M buffer for each proxied request.
> >
> >
> >> proxy_busy_buffers_size 64m;
> >> proxy_temp_file_write_size 2048m;
> >> proxy_intercept_errors on;
> >> proxy_ssl_session_reuse off;
> >> proxy_read_timeout 120;
> >> proxy_connect_timeout 60;
> >> proxy_send_timeout 120;
> >> client_body_buffer_size 32m;
> >>
> >
> > This setting allocates 32M buffer for each request with body.
> >
> >
> >> client_header_buffer_size 64k;
> >> large_client_header_buffers 16 64k;
> >> client_max_body_size 16m;
> >>
> >>
> >> server
> >> {
> >> listen 1.2.3.4:80;
> >> location /
> >> {
> >>
> >> proxy_pass http://3.4.5.6;
> >> proxy_redirect http://3.4.5.6/ http://$http_host/;
> >> proxy_redirect default;
> >> proxy_set_header Host $host; ##Forwards host along
> >> proxy_set_header X-Real-IP $remote_addr; ##Sends realip to customer svr
> >> proxy_set_header X-Forwarded-For $remote_addr; ##Sends realip to customer svr
> >> }
> >> }
> >> server
> >> {
> >> listen 1.2.3.4:443;
> >>
> >> ssl on;
> >> ssl_certificate /usr/local/nginx/conf/whatever.com.crt;
> >> ssl_certificate_key /usr/local/nginx/conf/whatever.com.key;
> >> ssl_protocols SSLv3;
> >> ssl_ciphers HIGH:!ADH;
> >> ssl_prefer_server_ciphers on;
> >> location /
> >> {
> >> proxy_pass https://3.4.5.6;
> >> proxy_redirect https://3.4.5.6/ http://$http_host/;
> >> proxy_redirect default;
> >> proxy_set_header Host $host; ##Forwards host along
> >> proxy_set_header X-Real-IP $remote_addr; ##Sends realip to customer svr
> >> proxy_set_header X-Forwarded-For $remote_addr; ##Sends realip to customer svr
> >> proxy_set_header X-FORWARDED_PROTO https;
> >> }
> >> }
> >>
> >> And these server entries are repeated about 60 or so times, and
> >> that's it. When there were around 40 we never had a memleak issue.
> >>
> >> This is on Linux, kernel is 2.6.25
> >>
> >> Thanks
> >>
> >>
> >> --
> >> GloboTech Communications
> >> Phone: 1-514-907-0050 x 215
> >> Toll Free: 1-(888)-GTCOMM1
> >> Fax: 1-(514)-907-0750
> >> paul at gtcomm.net
> >> http://www.gtcomm.net
> >>
> >>
> >
> >
>
--
http://www.google.com/search?q=vonage+sucks