Nginx hangs and does not respond with a large number of network connections in FIN_WAIT state
Anoop Alias
anoopalias01 at gmail.com
Thu Jan 10 18:25:13 UTC 2019
This server is not using network drives, and the only thing I can think of
is the temp paths being set to /dev/shm:
--http-client-body-temp-path=/dev/shm/client_temp
--http-proxy-temp-path=/dev/shm/proxy_temp
--http-fastcgi-temp-path=/dev/shm/fastcgi_temp
--http-uwsgi-temp-path=/dev/shm/uwsgi_temp
--http-scgi-temp-path=/dev/shm/scgi_temp
Could this be causing an issue? The domain under attack is set to proxy
to httpd and would surely be using http-client-body-temp-path
and http-proxy-temp-path.
The system is quite beefy in terms of CPU and RAM, though, and /dev/shm is
barely used:
# df -h|grep shm
tmpfs 63G 7.2M 63G 1% /dev/shm
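One way to rule the temp paths in or out would be to watch /dev/shm while the
attack is in progress. This is only a sketch of the check I have in mind, not
something captured during the incident:

# watch -n 5 'du -sh /dev/shm/*_temp; df -h /dev/shm'

If the *_temp directories stay in the megabytes while Nginx is hanging, the
temp paths are probably not the bottleneck.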
On Thu, Jan 10, 2019 at 11:34 PM Anoop Alias <anoopalias01 at gmail.com> wrote:
> The issue was identified to be an enormous number of HTTP requests (an
> attack) to one of the hosted domains, which is using Cloudflare. The traffic
> is coming in from Cloudflare, and this was exhausting the TCP stack on the
> Nginx server.
>
> #########################################
> # netstat -tn|awk '{print $6}'|sort|uniq -c
> 1
> 19922 CLOSE_WAIT
> 2 CLOSING
> 23528 ESTABLISHED
> 17785 FIN_WAIT1
> 4 FIN_WAIT2
> 1 Foreign
> 17 LAST_ACK
> 904 SYN_RECV
> 14 SYN_SENT
> 142 TIME_WAIT
> ############################################
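> To track this more conveniently than re-running netstat, a per-state count
> with ss should show the same thing (just a monitoring sketch, not part of
> the capture above):
>
> # ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c
>
> ss -s also gives a quick summary of the totals per state.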
>
> Interestingly, with the same attack, removing Nginx from the picture and
> exposing httpd directly keeps the connection counts at normal levels.
>
> ############################################
> ]# netstat -tn|awk '{print $6}'|sort|uniq -c
> 1
> 39 CLOSE_WAIT
> 9 CLOSING
> 664 ESTABLISHED
> 13 FIN_WAIT1
> 48 FIN_WAIT2
> 1 Foreign
> 24 LAST_ACK
> 8 SYN_RECV
> 12 SYN_SENT
> 1137 TIME_WAIT
> ##############################################
>
> Although the load is a bit higher than usual.
>
> It looks like TCP connections in the ESTABLISHED state are somehow piling
> up with Nginx.
>
> Number of established connections over time with nginx
> ##############
> 535 ESTABLISHED
> 1195 ESTABLISHED
> 23437 ESTABLISHED
> 23490 ESTABLISHED
> 23482 ESTABLISHED
> 389 ESTABLISHED
> ##############
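> Since the build already includes --with-http_stub_status_module, exposing a
> local status endpoint would show how many connections Nginx itself thinks it
> is handling versus what the kernel reports. This is only a sketch; the
> listen port and location name here are arbitrary:
>
> server {
>     listen 127.0.0.1:8080;
>     location /nginx_status {
>         stub_status;
>         allow 127.0.0.1;
>         deny all;
>     }
> }
>
> # curl -s http://127.0.0.1:8080/nginx_status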
>
> Could this be a misconfiguration in Nginx? It would be great if someone
> could point out what is wrong with the config.
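> One thing I am wondering about myself, though it is only a guess: with
> worker_processes 1 and worker_connections 20480, a single worker can hold at
> most 20480 simultaneous connections, and that count includes the upstream
> connections to httpd, so the client-side capacity is roughly half of that.
> Once that limit is hit, the worker stops accepting new connections, which
> might be where the CLOSE_WAIT sockets come from. A sketch of what I might
> try:
>
> worker_processes auto;           # was 1; one worker per CPU core
> events {
>     worker_connections 20480;    # this limit is per worker
> }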
>
> Thanks,
>
>
> On Thu, Jan 10, 2019 at 8:27 AM Anoop Alias <anoopalias01 at gmail.com>
> wrote:
>
>> Hi,
>>
>> I have had a really strange issue on an Nginx server configured as a
>> reverse proxy, wherein the server stops responding when the number of
>> network connections in the ESTABLISHED and FIN_WAIT states is very high
>> compared to normal operation.
>>
>> As you can see in the network graph below, at around 00:30 there is a big
>> spike in network connections in the FIN_WAIT state, to around 12000 from
>> the normal value of ~20:
>>
>> https://i.imgur.com/wb6VMWo.png
>>
>> In this state, Nginx stops responding entirely and does not recover even
>> after a full restart of the service.
>>
>> Switching off Nginx and bringing the Apache service to the front end
>> (removing the reverse proxy) fixes this, and the connections drop.
>>
>> Nginx config & build settings
>> ##################################
>> nginx -V
>> nginx version: nginx/1.15.8
>> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)
>> built with LibreSSL 2.8.3
>> TLS SNI support enabled
>> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
>> --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.42 --with-pcre-jit
>> --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.8.3
>> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
>> --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
>> --lock-path=/var/run/nginx.lock
>> --http-client-body-temp-path=/dev/shm/client_temp
>> --http-proxy-temp-path=/dev/shm/proxy_temp
>> --http-fastcgi-temp-path=/dev/shm/fastcgi_temp
>> --http-uwsgi-temp-path=/dev/shm/uwsgi_temp
>> --http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody
>> --with-http_ssl_module --with-http_realip_module
>> --with-http_addition_module --with-http_sub_module --with-http_dav_module
>> --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
>> --with-http_gzip_static_module --with-http_random_index_module
>> --with-http_secure_link_module --with-http_stub_status_module
>> --with-http_auth_request_module --with-file-aio --with-threads
>> --with-stream --with-stream_ssl_module --with-http_slice_module
>> --with-compat --with-http_v2_module
>> --add-dynamic-module=incubator-pagespeed-ngx-1.13.35.2-stable
>> --add-dynamic-module=/usr/local/rvm/gems/ruby-2.5.3/gems/passenger-6.0.0/src/nginx_module
>> --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.61
>> --add-dynamic-module=headers-more-nginx-module-0.32
>> --add-dynamic-module=ngx_http_redis-0.3.8
>> --add-dynamic-module=redis2-nginx-module
>> --add-dynamic-module=srcache-nginx-module-0.31
>> --add-dynamic-module=ngx_devel_kit-0.3.0
>> --add-dynamic-module=set-misc-nginx-module-0.31
>> --add-dynamic-module=ngx_http_geoip2_module
>> --add-dynamic-module=testcookie-nginx-module
>> --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall
>> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
>> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
>> --with-ld-opt=-Wl,-E
>>
>> #####################################
>> # worker_processes auto; #Set to auto for a powerful server
>> worker_processes 1;
>> worker_rlimit_nofile 69152;
>> worker_shutdown_timeout 10s;
>> # worker_cpu_affinity auto;
>> timer_resolution 1s;
>> thread_pool iopool threads=32 max_queue=65536;
>> pcre_jit on;
>> pid /var/run/nginx.pid;
>> error_log /var/log/nginx/error_log;
>>
>> #Load Dynamic Modules
>> include /etc/nginx/modules.d/*.load;
>>
>>
>> events {
>> worker_connections 20480;
>> use epoll;
>> multi_accept on;
>> accept_mutex off;
>> }
>>
>> lingering_close off;
>> limit_req zone=FLOODVHOST burst=200;
>> limit_req zone=FLOODPROTECT burst=200;
>> limit_conn PERSERVER 60;
>> client_header_timeout 5s;
>> client_body_timeout 5s;
>> send_timeout 5s;
>> keepalive_timeout 0;
>> http2_idle_timeout 20s;
>> http2_recv_timeout 20s;
>>
>>
>> aio threads=iopool;
>> aio_write on;
>> directio 64m;
>> output_buffers 2 512k;
>>
>> tcp_nodelay on;
>>
>> types_hash_max_size 4096;
>> server_tokens off;
>> client_max_body_size 2048m;
>> reset_timedout_connection on;
>>
>> #Proxy
>> proxy_read_timeout 300;
>> proxy_send_timeout 300;
>> proxy_connect_timeout 30s;
>>
>> #FastCGI
>> fastcgi_read_timeout 300;
>> fastcgi_send_timeout 300;
>> fastcgi_connect_timeout 30s;
>>
>> #Proxy Buffer
>> proxy_buffering on;
>> proxy_buffer_size 128k;
>> proxy_buffers 8 128k;
>> proxy_busy_buffers_size 256k;
>>
>> #FastCGI Buffer
>> fastcgi_buffer_size 128k;
>> fastcgi_buffers 8 128k;
>> fastcgi_busy_buffers_size 256k;
>>
>> server_names_hash_max_size 2097152;
>> server_names_hash_bucket_size 128;
>> ######################################################
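>> Another thing worth experimenting with, again only an assumption on my
>> part: with keepalive_timeout 0, every request from Cloudflare opens and
>> then actively closes its own TCP connection, and the side that closes first
>> passes through FIN_WAIT1/FIN_WAIT2, which could explain the FIN_WAIT
>> pile-up under a request flood. A short keepalive would let Cloudflare reuse
>> connections, for example:
>>
>> keepalive_timeout 10s;
>> keepalive_requests 1000;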
>>
>>
>>
>> --
>> *Anoop P Alias*
>>
>>
>
> --
> *Anoop P Alias*
>
>
--
*Anoop P Alias*