Too many connections in waiting state

Anoop Alias anoopalias01 at gmail.com
Thu Sep 7 10:58:41 UTC 2017


Doing an strace on an nginx worker in the shutting-down state, I get:

##################
 strace -p 23846
strace: Process 23846 attached
restart_syscall(<... resuming interrupted futex ...>) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5395,
{1504781553, 30288000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5397,
{1504781554, 30408000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5399,
{1504781555, 30535000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5401,
{1504781556, 30675000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5403,
{1504781557, 30767000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5405,
{1504781558, 30889000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5407,
{1504781559, 30980000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5409,
{1504781560, 31099000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5411,
{1504781561, 31210000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5413,
{1504781562, 31317000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5415,
{1504781563, 31428000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5417,
{1504781564, 31575000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5419,
{1504781565, 31678000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5421,
{1504781566, 31828000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5423,
{1504781567, 31941000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(0x7a6c0078, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7a6c00c4, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 5425,
{1504781568, 32085000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
###############################################
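
The futex waits above time out roughly once a second, which looks more like a
periodic timer in a background thread than nginx's own event loop. A rough way
to check where those threads and the lingering sockets come from (just a
sketch, using the PID from the strace above; adjust as needed):

##################
# list the threads inside the stuck worker; extra threads usually belong to
# thread pools or to modules such as ngx_pagespeed or Passenger
ps -T -p 23846
cat /proc/23846/task/*/comm

# count the established sockets the worker is still holding
ss -tnp state established | grep 23846 | wc -l
##################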


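Regarding the crawler angle Lucas mentions below: a quick way to see which user
agents were hitting the server around the time the waiting count climbed,
assuming the default combined log format and the access_log path from the build
output quoted below, is something like:

##################
# top user agents in the main access log; adjust the path and time window
awk -F'"' '{print $6}' /var/log/nginx/access_log | sort | uniq -c | sort -rn | head -20
##################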

On Thu, Sep 7, 2017 at 3:59 PM, Lucas Rolff <lucas at lucasrolff.com> wrote:

> Check whether any of the sites you run on the server were being crawled
> around the time you see the increase. I know that a crawler such as
> Screaming Frog doesn't handle servers that have http2 enabled for the sites
> being crawled, and it will leave connections in a “waiting” state in nginx.
>
>
>
> It might be there’s other tools that behave the same way, but I’d
> personally look into what kind of traffic/requests happened that increased
> the waiting state a lot.
>
>
>
> Best Regards,
>
>
>
> *From: *nginx <nginx-bounces at nginx.org> on behalf of Anoop Alias <
> anoopalias01 at gmail.com>
> *Reply-To: *"nginx at nginx.org" <nginx at nginx.org>
> *Date: *Thursday, 7 September 2017 at 11.52
> *To: *Nginx <nginx at nginx.org>
> *Subject: *Too many connections in waiting state
>
>
>
> Hi,
>
>
>
> I sometimes see too many connections in the waiting state on nginx.
>
>
>
> This usually clears on a restart, but otherwise the connections pile up.
>
>
>
> ###################
> Active connections: 4930
> server accepts handled requests
>  442071 442071 584163
> Reading: 2 Writing: 539 Waiting: 4420
> #######################
>
> [root at web1 ~]# grep keep /etc/nginx/conf.d/http_settings_custom.conf
>
> keepalive_timeout               10s;
>
> keepalive_requests              200;
>
> keepalive_disable               msie6 safari;
>
> ########################
>
>
>
> [root at web1 ~]# nginx -V
>
> nginx version: nginx/1.13.3
>
> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
>
> built with LibreSSL 2.5.5
>
> TLS SNI support enabled
>
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.41 --with-pcre-jit
> --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.5
> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
> --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
> --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp
> --http-proxy-temp-path=/var/cache/nginx/proxy_temp
> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
> --group=nobody --with-http_ssl_module --with-http_realip_module
> --with-http_addition_module --with-http_sub_module --with-http_dav_module
> --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
> --with-http_gzip_static_module --with-http_random_index_module
> --with-http_secure_link_module --with-http_stub_status_module
> --with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src
> --with-file-aio --with-threads --with-stream --with-stream_ssl_module
> --with-http_slice_module --with-compat --with-http_v2_module
> --with-http_geoip_module=dynamic --add-dynamic-module=ngx_pagespeed-1.12.34.2-stable
> --add-dynamic-module=/usr/local/rvm/gems/ruby-2.4.1/
> gems/passenger-5.1.8/src/nginx_module --add-dynamic-module=ngx_brotli
> --add-dynamic-module=echo-nginx-module-0.60 --add-dynamic-module=headers-more-nginx-module-0.32
> --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module
> --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0
> --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=testcookie-nginx-module
> --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
> --with-ld-opt=-Wl,-E
>
> #######################
>
>
>
>
>
> What could be causing this? The server is quite capable, and this only
> happens rarely.
>
>
>
>
>
> --
>
> *Anoop P Alias*
>
>
>
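
Since the stub_status "Waiting" figure counts idle client connections waiting
for a request (typically keepalive connections), logging it over time makes it
easier to match the spikes against traffic. A rough sketch; the /nginx_status
URL here is hypothetical, so use whatever location the stub_status output
quoted above is actually served from:

##################
# record the waiting count once a minute
while sleep 60; do
    date
    curl -s http://127.0.0.1/nginx_status | grep -E 'Active connections|Waiting'
done
##################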



-- 
*Anoop P Alias*