SPDY sockets staying open indefinitely
CM Fields
cmfileds at gmail.com
Fri Nov 9 19:07:02 UTC 2012
A little more information. Here netstat shows the connections from the
offending IP stuck in CLOSE_WAIT, meaning the peer has already closed its
side of the connection but nginx has never closed the socket.
# netstat | grep tcp
tcp    0    0    localhost.https    210.77.27.XX.2284    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2309    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2315    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2332    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2358    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2376    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2386    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2406    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2413    CLOSE_WAIT
tcp    0    0    localhost.https    210.77.27.XX.2434    CLOSE_WAIT
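
For a rough tally of leaked sockets per peer, a generic one-liner works
(a sketch; netstat column positions can vary between systems):

# netstat -an | grep CLOSE_WAIT | awk '{print $5}' | sort | uniq -c | sort -rn
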
On Fri, Nov 9, 2012 at 1:08 PM, CM Fields <cmfileds at gmail.com> wrote:
> We are seeing an issue with Nginx SPDY sockets staying open indefinitely.
> I understand that the SPDY patch is still beta and not ready for
> production. This server is a test box used as a mirror of the
> production system, accepting public traffic.
>
> This is the source build we are using:
> Nginx 1.3.8
> OpenSSL 1.0.1c
> SPDY patch: patch.spdy-52.txt
> OpenBSD v5.2 (default install)
>
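> For anyone trying to reproduce, the build is roughly the following sketch
> (paths are illustrative, not the exact commands we ran):
>
> # patch -p0 < patch.spdy-52.txt
> # ./configure --with-http_ssl_module --with-openssl=../openssl-1.0.1c
> # make && make install
>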
> NOTE: If nginx is built without the SPDY patch there are _NO_ issues at
> all, and the server works as normal with keep-alive connections.
>
> The SPDY problem occurs when the offending client connects and makes a
> series of requests that each fail with a SPDY error. Each of these requests
> takes a "worker_connections" slot. If the client makes more requests than
> the worker_connections directive allows, the web server denies all new
> connections. Essentially, this one IP has triggered a denial of service.
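>
> For context, worker_connections is set in the events block; raising it
> (sketch below) would only delay the exhaustion, not fix the leak:
>
> events {
>     worker_connections 1024;
> }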
>
> What we are seeing in the logs is a client connecting and triggering a
> series of "SPDY ERROR while SSL handshaking" messages in the
> error_log. There is no mention of the client IP in the access_log.
> According to the PF logs and the firewall state table, the connections from
> the offending IP have been closed for hours. This server gets around 2000
> connections per hour, and only this one IP triggered the issue in 24 hours
> of operation. Sadly, I do not have packet dumps of this traffic, so I do
> not know exactly what the client sent. Perhaps this is a badly written
> client or a malicious scan; I do not know.
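>
> If it happens again, a capture along these lines should show exactly what
> the client sends (em0 is a placeholder interface name; XX is the redacted
> octet):
>
> # tcpdump -n -i em0 -s 1500 -w /tmp/spdy-client.pcap host 210.77.27.XX and port 443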
>
> The only way to clear the open sockets and allow new connections is to
> completely restart the nginx daemon.
>
> The nginx.conf for this server is very basic; it just serves a few static
> resources. We tried adding some timeouts to help clear the open sockets,
> to no avail.
>
> ## Timeouts
> client_body_timeout 10;
> client_header_timeout 10;
> keepalive_timeout 180 180;
> send_timeout 10;
> reset_timedout_connection on;
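>
> As a stopgap, a PF overload table could cap per-source connections to
> port 443. An untested sketch (table name and limits are arbitrary):
>
> table <spdy_abuse> persist
> block in quick from <spdy_abuse>
> pass in on egress proto tcp to port 443 keep state \
>     (max-src-conn 50, max-src-conn-rate 20/5, overload <spdy_abuse> flush global)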
>
>
> Here is the error_log with the client IP. The server is listening on localhost:
>
> 2012/11/08 01:42:59 [warn] 25619#0: *5792 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:00 [alert] 25619#0: *5792 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:00 [warn] 25619#0: *5792 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:10 [warn] 25619#0: *5796 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:11 [alert] 25619#0: *5796 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:11 [warn] 25619#0: *5796 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:22 [warn] 25619#0: *5803 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:23 [alert] 25619#0: *5803 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:23 [warn] 25619#0: *5803 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:33 [warn] 25619#0: *5804 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:34 [alert] 25619#0: *5804 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
> 2012/11/08 01:43:34 [warn] 25619#0: *5804 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443
>
>
>
> Here is an fstat of the open sockets. These sockets never close until
> the nginx daemon is restarted.
>
> # fstat -n | grep inter
>
> daemon  nginx  25619    6*  internet stream tcp 0xfffffe821e98fd20 127.0.0.1:80
> daemon  nginx  25619    7*  internet stream tcp 0xfffffe820cb94970 127.0.0.1:443
> daemon  nginx  25619  108*  internet stream tcp 0xfffffe820e7354f0 127.0.0.1:443 <-- 210.77.27.XX:2406
> daemon  nginx  25619  113*  internet stream tcp 0x0 *:0
> daemon  nginx  25619  115*  internet stream tcp 0x0 *:0
> daemon  nginx  25619  117*  internet stream tcp 0x0 *:0
> daemon  nginx  25619  118*  internet stream tcp 0x0 *:0
> daemon  nginx  25619  123*  internet stream tcp 0x0 *:0
> daemon  nginx  25619  124*  internet stream tcp 0xfffffe82075ab2d0 127.0.0.1:443 <-- 210.77.27.XX:2284
> daemon  nginx  25619  125*  internet stream tcp 0xfffffe82075ab730 127.0.0.1:443 <-- 210.77.27.XX:2332
> daemon  nginx  25619  126*  internet stream tcp 0xfffffe8211d3dd90 127.0.0.1:443 <-- 210.77.27.XX:2358
> daemon  nginx  25619  127*  internet stream tcp 0xfffffe8217f1c8d8 127.0.0.1:443 <-- 210.77.27.XX:2386
> daemon  nginx  25619  128*  internet stream tcp 0xfffffe820e7352c0 127.0.0.1:443 <-- 210.77.27.XX:2376
> daemon  nginx  25619  133*  internet stream tcp 0xfffffe8217f1c018 127.0.0.1:443 <-- 210.77.27.XX:2309
> daemon  nginx  25619  141*  internet stream tcp 0xfffffe820e735950 127.0.0.1:443 <-- 210.77.27.XX:2413
> daemon  nginx  25619  142*  internet stream tcp 0xfffffe820cb942e0 127.0.0.1:443 <-- 210.77.27.XX:2315
> daemon  nginx  25619  143*  internet stream tcp 0xfffffe8211d3d930 127.0.0.1:443 <-- 210.77.27.XX:2434
>
>
> If it helps, memory usage and CPU time for the daemon are low:
>
>   PID USERNAME  PRI NICE  SIZE    RES  STATE    WAIT    TIME   CPU   COMMAND
>  5611 daemon      2    0   17M  6032K  sleep/1  kqread  1:52   0.10% nginx
>  7311 root       18    0   13M  1040K  idle     pause   0:00   0.00% nginx
>
>
> I just wanted to report this issue in case someone else has the same
> problem. I wish I had more information, but at this time I am not sure what
> the client is sending to cause the hanging open sockets. If there is any
> other information that would help, or if a new patch needs testing, please
> let me know.
>
> Have a great weekend!
>