upstream keepalive closes connections actively

卫越 weiyue at taobao.com
Wed Aug 3 02:49:10 UTC 2011


Hi,

I use nginx 0.8.54 together with the latest keepalive module and the ajp module to build a proxy in front of a Java runtime, and I noticed that when accept_mutex is on, the number of upstream connections is 10 to 20 times that seen when accept_mutex is off.

The test environment has 2 CPU cores and 4 GB of RAM.

Nginx conf:

worker_processes    2;

http {
    upstream java_server {
        server 127.0.0.1:8009 srun_id=jvm1;
        keepalive 500 single;
    }

    server {
        listen              80 default;
        server_name         xxx.xxx.xxx;

        location /index.jsp {
            ajp_intercept_errors   on;
            ajp_hide_header      X-Powered-By;
            ajp_buffers          16 8k;
            ajp_buffer_size       8k;
            ajp_read_timeout     30;
            ajp_connect_timeout  20;
            ajp_pass            java_server;
        }
    }
}
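
The events block is not shown above; accept_mutex is set there. A minimal sketch of what was toggled between the two test runs (my reconstruction, not part of the original conf):

events {
    accept_mutex  on;    # switched to off for the second run
}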

I use "ab -c 100 -n 50000 XXXX" to generate load, and "netstat -an | grep 8009 -c" to count how many connections to the upstream are established.

when "accept mutax" is on, there is 28473 connections to upstream, but only 1674 connections when "accept mutax" is off.

If there is only one worker process, the result is similar to the accept_mutex off case.

Captured packets:

When "accept mutax" is off, It is clear that it is upstream server who closes the connection:
2169    0.215060        127.0.0.1       127.0.0.1       TCP     41621 > 8009 [ACK] Seq=1333 Ack=4654 Win=42496 Len=0 TSV=1513155905 TSER=1513155905
12474   15.216063       127.0.0.1       127.0.0.1       TCP     8009 > 41621 [FIN, ACK] Seq=4654 Ack=1333 Win=42496 Len=0 TSV=1513159655 TSER=1513155905
12499   15.223667       127.0.0.1       127.0.0.1       TCP     41621 > 8009 [FIN, ACK] Seq=1333 Ack=4655 Win=42496 Len=0 TSV=1513159657 TSER=1513159655
12500   15.223672       127.0.0.1       127.0.0.1       TCP     8009 > 41621 [ACK] Seq=4655 Ack=1334 Win=42496 Len=0 TSV=1513159657 TSER=1513159657

When "accept mutax" is on, it comes to nginx:
5966    1.479476        127.0.0.1       127.0.0.1       TCP     54788 > 8009 [ACK] Seq=297 Ack=1035 Win=34944 Len=0 TSV=1513225689 TSER=1513225689
6008    1.483907        127.0.0.1       127.0.0.1       TCP     54788 > 8009 [FIN, ACK] Seq=297 Ack=1035 Win=34944 Len=0 TSV=1513225690 TSER=1513225689
6012    1.484331        127.0.0.1       127.0.0.1       TCP     8009 > 54788 [FIN, ACK] Seq=1035 Ack=298 Win=34944 Len=0 TSV=1513225690 TSER=1513225690
6013    1.484342        127.0.0.1       127.0.0.1       TCP     54788 > 8009 [ACK] Seq=298 Ack=1036 Win=34944 Len=0 TSV=1513225690 TSER=1513225690

I fixed this by simply adding a check in ngx_http_upstream_keepalive_close_handler:

static void
ngx_http_upstream_keepalive_close_handler(ngx_event_t *ev)
{
    ngx_http_upstream_keepalive_srv_conf_t  *conf;
    ngx_http_upstream_keepalive_cache_t     *item;

    int                n;
    u_char             buf;
    ngx_connection_t  *c;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ev->log, 0,
                   "keepalive close handler");

    c = ev->data;
    item = c->data;
    conf = item->conf;

    /* only drop the cached connection when the peer has really closed it,
     * i.e. recv() returns 0; otherwise leave it in the cache */

    if ((n = c->recv(c, &buf, 1)) == 0) {
        ngx_queue_remove(&item->queue);
        ngx_close_connection(item->connection);
        ngx_queue_insert_head(&conf->free, &item->queue);
    }
}
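
For context, the keepalive module installs this handler on the cached connection's read event when an idle connection is put back into the cache; the relevant lines of ngx_http_upstream_free_keepalive_peer look roughly like this (paraphrased from memory, not an exact excerpt, so details may differ slightly in your version):

    /* inside ngx_http_upstream_free_keepalive_peer(), after the idle
     * connection has been saved into a cache item */

    c->write->handler = ngx_http_upstream_keepalive_dummy_handler;
    c->read->handler = ngx_http_upstream_keepalive_close_handler;

So any read event on an idle cached connection ends up in the handler above.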

However, I still don't understand why the read event handler fires in the first place.


