upstream keepalive close connections actively

卫越 weiyue at
Wed Aug 3 02:49:10 UTC 2011


I am using nginx 0.8.54 together with the latest keepalive module and the ajp module to build a proxy in front of a Java runtime, and I noticed that when "accept_mutex" is on, the number of upstream connections is 10 to 20 times the number when "accept_mutex" is off.

The test machine has 2 CPU cores and 4 GB of RAM.

Nginx conf:

worker_processes    2;

http {
    upstream java_server {
        server srun_id=jvm1;
        keepalive 500 single;
    }

    server {
        listen              80 default;

        location /index.jsp {
            ajp_intercept_errors   on;
            ajp_hide_header        X-Powered-By;
            ajp_buffers            16 8k;
            ajp_buffer_size        8k;
            ajp_read_timeout       30;
            ajp_connect_timeout    20;
            ajp_pass               java_server;
        }
    }
}

I use "ab -c 100 -n 50000 XXXX" to generate load, and "netstat -an | grep 8009 -c" to count how many connections to the upstream have been opened.

With "accept_mutex" on there are 28473 connections to the upstream, but only 1674 with "accept_mutex" off.

With only one worker process, the result is similar to "accept_mutex" off.

Captured packets:

When "accept_mutex" is off, it is clearly the upstream server that closes the connection:
2169    0.215060       TCP     41621 > 8009 [ACK] Seq=1333 Ack=4654 Win=42496 Len=0 TSV=1513155905 TSER=1513155905
12474   15.216063       TCP     8009 > 41621 [FIN, ACK] Seq=4654 Ack=1333 Win=42496 Len=0 TSV=1513159655 TSER=1513155905
12499   15.223667       TCP     41621 > 8009 [FIN, ACK] Seq=1333 Ack=4655 Win=42496 Len=0 TSV=1513159657 TSER=1513159655
12500   15.223672       TCP     8009 > 41621 [ACK] Seq=4655 Ack=1334 Win=42496 Len=0 TSV=1513159657 TSER=1513159657

When "accept_mutex" is on, it is nginx that closes it first:
5966    1.479476       TCP     54788 > 8009 [ACK] Seq=297 Ack=1035 Win=34944 Len=0 TSV=1513225689 TSER=1513225689
6008    1.483907       TCP     54788 > 8009 [FIN, ACK] Seq=297 Ack=1035 Win=34944 Len=0 TSV=1513225690 TSER=1513225689
6012    1.484331       TCP     8009 > 54788 [FIN, ACK] Seq=1035 Ack=298 Win=34944 Len=0 TSV=1513225690 TSER=1513225690
6013    1.484342       TCP     54788 > 8009 [ACK] Seq=298 Ack=1036 Win=34944 Len=0 TSV=1513225690 TSER=1513225690

I fixed this by simply adding a test in ngx_http_upstream_keepalive_close_handler:

static void
ngx_http_upstream_keepalive_close_handler(ngx_event_t *ev)
{
    ngx_http_upstream_keepalive_srv_conf_t  *conf;
    ngx_http_upstream_keepalive_cache_t     *item;

    int                n;
    u_char             buf;
    ngx_connection_t  *c;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ev->log, 0,
                   "keepalive close handler");

    c = ev->data;
    item = c->data;
    conf = item->conf;

    /* the upstream closed the connection: return the cache
       item to the free list instead of leaking the slot */
    if ((n = c->recv(c, &buf, 1)) == 0) {
        ngx_queue_insert_head(&conf->free, &item->queue);
    }
}

However, I still do not understand why the event handler fires in the first place.


