Query about keepalive

Maxim Dounin mdounin at mdounin.ru
Tue Apr 24 17:38:10 UTC 2012


Hello!

On Tue, Apr 24, 2012 at 08:43:03PM +0530, Fasih wrote:

> Thanks a lot for the response.
> 
> 
> > Just a side note: you may want to avoid setting keepalive bigger
> > than your backend is able to handle, keeping in mind that it's
> > not a hard limit on connections established, but rather the size
> > of the connection cache kept by each worker.
> 
> I didn't realize that. I did run the test with keepalive 16 but the
> results are similar.
> 
> 
> > Just a side note: please do not use html to post here.  We
> > won't see it anyway, and the plain text of your message is
> > somewhat unreadable.
> 
> 
> Sorry about that. Here are the results, one column per test:
>
>   Metric                    keepalive=1   keepalive=0
>   Session                   site          site
>   Conns upstream            48            192
>   Conn Time                 8.30858       20.7169
>   Unique upstream hosts     19            19
>   Reqs upstream             192           192
>   Avg time to 1st byte      0.152623      0.167946
>   Max upstream conn reuse   31            1
>   Client conns(1)           2             2
>   Client reqs(1)            130           130
>   Client replies(1)         130           130
>   Testdur(1)                31.219        25.680
>   Client conns              8             8
>   Client reqs               520           520
>   Client replies            520           520
>   Testdur                   78.064        71.781
> 
> > How many times did you run the test?  From the numbers it looks
> > like you are measuring your network and/or upstream server
> > performance, and I suspect this might fluctuate widely.  You might
> > want to do the test at least 3 times in each configuration to be
> > able to see the difference between the two configurations.
> >
> I repeated this quite a number of times; the trend is that the more
> upstreams I configure (i.e. a.mysite, images.mysites... etc), the
> slower it gets.

There shouldn't be any difference on the nginx side based on the 
number of upstreams (apart from the fact that it lowers the chance 
of getting a cached connection).  Do these upstreams map to the same 
host?  If so, is it able to cope with the number of connections opened?
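
For a rough sense of scale, using the numbers from your messages 
(keepalive 16 and some 20 of those server blocks): the cache is kept 
per upstream block and per worker process, so a single worker may hold 
up to 20 * 16 = 320 idle connections to the backends, and a connection 
cached for one upstream is never reused for another.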

> > If the difference will still be there, you may want to share more
> > details about your setup (provide nginx configs, network details,
> > description of the upstream servers involved, probably debug log
> > for a deeper investigation).
> 
> The configuration consists of some 20 of these (0-20):
>   server {
>       server_name 0.my-site.com;
>       listen 10010;
>       location / {
>         proxy_pass http://0.my-site.com;
>         proxy_http_version 1.1;
>         proxy_set_header Connection "";
>         proxy_cache my-cache;
>       }
>     }

What's configured in the upstream blocks?
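
For reference, an upstream block with connection caching enabled 
usually looks like the sketch below (the block name and backend 
address are illustrative, not taken from your setup); the 
server/location block from your excerpt would then point proxy_pass 
at it:

  upstream backend0 {
      # address of the real backend host
      server 0.my-site.com:80;

      # per-worker cache of idle connections kept to this upstream;
      # only takes effect when proxy_pass references this block,
      # e.g. "proxy_pass http://backend0;"
      keepalive 16;
  }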

What does the configuration look like with keepalive disabled?  Most 
notably: do you remove/comment out proxy_http_version and 
proxy_set_header?  If so, please try with these lines in place 
(but without "keepalive" in the upstream block) to rule out protocol 
differences.
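
That is, for the no-keepalive test something like this sketch (again 
with an illustrative name), leaving the location block untouched:

  upstream backend0 {
      server 0.my-site.com:80;
      # no "keepalive" directive here: connections are not cached,
      # but the protocol used to talk to the backend stays the same
      # as in the keepalive test
  }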

What about details of the upstream servers involved (software used, 
network connectivity, how many connections they are able to 
handle)?

> And other than these I don't think I modified any settings:
>     server_names_hash_max_size 1024;
>     proxy_cache_path  /tmp/cache levels=1:2 keys_zone=my-cache:8m
> max_size=1000m inactive=600m;
> 
> >
> > It is possible that keepalive connections to upstreams will lead
> > to worse overall performance than no keepalives (notably due to
> > various network effects and upstream servers behaviour), but I
> > wouldn't expect it to be slower in general.
> 
> If you can give pointers as to what to look for, I could investigate more.
> The logs generated are huge even for a moderate load; I'm not sure if
> attaching them is a good idea. I can grep out keepalive|upstream if you want.

First of all, check whether there are any "info" or more severe level 
messages which you don't expect and/or understand.  If there are 
any, post them here for review.  If there are none, please 
compress the logs and make them available for download (you may email 
me privately if they contain any private data).
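
If needed, the logging level is controlled by the error_log directive; 
a minimal sketch (the path is illustrative):

  # "info" and more severe messages; the "debug" level additionally
  # requires nginx built with --with-debug
  error_log /var/log/nginx/error.log info;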

Maxim Dounin


