Nginx flv stream gets too slow on 2000 concurrent connections

shahzaib shahzaib shahzaib.cb at gmail.com
Wed Jan 23 19:00:36 UTC 2013


Following is the iostat output under 2200+ concurrent connections; the
kernel version is 2.6.32 :-

Linux 2.6.32-279.19.1.el6.x86_64 (DNTX005.local)        01/23/2013
_x86_64_        (16 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.75    3.01    0.49    0.13    0.00   94.63

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              23.27      2008.64       747.29  538482374  200334422

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.97    0.00    1.10    0.19    0.00   97.74

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              30.00      2384.00       112.00       2384        112

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.13    0.00    0.52    0.13    0.00   99.22

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              21.00      1600.00         8.00       1600          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.19    0.00    0.45    0.26    0.00   99.10

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              37.00      2176.00         8.00       2176          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.45    0.00    0.58    0.19    0.00   98.77

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              24.00      1192.00         8.00       1192          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.45    0.19    0.00   99.03

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              29.00      2560.00         8.00       2560          8


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.65    0.19    0.00   98.84

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              35.00      2584.00       152.00       2584        152

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.26    0.00    0.39    0.39    0.00   98.96

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              25.00      1976.00         8.00       1976          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.52    0.39    0.00   98.77

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              33.00      1352.00         8.00       1352          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.26    0.00    0.58    0.26    0.00   98.90

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              28.00      2408.00         8.00       2408          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.45    0.00    0.65    0.06    0.00   98.84

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              37.00      1896.00         8.00       1896          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.71    0.00    0.97    0.13    0.00   98.19

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              33.00      2600.00        64.00       2600         64

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.65    0.26    0.00   98.77

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              20.00      1520.00         8.00       1520          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.19    0.00    0.39    0.19    0.00   99.22

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              49.00      3088.00        80.00       3088         80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.26    0.00    0.91    0.26    0.00   98.58

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              48.00      1328.00         8.00       1328          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.32    0.00    0.32    0.26    0.00   99.09

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              32.00      1528.00         8.00       1528          8

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.45    0.00    0.58    0.39    0.00   98.58

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              35.00      1624.00        72.00       1624         72

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.39    0.00    0.58    0.19    0.00   98.84
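(Editor's note, not part of the original thread: when eyeballing long iostat dumps like the one above, it can help to average the per-second columns. A small awk sketch below does this for the tps and Blk_read/s fields; the three sample rows are copied from the readings above, and the field positions assume iostat's default "Device: tps Blk_read/s ..." layout.)

```shell
# Average the tps (field 2) and Blk_read/s (field 3) columns across
# a few iostat interval samples pasted in by hand.
avg=$(printf '%s\n' \
  "sda 30.00 2384.00 112.00" \
  "sda 21.00 1600.00 8.00" \
  "sda 37.00 2176.00 8.00" \
  | awk '{ tps += $2; rd += $3; n++ }
         END { printf "%.2f %.2f", tps / n, rd / n }')
echo "$avg"
```

With iostat's default 512-byte blocks, ~2000 Blk_read/s is roughly 1 MB/s of reads at ~30 tps, which is consistent with skechboy's observation later in the thread that this does not look like an IO bottleneck.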



On Wed, Jan 23, 2013 at 11:07 PM, Lukas Tribus <luky-37 at hotmail.com> wrote:

>
> Can you send us a 20+ lines of output from "vmstat 1" under this load?
> Also, what exact linux kernel are you running ("cat /proc/version")?
>
>
> ________________________________
> > Date: Wed, 23 Jan 2013 21:51:43 +0500
> > Subject: Re: Nginx flv stream gets too slow on 2000 concurrent
> connections
> > From: shahzaib.cb at gmail.com
> > To: nginx at nginx.org
> >
> > Following is the output of the iostat 1 command under 3000+
> > concurrent connections :-
> >
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >             1.72    2.96    0.47    0.12    0.00   94.73
> >
> > Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> > sda              22.47      1988.92       733.04  518332350  191037238
> >
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >             0.39    0.00    0.91    0.20    0.00   98.50
> >
> > Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> > sda              22.00      2272.00         0.00       2272          0
> >
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >             0.46    0.00    0.91    0.07    0.00   98.57
> >
> > Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> > sda              23.00       864.00        48.00        864         48
> >
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >             0.39    0.00    0.72    0.33    0.00   98.56
> >
> > Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> > sda              60.00      3368.00       104.00       3368        104
> >
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >             0.20    0.00    0.65    0.20    0.00   98.95
> >
> >
> >
> > On Wed, Jan 23, 2013 at 8:30 PM, shahzaib shahzaib
> > <shahzaib.cb at gmail.com> wrote:
> > Skechboy, I sent you the output of only 1000 concurrent connections
> > because it wasn't peak traffic hours. I'll send you the output of
> > iostat 1 when concurrent connections hit 2000+ in the next hour.
> > Please stay in touch, as I need to resolve this issue :(
> >
> >
> > On Wed, Jan 23, 2013 at 8:21 PM, skechboy
> > <nginx-forum at nginx.us> wrote:
> > From your output I can see that it isn't an IO issue; I wish I could
> > help you more.
> >
> > Posted at Nginx Forum:
> > http://forum.nginx.org/read.php?2,235447,235476#msg-235476
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> >
> >
>
>
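(Editor's note, not from the original exchange: for readers unfamiliar with the setup under discussion, FLV pseudo-streaming in nginx is typically enabled via ngx_http_flv_module. The fragment below is a minimal, illustrative sketch of the directives involved; the values are assumptions, not the poster's actual configuration.)

```nginx
# Sketch: serve .flv files with pseudo-streaming (seeking via ?start=).
# Requires nginx built with --with-http_flv_module.
location ~ \.flv$ {
    flv;               # honor the start= byte-offset request argument
    sendfile   on;     # copy file data to the socket in kernel space
    tcp_nopush on;     # send full packets (effective with sendfile)
}
```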

