Nginx flv stream gets too slow on 2000 concurrent connections

shahzaib shahzaib shahzaib.cb at gmail.com
Wed Jan 23 12:13:59 UTC 2013


Sorry for the above reply; I ran the wrong command. Following is the output of
iostat 1:

Linux 2.6.32-279.19.1.el6.x86_64 (DNTX005.local)        01/23/2013        _x86_64_        (16 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.72    2.94    0.46    0.11    0.00   94.77

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              20.53      1958.91       719.38  477854182  175484870

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.06    0.00    0.13    0.19    0.00   99.62

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              30.00      1040.00      5392.00       1040       5392

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.19    0.25    0.00   99.56

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              24.00      1368.00       104.00       1368        104


Thanks
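As an aside on reading the iostat output above: Blk_read/s and Blk_wrtn/s are counted in 512-byte sectors by default (iostat -k would report KB directly). A minimal sketch of the conversion, using the device line copied from the second sample above:

```shell
# Convert iostat's sector counts (512-byte blocks) to KB/s.
# Device line copied verbatim from the second iostat sample above.
sample='sda              30.00      1040.00      5392.00       1040       5392'
echo "$sample" | awk '{ printf "read %.0f KB/s, write %.0f KB/s\n", $3 * 512 / 1024, $4 * 512 / 1024 }'
# → read 520 KB/s, write 2696 KB/s
```

At roughly half a megabyte per second of reads, the disk throughput itself looks modest for 2000 connections; the concern raised later in the thread is seek overhead from random reads rather than raw bandwidth.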


On Wed, Jan 23, 2013 at 5:11 PM, shahzaib shahzaib <shahzaib.cb at gmail.com> wrote:

> Hello,
>
>        Following is the output of vmstat 1 on 1000+ concurrent connections
> :-
>
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  0  0      0 438364  43668 31548164    0    0    62    23    3    0  5  0 95  0  0
>  0  0      0 437052  43668 31548520    0    0  1292     0 19763 1570  0  0 99  1  0
>  0  1      0 435316  43668 31549940    0    0  1644     0 20034 1537  0  0 99  1  0
>  1  0      0 434688  43676 31551388    0    0  1104    12 19816 1612  0  0 100  0  0
>  0  0      0 434068  43676 31552304    0    0   512    24 20253 1541  0  0 99  0  0
>  1  0      0 430844  43676 31553156    0    0  1304     0 19322 1636  0  0 99  1  0
>  0  1      0 429480  43676 31554256    0    0   884     0 19993 1585  0  0 99  0  0
>  0  0      0 428988  43676 31555020    0    0  1008     0 19244 1558  0  0 99  0  0
>  0  0      0 416472  43676 31556368    0    0  1244     0 18752 1611  0  0 99  0  0
>  2  0      0 425344  43676 31557552    0    0  1120     0 19039 1639  0  0 99  0  0
>  0  0      0 421308  43676 31558212    0    0  1012     0 19921 1595  0  0 99  0  0
>
>
> This might be a stupid question, but which part of the above output should I
> look at to tell whether I/O is performing well or under heavy load?
>
> Thanks
>
>
> On Wed, Jan 23, 2013 at 4:58 PM, skechboy <nginx-forum at nginx.us> wrote:
>
>> Have you checked the HDD performance on the server during these periods,
>> with atop or iostat 1?
>> It's very likely related, since I'd guess there is a lot of random reading
>> with 2000 connections.
>>
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,235447,235456#msg-235456
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
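To the question above, in general terms: the vmstat columns to watch for an I/O bottleneck are b (processes blocked waiting on I/O), wa (CPU time spent waiting for I/O), and bi/bo (blocks read/written per second). In the pasted output wa stays at 0 or 1, which suggests the disk was not saturated during that capture. A minimal sketch of averaging the wa column, using the first three data rows from the output quoted above (wa is field 16 in the standard 17-column vmstat layout):

```shell
# Average the 'wa' (I/O wait) column from a vmstat capture.
# In the layout  r b swpd free buff cache si so bi bo in cs us sy id wa st
# 'wa' is field 16. The data rows are taken from the output quoted above.
awk 'NR > 2 { sum += $16; n++ } END { printf "avg wa: %.1f\n", sum / n }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 438364  43668 31548164    0    0    62    23    3    0  5  0 95  0  0
 0  0      0 437052  43668 31548520    0    0  1292     0 19763 1570  0  0 99  1  0
 0  1      0 435316  43668 31549940    0    0  1644     0 20034 1537  0  0 99  1  0
EOF
# → avg wa: 0.7
```

For a live measurement, the same awk filter can be fed directly from `vmstat 1 10` instead of the here-document.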
>
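On the random-read point raised above, one commonly discussed mitigation for serving large flv files is to give nginx larger output buffers and direct I/O, so each connection issues fewer, bigger reads. A sketch only, with illustrative (untuned) values:

```nginx
# Illustrative sketch; values are not tuned for this server.
location ~ \.flv$ {
    flv;                      # ngx_http_flv_module (pseudo-streaming, ?start=)
    sendfile       on;        # used for files below the directio threshold
    aio            on;        # async reads; on Linux, takes effect with directio
    directio       4m;        # files >= 4m read with O_DIRECT, bypassing page cache
    output_buffers 1 512k;    # fewer, larger reads per connection
}
```

Whether direct I/O helps depends on the working set: if the popular videos fit in the 31 GB of page cache visible in the vmstat output, bypassing the cache could make things worse, so this should be load-tested first.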

