<div dir="ltr">The 20+ lines of "vmstat 1" output with the 2.6.32 kernel are given below:<br><br>
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----<br>
 r b swpd free buff cache si so bi bo in cs us sy id wa st<br>
 0 1 0 259020 49356 31418328 0 0 64 24 0 4 5 0 95 0 0<br>
 1 0 0 248100 49356 31418564 0 0 704 4 35809 3159 0 1 99 0 0<br>
 0 0 0 245248 49364 31419856 0 0 1340 48 35114 3217 0 0 99 0 0<br>
 1 0 0 243884 49364 31421084 0 0 940 4 35176 3106 0 0 99 0 0<br>
 0 0 0 243512 49364 31422152 0 0 812 4 35837 3204 0 0 99 0 0<br>
 0 0 0 241608 49364 31423056 0 0 1304 4 35585 3177 1 1 98 0 0<br>
 1 0 0 241076 49364 31424132 0 0 1004 4 35774 3199 0 0 99 0 0<br>
 0 0 0 241332 49372 31424644 0 0 724 76 35526 3203 0 0 99 0 0<br>
 0 0 0 240464 49372 31425376 0 0 776 4 35968 3162 0 0 99 0 0<br>
 0 1 0 238236 49372 31426244 0 0 652 4 35705 3131 0 0 99 0 0<br>
 0 0 0 234632 49372 31426924 0 0 1088 4 36220 3309 0 1 99 0 0<br>
 0 0 0 233640 49372 31428492 0 0 872 4 35663 3235 0 1 99 0 0<br>
 0 0 0 232896 49376 31429016 0 0 1272 44 35403 3179 0 0 99 0 0<br>
 1 0 0 231024 49376 31430064 0 0 528 4 34713 3238 0 0 99 0 0<br>
 0 0 0 239644 49376 31430564 0 0 808 4 35493 3143 0 1 99 0 0<br>
 3 0 0 241704 49376 31431372 0 0 612 4 35610 3400 1 1 97 0 0<br>
 1 0 0 244092 49376 31432028 0 0 280 4 35787 3333 1 1 99 0 0<br>
 2 0 0 244348 49376 31433232 0 0 1260 8 34700 3072 0 0 99 0 0<br>
 0 0 0 243908 49384 31433728 0 0 512 32 35019 3145 0 1 99 0 0<br>
 1 0 0 241104 49384 31435004 0 0 1440 4 35586 3211 0 1 99 0 0<br>
 0 0 0 234600 49384 31435476 0 0 868 4 35240 3235 0 1 99 0 0<br>
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----<br>
 r b swpd free buff cache si so bi bo in cs us sy id wa st<br>
 1 0 0 233656 49384 31436376 0 0 704 4 35297 3126 0 1 99 0 0<br>
 0 0 0 233284 49384 31437176 0 0 192 4 35022 3202 0 0 99 0 0<br>
 0 0 0 228952 49392 31437336 0 0 868 32 34986 3211 0 1 99 0 0<br>
 0 0 0 232176 49392 31438124 0 0 448 4 35785 3294 0 1 99 0 0<br>
 0 0 0 230076 49392 31438664 0 0 1052 4 35532 3297 1 1 98 0 0<br>
 1 0 0 231184 49392 31439608 0 0 436 4 34967 3177 0 1 99 0 0<br>
 1 0 0 224300 49392 31440044 0 0 624 4 34577 3216 0 1 99 0 0<br>
 0 0 0 223748 49396 31440664 0 0 460 44 34415 3155 0 0 99 0 0<br>
 1 0 0 223260 49396 31441612 0 0 768 4 35287 3194 0 1 99 0 0<br>
 0 0 0 230464 49396 31441996 0 0 772 4 35140 3208 0 0 99 0 0<br>
 1 0 0 225504 49396 31442668 0 0 564 4 35316 3133 0 0 99 0 0<br><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Jan 24, 2013 at 12:00 AM, shahzaib shahzaib <span dir="ltr"><<a href="mailto:shahzaib.cb@gmail.com" target="_blank">shahzaib.cb@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Following is the "iostat 1" output at 2200+ concurrent connections; the kernel version is 2.6.32:<br><br>
Linux 2.6.32-279.19.1.el6.x86_64 (DNTX005.local) 01/23/2013 _x86_64_ (16 CPU)<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 1.75 3.01 0.49 0.13 0.00 94.63<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 23.27 2008.64 747.29 538482374 200334422<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.97 0.00 1.10 0.19 0.00 97.74<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 30.00 2384.00 112.00 2384 112<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.13 0.00 0.52 0.13 0.00 99.22<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 21.00 1600.00 8.00 1600 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.19 0.00 0.45 0.26 0.00 99.10<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 37.00 2176.00 8.00 2176 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.45 0.00 0.58 0.19 0.00 98.77<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 24.00 1192.00 8.00 1192 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.32 0.00 0.45 0.19 0.00 99.03<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 29.00 2560.00 8.00 2560 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.32 0.00 0.65 0.19 0.00 98.84<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 35.00 2584.00 152.00 2584 152<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.26 0.00 0.39 0.39 0.00 98.96<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 25.00 1976.00 8.00 1976 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.32 0.00 0.52 0.39 0.00 98.77<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 33.00 1352.00 8.00 1352 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.26 0.00 0.58 0.26 0.00 98.90<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 28.00 2408.00 8.00 2408 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.45 0.00 0.65 0.06 0.00 98.84<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 37.00 1896.00 8.00 1896 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.71 0.00 0.97 0.13 0.00 98.19<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 33.00 2600.00 64.00 2600 64<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.32 0.00 0.65 0.26 0.00 98.77<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 20.00 1520.00 8.00 1520 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.19 0.00 0.39 0.19 0.00 99.22<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 49.00 3088.00 80.00 3088 80<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.26 0.00 0.91 0.26 0.00 98.58<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 48.00 1328.00 8.00 1328 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.32 0.00 0.32 0.26 0.00 99.09<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 32.00 1528.00 8.00 1528 8<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.45 0.00 0.58 0.39 0.00 98.58<br><br>
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
sda 35.00 1624.00 72.00 1624 72<br><br>
avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.39 0.00 0.58 0.19 0.00 98.84<br><br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jan 23, 2013 at 11:07 PM, Lukas Tribus <span dir="ltr"><<a href="mailto:luky-37@hotmail.com" target="_blank">luky-37@hotmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Can you send us 20+ lines of output from "vmstat 1" under this load? Also, what exact Linux kernel are you running ("cat /proc/version")?<br>
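For reference, both pieces of information asked for above can be collected in one go; a minimal sketch (the output filename is just an illustration):

```shell
# Sample system-wide activity once per second, 20 times, while the
# server is under load, then append the exact kernel build string.
vmstat 1 20 > load-report.txt
cat /proc/version >> load-report.txt
```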
<br>
<br>
________________________________<br>
> Date: Wed, 23 Jan 2013 21:51:43 +0500<br>
> Subject: Re: Nginx flv stream gets too slow on 2000 concurrent connections<br>
> From: <a href="mailto:shahzaib.cb@gmail.com" target="_blank">shahzaib.cb@gmail.com</a><br>
> To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
<div>><br>
> Following is the output of the "iostat 1" command at 3000+ concurrent<br>
> connections:<br>
><br>
> avg-cpu: %user %nice %system %iowait %steal %idle<br>
> 1.72 2.96 0.47 0.12 0.00 94.73<br>
><br>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
> sda 22.47 1988.92 733.04 518332350 191037238<br>
><br>
> avg-cpu: %user %nice %system %iowait %steal %idle<br>
> 0.39 0.00 0.91 0.20 0.00 98.50<br>
><br>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
> sda 22.00 2272.00 0.00 2272 0<br>
><br>
> avg-cpu: %user %nice %system %iowait %steal %idle<br>
> 0.46 0.00 0.91 0.07 0.00 98.57<br>
><br>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
> sda 23.00 864.00 48.00 864 48<br>
><br>
> avg-cpu: %user %nice %system %iowait %steal %idle<br>
> 0.39 0.00 0.72 0.33 0.00 98.56<br>
><br>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>
> sda 60.00 3368.00 104.00 3368 104<br>
><br>
> avg-cpu: %user %nice %system %iowait %steal %idle<br>
> 0.20 0.00 0.65 0.20 0.00 98.95<br>
><br>
><br>
><br>
> On Wed, Jan 23, 2013 at 8:30 PM, shahzaib shahzaib<br>
</div><div>> <<a href="mailto:shahzaib.cb@gmail.com" target="_blank">shahzaib.cb@gmail.com</a>> wrote:<br>
> Skechboy, I sent you the output of only 1000 concurrent connections<br>
> because it wasn't peak traffic hours. I'll send you the output of<br>
> "iostat 1" when concurrent connections hit 2000+ in the next hour.<br>
> Please keep in touch because I need to resolve this issue :(<br>
><br>
><br>
> On Wed, Jan 23, 2013 at 8:21 PM, skechboy<br>
</div><div>> <<a href="mailto:nginx-forum@nginx.us" target="_blank">nginx-forum@nginx.us</a>> wrote:<br>
> From your output I can see that it isn't an IO issue; I wish I could help<br>
> you more.<br>
><br>
> Posted at Nginx Forum:<br>
</div>> <a href="http://forum.nginx.org/read.php?2,235447,235476#msg-235476" target="_blank">http://forum.nginx.org/read.php?2,235447,235476#msg-235476</a><<a href="http://forum.nginx.org/read.php?2%2c235447%2c235476#msg-235476" target="_blank">http://forum.nginx.org/read.php?2%2c235447%2c235476#msg-235476</a>><br>
><br>
> _______________________________________________<br>
> nginx mailing list<br>
> <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> <a href="http://mailman.nginx.org/mailman/listinfo/nginx" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
<div><div>><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>