<div dir="ltr"><div>Sorry for above reply on wrong command. Following are the output of iostat 1 :-<br><br>Linux 2.6.32-279.19.1.el6.x86_64 (DNTX005.local) 01/23/2013 _x86_64_ (16 CPU)<br><br>avg-cpu: %user %nice %system %iowait %steal %idle<br>
1.72 2.94 0.46 0.11 0.00 94.77<br><br>Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>sda 20.53 1958.91 719.38 477854182 175484870<br><br>avg-cpu: %user %nice %system %iowait %steal %idle<br>
0.06 0.00 0.13 0.19 0.00 99.62<br><br>Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>sda 30.00 1040.00 5392.00 1040 5392<br><br>avg-cpu: %user %nice %system %iowait %steal %idle<br>
0.00 0.00 0.19 0.25 0.00 99.56<br><br>Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn<br>sda 24.00 1368.00 104.00 1368 104<br><br><br></div>
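As a side note, plain "iostat 1" only shows per-device transfer totals. A rough sketch of a more detailed capture, assuming the sysstat version of iostat that supports extended stats:

    # extended per-device stats every second: await (avg wait per I/O, ms)
    # and %util (how busy the disk is) say more about saturation than tps alone
    iostat -dxk 1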
Thanks

On Wed, Jan 23, 2013 at 5:11 PM, shahzaib shahzaib <shahzaib.cb@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Hello,<br><br></div><div> Following is the output of vmstat 1 on 1000+ concurrent connections :-<br>
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd    free   buff     cache  si  so    bi  bo    in    cs us sy  id wa st
 0  0      0  438364  43668  31548164   0   0    62  23     3     0  5  0  95  0  0
 0  0      0  437052  43668  31548520   0   0  1292   0 19763  1570  0  0  99  1  0
 0  1      0  435316  43668  31549940   0   0  1644   0 20034  1537  0  0  99  1  0
 1  0      0  434688  43676  31551388   0   0  1104  12 19816  1612  0  0 100  0  0
 0  0      0  434068  43676  31552304   0   0   512  24 20253  1541  0  0  99  0  0
 1  0      0  430844  43676  31553156   0   0  1304   0 19322  1636  0  0  99  1  0
 0  1      0  429480  43676  31554256   0   0   884   0 19993  1585  0  0  99  0  0
 0  0      0  428988  43676  31555020   0   0  1008   0 19244  1558  0  0  99  0  0
 0  0      0  416472  43676  31556368   0   0  1244   0 18752  1611  0  0  99  0  0
 2  0      0  425344  43676  31557552   0   0  1120   0 19039  1639  0  0  99  0  0
 0  0      0  421308  43676  31558212   0   0  1012   0 19921  1595  0  0  99  0  0

This might be a stupid question, but which columns of the above output should I focus on to tell whether I/O is performing well or under heavy load?
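From what I have read so far, the I/O-related columns seem to be b (processes blocked, usually waiting on disk), bi/bo (blocks read/written per second) and wa (CPU time spent waiting for I/O). A rough one-liner to watch just those, assuming the default vmstat column order shown above:

    # print b, bi, bo and wa from each vmstat sample
    # (column positions assume the default layout: $2=b, $9=bi, $10=bo, $16=wa)
    vmstat 1 | awk 'NR > 1 { print $2, $9, $10, $16 }'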
Thanks

On Wed, Jan 23, 2013 at 4:58 PM, skechboy <nginx-forum@nginx.us> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Have you checked HDD performance on the server in this periods with atop or<br>
iostat 1 ?<br>
It's very likely to be related with this since I guess there's a lot's of<br>
random reading on 2000 connections.<br>
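For example, something along these lines (assuming sysstat is installed; the log path is just an arbitrary choice) would record extended disk stats during a busy period for later comparison:

    # capture 300 one-second samples of extended per-device stats
    iostat -dxk 1 300 > /tmp/iostat-peak.log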
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235447,235456#msg-235456

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx