Image Hosting
Ryan Malayter
malayter at gmail.com
Fri Oct 15 02:07:39 MSD 2010
On Thu, Oct 14, 2010 at 9:26 AM, Indo Php <iptablez at yahoo.com> wrote:
> Here's the output of "iostat -d 60 2"
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda              85.58      4947.94      2619.30  593632866  314252346
>
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda             114.41      8395.93       339.01     503840      20344
>
> here's the output of vmstat
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff   cache   si   so    bi    bo   in   cs us sy id wa
>  0  1    332  63136 628344 6085040    0    0   617   324   17   27  1  4 88  6
>
With 114 I/O transactions per second (IOPS) and about 8,396 block reads
per second, that works out to roughly 34 Mbit/s being read from disk
(iostat blocks are 512 bytes). Considering you say you're sending out
250 Mbit/s, I think you're getting quite a bit of benefit from the OS
filesystem cache, and your disk cache hit ratio is probably above 80%.
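For reference, here's the back-of-the-envelope arithmetic (the 512-byte
block size is iostat's standard unit; the 250 Mbit/s figure is from your
earlier message):

    # disk read throughput: blocks/s * 512 bytes * 8 bits
    awk 'BEGIN { printf "%.1f Mbit/s from disk\n", 8395.93 * 512 * 8 / 1e6 }'
    # -> 34.4 Mbit/s

    # share of the 250 Mbit/s that never touches disk
    awk 'BEGIN { printf "%.0f%% served from cache\n", (1 - 34.4 / 250) * 100 }'
    # -> 86%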
Also, 114 IOPS is nothing. A single 15K RPM disk can do roughly 180
IOPS, so six such drives in a RAID-0 set should let you do >1000 IOPS.
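By the same arithmetic (assuming roughly 180 IOPS per 15K spindle and
near-linear RAID-0 scaling):

    awk 'BEGIN { print 6 * 180 " IOPS from six 15K drives" }'
    # -> 1080 IOPS, nearly ten times your current 114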
In short... this doesn't look like a disk-bound system to me from a
hardware perspective. Your vmstat output agrees: only 6% of CPU time is
spent waiting on I/O (the "wa" column).
What does the upstream network look like? Is it Gigabit through a good
provider? That could very well be the bottleneck. What metric are you
using to determine that it is "getting slower" or that you have a
"problem with disk I/O"?
--
RPM