Image Hosting

iptablez at
Fri Oct 15 05:08:48 MSD 2010

Hi Ryan,

Actually our ISP is giving us a 1Gbit connection. We already tested through another ISP before, and the connection reached up to 900Mbps.
So I assume the ISP isn't the problem either.

-----Original Message-----
From: Ryan Malayter <malayter at>
Date: Thu, 14 Oct 2010 17:07:39 
To: <nginx at>
Reply-To: nginx at
Subject: Re: Image Hosting

On Thu, Oct 14, 2010 at 9:26 AM, Indo Php <iptablez at> wrote:
> Here's the output of "iostat -d 60 2"
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda              85.58      4947.94      2619.30  593632866  314252346
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda             114.41      8395.93       339.01     503840      20344
> here's the output of vmstat
> procs -----------memory---------- ---swap-- -----io---- -system--
> ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id
> wa
>  0  1    332  63136 628344 6085040    0    0   617   324   17   27  1  4 88
> 6

With 114 I/O transactions per second (IOPS) and 8,396 block reads per
second, that's about 34 Mbit/s being read from disk (blocks are 512
bytes). Considering you say you're sending out 250 Mbit/s, I think
you're getting quite a bit of benefit from the OS filesystem cache,
and probably have a cache hit ratio above 80%.
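As a back-of-the-envelope check, the arithmetic behind those figures can be sketched like this (assuming iostat's default 512-byte blocks; the input numbers are the ones quoted in the thread):

```python
# Disk read throughput implied by the iostat output quoted above.
blocks_per_sec = 8395.93              # Blk_read/s from iostat
block_size = 512                      # bytes per block (iostat default)
disk_mbit = blocks_per_sec * block_size * 8 / 1e6
print(f"disk reads: {disk_mbit:.1f} Mbit/s")          # ~34.4 Mbit/s

# If ~250 Mbit/s is going out but only ~34 Mbit/s comes off disk,
# the rest is being served from the page cache.
outbound_mbit = 250                   # figure quoted in the thread
cache_hit = 1 - disk_mbit / outbound_mbit
print(f"estimated cache hit ratio: {cache_hit:.0%}")  # ~86%
```

This is only a rough estimate; it ignores write traffic and assumes the outbound 250 Mbit/s is all file content.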

Also, 114 IOPS is nothing. A single 15K disk can do 180 IOPS, so six
15K drives in a RAID-0 set should let you do >1000 IOPS.
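The headroom claim above works out as follows (a sketch using the rule-of-thumb figures from the thread: ~180 IOPS per 15K-rpm disk, with RAID-0 striping scaling roughly linearly):

```python
# Rough IOPS headroom for the hypothetical six-disk RAID-0 set.
iops_per_disk = 180                   # typical 15K-rpm drive
disks = 6
raid0_iops = iops_per_disk * disks    # ~1080 IOPS aggregate
observed_iops = 114.41                # tps from the iostat output above
print(f"array capacity: ~{raid0_iops} IOPS, "
      f"current load would be {observed_iops / raid0_iops:.0%} of it")
```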

In short... this doesn't look like a disk-bound system to me from a
hardware perspective.

What does the upstream network look like? Is it Gigabit through a good
provider? That could very well be the bottleneck. What metric are you
using to determine that it is "getting slower" or that you have a
"problem with disk I/O"?


nginx mailing list
nginx at
