Serving static files dynamically ...

Adam Zell zellster at gmail.com
Wed Mar 25 22:02:07 MSK 2009


With regards to serving large files through nginx, the following thread may
be interesting:

http://www.litespeedtech.com/support/forum/showthread.php?t=2446&page=2&highlight=sendfile

"nginx is a very well SMP-implemented webserver with awesome bandwidth-throttle
features; you can set max concurrent connection limits per IP globally,
per vhost, or for any individual path/file type using PCRE conditions.
nginx is suitable for smaller files of about 10~20 MB, but for larger files of more
than 100MB it runs into I/O bottlenecks with my huge connection counts at peak:
100% iowait on all cores, which leads to high load and forces the server to much
lower throughput
...
lighttpd, a super webserver for static content, especially when compiled with
LOCAL BUFFERING enabled in src/network_writev.c and lib async io

it [lighttpd] rocks! for me it can handle 2x the throughput against
litespeed/nginx without any iowait
iowait is almost 0 with those huge file sizes and awesome connection counts"
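
The connection-limit and throttling features that quote refers to can be sketched in an nginx configuration roughly like this (zone name, limits, and paths are illustrative assumptions, not from the thread; the limit_conn_zone directive name is from later nginx versions - 2009-era nginx spelled it limit_zone):

```nginx
http {
    # Track concurrent connections per client IP in a shared-memory zone.
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        listen 80;

        # Hypothetical download area for large static files.
        location /downloads/ {
            limit_conn perip 2;   # at most 2 concurrent connections per IP
            limit_rate 500k;      # throttle each connection to 500 KB/s
        }
    }
}
```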

On Wed, Mar 25, 2009 at 11:07 AM, Marcus Clyne <eugaia at gmail.com> wrote:

> Hi Brice,
>
> Firstly, I'm not talking about a full-blown database that's serving files,
> but a lightweight front to the files (as PBMS is).  PBMS is just an HTTP
> front to a storage engine for MySQL, and doesn't deal with SQL or anything
> like that.  It's really like serving the content directly out of a large
> file, in a pretty similar way to many caches do.
>
> Ordinarily, the filesystem will be much faster than even the lightest of
> fronts to a DB, but if you have millions of files, then each file will have
> metadata associated with it (which takes up space - usually at least 4KB),
> the filesystem has to cope with all of those files, and many filesystems
> struggle when you start getting to large numbers of files, which slows
> things down.
>
> If you have billions of files, then you couldn't even serve them off a
> normal 32-bit fs, because you'd run out of inodes (I believe).
>
> For thousands of files, or tens of thousands, you'd be fine, though, and
> the filesystem will definitely be quicker than PBMS.
>
> When I did my tests with PBMS, I created 2M objects, and put them in the
> database with MySQL, then served them using PBMS.  My benchmarks showed that
> the number of req/s I could serve was similar to Apache 2 at best, and about
> 20% slower at worst (it depended on the index of the file).  That might not
> seem great, but that was serving from 1 file vs 2M.  I tried creating files
> in a hierarchical structure, and after I got to around 400k (? I can't quite
> remember), my system almost completely stalled, so I stopped trying to add
> files.
>
> In most scenarios, the filesystem will be quicker, but not always.
>
> Cheers,
>
> Marcus.
>
>
> Brice Leroy wrote:
>
>>
>> On Mar 24, 2009, at 1:55 PM, Marcus Clyne wrote:
>>
>>  Michael Shadle wrote:
>>>
>>>> not to mention that this is only useful if the OP is storing files in
>>>> the database to begin with, I believe.
>>>>
>>>> if it's filesystem, then X-Accel-Redirect is the way to go.
>>>>
>>>>  Yes, if the filesystem is the storage mechanism, then I'd agree.  If
>>> you have a very large number of files, though, storing them in the DB can be
>>> more efficient than using the filesystem (depending on platform, file
>>> number, directory hierarchy etc).
>>>
>>
>> How can the filesystem be slower than a DB at serving huge video files?
>> That's completely contrary to everything I know! Can you explain to me how
>> this is possible?
>> My situation will be serving thousands (maybe more later) of different big
>> files (between 200MB and 4GB) to different users.
>>
>> Brice
>>
>>
>>
>
>
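
The X-Accel-Redirect approach Michael Shadle recommends above for filesystem-backed storage can be sketched as follows (the location name and storage path are hypothetical): the backend authorizes the request and answers with an X-Accel-Redirect header instead of the file body, and nginx serves the file itself from an internal location.

```nginx
# nginx side: an internal location that external clients cannot request
# directly; it is only reachable via X-Accel-Redirect from the backend.
location /protected/ {
    internal;
    alias /var/www/big-files/;    # hypothetical storage directory
}

# The backend (any language) replies with an empty body plus headers
# such as:
#
#   X-Accel-Redirect: /protected/video-1234.mp4
#   Content-Type: video/mp4
#
# nginx then streams /var/www/big-files/video-1234.mp4 itself, so the
# large transfer never ties up a backend worker.
```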


-- 
Adam
zellster at gmail.com
