very poor performance for serving static files
Maxim Dounin
mdounin at mdounin.ru
Mon Jul 18 16:44:18 UTC 2011
Hello!
On Mon, Jul 18, 2011 at 11:46:41AM -0400, asdasd77 wrote:
> we are a video hosting company and we store files in mp4 format. we have
> another server running litespeed which handles nearly a 1gbit connection
> successfully. it is slow, but at least it responds to requests. this new
> server (nginx) can't handle requests: if bandwidth usage hits 150-200mbit it
> goes down. actually not down, it just doesn't respond to any http request. we
> tried ourselves and also hired people to try, but with no success. this is the
> last chance. does anybody know what the problem is?
>
> here is the nginx.conf
> ******************************************************************************************************************************************
> #user nobody;
> worker_processes 8;
> worker_rlimit_nofile 20480;
>
> error_log /var/log/nginx/error.log info;
>
> #pid logs/nginx.pid;
>
>
> events {
> worker_connections 768;
You are using a really low number for worker_connections: with 8
workers you'll only be able to serve about 6k connections in
total (8 * 768 = 6144). If you see nginx not responding to http
requests, you are probably hitting this limit. Look at the error_log
and stub_status output to check whether this is the case, and try
raising worker_connections (see the sketch after the quoted events
block).
> use epoll;
> }
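E.g., something along these lines (the numbers are just an example,
and the /status location assumes nginx was built with the
stub_status module; put it into your server{} block):

    events {
        worker_connections  4096;   # per worker; 8 workers * 4096 = 32k connections total
        use epoll;
    }

    location = /status {
        stub_status on;             # reports active/reading/writing/waiting connections
        allow 127.0.0.1;            # only allow checks from localhost
        deny all;
    }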
[...]
> sendfile on;
With sendfile you rely on the OS to do the actual IO, and this may
not be a good idea if you are serving large files. Your OS will
likely issue read requests of about 16k, and this will thrash your
disks with IOPS and seeks.
Try either AIO, or at least normal reads with big buffers (and
without sendfile), i.e.

sendfile off;
output_buffers 2 512k;

or something like that.
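If you want to try AIO on Linux, a rough sketch might look like this
(it assumes nginx built with --with-file-aio, and the /videos/
location is just an example, adjust to your layout; on Linux, aio
only kicks in together with directio):

    location /videos/ {
        sendfile        off;
        aio             on;         # asynchronous reads instead of sendfile
        directio        512k;       # required for aio on Linux; files > 512k bypass the page cache
        output_buffers  1 512k;     # read in large chunks to reduce seeks
    }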
[...]
> our server has a 1tb hdd, 12gb ram, an 8-core cpu and a 1gbit line
Just 1 spindle isn't really good, but with large files and proper
tuning you should be able to get something like 600 Mbit/s (roughly
the raw disk speed on sequential reads; benchmark your disk to get
more accurate numbers), even if your working set is much bigger than
memory and effective caching isn't possible.
Maxim Dounin