nginx serving large files - performance issues with more than ~800-1000 connections
Maxim Dounin
mdounin at mdounin.ru
Fri Jul 6 16:17:54 UTC 2012
Hello!
On Fri, Jul 06, 2012 at 10:32:08PM +0800, Tomasz Chmielewski wrote:
> On 05/24/2012 06:25 PM, Maxim Dounin wrote:
>
> >> Does it suggest nginx issues? Because the second nginx instance
> >> serves the files fine.
> >>
> >> Or maybe some system / sysctl parameters?
> >
> > It suggests you are disk-bound and all nginx workers are busy
> > waiting for I/O operations. Try looking here for basic
> > optimization steps:
> >
> > http://mailman.nginx.org/pipermail/nginx/2012-May/033761.html
>
> I've tried to follow these recommendations, but don't really see any improvement.
>
> The systems are not disk bound (see below).
If your system isn't disk bound, these recommendations aren't
likely to help, indeed. You have to find what the bottleneck is
in your case first.
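(Not nginx-specific, but the usual system tools are a good
starting point here, e.g.

iostat -x 1
vmstat 1

assuming sysstat is installed; they will show whether the disks,
CPU or memory are actually saturated while the slowdown happens.)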
On the other hand, the fact that the second nginx instance running
on a different port doesn't have problems (as indicated in your
original message) suggests there is some blocking which makes the
first instance slow.
Try looking into what the nginx processes are doing when this happens.
Something like
ps -eopid,user,args,wchan | grep nginx
should show where they block.
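If the WCHAN column for most workers shows disk-related waits (the
exact names depend on the kernel, but on Linux things like
io_schedule or sleep_on_buffer are typical), they are blocked on
disk reads. Running it periodically, e.g.

watch -n1 'ps -eopid,user,args,wchan | grep nginx'

makes the pattern easier to spot.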
> Even if I try to fetch a file which is stored in tmpfs, it is
> slow - 20, 30 secs, even more, like here:
>
> [root@da1 ~]# time curl ca3/404/404.html
> curl: (7) couldn't connect to host
>
> real 1m3.204s
> user 0m0.000s
> sys 0m0.000s
>
>
> I see it only when the number of established connections to
> nginx is around 700 and more (most serving large files, so the
> connections are long-lived).
Actually, this is expected behaviour for disk blocking: nginx
worker processes are busy waiting for disk I/O and can't accept
and handle new connections in a timely manner. It doesn't matter
where the requested file resides, as the pause is caused by nginx
worker processes being blocked by other requests.
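(If it does turn out to be disk blocking, the usual mitigations
besides faster disks are more worker processes and, on Linux builds
with file AIO support, offloading large reads. Roughly along these
lines, as an untuned sketch rather than a drop-in config:

worker_processes  8;

http {
    sendfile        on;
    aio             on;         # needs nginx built with --with-file-aio
    directio        4m;         # files of 4m and larger read with O_DIRECT + AIO
    output_buffers  1 512k;
}

With aio and directio enabled together, reads of files above the
directio size are done asynchronously, so a large-file request no
longer ties up a worker while it waits for the disk.)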
[...]
Maxim Dounin