nginx serving large files - performance issues with more than ~800-1000 connections
mdounin at mdounin.ru
Thu May 24 10:25:47 UTC 2012
On Thu, May 24, 2012 at 02:28:56PM +0700, Tomasz Chmielewski wrote:
> I have a cluster of 10 nginx 1.2.0 servers, on Linux. They
> primarily serve large files.
> Whenever the number of ESTABLISHED connections to nginx rises
> above 800-1000, things get very slow.
> I.e. it can take a minute or more before nginx starts serving
> such a connection; then the file is served very slowly (fetched
> from a server in the same rack):
> wget -O /dev/null
> I've tried different tuning parameters (nginx, sysctl etc.), but
> they don't seem to change much.
> The only thing that helps is starting one more nginx instance
> on a different port.
> This second instance then serves the files just fine. That is,
> with the number of established connections above 800-1000, the
> instance on port 80 is slow:
> PORT=80; wget -O /dev/null
> The second instance running on port 82 will reply fast and serve
> files fast:
> PORT=82; wget -O /dev/null
> Does this suggest an nginx issue, given that the second nginx
> instance serves the files fine?
> Or maybe some system / sysctl parameters?
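(As a quick way to reproduce the connection counts described above, assuming the iproute2 `ss` utility is available; the port number is illustrative:)

```shell
# Count ESTABLISHED TCP connections on local port 80.
# 'state established' filters by TCP state; the sport expression
# matches the listening port; tail drops the header line.
ss -tan state established "( sport = :80 )" | tail -n +2 | wc -l
```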
It suggests you are disk-bound and all nginx workers are busy
waiting for I/O operations. Try looking here for basic
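One way to test the disk-bound hypothesis (a sketch, not from the original thread) is to look for processes stuck in uninterruptible sleep, which on Linux almost always means waiting on disk I/O:

```shell
# Count processes in 'D' state (uninterruptible sleep).
# Many nginx workers showing up here at once would support
# the disk-bound diagnosis.
ps -eo stat,comm | awk '$1 ~ /^D/ { n++ } END { print n + 0 }'
```

To narrow the listing to nginx itself, `ps -o pid,stat,cmd -C nginx` works on Linux. On nginx 1.2.0 on Linux, `aio on;` combined with `directio` can also keep one slow read from tying up a whole worker.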
More information about the nginx mailing list