nginx workers in "D" status in top
Ryan Malayter
malayter at gmail.com
Wed Jul 28 07:17:15 MSD 2010
On Tue, Jul 27, 2010 at 1:45 PM, Igor Sysoev <igor at sysoev.ru> wrote:
> nginx supports file AIO only in 0.8.11+, but the file AIO is functional
> on FreeBSD only. On Linux, AIO is supported by nginx only on kernel
> 2.6.22+ (although CentOS 5.5 has backported the required AIO features).
> Anyway, on Linux AIO works only if file offset and size are aligned
> to a disk block size (usually 512 bytes) and the data cannot be cached
> in the OS VM cache (Linux AIO requires DIRECTIO, which bypasses the OS
> VM cache). I believe the cause of this strange AIO implementation is
> that AIO in Linux was developed mainly for databases by Oracle and IBM.
Thank you for the detailed explanation. I suppose I am somewhat
shocked by the state of AIO on Linux, but then again most server
applications likely use blocking I/O with threads, since that
programming model is more straightforward.
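If I understand correctly, actually enabling this on Linux would look
something like the sketch below (the location and values are just my
guesses for illustration, not a tested config):

    # Hypothetical location serving large files with file AIO on Linux.
    location /big-files/ {
        aio       on;    # file AIO, available in nginx 0.8.11+
        directio  512;   # O_DIRECT for files over 512 bytes; required
                         # for Linux AIO, and it bypasses the OS VM cache
    }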
I often see 3 or more nginx workers in the "uninterruptible sleep"
state at the same time, even if only for a few ms. I presume this
means that any other connections being handled by those workers are
also blocked, even if they are proxy-only connections that never touch
the disk. We noticed extended response times from our application at
peak periods, even though the observed CPU load on the back end would
actually dip and then spike. So I think nginx is effectively "queuing"
requests behind other requests whose large responses get spooled to
disk.
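If that diagnosis is right, one mitigation I am considering (my own
idea, not something you suggested) is to stop spooling large proxied
responses to temp files on disk at all:

    # Keep upstream responses in memory buffers only; never buffer to disk.
    # Trade-off: once the buffers fill, the upstream stays tied up until
    # the client has consumed the response.
    proxy_max_temp_file_size 0;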
Do you foresee an issue with running 10 or more workers per CPU core?
Is there any upper bound on the number of nginx workers after which
inter-process communication overhead starts to become problematic?
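For concreteness, on a 4-core box I mean something like this (the
numbers are purely illustrative):

    worker_processes  40;   # hypothetical: 10 workers per core, so a
                            # disk-blocked worker leaves plenty of others
                            # free to serve proxy-only connections
    events {
        worker_connections  1024;
    }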
--
RPM