Big dispersion in request execution time.

Maxim Dounin mdounin at mdounin.ru
Mon Aug 25 17:57:10 UTC 2014


Hello!

On Tue, Aug 19, 2014 at 07:31:21AM -0400, yury_y wrote:

> Hi,
> 
> I faced the following problem. Our server works under a constant load of
> 300-400 requests per second.
> From the request execution time statistics I see that in some cases a "fast"
> request (one that normally executes in a few milliseconds) may hang for seconds.
> 
> Here is an illustration of this problem.
> I execute the following GET request "http://127.0.0.1:777/fcgi/auth..." (no
> ssl, no dns lookup, just http on localhost) from a local client (on the same
> server).
> Usually this request executes in less than 1 millisecond, but in this case
> the execution time is 130 milliseconds.
> 
> From the tcpdump output I can conclude the following:
> 16:18:43.095716 - client sent request to nginx
> 16:18:43.225903 - nginx sent request to upstream
> 16:18:43.226178 - upstream replied to nginx
> 16:18:43.226235 - nginx replied to client
> 
> So the request was processed by the upstream in less than 1 millisecond, but
> it took about 130 milliseconds for nginx to read the request from the client
> and pass it to the upstream.
> I observe similar behavior both for fcgi upstreams and for static requests.
> 
> Does anybody have similar problems? In which direction should I
> investigate?

Most likely, the reason for such delays is that all nginx workers 
were busy doing some other work.  In particular, this may happen 
with disk-bound workloads due to blocking on disk.  You may try 
looking at the top(1) output for the states of the nginx worker 
processes; it usually makes things much clearer.
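
For example, on a Linux system with procps, something along these 
lines shows the state of each worker process (just a sketch; adjust 
it to your system):

    ps -C nginx -o pid,stat,wchan:20,cmd

A worker shown with "D" in the STAT column is in uninterruptible 
sleep, which usually means it is waiting on disk I/O.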

Some additional reading:

http://nginx.org/r/sendfile_max_chunk
http://nginx.org/r/aio
http://nginx.org/r/output_buffers
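
As a rough illustration only (the location and the values below are 
made up, not recommendations for any particular setup), those 
directives might be combined along these lines for a disk-bound 
workload:

    location /downloads/ {
        sendfile           on;
        sendfile_max_chunk 512k;  # cap the data sent per sendfile() call,
                                  # so one response can't hold a worker too long
        aio                on;    # asynchronous file I/O where supported;
                                  # on Linux it applies to reads done with directio
        directio           4m;    # example threshold for direct I/O reads
        output_buffers     1 128k;
    }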

-- 
Maxim Dounin
http://nginx.org/


