$request_time is 0.000 with gzip/chunked?

Maxim Dounin mdounin at mdounin.ru
Fri Jan 31 13:55:05 UTC 2014


Hello!

On Fri, Jan 31, 2014 at 05:04:25PM +1300, Robert Coup wrote:

> Hi Maxim,
> 
> On Fri, Jan 31, 2014 at 3:40 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:
> 
> >
> > A zero request time indicates that nginx never exhausted the
> > socket's output buffer while sending a response to the client, and
> > hence was able to complete sending within a single event loop
> > iteration.  The time is only refreshed before an event loop
> > iteration, so in this case the logged $request_time will be zero.
> >
> 
> Makes sense - so you're suggesting I wouldn't see it on a non-loopback
> network connection where the socket buffer is smaller than the file
> size?  I was seeing it on a non-loopback connection originally, but
> will do some more tests to verify.  I guess I also wouldn't see it
> if/when upstream/proxy buffers are exhausted?

It is possible to see this on fast enough non-loopback connections
as well, though it is unlikely, since non-loopback connections are
usually much slower.
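
For what it's worth, the effect is easy to reproduce deliberately by
shrinking the socket send buffer so it fills up before the response
is fully written, forcing nginx back into the event loop.  A minimal
sketch using the documented sndbuf parameter of the listen directive
(the port, root path, and 4k value are illustrative assumptions, not
recommendations):

    server {
        # A small kernel send buffer: writing a large response hits
        # EAGAIN, nginx returns to the event loop, the cached time is
        # refreshed, and $request_time comes out non-zero.
        listen 8080 sndbuf=4k;

        location / {
            root /var/www/html;
        }
    }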

> I guess I would have expected (not being an nginx guru!) an event loop call
> to happen for each chain/block being passed through the filters and out to
> the socket rather than only when the socket rejected it.
> 
> Regardless, if $request_time is defined as "request processing time in
> seconds with a milliseconds resolution; time elapsed between the first
> bytes were read from the client and the log write after the last bytes
> were sent to the client" - and the last bytes couldn't possibly have
> been sent within 0.000s (since the gzipping takes ~10s), doesn't that
> indicate a bug?

From a pedantic point of view, it is documented that the time is
updated once per event loop iteration; see
http://nginx.org/r/timer_resolution.

> Background: This isn't an entirely theoretical exercise... we're sending
> gzipped XML documents to clients with massively varying response sizes
> (3KB-300MB) and I was looking at writing a module to save the time taken
> until the first body chunk is sent to the client, so we could track the
> server response times. And stumbled across this issue.

If you just care about the precision of the measured time, you may
try using timer_resolution with some reasonable value like 10ms.
With that directive set, time is updated on Linux using signals, and
the logged time will be more accurate regardless of how long an event
loop iteration takes.
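
That is, something like the following in the main (top-level) context
of nginx.conf (the 10ms value is just the example from above, not a
recommendation):

    # With timer_resolution set, nginx refreshes its cached time at
    # this interval (via a signal-driven timer on Linux) instead of
    # only once per event loop iteration.
    timer_resolution 10ms;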

The really bad thing here is that sending a whole large response in
one event loop iteration can easily introduce huge latency spikes for
other connections handled by the same worker.  If this happens often
in practice, it might be a good idea to introduce something similar
to the sendfile_max_chunk directive we already have to resolve the
same problem with sendfile(); see
http://nginx.org/r/sendfile_max_chunk.
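
For reference, this is how the existing directive is used on the
sendfile() path; the suggestion above would be an analogous limit for
non-sendfile writes.  The location and the 512k value here are
illustrative assumptions:

    location /downloads/ {
        sendfile           on;
        # Limit the amount of data sent in a single sendfile() call,
        # so one fast connection cannot monopolize a worker for the
        # whole response.
        sendfile_max_chunk 512k;
    }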

-- 
Maxim Dounin
http://nginx.org/


