nginx & upstream_response_time
Maxim Dounin
mdounin at mdounin.ru
Tue Oct 2 14:30:07 UTC 2012
Hello!
On Tue, Oct 02, 2012 at 03:44:38PM +0200, Marcin Deranek wrote:
> Hi,
>
> Currently I'm running nginx 1.2.4 with a uwsgi backend. To my
> understanding $upstream_response_time should represent the time taken
> to deliver content by the upstream (in my case the uwsgi backend). It
> looks like it's not the case for me.
>
> uwsgi specific snippet:
>
> server {
> ...
> uwsgi_buffering off;
[...]
> When I use a "slow" client connecting to nginx (e.g. socat
> TCP:127.0.0.1:80,rcvbuf=128 STDIO) I can see the following happening:
> Backend server gets busy only for ~10s (this is what I expect). If I
> issue 2 concurrent requests, one is served immediately and the 2nd one
> after ~10s. This behaviour would indicate that the backend was able to
> deliver the content in ~10s (the whole response was buffered, as the
> buffer size is big enough to accommodate the full response and we have
> only 1 worker at the backend). Unfortunately the access log disagrees
> with that, as it shows $upstream_response_time almost equal to
> $request_time (e.g. ~1000s vs the expected ~10s). Is this expected
> behaviour?
You asked nginx to work in unbuffered mode, and in this mode it
doesn't pay much attention to what happens with the backend connection
if it isn't able to write the data it already has to the client. In
particular, it won't detect a connection close by the backend (and
stop counting $upstream_response_time).
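
If it helps to see both timers next to each other, something along
these lines records $request_time and $upstream_response_time side by
side in the access log (the format name and log path are just an
illustration, not anything you must use):

    # "timing" and the log path below are made up for this example;
    # the variables themselves are standard nginx variables
    log_format timing '$remote_addr "$request" status=$status '
                      'request_time=$request_time '
                      'upstream_time=$upstream_response_time';
    access_log /var/log/nginx/timing.log timing;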
This could probably be somewhat enhanced, but if you care about
$upstream_response_time it most likely means you don't need
"uwsgi_buffering off", and vice versa.
--
Maxim Dounin
http://nginx.com/support.html