request_time much slower than upstream_response_time

Rt Ibmer rtibmx at yahoo.com
Sat Jun 21 20:01:41 MSD 2008


I've noticed a periodic performance issue with our nginx 0.6.31 box running on Fedora Core 8.

We are primarily running nginx as the front-end load balancer for a web service that serves mostly identical requests over and over to different web clients.

We are logging both request_time and upstream_response_time. For the majority of requests these values are identical: for instance, a request_time of 0.011 seconds and the same 0.011 seconds for upstream_response_time. Although these times fluctuate from request to request (ranging from, say, 0.005 to 0.020 seconds), whatever the value is, it is typically logged the same for both parameters.
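
In case it helps, the timing fields come from a log_format roughly like the following (the format name and the surrounding fields here are illustrative, not my exact line):

        log_format  timing  '$remote_addr [$time_local] "$request" '
                            '$status $request_time $upstream_response_time';
        access_log  /var/log/nginx/access.log  timing;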

However, for some requests the request_time is MUCH higher: for instance, a request_time of 0.635 seconds with an upstream_response_time of only 0.005 seconds.

Any ideas on what can be causing this?

A few supporting notes that I hope will be helpful in your assessment:

- the server is not under load: at most it is handling about 3 req/sec, and I've seen this disparity between request_time and upstream_response_time even when it was handling one request every few seconds

- the server is set up to handle both SSL and non-SSL requests. I have seen this issue with both, although it is a LOT more prominent with SSL requests: out of 2000 cases where request_time was much longer, about 80% were SSL requests. That said, I still see it happen on some non-SSL requests.

- normally I see a slight disparity on SSL pages: for instance a request_time of 0.076 seconds with an upstream time of 0.012 seconds. I expect this and am NOT counting it as the basis for the performance issue I am reporting.

- I have a rather simple configuration file, with maybe 10 location blocks at most. Some have nginx serving static content, but the most frequent logic path, and the one involved in this performance disparity, simply uses proxy_pass to send the request to an upstream server.
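
  The relevant proxying block is essentially the following (the upstream name and address are placeholders, not my real values):

        upstream backend {
            server 127.0.0.1:8080;
        }

        location / {
            proxy_pass  http://backend;
        }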

- I track stub_status over time; it typically shows Reading and Writing around 1, and Waiting around 40-60.
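
  For completeness, stub_status is exposed in the usual way (the location path here is illustrative):

        location /nginx_status {
            stub_status  on;
            access_log   off;
        }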

- My SSL config looks like this (in case it is relevant, though I doubt it, since the issue also occurs on non-SSL requests even when the server is near idle):

        ssl                  on;
        ssl_certificate      mydomain.com.crt;
        ssl_certificate_key  mydomain.com.key;

        ssl_session_cache       builtin:20480;
        ssl_session_timeout     3m;

        ssl_protocols  SSLv2 SSLv3 TLSv1;
        ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers   on;

- I have worker_connections 4096, a single worker process, and keepalive_timeout 30.

In summary, I definitely think something is askew somewhere, since the problem tends to happen intermittently and not under load.

I would greatly appreciate some tips on what you think may be causing this and how I can go about troubleshooting it further. Also, please let me know what additional technical details I can collect to help with assessing this performance matter. Thank you very much!