request_time much slower than upstream_response_time

Igor Sysoev is at rambler-co.ru
Mon Jun 23 19:17:30 MSD 2008


On Mon, Jun 23, 2008 at 07:40:21AM -0700, Rt Ibmer wrote:

> > I suspect two causes of the tardiness:
> > 
> > 1) It may be caused by SSL handshake: there is 4 packets
> > exchange, and while
> >    3 packets TCP handshake is not accounted in
> > $request_time, the SSL
> >    handshake is accounted.
> > 
> > 2) It may be caused by SSL builtin session cache cleanups.
> > Try to use
> >    shared only cache (even if you use the single worker):
> 
> Thanks Igor. I tried changing the cache cleanups as you suggested. It may be helpful for avoiding any blocking of other requests, so thanks! However, it did not speed up the slower ssl connections.
> 
> If it is being caused by #1 above, am I correct to assume that the quality of the user's connection to the internet could be a factor here? For instance, if they are on a poor dialup connection?
> 
> What is strange is that in some cases the request_time and upstream_time are the same, say about 0.015 seconds.  Then in many cases there is what I would consider an "understandable" difference due to ssl handshake overhead like request_time of 0.3 - 0.5 sec with upstream time of 0.012 sec.
> 
> However we still have a fair number of cases where the request_time is between 1-3 seconds (maybe like 1 out of every 50 ssl requests) while the upstream time is still very low like 0.015 sec.
> 
> Basically our nginx server is hit in the following way: a user embeds a tag in their web page that points to our nginx box. Our nginx front end then goes to an upstream server (which nginx proxies) to get about 10KB of static content, which nginx then returns to them.

Yes, the user's connection speed is important here. $request_time is the time
between getting the first byte of the request and passing the last byte of the
response to the kernel. Since a request usually fits in one packet, nginx gets
the request immediately. And if the response is small (like your 10K), nginx
passes it to the kernel immediately as well. So for small plain HTTP responses
$request_time does not show the time spent sending data to the client.

In the SSL case, $request_time also includes the 4-packet handshake exchange,
i.e. additional round trip times (RTT), plus possible TCP retransmits, etc.
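To see this gap directly, $request_time and $upstream_response_time can be logged side by side. A sketch of such a log format — the format name and log path are illustrative, not from this thread:

```nginx
# hypothetical log format to compare the two timings per request
log_format  timing  '$remote_addr "$request" '
                    'request_time=$request_time '
                    'upstream_time=$upstream_response_time';

access_log  /var/log/nginx/timing.log  timing;
```

Requests where request_time greatly exceeds upstream_time then point at the client-side handshake/network rather than the upstream.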

> The content we are serving is not in any way sensitive information. However if the user's web page is running under https and they try to pull that small content from us running http, then the browser will complain. So, we support https transactions simply so that we can integrate with an ssl page and keep the browser happy.
> 
> Also another interesting element here is that the browser will cache our content and we are only referenced about 1 to 3 times within a web page.  So on one web page hit the browser may hit our nginx box 1-3 times for static content on that page, but this same browser will not come back to nginx at all again during their session.  So the ssl need is very short lived for that user's session.
> 
> Based on the fact that our ssl transmitted data is not actually sensitive data (so we don't care about the strength/quality of the encryption) and that it is short lived for each user (1-3 initial hits to us from a web page from the same user and then not again the rest of their session), can you recommend some ideas for the SSL configuration options I can use within nginx to try and keep things as fast and efficient as possible (as far as what to use for ssl_timeouts and other types of settings like this)?

You should use keepalive. Small ssl_timeout values will not help in your case.
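Putting the earlier suggestions together, a minimal sketch of the relevant directives — values are illustrative and the certificate paths are placeholders, not tuned recommendations:

```nginx
http {
    # keepalive lets the 1-3 hits per page reuse one TCP
    # connection and one SSL handshake
    keepalive_timeout    65;

    server {
        listen               443;
        ssl                  on;
        ssl_certificate      cert.pem;    # placeholder paths
        ssl_certificate_key  cert.key;

        # shared cache avoids the builtin session cache
        # cleanups mentioned earlier in the thread
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
    }
}
```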

You may also try to use 56-bit and 128-bit ciphers first:

ssl_ciphers      DES-CBC-SHA:RC4-MD5:RC4-SHA:AES128-SHA:DES-CBC3-SHA;


-- 
Igor Sysoev
http://sysoev.ru/en/




