How can the number of parallel/redundant open streams/temp_files be controlled/limited?

Paul Schlie schlie at comcast.net
Tue Jul 1 20:11:58 UTC 2014


Thank you for your patience.

I mistakenly thought the 5-second default value of proxy_cache_lock_timeout was the maximum delay allowed between successive portions of the backend server's response, in satisfaction of the reverse-proxy request being cached, before the cache lock is released; not, as it actually appears to be, the maximum delay for the response to be completely received and cached.

Now that I understand, please consider raising the default value substantially, or, more ideally, scaling it in proportion to the size of the item being cached (and possibly some measure of the stream's activity). In most circumstances redundant streams should never be opened, as they tend only to make matters worse.
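For reference, the configuration under discussion, with the lock timeout raised well above its default, might look roughly like the sketch below. The directives are real nginx directives; the backend name, cache path, zone name, and the 5-minute value are only illustrative assumptions, not values proposed anywhere in this thread:

    # Illustrative sketch only; paths, names, and values are hypothetical.
    proxy_cache_path /var/cache/nginx keys_zone=one:10m;

    server {
        location / {
            proxy_pass http://backend;
            proxy_cache one;
            proxy_cache_key $uri;

            # Only one request at a time may populate a new cache element.
            proxy_cache_lock on;

            # Raise the lock timeout above the 5s default so that a
            # response that is slow to complete does not cause waiting
            # requests to give up and open redundant upstream streams.
            proxy_cache_lock_timeout 5m;
        }
    }

With the default 5s timeout, any cacheable response taking longer than 5 seconds to complete lets waiting requests pass through to the backend, producing exactly the parallel streams and temp_files described above.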

Thank you.

On Jul 1, 2014, at 12:40 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:
> On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:
>> Then how could multiple streams and corresponding temp_files 
>> ever be created upon successive requests for the same $uri with 
>> "proxy_cache_key $uri" and "proxy_cache_lock on"; if all 
>> subsequent requests are locked to the same cache_node created by 
>> the first request even prior to its completion?
> 
> Quoting documentation, http://nginx.org/r/proxy_cache_lock:
> 
> : When enabled, only one request at a time will be allowed to 
> : populate a new cache element identified according to the 
> : proxy_cache_key directive by passing a request to a proxied 
> : server. Other requests of the same cache element will either wait 
> : for a response to appear in the cache or the cache lock for this 
> : element to be released, up to the time set by the 
> : proxy_cache_lock_timeout directive.
> 
> So, there are at least two cases "prior to its completion" which 
> are explicitly documented:
> 
> 1. If the cache lock is released - this happens, e.g., if the 
>   response isn't cacheable according to the response headers.
> 
> 2. If proxy_cache_lock_timeout expires.
> 
> -- 
> Maxim Dounin
> http://nginx.org/
