How can the number of parallel/redundant open streams/temp_files be controlled/limited?

Maxim Dounin mdounin at mdounin.ru
Tue Jul 1 13:20:04 UTC 2014


Hello!

On Tue, Jul 01, 2014 at 08:44:47AM -0400, Paul Schlie wrote:

> As it appears a downstream response is not cached until it has 
> been completely read into a temp_file (which for a large file may 
> require hundreds if not thousands of MB to be transferred), there 
> appears to be no "cache node" formed to "lock" or serve "stale" 
> responses from, and therefore, until the first cache node is 
> usably created, proxy_cache_lock has nothing to lock requests 
> to?
> 
> The code does not appear to form a "cache node" using the 
> designated cache_key until the requested downstream element has 
> completed its transfer, as you've noted?

Your reading of the code is incorrect.

A node in shared memory is created at request start, and this is 
enough for proxy_cache_lock to work.  On request completion, the 
temporary file is moved into the cache directory, and the node is 
updated to reflect that the cache file exists and can be used.
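For reference, a minimal proxy cache configuration along these 
lines might look like the sketch below; the zone name, paths, 
upstream address, and timings are illustrative assumptions, not 
taken from this thread.  proxy_cache_lock makes concurrent 
requests for the same (not yet populated) cache node wait on the 
one request that is passed upstream, and proxy_cache_use_stale 
allows serving a stale entry while the node is being updated.

    # Hypothetical example -- names, sizes, and timeouts are placeholders.
    http {
        # keys_zone defines the shared memory zone where cache nodes live.
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                         max_size=10g inactive=60m;

        server {
            listen 80;

            location / {
                proxy_pass http://backend;

                proxy_cache      my_cache;
                proxy_cache_key  $scheme$proxy_host$request_uri;

                # Only one request per cache node goes upstream; the
                # others wait, up to proxy_cache_lock_timeout.
                proxy_cache_lock          on;
                proxy_cache_lock_timeout  5s;

                # Serve a stale entry while the cache node is being updated.
                proxy_cache_use_stale updating error timeout;
            }
        }
    }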

-- 
Maxim Dounin
http://nginx.org/


