How can the number of parallel/redundant open streams/temp_files be controlled/limited?

Paul Schlie schlie at
Tue Jul 1 14:15:47 UTC 2014

Then how could multiple streams and corresponding temp_files ever be created upon successive requests for the same $uri with "proxy_cache_key $uri" and "proxy_cache_lock on", if all subsequent requests are locked to the same cache node created by the first request, even prior to its completion?

You've previously noted:

> In theory, cache code can be improved (compared to what we 
> currently have) to introduce sending of a response being loaded 
> into a cache to multiple clients.  I.e., stop waiting for a cache 
> lock once we've got the response headers, and stream the response 
> body being loaded to all clients waiting for it.  This should/can 
> help when loading large files into a cache, when waiting with 
> proxy_cache_lock for a complete response isn't cheap.  In 
> practice, introducing such code isn't cheap either, and it's not 
> about using other names for temporary files.

That is what I apparently, and incorrectly, understood proxy_cache_lock to actually do.

So if not the above, what does proxy_cache_lock actually do upon receipt of subsequent requests for the same $uri?
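
For reference, here is a minimal configuration of the kind under discussion (the zone name, cache path, and backend name are illustrative assumptions, not taken from this thread):

    # Illustrative sketch only; paths, zone name, and upstream are assumed.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m;

    server {
        listen 80;

        location / {
            proxy_pass http://backend;   # hypothetical upstream
            proxy_cache my_cache;
            proxy_cache_key $uri;

            # Only one request per cache key populates the cache;
            # other requests for the same key wait for it to finish,
            # up to proxy_cache_lock_timeout (5s by default).
            proxy_cache_lock on;
        }
    }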

On Jul 1, 2014, at 9:20 AM, Maxim Dounin <mdounin at> wrote:

> Hello!
> On Tue, Jul 01, 2014 at 08:44:47AM -0400, Paul Schlie wrote:
>> As it appears a downstream response is not cached until first 
>> completely read into a temp_file (which for a large file may 
>> require hundreds if not thousands of MB to be transferred), there 
>> appears to be no "cache node" formed from which to "lock" or serve 
>> "stale" responses, and thereby until the first "cache node" 
>> is usably created, proxy_cache_lock has nothing to lock 
>> requests to?
>> The code does not appear to be forming a "cache node" using the 
>> designated cache_key until the requested downstream element has 
>> completed transfer as you've noted?
> Your reading of the code is incorrect.
> A node in shared memory is created on a request start, and this is 
> enough for proxy_cache_lock to work.  On the request completion, 
> the temporary file is placed into the cache directory, and the 
> node is updated to reflect that the cache file exists and can be 
> used.
> -- 
> Maxim Dounin
> _______________________________________________
> nginx mailing list
> nginx at
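
The lifecycle Maxim describes might be sketched as the following timeline (an illustrative summary, not actual nginx source):

    request A (cache MISS)            request B (same key, while A is in flight)
    ----------------------            ------------------------------------------
    create cache node in shm          find the existing node
    acquire the cache lock            proxy_cache_lock: wait on that node's lock
    stream upstream -> temp file          (up to proxy_cache_lock_timeout)
    on completion:
      move temp file into cache dir
      mark node as "file exists"
      release the lock                lock released -> B retries, now a HIT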