How can the number of parallel/redundant open streams/temp_files be controlled/limited?
Paul Schlie
schlie at comcast.net
Tue Jul 1 21:03:47 UTC 2014
Lastly, is there any way to get proxy_store to work in combination with proxy_cache, possibly by enabling the completed temp_file to be saved as a proxy_store file within its logical URI path hierarchy, with the cache file descriptor aliased to it, or vice versa?
(It is often convenient to be able to view and access cached files within their natural URI hierarchy, which is virtually impossible when they are stored under their hashed names alone; at the same time, one would not want to lose the benefit of locking multiple pending requests to the same cache_node being fetched, which minimizes otherwise redundant requests to the backend before the file is cached.)
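
For concreteness, the kind of combined configuration being asked about might look roughly like this (a sketch only; whether nginx will actually honor proxy_cache and proxy_store together for the same location is precisely the open question here, and the cache path, zone name, backend, and mirror root are hypothetical):

    proxy_cache_path /var/cache/nginx keys_zone=static:10m;

    location / {
        proxy_pass         http://backend;          # hypothetical upstream
        proxy_cache        static;                  # hashed cache tree, with lock support
        proxy_cache_key    $uri;
        proxy_cache_lock   on;                      # serialize fetches of a new element
        proxy_store        /var/www/mirror$uri;     # plain copy in the natural URI hierarchy
        proxy_store_access user:rw group:r all:r;
    }
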
On Jul 1, 2014, at 4:11 PM, Paul Schlie <schlie at comcast.net> wrote:
> Thank you for your patience.
>
> I mistakenly thought the 5-second default value of proxy_cache_lock_timeout was the maximum delay allowed between successive responses from the backend server in satisfaction of the reverse-proxied request being cached before the cache lock is released, not the maximum delay for the response to be completely received and cached, as it appears to actually be.
>
> Now that I understand, please consider setting the default value much higher, or, more ideally, scaling it in proportion to the size of the item being cached and possibly some measure of the stream's activity; in most circumstances redundant streams should never be opened, as they tend only to make matters worse.
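>
> In the meantime, the limit can be raised per location; a minimal
> illustration (the value shown is arbitrary, since the directive
> currently accepts only a fixed time rather than anything
> proportional to object size):
>
>     proxy_cache_lock on;
>     proxy_cache_lock_timeout 1h;   # allow large/slow responses to finish under one lock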
>
> Thank you.
>
> On Jul 1, 2014, at 12:40 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:
>> On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:
>>> Then how could multiple streams and corresponding temp_files
>>> ever be created upon successive requests for the same $uri with
>>> "proxy_cache_key $uri" and "proxy_cache_lock on", if all
>>> subsequent requests are locked to the same cache_node created by
>>> the first request even prior to its completion?
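>>>
>>> For reference, the configuration in question is along these
>>> lines (zone name and backend hypothetical):
>>>
>>>     proxy_cache_path /var/cache/nginx keys_zone=static:10m;
>>>
>>>     location / {
>>>         proxy_pass       http://backend;
>>>         proxy_cache      static;
>>>         proxy_cache_key  $uri;
>>>         proxy_cache_lock on;
>>>     }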
>>
>> Quoting documentation, http://nginx.org/r/proxy_cache_lock:
>>
>> : When enabled, only one request at a time will be allowed to
>> : populate a new cache element identified according to the
>> : proxy_cache_key directive by passing a request to a proxied
>> : server. Other requests of the same cache element will either wait
>> : for a response to appear in the cache or the cache lock for this
>> : element to be released, up to the time set by the
>> : proxy_cache_lock_timeout directive.
>>
>> So, there are at least two cases "prior to its completion" which
>> are explicitly documented:
>>
>> 1. If the cache lock is released - this happens, e.g., if the
>> response isn't cacheable according to the response headers.
>>
>> 2. If proxy_cache_lock_timeout expires.
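>>
>> In configuration terms, both cases concern the same pair of
>> directives; a minimal sketch:
>>
>>     proxy_cache_lock on;           # serialize population of a new cache element
>>     proxy_cache_lock_timeout 5s;   # the default; waiters are passed to the backend after this
>>
>> Case 1 arises when the response turns out not to be cacheable
>> (e.g. it carries "Cache-Control: no-cache"), so the lock is
>> released and the waiting requests each open their own stream
>> and temp_file.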
>>
>> --
>> Maxim Dounin
>> http://nginx.org/