How can the number of parallel/redundant open streams/temp_files be controlled/limited?

Paul Schlie schlie at comcast.net
Wed Jun 25 00:58:32 UTC 2014


Again, thank you. However ... (see below)

On Jun 24, 2014, at 8:30 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:

> Hello!
> 
> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote:
> 
>> Thank you; however it appears to have no effect on reverse proxy_store'd static files?
> 
> Yes, it's part of the cache machinery.  The proxy_store 
> functionality is dumb and just provides a way to store responses 
> received, nothing more.

- There should be no difference between how reverse-proxied files are accessed and first stored into their corresponding temp_files (more on this below).
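
(For reference, the cache machinery does provide proxy_cache_lock to serialize concurrent fetches of the same element, while proxy_store does not; a rough, untested sketch of the two setups -- all paths, zone names, sizes and the backend address below are made-up examples:)

  # http{} context assumed; everything below is illustrative only.
  upstream backend {
      server 127.0.0.1:8080;
  }

  proxy_cache_path /var/cache/nginx keys_zone=static_zone:10m;

  server {
      listen 80;

      # proxy_store: the response is simply written out as a plainly-named
      # file; nothing serializes concurrent fetches of the same URI.
      location /store/ {
          proxy_pass  http://backend;
          proxy_store /var/www/mirror$uri;
      }

      # proxy_cache: with proxy_cache_lock on, only one request at a time is
      # allowed to populate a new cache element; the others wait for it.
      location /cache/ {
          proxy_pass               http://backend;
          proxy_cache              static_zone;
          proxy_cache_valid        200 10m;
          proxy_cache_lock         on;
          proxy_cache_lock_timeout 5s;
      }
  }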

> 
>> (Which seems odd, if it actually works for cached files; as both 
>> are first read into temp_files, being the root of the problem.)
> 
> See above (and below).
> 
>> Any idea on how to prevent multiple redundant streams and 
>> corresponding temp_files being created when reading/updating a 
>> reverse proxy'd static file from the backend?
> 
> You may try to do so using limit_conn, and may be error_page and 
> limit_req to introduce some delay.  But unlikely it will be a 
> good / maintainable / easy to write solution.

- Please consider making it the default that no additional upstream streams are opened unless a previously opened stream appears to have died (timed out); otherwise the extra streams only consume more bandwidth and thereby delay completion of the request.  Further, since there should be no difference between how reverse-proxy read streams and their corresponding temp_files are created, regardless of whether the responses are subsequently stored as symbolically-named static files or hash-named cache files, this behavior should be common to both.
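
(That said, for anyone needing a workaround today, my reading of the limit_conn / error_page / limit_req suggestion above comes out roughly as follows -- again a rough, untested sketch; zone names, sizes, rates, paths and the backend address are made-up examples:)

  # http{} context assumed; illustrative only.
  upstream backend {
      server 127.0.0.1:8080;
  }

  limit_conn_zone $uri zone=peruri_conn:10m;
  limit_req_zone  $uri zone=peruri_req:10m rate=1r/s;

  server {
      listen 80;

      location /store/ {
          limit_conn peruri_conn 1;        # at most one concurrent fetch per
                                           # URI; the rest get 503
          error_page 503 = @retry_later;   # divert the excess requests

          proxy_pass  http://backend;
          proxy_store /var/www/mirror$uri;
      }

      location @retry_later {
          # limit_req (without "nodelay") delays the diverted requests before
          # they are retried against the backend.
          limit_req zone=peruri_req burst=10;

          proxy_pass  http://backend;
          proxy_store /var/www/mirror$uri;
      }
  }

... which rather proves the point that it's not an easy or maintainable solution.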

>> (Out of curiosity, why would anyone ever want many multiple 
>> redundant streams/temp_files ever opened by default?)
> 
> You never know if responses are going to be the same.  The part 
> which knows (or, rather, tries to) is called "cache", and has 
> lots of directives to control it.

- If they're not "the same", then the TCP protocol stack has failed, which has nothing to do with nginx.
(Unless a backend server is frequently dropping connections, it's counterproductive to open multiple redundant streams; doing so by default will most likely only result in higher bandwidth use and thereby slower completion of the response.)

> -- 
> Maxim Dounin
> http://nginx.org/
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


