proxy_cache
Maxim Dounin
mdounin at mdounin.ru
Thu Jan 12 12:34:01 UTC 2012
Hello!
On Thu, Jan 12, 2012 at 11:49:23AM +0200, Anatoli Marinov wrote:
> I know this configuration variable. It was added by Maxim last
> month in unstable (as I remember, but I am not absolutely sure). It
> seems to be a workaround and will not solve the problem. I think it
> is unusable.
>
> If we use it for the same case:
> In the first second, A receives 1000 requests. Only 1 request will be
> sent to B, for the first request that A receives. The other 999 will
> wait, for example, 5 seconds. The link between A and B is 1 MB per
> second, and in 5 seconds A may receive 5 MB of data, so after 5 seconds
> the 999 requests will be sent to B.
>
> Is it right?
Yes. The remaining questions are: are you serving "many big files –
1GB – 2GB" over a 1 MB/s link? And are 1000 simultaneous requests to
the same file a likely situation in your workload? If yes, you may want
to reconsider your network configuration.
You may also try to tune proxy_cache_lock_timeout (the default is set
low enough to ensure minimal QoS impact), but it isn't likely to help
much in the particular situation described: fetching a 1 GB – 2 GB
object over a 1 MB/s link takes far longer than any reasonable lock
timeout, so the waiting requests will still be sent to the upstream.
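For reference, a minimal configuration sketch for the setup discussed
(the cache path, the "big_files" zone name and the upstream address are
made up for the example):

    proxy_cache_path  /var/cache/nginx  keys_zone=big_files:10m;

    server {
        listen 80;

        location / {
            proxy_pass         http://storage-b.example.com;
            proxy_cache        big_files;
            proxy_cache_valid  200 1d;

            # Only one request per cache key is sent to the upstream to
            # populate the cache; the others wait up to
            # proxy_cache_lock_timeout (5s by default) and are then
            # sent to the upstream anyway.
            proxy_cache_lock          on;
            proxy_cache_lock_timeout  5s;
        }
    }

Without proxy_cache_lock (it is off by default), each of the 1000
requests creates its own tmp file, as described below.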
Ideally (for the big-files use case), we should be able to stream the
same response to all clients requesting the file (while downloading it
from upstream), but this isn't likely to happen soon.
A relatively near-term plan is to improve the cache lock mechanism to
make it possible to switch off caching (and thus save some disk
resources) in case of a lock timeout.
Maxim Dounin
>
>
> On 01/12/2012 11:33 AM, Andrew Alexeev wrote:
> >Check this one, pls :)
> >http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_lock
> >
> >On Jan 12, 2012, at 1:32 PM, Anatoli Marinov wrote:
> >
> >>Hello Colleagues,
> >>
> >>I found a performance issue with the proxy_cache module.
> >>
> >>
> >>For example, I have installed 2 servers with nginx-1.0.10. The first
> >>one (A) works as a reverse proxy and the second one (B) is a
> >>storage with many big files – 1 GB – 2 GB.
> >>
> >>The link between A and B may serve, for example, 1 MB/s.
> >>
> >>
> >>There is a new object on B and it is not yet cached on A.
> >>
> >>Let us assume this is a hot new object and A receives 1000
> >>requests for it within 3 seconds. Because the object is not cached,
> >>the requests will pass through the upstream to B, and the 1000
> >>incoming streams will be saved on A in the tmp directory as separate
> >>files. After every request has completed, its file from the tmp
> >>directory will be moved to the cache directory: 1000 identical
> >>operations for one and the same object. In addition, every copy will
> >>be cached slowly because there are 999 other streams.
> >>
> >>
> >>This 1 GB object will be downloaded 1000 times before it may be
> >>cached, and this is not optimal at all.
> >>
> >>
> >>Am I missing something? Could it be a configuration issue on my side?
> >>
> >>Is there a solution for that?
> >>
> >>Cheers
> >>Anatoli Marinov
> >>