Multiple nginx instances share same proxy cache storage
rpaprocki at fearnothingproductions.net
Mon Aug 11 00:24:04 UTC 2014
Are there any options, then, to support an architecture with multiple nginx nodes sharing or distributing a proxy cache between them? I.e., an HAProxy machine load balances to several nginx nodes (for failover reasons), and each of these nodes handles HTTP proxying plus proxy caching for a remote origin. If nginx keeps cache metadata in memory, it seems that multiple instances could not maintain the same cache (so something like rsyncing the cache contents between nodes would not work). Are there any recommendations for achieving such a setup?
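One approach that comes to mind (a sketch, not something confirmed in this thread) is to avoid sharing cache storage entirely: let HAProxy hash requests on the URI so each URL consistently lands on the same nginx node, and have every node keep its own independent proxy cache. The backend names and addresses below are illustrative:

```
backend nginx_cache_nodes
    # Hash on the request URI so a given URL always maps to the same node,
    # maximizing per-node cache hit rates.
    balance uri
    # Consistent hashing limits remapping when a node goes down.
    hash-type consistent
    server cache1 10.0.0.11:80 check
    server cache2 10.0.0.12:80 check
```

With this layout a node failure only loses that node's share of the cache; the remaining nodes re-fetch the affected URLs from the origin.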
> On Aug 4, 2014, at 17:49, Maxim Dounin <mdounin at mdounin.ru> wrote:
>> On Mon, Aug 04, 2014 at 07:42:20PM -0400, badtzhou wrote:
>> I am thinking about setting up multiple nginx instances sharing a single proxy
>> cache storage using NAS, NFS, or some kind of distributed file system. The cache
>> key will be the same for all nginx instances.
>> Will this work in theory? What kinds of problems will it cause (locking, cache
>> corruption, or missing metadata in memory)?
> As soon as a cache is loaded, nginx relies on its in-memory data to
> manage the cache (keeping it under the specified size, removing inactive
> items, and so on). As a result, it won't be happy if you try to run
> multiple nginx instances working with the same cache directory.
> It can tolerate multiple instances working with the same cache for
> a short period of time (e.g., during a binary upgrade), but running
> nginx this way intentionally is a bad idea.
> Besides, using NFS (as well as other NASes) for nginx cache is a
> bad idea due to blocking file operations.
> Maxim Dounin
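For reference, the in-memory data Maxim describes is the shared memory zone declared via `keys_zone` in `proxy_cache_path`; each nginx instance assumes exclusive ownership of that zone and the cache directory it tracks, which is why two instances cannot safely manage the same directory. A minimal single-instance cache configuration (paths, names, and sizes here are illustrative, not from the thread):

```
# All cache metadata lives in the "origin_cache" shared memory zone (10 MB);
# the on-disk directory holds only the cached response bodies.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=origin_cache:10m
                 max_size=10g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache origin_cache;
        proxy_cache_key $scheme$proxy_host$request_uri;
        proxy_pass http://origin.example.com;
    }
}
```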
nginx mailing list
nginx at nginx.org
More information about the nginx mailing list