Can multiple nginx instances share a single proxy cache store?
i.hailperin at heinlein-support.de
Tue Jul 31 14:11:36 UTC 2012
Thank you Maxim, I have made my decision now!
On 07/31/2012 03:58 PM, Maxim Dounin wrote:
> On Tue, Jul 31, 2012 at 03:35:25PM +0200, Isaac Hailperin wrote:
>> I am planning to deploy multiple nginx servers (10) to proxy a bunch
>> of apaches(20). The apaches host about 4000 vhosts, with a total
>> volume of about 1TB.
>> One scenario I am thinking of with regard to hard disk storage of
>> the proxy cache would be to have a single storage object, eg. NAS,
>> connected to all 10 nginx servers via fibre channel.
>> This would have the advantage of only pulling items into the cache
>> once, and would also avoid cache inconsistencies, which could at
>> least be a temporary problem if all 10 nginx servers had their own
>> cache.
>> My question now is: would this work in theory?
>> Can multiple nginx instances share a single proxy cache store?
>> I am thinking of cache management: all 10 nginx instances would try
>> to manage the same cache directory. I don't know enough about the
>> cache management internals to tell whether this causes problems.
>> Strictly speaking this is a second question, but still: the
>> alternative would be to give each nginx server local storage for the
>> proxy cache (e.g. a RAID 5, or even JBOD (just a bunch of disks)).
>> This would obviously be much simpler to set up and manage, and thus
>> be more robust (the shared storage would be a single point of
>> failure).
>> Which would you recommend?
> Use local storage.
> The main disadvantage of network storage pretending to be a local
> filesystem is blocking I/O. Even with fully working AIO (in contrast
> to the one available under Linux, which requires directio), there are
> still blocking operations like open()/fstat(), and this will likely
> result in suboptimal nginx performance even when just serving static
> files from such storage.
> Maxim Dounin