Proxy_Cache Locking or Waiting method?

Resicow resicow at
Tue Sep 29 03:10:27 MSD 2009

Hi Igor,

I hope you are well. I just wanted to know if you had a time frame as to 
when we may start to see backend locking for proxy_cache.

I have an example where nginx caches large files, and sometimes a 
customer will use an HTTP monitoring service to check that a file works. 
That can generate 40+ hits to the file all at the same time, causing 
nginx to proxy the file from the backend and write it to disk 40 times.
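To make the duplicate-fetch problem concrete, here is a toy Python sketch of the "one fetch, many waiters" idea (all names are illustrative, and this is not how nginx itself is implemented): concurrent requests for the same key are coalesced so that only the first one hits the backend, while the rest wait and reuse the result.

```python
import threading
import time

class CoalescingCache:
    """Toy cache: the first request for a key fetches from the backend;
    concurrent requests for the same key wait on a per-key lock and then
    reuse the cached result instead of fetching again."""

    def __init__(self, fetch):
        self._fetch = fetch              # backend fetch function
        self._cache = {}                 # key -> cached value
        self._locks = {}                 # key -> per-key fetch lock
        self._guard = threading.Lock()   # protects the two dicts

    def get(self, key):
        with self._guard:
            if key in self._cache:
                return self._cache[key]
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                       # only one fetcher per key at a time
            with self._guard:
                if key in self._cache:   # another thread filled it meanwhile
                    return self._cache[key]
            value = self._fetch(key)     # the single backend fetch
            with self._guard:
                self._cache[key] = value
            return value

fetch_count = 0

def slow_backend(key):
    """Stand-in for the upstream server; counts how often it is hit."""
    global fetch_count
    time.sleep(0.05)                     # simulate a slow large-file fetch
    fetch_count += 1
    return "body-of-" + key

cache = CoalescingCache(slow_backend)

# 40 simultaneous requests for the same file, as in the monitoring example.
threads = [threading.Thread(target=cache.get, args=("bigfile",))
           for _ in range(40)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(fetch_count)  # -> 1: forty concurrent requests, one backend fetch
```

Without the per-key lock, all 40 threads would miss the cache and call the backend independently, which is exactly the 40-writes-to-disk situation described above.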

Is there a way to either:

A) Cause nginx to pause connections for subsequent requests for the same 
file, until the file is in cache from the first connection. Obviously 
this isn't very clean, as it can cause connections to pile up.


B) Prevent nginx from writing the duplicate files to the hard disk. If 
nginx sees that the file is not in cache but is currently being fetched 
from the backend, it can still proxy the response from the backend, just 
without writing the file to the tmp directory, etc. This would be the 
ideal method, I believe.
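For what it's worth, behaviour (A) can be sketched as configuration: later nginx releases (1.1.12 and up, if I recall correctly) added a `proxy_cache_lock` directive that serializes cache fills, with a timeout after which waiting requests fall through to the backend uncached, which approximates (B). The paths, upstream name, and zone name below are illustrative only:

```nginx
# Sketch using directives from later nginx releases; names are made up.
proxy_cache_path /var/cache/nginx keys_zone=bigfiles:10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache bigfiles;
        proxy_cache_lock on;            # (A): only one request fills a miss
        proxy_cache_lock_timeout 5s;    # ~(B): waiters pass through uncached
    }
}
```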

This is a huge issue when using SSDs, as there is a significant amount 
of wasted write cycles, which of course causes other issues like the 
page cache filling up with files that will just be deleted, etc. Plus, 
if you are proxying a 1 GB file, then in the example above you'll need 
40 GB of free space on the tmp volume to hold the 40 incoming copies, 
just to have 39 of them deleted.


