Hello,

I'm looking at using Nginx as a reverse proxy to cache a few million HTML pages coming from a backend server.

The cached content will very seldom (if ever) change, so either proxy_cache or proxy_store would do. However, all page URLs follow a "/foo/$ID" pattern, and IIUC proxy_store would end up putting millions of files in the same directory, which the filesystem might not be ecstatic about. So for now I'm going with proxy_cache and two levels of cache directories. All is going great in my preliminary tests.

Now, rather than caching uncompressed files and gzipping them on the fly for most of the requests, it would be great if the cached content could be gzipped once (on disk) and served as such most of the time. That would cut disk space requirements by roughly 7-8x and reduce processor load.
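
For reference, this is roughly what I'm testing right now, with gzip running on the fly for every response served from the cache. It is only a sketch of what I described above: paths, zone name, sizes and the upstream address are placeholders, and the snippet goes inside the http block.

proxy_cache_path /var/cache/nginx/html levels=1:2 keys_zone=html_cache:100m
                 max_size=50g inactive=365d;

upstream backend {
    server 10.0.0.10:8080;   # placeholder backend address
}

server {
    listen 80;

    # All cacheable pages live under /foo/$ID
    location /foo/ {
        proxy_pass http://backend;
        proxy_cache html_cache;
        proxy_cache_valid 200 365d;               # pages essentially never change
        proxy_cache_key $scheme$host$request_uri;

        # The cache holds uncompressed bodies, so this compresses the same
        # page again and again on (almost) every request:
        gzip on;                                  # text/html is compressed by default
    }
}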

Is this doable? Patching/recompiling nginx, as well as using Lua, is fine with me. Serving gzipped content from the backend would in theory be possible, though for other reasons it is better avoided.
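
Just to clarify that last point, the only stock way I can see of getting gzipped files into the cache is the one I'd rather avoid: ask the backend for gzipped responses, cache them as-is, and decompress only for the rare client that cannot accept gzip. A rough sketch, assuming nginx is built with the gunzip module (the upstream name is again a placeholder):

location /foo/ {
    proxy_pass http://backend;
    proxy_cache html_cache;
    proxy_cache_valid 200 365d;

    # Always request gzip from the backend, so each page is compressed
    # exactly once and stored compressed in the cache.
    proxy_set_header Accept-Encoding gzip;

    # Decompress on the way out only for clients that do not accept gzip
    # (requires nginx built with --with-http_gunzip_module).
    gunzip on;
}

That would store each page compressed exactly once, but as said I'd prefer to keep the backend serving plain responses.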

Thanks for any insight!

Massimiliano