<br><div class="gmail_quote">On Wed, Feb 15, 2012 at 3:55 PM, rmalayter <span dir="ltr"><<a href="mailto:nginx-forum@nginx.us">nginx-forum@nginx.us</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><br>
</div>There's no reason the "backend" for your caching layer cannot be another<br>
nginx server block running on a high port bound to localhost. This<br>
high-port server block could do gzip compression, and proxy-pass to the<br>
back end with "Accept-Encoding: identity", so the back-end never has to<br>
do compression. The backend server will have to use "gzip_http_version<br>
1.0" and "gzip_proxied any" to do compression because it is being<br>
proxied from the front-end.<br></blockquote><div><br></div><div>Ah, good point. I took this a step further by using a virtual host on the same server as the "compression backend", and it appears to work nicely. Below is what I did so far, in case anyone is looking for the same thing and Google leads them here.</div>
<div><br></div><div>(Feels a bit like going out the door and coming back in through the window :) but perhaps, just as we have internal redirects, ngx_lua could be used to simulate an internal proxy and avoid the extra HTTP request.)</div>
<div><br></div><div><pre>
proxy_cache_path /var/lib/nginx/cache/myapp
                 levels=1:2
                 keys_zone=myapp_cache:10m
                 max_size=1g
                 inactive=2d;

log_format cache '***$time_local '
                 '$upstream_cache_status '
                 'Cache-Control: $upstream_http_cache_control '
                 'Expires: $upstream_http_expires '
                 '$host '
                 '"$request" ($status) '
                 '"$http_user_agent" '
                 'Args: $args ';

access_log /var/log/nginx/cache.log cache;

upstream backend {
    server localhost:8002;
}

server { # this step only does compression
    listen 85;

    server_name myapp.local;
    include proxy_params;

    location / {
        gzip_http_version 1.0;
        proxy_set_header Accept-Encoding identity;
        proxy_pass http://backend;
    }
}

server {
    listen 80;

    server_name myapp.local;
    include proxy_params;

    location / {
        proxy_pass http://127.0.0.1:85;
    }

    location /similar-to {
        proxy_set_header Accept-Encoding gzip;
        proxy_cache_key "$scheme$host$request_uri";
        proxy_cache_valid 2d;
        proxy_cache myapp_cache;
        proxy_pass http://127.0.0.1:85;
    }
}
</pre></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Also note there may be better options in the latest nginx versions, or<br>
by using the gunzip 3rd-party module:<br>
<a href="http://mdounin.ru/hg/ngx_http_gunzip_filter_module/file/27f057249155/README" target="_blank">http://mdounin.ru/hg/ngx_http_gunzip_filter_module/file/27f057249155/README</a><br>
<br>
With the gunzip module, you can configure things so that you always<br>
cache compressed data, then only decompress it for the small number of<br>
clients that don't support gzip compression.<br></blockquote><div><br></div><div>This looks perfect for keeping a gzip-only cache; it may not save that much disk space, but it certainly helps with mind space.</div>
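<div><br></div><div>(For anyone who lands here later: a minimal sketch of that gzip-only-cache variant, assuming nginx is built with the gunzip module. The cache then only ever stores gzipped responses, and "gunzip on" decompresses them on the fly for the few clients that don't send "Accept-Encoding: gzip".)</div><div><br></div><div><pre>
location /similar-to {
    # always request the compressed form from the compression tier,
    # so only gzipped responses ever enter the cache
    proxy_set_header Accept-Encoding gzip;
    proxy_cache myapp_cache;
    proxy_cache_key "$scheme$host$request_uri";
    proxy_cache_valid 2d;

    # decompress the cached gzip only for clients that can't handle it
    gunzip on;

    proxy_pass http://127.0.0.1:85;
}
</pre></div>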
<div><br></div><div>Cheers,</div><div>Massimiliano</div><div><br></div></div><br>