Pre-compressed (gzip) HTML using fastcgi_cache?

B.R. reallfqq-nginx at yahoo.fr
Sat Oct 29 22:17:20 UTC 2016


$http_accept_encoding gets the value of the HTTP Accept-Encoding header.
That value may vary depending on the client being used, unless you control
the clients and the values they send.

Thus, the same request made with a different (set of) value(s) in this
header will generate a different cache key.
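
For example (header values are illustrative), one browser may send
"Accept-Encoding: gzip, deflate, br" while another sends "Accept-Encoding:
gzip, deflate": with the raw header in the key, the same URI gets cached
twice, once per header string.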

If you simply want to check whether a specific value is present in this
header, you could filter its content through a (series of) map
directive(s) and use only the filtered values as part of the cache key.
That is a quick idea; I have not put much brain into it, but it could be a
step in the direction you want to go.
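
A minimal sketch of that idea (the $encoding_key variable name is mine, not
from your configuration), placed at http level: whatever Accept-Encoding
string a client sends is collapsed into one of two normalized values, so
"gzip, deflate" and "gzip, deflate, br" no longer produce distinct keys:

map $http_accept_encoding $encoding_key {
    default    "";      # client did not advertise gzip: key the plain variant
    ~*gzip     "gzip";  # any header containing "gzip" collapses to one value
}

# then build the cache key from the normalized value only, e.g.:
# fastcgi_cache_key "$encoding_key$request_method$request_uri";
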
---
*B. R.*

On Thu, Oct 27, 2016 at 8:41 PM, seo010 <nginx-forum at forum.nginx.org> wrote:

> Hi!
>
> I was wondering if anyone has an idea to serve pre-compressed (gzip) HTML
> using proxy_cache / fastcgi_cache.
>
> I tried a solution with a map of $http_accept_encoding as part of the
> fastcgi_cache_key, with gzip-compressed output from the script, but it
> resulted in strange behavior: the MD5 hash for the first request
> corresponds to the KEY, while subsequent requests are stored under an
> unknown MD5 hash for the same KEY.
>
> Nginx version: 1.11.1
>
> The initial solution to serve pre-compressed gzip HTML from proxy_cache /
> fastcgi_cache was the following:
>
> Map:
> map $http_accept_encoding $gzip_enabled {
>     ~*gzip                gzip;
> }
>
> Server:
> fastcgi_cache_path /path/to/cache/nginx levels=1:2 keys_zone=XXX:20m max_size=4g inactive=7d;
>
> PHP-FPM proxy:
> set $cache_key "$gzip_enabled$request_method$request_uri";
>
> fastcgi_pass unix:/path/to/php-fpm.sock;
> fastcgi_index index.php;
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param PHP_VALUE "error_log=/path/to/logs/php.error.log";
> fastcgi_intercept_errors on;
>
> # full page cache
> fastcgi_no_cache $skip_cache_save;
> fastcgi_cache_bypass $skip_cache;
> fastcgi_cache XXX;
> fastcgi_cache_use_stale error timeout invalid_header updating http_500;
> fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
> fastcgi_cache_valid 200 7d; # valid for 7 days
> fastcgi_cache_valid 301 302 304 1h;
> fastcgi_cache_valid any 5m;
> fastcgi_cache_lock on;
> fastcgi_cache_lock_timeout 5s;
> fastcgi_cache_key $cache_key;
>
> add_header X-Cache $upstream_cache_status;
> #add_header X-Cache-Key $cache_key;
>
> include fastcgi_params;
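>
> $skip_cache and $skip_cache_save are set elsewhere in my configuration; a
> minimal sketch of what such definitions could look like (illustrative
> only, not my actual rules):
>
> set $skip_cache 0;
> set $skip_cache_save 0;
> # example rule: never cache or store responses to POST requests
> if ($request_method = POST) {
>     set $skip_cache 1;
>     set $skip_cache_save 1;
> }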
>
> It did work when testing in one browser: it showed "MISS" on the first
> request and "HIT" on the second. The cache directory contained the
> correct MD5 hash for the key.
>
> But when testing the same URL in a different browser, an as-yet
> unexplained behavior occurred: a totally new MD5 hash was used to store
> the same pre-compressed content. When viewing the cached file, the exact
> same KEY was shown (without additional spaces or special characters).
>
> Although the solution with a gzip parameter in the cache key may work, I
> was wondering if anyone knows of a better solution to serve pre-compressed
> HTML from the Nginx cache, as it results in a 4 to 10 ms latency saving
> per request on an idle quad-core server with 4x SSD in RAID 10.
>
> I could not find any information about a solution on Google, even though
> it appears to offer major potential for performance gains on high-traffic
> websites.
>
> Best Regards,
> Jan Jaap
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270604#msg-270604
>