Decompressing a compressed response from upstream, applying transformations and then compressing for downstream again

B.R. reallfqq-nginx at
Mon Jan 20 17:43:02 UTC 2014

Jonathan's idea looks like a nice solution, because it requires no modification
of the original nginx (good for updates and maintenance, thus good for security).
Always avoid breaking the update chain (that is, diverging from the original
source, unless you have another repository that is reactive to - security -
updates and which you can pull from).

However, that means uncompressed traffic between the backend and the nginx proxy:
++ traffic volume (bandwidth) on the backend interface compared to the frontend
-- CPU time on the backend to compress data and on the proxy to uncompress it

Depending on your application, that could create a bottleneck. Caching the
transformed result on the proxy might help reduce backend work and traffic,
and so limit the burden on it.
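As a rough illustration, the setup discussed above can be sketched with stock nginx directives: ask the backend for an uncompressed body, transform it, cache the transformed response on the proxy, and re-compress it for downstream clients. The hostname, cache zone name, and sub_filter strings below are hypothetical placeholders; note that nginx applies gzip after the cache, so the cache holds the uncompressed transformed body and compression cost stays on the proxy.

```
# Sketch only - names and paths are assumptions, adapt to your setup.
proxy_cache_path /var/cache/nginx keys_zone=transformed:10m;

server {
    listen 80;

    location / {
        proxy_pass http://backend.example.com;

        # Ask the backend for an uncompressed body so body filters
        # (e.g. sub_filter) can operate on plain text.
        proxy_set_header Accept-Encoding "";

        # Example transformation on the uncompressed body.
        sub_filter 'http://internal' 'https://public';
        sub_filter_once off;

        # Cache the transformed (uncompressed) response on the proxy,
        # sparing the backend repeated work.
        proxy_cache transformed;
        proxy_cache_valid 200 10m;

        # Re-compress for downstream clients; gzip runs per response,
        # after the cache.
        gzip on;
        gzip_types text/html text/css application/javascript;
    }
}
```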

My 2 cents,
*B. R.*
