Default value of gzip_proxied
reallfqq-nginx at yahoo.fr
Sun Mar 22 14:14:22 UTC 2015
I do not get why you focus on the gzip_vary directive, while I was
explicitly talking about gzip_proxied.
The fact that content which should be compressed might actually not be,
merely because the request contains a 'Via' header, is the root cause of
our trouble... and you just told me it exists for HTTP/1.0 compatibility.
This behavior, HTTP/1.0 compatibility aside, is strange and disruptive at
best.
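For anyone following along: by default gzip_proxied is 'off', so any
request carrying a 'Via' header gets an uncompressed response. Opting back
in is a one-line sketch (values per the stock gzip module):

```nginx
# Default is "off": proxied requests (those with a "Via" header)
# never receive compressed responses. "any" compresses for all of them.
gzip_proxied any;
```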
I willingly concede that a lot of software still uses HTTP/1.0, but I
usually distinguish the current state of things from the reasons behind it
and what it should be.
I assume nginx defaults to talking HTTP/1.0 with backends because it is the
lowest common denominator. That allows handling outdated software, and I
can understand that when you wish to be universal.
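(For what it's worth, that upstream default can be overridden per
location; a sketch of the relevant directive, assuming a hypothetical
'backend' upstream:)

```nginx
location / {
    proxy_pass         http://backend;
    # Talk HTTP/1.1 to the upstream instead of the default 1.0.
    proxy_http_version 1.1;
}
```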
nginx seems to be stuck not knowing which way the wind is blowing,
sometimes promoting modernity and sometimes enforcing backwards (yes,
HTTP/1.0 means looking backwards) compatibility.
While I perfectly understand setting default values for maximum
interoperability, there should be bright pointers somewhere to the fact
that some directives only exist for such reasons. I would more than
welcome a default configuration that introduces commented examples of what
modern configuration/usage of nginx should be.
'gzip on' clearly is not enough if you want to send compressed content.
How many people know about it? The 'RTFM' stance is no longer valid when
multiple directives must be activated at once on a modern infrastructure.
nginx configuration was supposed to be lean and clean. It is, provided
that you use an outdated protocol to serve content: the minimal
configuration for compatibility is smaller than the one for modern
protocols... and you need to dig by yourself to learn that. WTF?
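To illustrate the point, something along these lines is what it actually
takes to compress for both direct and proxied clients (a sketch, not an
official recommendation; tune gzip_types and gzip_comp_level to taste):

```nginx
gzip              on;
# Compress responses to proxied requests too (default: off).
gzip_proxied      any;
# Tell intermediary caches the response varies by Accept-Encoding.
gzip_vary         on;
# text/html is always compressed; other types must be listed explicitly.
gzip_types        text/plain text/css application/json
                  application/javascript application/xml;
gzip_comp_level   5;
```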
On Sun, Mar 22, 2015 at 2:31 AM, Maxim Dounin <mdounin at mdounin.ru> wrote:
> On Sat, Mar 21, 2015 at 04:05:05PM +0100, B.R. wrote:
> > Hello Maxim,
> > So HTTP/1.0 is the reason for all that.
> > Now I also understand why there are those parameters allowing to compress
> > data that should not be cached: nginx as a webserver tries to be smarter
> > than those dumb HTTP/1.0 proxies.
> > I was wondering, though: are there real numbers to back this
> > thing?
> > Is not there a point in time when the horizon could be set, denying
> > backwards compatibility for older software/standards?
> > HTTP/1.1 is the most-used version of the protocol, nginx supports SPDY,
> > HTTP/2.0 is coming... and yet there are oddities there for
> > backwards-compatibility with HTTP/1.0.
> > That behavior made us cache uncompressed content 'randomly', since the
> > pattern was hard to find/reproduce, and it took a bit of luck to determine
> > the conditions under which we were caching uncompressed data...
> > What is the ratio benefits/costs of dropping compatibility (at least
> > partially) with HTTP/1.0?
> > I know I am being naive here, considering most of the Web is
> > HTTP/1.1-compliant, but how far am I from reality?
> There are two problems:
> - You assume HTTP/1.0 is dying. That's not true. While uncommon
> nowadays for browsers, it's still widely used by various
> software. In particular, nginx itself uses it by default when
> talking to upstream servers.
> - You assume that the behaviour in question is only needed for
> HTTP/1.0 clients. That's, again, not true, as using "Vary:
> Accept-Encoding" isn't a good idea either. As already mentioned, even if
> correctly supported it will cause cache data duplication.
> If you don't like the behaviour, you can always configure nginx to
> do whatever you want. But I don't think the default is worth changing.
> Maxim Dounin