Is it possible to send html HEAD early (chunked)?

Maxim Dounin mdounin at
Tue Jul 15 12:45:04 UTC 2014


On Mon, Jul 14, 2014 at 08:35:40PM +0200, Martin Grotzke wrote:

> On 14.07.2014 14:54, "Maxim Dounin" <mdounin at> wrote:
> >
> > By default, nginx just sends what's already available.  And for
> > SSI, it uses chunked encoding.
> I don't understand this. In my understanding SSI (the virtual include
> directive) goes downstream (e.g. gets some data from a backend) so that the
> backend defines how to respond to nginx. What does it mean that nginx uses
> chunked encoding?

The transfer encoding is something that happens on a hop-by-hop 
basis, so a backend can't define the transfer encoding used between 
nginx and the client.

The transfer encoding is selected by nginx as appropriate: if the 
Content-Length is known, it will be identity (or rather no transfer 
encoding at all); if it's not known (and the client uses HTTP/1.1), 
chunked will be used.

In case of SSI, the content length isn't known in advance, since SSI 
processing has yet to happen, and hence chunked transfer encoding 
will be used.
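For example, a minimal location with SSI enabled (paths here are 
hypothetical) will produce a chunked response for HTTP/1.1 clients, 
because the final body length is only known after includes are 
processed:

```nginx
location / {
    # SSI processing means the final body length is unknown in
    # advance, so nginx omits Content-Length and uses chunked
    # transfer encoding for HTTP/1.1 clients.
    ssi on;
    root /var/www/html;   # hypothetical document root
}
```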

> > That is, if a html head is
> > immediately available in your case, it will be just sent to a
> > client.
> Does it matter if the html head is pulled into the page via SSI or not?

It doesn't matter.
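As an illustration (file and URI names are hypothetical), a page 
whose head is static but whose body is pulled from a backend via an 
SSI virtual include looks like this; nginx can send the head as soon 
as it is available, before the include completes:

```html
<html>
<head>
  <!-- static head: nginx can flush this to the client immediately -->
  <link rel="stylesheet" href="/main.css">
</head>
<body>
  <!-- fetched from the backend while the client already has the head -->
  <!--# include virtual="/backend/body" -->
</body>
</html>
```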

> > There is a caveat though: the above might not happen due to
> > buffering in various places.  Notably, this includes
> > postpone_output and gzip filter.  To ensure buffering will not
> > happen you should either disable appropriate filters, or use
> > flushes.  The latter is automatically done on each buffer sent
> > when using "proxy_buffering off" ("fastcgi_buffering off" and so on).
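A sketch of a configuration that avoids these buffering points 
(the upstream name is hypothetical; the directives are the ones 
named above):

```nginx
location / {
    proxy_pass http://backend;   # hypothetical upstream

    # Send each buffer to the client as soon as it is received
    # from the backend, instead of buffering the response:
    proxy_buffering off;

    # Disable output postponing (the default postpones small writes):
    postpone_output 0;

    # gzip buffers data while compressing; disable it here so the
    # head is not held back:
    gzip off;
}
```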
> Ok. Might this have a negative impact on my backend when there are slow
> clients? So that when a client consumes the response very slowly, my backend
> is kept "busy" (delivering the response as slowly as the client consumes it)
> and cannot just hand off the data / response to nginx?

Yes, switching off proxy buffering may have negative effects on 
some workloads, and it is not generally recommended.

Maxim Dounin
