<p dir="ltr">Am 14.07.2014 14:54 schrieb "Maxim Dounin" <<a href="mailto:mdounin@mdounin.ru">mdounin@mdounin.ru</a>>:<br>
><br>
> By default, nginx just sends what's already available. And for<br>
> SSI, it uses chunked encoding.</p>
<p dir="ltr">I don't understand this. In my understanding SSI (the virtual include directive) goes downstream (e.g. gets some data from a backend) so that the backend defines how to respond to nginx. What does it mean that nginx uses chunked encoding?</p>
<p dir="ltr">> That is, if a html head is<br>
> immediately available in your case, it will be just sent to a<br>
> client.</p>
<p dir="ltr">Does it matter if the html head is pulled into the page via SSI or not? <br></p>
<p dir="ltr">> There is a caveat though: the above might not happen due to<br>
> buffering in various places. Notably, this includes<br>
> postpone_output and gzip filter. To ensure buffering will not<br>
> happen you should either disable appropriate filters, or use<br>
> flushes. Latter is automatically done on each buffer sent when<br>
> using "proxy_buffering off" ("fastcgi_buffering off" and so on).</p>
<p dir="ltr">Ok. Might this have a negative impact on my backend when there are slow clients? So that when a client consumes the response very slow my backend is kept "busy" (delivering the response as slow as the client consumes it) and cannot just hand off the data / response to nginx? </p>
<p dir="ltr">Thanks && cheers, <br>
Martin <br><br></p>
<p dir="ltr">> Flush can be also done explicitly via $r->flush() when when using<br>
> the embedded perl module.<br>
><br>
> --<br>
> Maxim Dounin<br>
> <a href="http://nginx.org/">http://nginx.org/</a><br>
</p>
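<p dir="ltr">P.S. For my own notes, this is how I understand the $r->flush() variant with the embedded perl module (a minimal sketch; package, file and location names are made up):</p>
<pre>
# nginx.conf (sketch)
perl_modules perl/lib;
perl_require streamer.pm;

location /stream {
    perl streamer::handler;
}

# perl/lib/streamer.pm (sketch)
package streamer;
use nginx;

sub handler {
    my $r = shift;

    $r->send_http_header("text/html");
    return OK if $r->header_only;

    $r->print("&lt;html&gt;&lt;head&gt;...&lt;/head&gt;\n");
    $r->flush();    # push the head to the client right away

    # ... generate the rest of the page ...
    $r->print("&lt;body&gt;...&lt;/body&gt;&lt;/html&gt;\n");

    return OK;
}

1;
</pre>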