<p dir="ltr">On 13.07.2014 at 18:37, "mex" <<a href="mailto:nginx-forum@nginx.us">nginx-forum@nginx.us</a>> wrote:<br>
><br>
> in your case i'd say the cleanest way would be a re-engineering<br>
> of your application; the other way would imply a full regex pass<br>
> on every request coming back from your app-servers to filter out<br>
> the stuff that has already been sent.<br>
> the problem: appservers like tomcat/jboss/rails a.s.o.<br>
> usually send full html-pages;</p>
<p dir="ltr">We're using the Play framework, so we can easily send partial content using chunked encoding.</p>
<p dir="ltr">> if you find a way to just<br>
> send the &lt;body&gt; itself, the rest like sending html-headers early<br>
> from cache seems easy:<br>
><br>
><br>
> location /blah {<br>
> content_by_lua '<br>
> ngx.say(html_header)<br>
> local res = ngx.location.capture("/get_stuff_from_backend")<br>
> if res.status == 200 then<br>
> ngx.say(res.body)<br>
> end<br>
> ngx.say(html_footer)<br>
> ';<br>
> }</p>
<p dir="ltr">The html head, page header, and page footer are dynamic as well and depend on the current request (but they are cheap to compute - sorry if my previous answer was misleading here).<br>
I think the cleanest solution would be for the backend to receive a single request, split the response into chunks, and send whatever is immediately available (the html head, and perhaps the page header too) as the first chunk, with the rest following as it is produced. <br>
</p>
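<p dir="ltr">Roughly what I have in mind on the Play side (just a sketch against the Play 2.x Scala API; htmlHead and renderRest are placeholder helpers of mine, not real library calls - renderRest would return a Future[String] from the slow backend work):</p>
<p dir="ltr">def page = Action {<br>
&nbsp;&nbsp;// first chunk: the cheap html head + page header, sent immediately<br>
&nbsp;&nbsp;val head = Enumerator(htmlHead())<br>
&nbsp;&nbsp;// later chunks: appended once the slow backend work completes<br>
&nbsp;&nbsp;val rest = Enumerator.flatten(renderRest().map(Enumerator(_)))<br>
&nbsp;&nbsp;Ok.chunked(head >>> rest).as("text/html")<br>
}</p>
<p dir="ltr">That way nginx wouldn't need to assemble anything; it just proxies one chunked response through.</p>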
<p dir="ltr">> do you refer to something similar to this?<br>
> <a href="https://github.com/bigpipe/bigpipe">https://github.com/bigpipe/bigpipe</a></p>
<p dir="ltr">Not exactly that framework, but the bigpipe concept, yes. The idea I really like is that the browser can start downloading the js + CSS, and the user can already see the page header with navigation while the backend is still working - hence much better perceived performance.</p>
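<p dir="ltr">(For the nginx-side variant, I guess your earlier snippet plus an explicit flush would already give that early-paint effect - a sketch, assuming lua-nginx-module's ngx.flush is available:)</p>
<p dir="ltr">location /blah {<br>
&nbsp;&nbsp;content_by_lua '<br>
&nbsp;&nbsp;&nbsp;&nbsp;ngx.say(html_header)<br>
&nbsp;&nbsp;&nbsp;&nbsp;ngx.flush(true)  -- push the header chunk to the browser right away<br>
&nbsp;&nbsp;&nbsp;&nbsp;local res = ngx.location.capture("/get_stuff_from_backend")<br>
&nbsp;&nbsp;&nbsp;&nbsp;if res.status == 200 then<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ngx.say(res.body)<br>
&nbsp;&nbsp;&nbsp;&nbsp;end<br>
&nbsp;&nbsp;&nbsp;&nbsp;ngx.say(html_footer)<br>
&nbsp;&nbsp;';<br>
}</p>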
<p dir="ltr">Cheers, <br>
Martin <br><br><br></p>
<p dir="ltr">> ><br>
> > > from what i understand you have a "static" part that should get sent<br>
> > > early / from cache and a "dynamic" part that must wait for the backend?<br>
> ><br>
> > Exactly.<br>
> ><br>
> > Cheers,<br>
> > Martin<br>
> ><br>
> > > the only solution i could think of in such an asynchronous delivery<br>
> > > is using nginx + lua, or maybe varnish (iirc you could mark parts<br>
> > > of a page cacheable, but don't know if you can deliver<br>
> > > asynchronously though)<br>
> > ><br>
> > ><br>
> > ><br>
> > > regards,<br>
> > ><br>
> > ><br>
> > > mex<br>
> > ><br>
> > > Posted at Nginx Forum:<br>
> > <a href="http://forum.nginx.org/read.php?2,251717,251719#msg-251719">http://forum.nginx.org/read.php?2,251717,251719#msg-251719</a><br>
> > ><br>
> > > _______________________________________________<br>
> > > nginx mailing list<br>
> > > <a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
> > > <a href="http://mailman.nginx.org/mailman/listinfo/nginx">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
><br>
> Posted at Nginx Forum: <a href="http://forum.nginx.org/read.php?2,251717,251722#msg-251722">http://forum.nginx.org/read.php?2,251717,251722#msg-251722</a><br>
><br>
</p>