Is it possible to send html HEAD early (chunked)?

Martin Grotzke martin.grotzke at googlemail.com
Sun Jul 13 18:53:06 UTC 2014


On 13.07.2014 18:37, "mex" <nginx-forum at nginx.us> wrote:
>
> in your case I'd say the cleanest way would be a re-engineering
> of your application; the other way would imply running a full regex
> on every response coming back from your app servers to filter out
> the stuff that has already been sent.
> The problem: app servers like Tomcat/JBoss/Rails etc.
> usually send full HTML pages;

We're using the Play framework, so we can easily send partial content using
chunked encoding.

> if you find a way to send just
> the <body> itself, the rest, like sending the HTML head early
> from cache, seems easy:
>
>
> location /blah {
>     content_by_lua '
>         ngx.say(html_header)
>         local res = ngx.location.capture("/get_stuff_from_backend")
>         if res.status == 200 then
>             ngx.say(res.body)
>         end
>         ngx.say(html_footer)
>     ';
> }

The HTML head, page header and page footer are dynamic as well and depend
on the current request (but they are cheap to compute; sorry if my previous
answer was misleading here).
I think the cleanest solution would be for the backend to receive a single
request, split the response into chunks, send what's immediately available
(the HTML head, and perhaps the page header as well) as the first chunk, and
send the rest afterwards.
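
To make this a bit more concrete, here is a minimal sketch of what that
could look like on the Play side (assuming Play 2.x with iteratee-based
chunked responses; loadMainContent() and the markup are hypothetical
stand-ins for our real templates and backend work):

package controllers

import play.api.mvc._
import play.api.libs.iteratee.Enumerator
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future

object PageController extends Controller {

  // Hypothetical placeholder for the expensive part of the page
  // (DB queries, service calls, ...).
  private def loadMainContent(): Future[String] =
    Future.successful("""<div id="content">...</div>""")

  def page = Action {
    // First chunk: HTML head + page header, computable right away, so the
    // browser can start fetching CSS/JS while the backend is still working.
    val early = Enumerator(
      """<html><head><link rel="stylesheet" href="/assets/app.css"></head>""",
      "<body><header>...navigation...</header>"
    )
    // Remaining chunks: the expensive content plus the footer, sent as soon
    // as the backend future completes.
    val rest = Enumerator.flatten(
      loadMainContent().map(content => Enumerator(content, "</body></html>"))
    )
    Ok.chunked(early andThen rest).as("text/html")
  }
}

As far as I know, if this runs behind nginx with proxy buffering enabled,
the chunks are held back until the upstream response is complete, so one
would either set "proxy_buffering off" for these locations or have the
backend send an "X-Accel-Buffering: no" response header.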

> Do you refer to something similar to this?
> https://github.com/bigpipe/bigpipe

Not exactly this framework, but the BigPipe concept. The idea I really like
is that the browser can start to download JS + CSS and that the user can
already see the page header with navigation while the backend is still
working, which gives a much better perceived performance.

Cheers,
Martin


> >
> > > From what I understand you have a "static" part that should get sent
> > > early/from cache and a "dynamic" part that must wait for the backend?
> >
> > Exactly.
> >
> > Cheers,
> > Martin
> >
> > > the only solution I could think of for such an asynchronous delivery
> > > is using nginx + Lua, or maybe Varnish (IIRC you could mark parts of
> > > a page cacheable, but I don't know if you can deliver asynchronously
> > > though)
> > >
> > >
> > >
> > > regards,
> > >
> > >
> > > mex
> > >