Optimizing NGINX TLS Time To First Byte (TTTFB)

Ilya Grigorik igrigorik at gmail.com
Wed Dec 18 00:03:27 UTC 2013


>
> >>> What I don't get from your patch, it seems like you are hardcoding the
> >>> buffer to 16384 bytes during handshake (line 570) and only later use a
> >>> 1400 byte buffer (via NGX_SSL_BUFSIZE).
> >>>
> >>> Am I misunderstanding the patch/code?
> >
> > It may well be the case that I'm misunderstanding it too :) ... The
> > intent is:
> >
> > - set maximum record size for application data to 1400 bytes. [1]
> > - allow the handshake to set/use the maximum 16KB bufsize to avoid extra
> > RTTs during tunnel negotiation.
>
> Ok, what I read from the patch and your intent is the same :)
>
> I was confused about the 16KB bufsize for the initial negotiation, but now
> I've read the bug report [1] and the patch [2] about the extra RTT when
> using long certificate chains, and I understand it.
>
> But I don't really get the openssl documentation about this [3]:
> > The initial buffer size is DEFAULT_BUFFER_SIZE, currently 4096. Any
> > attempt to reduce the buffer size below DEFAULT_BUFFER_SIZE is ignored.
>
> In other words this would mean we cannot set the buffer size below 4096, but
> you are doing exactly this by setting the buffer size to 1400 bytes. Also,
> your measurements indicate success, so it looks like this statement in the
> openssl documentation is wrong?
>
> Or does setting the buffer size to 1400 "just" reset it from 16KB to 4KB,
> and that's the improvement you see in your measurements?
>

Looking at the tcpdump after applying the patch does show ~1400-byte
records:
http://cloudshark.org/captures/714cf2e0ca10?filter=tcp.stream%3D%3D2

Although now on closer inspection there seems to be another gotcha in there
that I overlooked: it's emitting two packets, one of 1389 bytes and a second
carrying ~31 extra bytes, which means the actual record is 1429 bytes.
Obviously, this should be a single packet... and 1400 bytes.
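The extra bytes are TLS record overhead. A back-of-the-envelope check (illustrative Python, not nginx code; the cipher suite is an assumption on my part -- a TLS 1.2 AES-GCM suite, which adds a 5-byte record header, an 8-byte explicit nonce, and a 16-byte tag per record):

```python
# Per-record overhead for TLS 1.2 with an AES-GCM cipher suite
# (assumed; CBC suites have different per-record overhead).
RECORD_HEADER = 5    # content type + version + length
EXPLICIT_NONCE = 8   # per-record explicit nonce
AUTH_TAG = 16        # GCM authentication tag
OVERHEAD = RECORD_HEADER + EXPLICIT_NONCE + AUTH_TAG  # 29 bytes

def wire_size(plaintext_len):
    """Bytes a single TLS record occupies on the wire."""
    return plaintext_len + OVERHEAD

print(wire_size(1400))   # 1429 -- matches the record size seen in the capture
print(1400 - OVERHEAD)   # 1371 -- plaintext size that yields a 1400-byte record
```

In other words, to get a record that fits in a single packet, the plaintext buffer has to be the target size minus the record overhead, not a flat 1400 bytes.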


> > I think that's a minimal patchset that would significantly improve
> > performance over current defaults. From there, I'd like to see several
> > improvements. For example, either (a) provide a way to configure the
> > default record size via a config flag (not a recompile, that's a deal
> > breaker for most), or (b) implement a smarter strategy where each
> > session begins with a small record size (1400 bytes) but grows its
> > record size as the connection gets older -- this allows us to eliminate
> > unnecessary buffering latency at the beginning when TCP cwnd is low, and
> > then decrease the framing overhead (i.e. go back to 16KB records) once
> > the connection is warmed up.
> >
> > P.S. (b) would be much better, even if it takes a bit more work.
>
> Well, I'm not sure (b) is so easy: nginx would need to understand whether
> there is bulk or interactive traffic. Such heuristics may backfire in more
> complex scenarios.
>
> But setting an optimal buffer size for pre- and post-handshake seems to be
> a good compromise and 'upstream-able'.
>
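For what it's worth, strategy (b) needn't inspect traffic type at all: a byte-count threshold plus an idle reset would approximate "connection is warmed up". A rough sketch (illustrative Python, not code from the patch; the threshold values are made-up assumptions):

```python
# Sketch of strategy (b): start with small TLS records and grow them once
# the connection has sent enough data; fall back after an idle period,
# since cwnd may have collapsed. Thresholds below are illustrative only.

SMALL_RECORD = 1400          # fits a single segment early in slow-start
LARGE_RECORD = 16 * 1024     # minimal framing overhead once warmed up
WARMUP_BYTES = 1024 * 1024   # assume cwnd is large after ~1 MB sent
IDLE_RESET = 1.0             # seconds idle before restarting small

class RecordSizer:
    def __init__(self):
        self.bytes_sent = 0
        self.last_send = None

    def next_size(self, now):
        # After an idle period, treat the connection as cold again.
        if self.last_send is not None and now - self.last_send > IDLE_RESET:
            self.bytes_sent = 0
        self.last_send = now
        return SMALL_RECORD if self.bytes_sent < WARMUP_BYTES else LARGE_RECORD

    def on_sent(self, n):
        self.bytes_sent += n
```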

If you only distinguish pre- and post-TLS-handshake, then you'll still
(likely) incur the extra RTT on the first app-data record -- that's what
we're trying to avoid by reducing the default record size. For HTTP traffic,
I think you want 1400-byte records. Once we're out of slow-start, you can
switch back to a larger record size.
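To make the slow-start point concrete, a quick back-of-the-envelope (illustrative Python; assumes RFC 6928's initial congestion window of 10 segments and the ~29 bytes of per-record overhead from earlier in the thread):

```python
import math

MSS = 1460            # typical Ethernet MSS
INIT_CWND = 10 * MSS  # RFC 6928 initial window (IW10): 14600 bytes
OVERHEAD = 29         # assumed per-record TLS overhead

def flights_to_decode_first_record(record_size):
    # The client can't decrypt a record until *all* of its bytes arrive,
    # so a record larger than the initial cwnd costs an extra round trip.
    return math.ceil((record_size + OVERHEAD) / INIT_CWND)

print(flights_to_decode_first_record(16 * 1024))  # 2 -> one extra RTT
print(flights_to_decode_first_record(1400))       # 1 -> decodable in the first flight
```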

