Optimizing NGINX TLS Time To First Byte (TTTFB)

Ilya Grigorik igrigorik at gmail.com
Thu Dec 19 00:04:59 UTC 2013


On Tue, Dec 17, 2013 at 7:59 PM, Alex <alex at zeitgeist.se> wrote:
>
> I did some empirical testing and with my configuration (given cipher
> size, padding, and all), I came to 1370 bytes as being the optimal size
> to avoid TLS record fragmentation.
>

Ah, right, we're not setting the "total" record size... Rather, we're
setting the maximum payload size within the record. On top of that there are
the extra 5 bytes for the record header, plus MAC and padding (if a block
cipher is used) -- so that's 5 bytes + up to 32 extra bytes per record. Add
the IP header (40 bytes for IPv6), TCP header (20), and some room for TCP
options (40), and we're looking at ~1360 bytes... which is close to what
you're seeing in your testing.
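
Just to make that arithmetic concrete, here's a back-of-the-envelope sketch;
the 1500-byte Ethernet MTU and the 32-byte worst-case MAC + padding are
assumptions for a typical block-cipher suite, not values taken from nginx:

/*
 * Rough budget for one TLS record per TCP segment, as described above.
 */
#include <stdio.h>

int main(void)
{
    int mtu         = 1500;  /* assumed Ethernet MTU                  */
    int ipv6_hdr    = 40;    /* IPv6 header                           */
    int tcp_hdr     = 20;    /* TCP header                            */
    int tcp_opts    = 40;    /* room reserved for TCP options         */
    int tls_rec_hdr = 5;     /* TLS record header                     */
    int tls_mac_pad = 32;    /* worst-case MAC + padding (block mode) */

    int payload = mtu - ipv6_hdr - tcp_hdr - tcp_opts - tls_rec_hdr - tls_mac_pad;

    printf("max TLS plaintext per segment: ~%d bytes\n", payload);  /* ~1363 */
    return 0;
}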


> > If you only distinguish pre and post TLS handshake then you'll still
> > (likely) incur the extra RTT on the first app-data record -- that's
> > what we're trying to avoid by reducing the default record size. For
> > HTTP traffic, I think you want 1400-byte records. Once we're out of
> > slow-start, you can switch back to a larger record size.
>
> Maybe I am wrong, but I was of the belief that you should always try to
> fit TLS records into individual TCP segments. Hence you should always
> try to keep TLS records at ~1400 bytes (or 1370 in my case), no matter
> the TCP window.
>

For interactive traffic I think that's generally true, as it eliminates the
edge case of CWND overflows (an extra RTT of buffering) and minimizes the
impact of packet reordering and packet loss. FWIW, for these exact reasons
the Google frontend servers have been using TLS record = TCP segment for a
few years now... so there is good precedent for using this as a default.

That said, small records do incur overhead due to the extra framing, plus
more CPU cycles (more MAC and framing processing). So, in some instances, if
you're delivering large streams (e.g. video), you may want to use larger
records... Exposing the record size as a configurable option would address this.
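
To illustrate the kind of policy this points at (segment-sized records while
the connection is young and likely still in slow-start, larger records for
bulk delivery), here's a rough sketch -- not nginx code, and the 1 MB
threshold and both record sizes are purely illustrative assumptions:

#include <stddef.h>

#define SMALL_RECORD  1360u          /* fits in one TCP segment (see above) */
#define LARGE_RECORD  (16u * 1024u)  /* TLS maximum; lowest per-byte cost   */
#define BOOST_AFTER   (1024u * 1024u)

static size_t
choose_record_size(size_t bytes_sent_on_connection)
{
    /* small records early for fast first paint, large ones once warmed up */
    return bytes_sent_on_connection < BOOST_AFTER ? SMALL_RECORD : LARGE_RECORD;
}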

On Wed, Dec 18, 2013 at 8:38 AM, Maxim Dounin <mdounin at mdounin.ru> wrote:

>
> > Although now on closer inspection there seems to be another gotcha in
> > there that I overlooked: it's emitting two packets, one is 1389 bytes,
> > and the second is ~31 extra bytes, which means the actual record is
> > 1429 bytes. Obviously, this should be a single packet... and 1400 bytes.
>
> We've discussed this a lot here a while ago, and it turns
> out that it's a very non-trivial task to fill exactly one packet -
> as the space in packets may vary depending on the TCP options used, MTU,
> tunnels used on the way to a client, etc.
>

Yes, that's a good point.


> On the other hand, it looks good enough to have records up to
> the initial CWND in size without any significant latency changes.  And
> with IW10 this basically means that anything up to about 14k
> should be fine (with RFC3390, something like 4k should be OK).
> It also reduces the bandwidth costs associated with using multiple
> records.
>

In theory, I agree with you, but in practice, even while trying to play with
this on my own server, it appears to be trickier than that: to reliably
avoid the CWND overflow I have to set the record size below ~10k... There are
also differences in how the CWND is increased (byte-based vs. packet-based)
across different platforms, and other edge cases I'm surely overlooking.
Also, while this addresses the CWND overflow during slow-start, smaller
records offer additional benefits as they help minimize the impact of
reordering and packet loss (they don't eliminate it, but reduce its negative
impact in some cases).
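
As a rough illustration of the overflow case (the 1460-byte MSS, ~37 bytes of
per-record TLS framing, and a packet-counted IW10 are all assumptions here;
real stacks differ, which is exactly the tricky part):

#include <stdio.h>

int main(void)
{
    int mss = 1460, overhead = 37, iw = 10;
    int sizes[] = { 4 * 1024, 10 * 1024, 16 * 1024 };

    for (int i = 0; i < 3; i++) {
        /* ceil((record + framing) / mss) = segments needed for one record */
        int segments = (sizes[i] + overhead + mss - 1) / mss;
        printf("%5d-byte record -> %2d segments -> %s IW10\n",
               sizes[i], segments, segments <= iw ? "fits in" : "overflows");
    }
    return 0;
}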

> Just in case, below is a patch to play with SSL buffer size:
>
> # HG changeset patch
> # User Maxim Dounin <mdounin at mdounin.ru>
> # Date 1387302972 -14400
> #      Tue Dec 17 21:56:12 2013 +0400
> # Node ID 090a57a2a599049152e87693369b6921efcd6bca
> # Parent  e7d1a00f06731d7508ec120c1ac91c337d15c669
> SSL: ssl_buffer_size directive.
>

Just tried it on my local server, works as advertised. :-)

Defaults matter and we should optimize for the best performance out of the
box... Can we update NGX_SSL_BUFSIZE as part of this patch? My current
suggestion is 1360 bytes, as this guarantees the best possible case for
helping the browser start processing data as soon as possible: minimal
impact from reordering and packet loss, and no CWND overflows.
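
Concretely, that suggestion would amount to something like the following;
the existing 16384 value (one maximum-sized TLS record) and its location in
src/event/ngx_event_openssl.h are my assumptions, not part of Maxim's patch:

/* Sketch only: suggested out-of-the-box default for the SSL buffer size. */
#define NGX_SSL_BUFSIZE  1360   /* was: 16384 (assumed current default) */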

