Optimizing NGINX TLS Time To First Byte (TTTFB)
mdounin at mdounin.ru
Thu Dec 19 13:15:18 UTC 2013
On Wed, Dec 18, 2013 at 04:04:59PM -0800, Ilya Grigorik wrote:
> > On the other hand, it looks good enough to have records up to
> > initial CWND in size without any significant latency changes. And
> > with IW10 this basically means that anything up to about 14k
> > should be fine (with RFC3390, something like 4k should be ok).
> > It also reduces bandwidth costs associated with using multiple
> > records.
> In theory, I agree with you, but in practice even while trying to play with
> this on my own server it appears to be more tricky than that: to ~reliably
> avoid the CWND overflow I have to set the record size <10k.. There are also
> differences in how the CWND is increased (byte based vs packet based)
> across different platforms, and other edge cases I'm surely overlooking.
> Also, while this addresses the CWND overflow during slowstart, smaller
> records offer additional benefits as they help minimize impact of
> reordering and packet loss (not eliminate, but reduce its negative impact
> in some cases).
The problem is that there are even more edge cases with packet-sized
records. Also, in practice there seems to be a significant
difference in throughput with packet-sized records: in my limited
testing, packet-sized records resulted in a 2x slowdown on large
responses.
Of course the overhead may be somewhat reduced by applying smaller
records deeper in the code, but a) even in theory, there is some
overhead, and b) it doesn't look like a trivial task when using
OpenSSL. Additionally, there may be weird "Nagle vs. delayed ack"
related effects on fast connections; this needs additional
investigation.
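To illustrate the per-record cost mentioned above, here is a rough
back-of-the-envelope sketch. The 5-byte record header is from the TLS
spec; the ~29 bytes of MAC/padding is an assumed figure for a
CBC/SHA1-style cipher, not a measurement from this thread:

```python
# Rough framing overhead of TLS records of different sizes.
# Overhead figures below are illustrative assumptions.
RECORD_HEADER = 5       # TLS record header (fixed by the protocol)
PER_RECORD_CRYPTO = 29  # assumed MAC + padding cost per record

def overhead_ratio(payload_per_record, total_bytes=1_000_000):
    """Fraction of extra bytes on the wire for a given record size."""
    records = -(-total_bytes // payload_per_record)  # ceiling division
    extra = records * (RECORD_HEADER + PER_RECORD_CRYPTO)
    return extra / total_bytes

# Packet-sized records pay the per-record cost far more often
# than large (16k) records:
for size in (1400, 4096, 16384):
    print(size, round(overhead_ratio(size), 4))
```

Under these assumptions, packet-sized records carry roughly ten times
the framing overhead of full 16k records, which is one ingredient (on
top of per-record syscall and crypto setup costs) in the throughput
difference observed above.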
As of now, I tend to think that 4k (or 8k on systems with IW10)
buffer size is optimal for latency-sensitive workloads.
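With the directive from the patch below, such a setup could be
sketched as follows (a hypothetical minimal config; certificate paths
are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /path/to/cert.pem;   # placeholder paths
    ssl_certificate_key /path/to/cert.key;

    # Latency-sensitive workload: keep records within the initial
    # cwnd (8k may be preferable on systems with IW10).
    ssl_buffer_size 4k;
}
```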
> > Just in case, below is a patch to play with SSL buffer size:
> > # HG changeset patch
> > # User Maxim Dounin <mdounin at mdounin.ru>
> > # Date 1387302972 -14400
> > # Tue Dec 17 21:56:12 2013 +0400
> > # Node ID 090a57a2a599049152e87693369b6921efcd6bca
> > # Parent e7d1a00f06731d7508ec120c1ac91c337d15c669
> > SSL: ssl_buffer_size directive.
> Just tried it on my local server, works as advertised. :-)
> Defaults matter and we should optimize for best performance out of the
> box... Can we update NGX_SSL_BUFSIZE size as part of this patch? My current
> suggestion is 1360 bytes as this guarantees best possible case for helping
> the browser start processing data as soon as possible: minimal impact of
> reordering / packet loss / no CWND overflows.
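For reference, the 1360-byte figure above roughly corresponds to
fitting one TLS record into a single TCP segment on a 1500-byte MTU
path. The exact per-header budgets below are assumptions for the
sketch, not numbers taken from this thread:

```python
# Assumed worst-case budgets for one TLS record per TCP segment.
MTU = 1500
IP_HEADER = 40     # budget for IPv6 (or IPv4 with options)
TCP_HEADER = 20
TCP_OPTIONS = 20   # e.g. timestamps
TLS_OVERHEAD = 60  # record header + MAC + padding budget

payload = MTU - IP_HEADER - TCP_HEADER - TCP_OPTIONS - TLS_OVERHEAD
print(payload)  # 1360
```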
I don't think that changing the default is a good idea; it
may/will cause performance degradation with large requests, see
above. While reducing latency is important in some cases, it's
certainly not the only thing to consider during performance
tuning.