Optimizing NGINX TLS Time To First Byte (TTTFB)

Lukas Tribus luky-37 at hotmail.com
Tue Dec 17 20:19:12 UTC 2013


>>> What I don't get from your patch, it seems like you are hardcoding the
>>> buffer to 16384 bytes during handshake (line 570) and only later use a
>>> 1400 byte buffer (via NGX_SSL_BUFSIZE).
>>> Am I misunderstanding the patch/code?
> It may well be the case that I'm misunderstanding it too :) ... The
> intent is:
> - set maximum record size for application data to 1400 bytes. [1]
> - allow the handshake to set/use the maximum 16KB bufsize to avoid extra
> RTTs during tunnel negotiation.

Ok, what I read from the patch and your intent is the same :)

I was confused about the 16KB bufsize for the initial negotiation, but now
I've read the bug report [1] and the patch [2] about the extra RTT when
using long certificate chains, and I understand it.

But I don't really get the openssl documentation about this [3]:
> The initial buffer size is DEFAULT_BUFFER_SIZE, currently 4096. Any attempt
> to reduce the buffer size below DEFAULT_BUFFER_SIZE is ignored.

In other words, this would mean we cannot set the buffer size below 4096, but
you are doing exactly that by setting the buffer size to 1400 bytes. Also,
your measurements indicate success, so it looks like this statement in the
openssl documentation is wrong?

Or does setting the buffer size to 1400 "just" reduce it from 16KB to 4KB, and
that's the improvement you see in your measurements?

> I think that's a minimum patchset that would significantly improve
> performance over current defaults. From there, I'd like to see several
> improvements. For example, either (a) provide a way to configure the
> default record size via a config flag (not a recompile, that's a deal
> breaker for most), or, (b) implement a smarter strategy where each session
> begins with small record size (1400 bytes) but grows its record size as the
> connection gets older -- this allows us to eliminate unnecessary buffering
> latency at the beginning when TCP cwnd is low, and then decrease the
> framing overhead (i.e. go back to 16KB records) once the connection is
> warmed up.
> P.S. (b) would be much better, even if takes a bit more work.

Well, I'm not sure (b) is so easy; nginx would need to understand whether
the traffic is bulk or interactive, and such heuristics may backfire in more
complex scenarios.

But setting an optimal buffer size for pre- and post-handshake seems to be
a good compromise and 'upstream-able'.
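For illustration, (a) might look like this in nginx.conf (the directive name
is hypothetical, no such flag exists at the moment):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     cert.pem;
    ssl_certificate_key cert.key;

    # hypothetical directive, per (a): set the TLS record/buffer size
    # without recompiling
    ssl_buffer_size 1400;
}
```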

I suspect that haproxy suffers from the same problem with an extra RTT when
using a small tune.ssl.maxrecord value. I will see if I can reproduce this.

Thanks for clarifying,


[1] http://trac.nginx.org/nginx/ticket/413
[2] http://hg.nginx.org/nginx/rev/a720f0b0e083
[3] https://www.openssl.org/docs/crypto/BIO_f_buffer.html

More information about the nginx mailing list