<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="im">>>> What I don't get from your patch, it seems like you are hardcoding the<br>
>>> buffer to 16384 bytes during handshake (line 570) and only later use a<br>
>>> 1400 byte buffer (via NGX_SSL_BUFSIZE).<br>
>>><br>
>>> Am I misunderstanding the patch/code?<br>
><br>
> It may well be the case that I'm misunderstanding it too :) ... The<br>
> intent is:<br>
><br>
> - set maximum record size for application data to 1400 bytes. [1]<br>
> - allow the handshake to set/use the maximum 16KB bufsize to avoid extra<br>
> RTTs during tunnel negotiation.<br>
<br>
> Ok, what I read from the patch and your intent is the same :)
>
> I was confused about the 16KB bufsize for the initial negotiation, but now
> I've read the bug report [1] and the patch [2] about the extra RTT when
> using long certificate chains, and I understand it.
>
> But I don't really get the openssl documentation about this [3]:
> > The initial buffer size is DEFAULT_BUFFER_SIZE, currently 4096. Any attempt
> > to reduce the buffer size below DEFAULT_BUFFER_SIZE is ignored.
>
> In other words this would mean we cannot set the buffer size below 4096, but
> you are doing exactly this by setting the buffer size to 1400 bytes. Also,
> your measurements indicate success, so it looks like this statement in the
> openssl documentation is wrong?
>
> Or does setting the buffer size to 1400 "just" reset it from 16KB to 4KB,
> and that's the improvement you see in your measurements?

Looking at the tcpdump after applying the patch does show ~1400 byte records:
http://cloudshark.org/captures/714cf2e0ca10?filter=tcp.stream%3D%3D2

Although on closer inspection there seems to be another gotcha in there that
I overlooked: it's emitting two packets, one of 1389 bytes and a second with
~31 extra bytes, which means the actual record is 1429 bytes. Obviously, this
should be a single packet... and 1400 bytes.
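
If I'm doing the math right, the 1429 looks consistent with the TLS framing
being added on top of the 1400 byte buffer rather than counted against it:
the 5 byte record header plus the MAC/padding (or nonce/tag) of the
negotiated cipher suite adds a few tens of bytes per record. For a TLS 1.0
AES-CBC-SHA record, for instance, 1400 + 20 (MAC) + 4 (padding) + 5 (header)
comes out to exactly 1429. If that's what's happening, then to fit a whole
record into a single ~1400 byte packet the buffer would have to be sized at
roughly the MTU minus the TCP/IP and per-record TLS overhead, not at 1400
itself.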

> > I think that's a minimum patchset that would significantly improve
> > performance over current defaults. From there, I'd like to see several
> > improvements. For example, either (a) provide a way to configure the
> > default record size via a config flag (not a recompile, that's a deal
> > breaker for most), or (b) implement a smarter strategy where each session
> > begins with a small record size (1400 bytes) but grows its record size as
> > the connection gets older -- this allows us to eliminate unnecessary
> > buffering latency at the beginning when TCP cwnd is low, and then decrease
> > the framing overhead (i.e. go back to 16KB records) once the connection is
> > warmed up.
> >
> > P.S. (b) would be much better, even if it takes a bit more work.
>
> Well, I'm not sure (b) is so easy; nginx would need to understand whether
> there is bulk or interactive traffic, and such heuristics may backfire in
> more complex scenarios.
>
> But setting an optimal buffer size for pre- and post-handshake seems to be
> a good compromise and 'upstream-able'.

If you only distinguish pre- and post-TLS handshake, then you'll still
(likely) incur the extra RTT on the first app-data record -- that's what
we're trying to avoid by reducing the default record size. For HTTP traffic,
I think you want 1400 byte records while the congestion window is small;
once we're out of slow-start, you can switch back to a larger record size.
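
To make (b) a bit more concrete, here is roughly the heuristic I have in
mind. This is only a sketch: the function name, the 64KB threshold, and using
"bytes sent so far" as a stand-in for "out of slow-start" are all made up for
illustration, not taken from the nginx source.

    #include <stddef.h>

    #define SSL_BUF_SMALL  1400u          /* one record fits in one MTU-sized packet */
    #define SSL_BUF_LARGE  16384u         /* max TLS record, lowest framing overhead */
    #define WARMUP_BYTES   (64u * 1024u)  /* assumed point where cwnd is large enough */

    /* Decide how many plaintext bytes to buffer for the next SSL_write(). */
    static size_t
    ssl_record_size(size_t bytes_sent_on_conn)
    {
        if (bytes_sent_on_conn < WARMUP_BYTES) {
            /* Early in the connection: small records, so the first response
             * bytes fit into the initial cwnd without an extra RTT. */
            return SSL_BUF_SMALL;
        }

        /* Connection is warmed up: large records to cut framing overhead. */
        return SSL_BUF_LARGE;
    }

A real implementation would probably also want to fall back to small records
after the connection has been idle for a while, since cwnd can decay -- which
is admittedly the kind of heuristic you're worried about backfiring.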