[nginx] SSL: ssl_buffer_size directive.

Maxim Dounin mdounin at mdounin.ru
Sun Dec 22 22:27:36 UTC 2013


Hello!

On Fri, Dec 20, 2013 at 10:58:43AM -0800, Ilya Grigorik wrote:

> Awesome, really glad to see this! A couple of followup questions...
> 
> (a) Is there any way to force a packet flush on record end? At the moment
> nginx will fragment multiple records across packet boundaries, which is
> suboptimal as it means that I need a minimum of two packets to decode any
> record - e.g. if I set my record size to 1370 bytes, the first packet will
> contain the first full record plus another 20-50 bytes of the next record.

There is an OpenSSL socket layer on the way down.  It may be 
possible to achieve something with SSL_[CTX_]set_max_send_fragment() 
in OpenSSL 1.0.0+, but I haven't looked into the details.  (As I 
already said, I don't think that using packet-sized records is a 
good idea; it looks like overkill and a waste of resources, both 
network and CPU.)
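
Untested, but if someone wants to experiment, the OpenSSL call 
itself would look along these lines (the 1400-byte value is purely 
an example):

    #include <openssl/ssl.h>

    /* Cap the plaintext carried in a single TLS record.  OpenSSL
     * accepts values in the 512..16384 range and returns 1 on
     * success. */
    static int
    cap_record_size(SSL_CTX *ctx)
    {
        if (SSL_CTX_set_max_send_fragment(ctx, 1400) != 1) {
            return -1;
        }

        return 0;
    }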

> (b) Current NGX_SSL_BUFSIZE is set to 16KB, which is effectively
> guaranteed to overflow the CWND of a new connection and introduce another
> RTT for interactive traffic - e.g. HTTP pages. I would love to see a lower
> starting record size to mitigate this problem -- defaults matter!

We are considering using 4k or 8k as the default in the 
future.  For now, the directive is mostly to simplify 
experimenting with various buffer sizes.
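
E.g., with the directive available, testing a smaller buffer is a 
one-line change per server (the certificate paths below are just 
placeholders):

    server {
        listen              443 ssl;
        ssl_certificate     example.crt;
        ssl_certificate_key example.key;

        # buffer used for sending data over SSL connections;
        # 4k here is an experimental value, not a recommendation
        ssl_buffer_size     4k;
    }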

> On the subject of optimizing record size, the GFE team at Google recently
> landed ~following logic:
> 
> - new connections default to small record size
> -- each record fits into a TCP packet
> -- packets are flushed at record boundaries
> - server tracks number of bytes written since reset and timestamp of last
> write
> -- if bytes written > {configurable byte threshold} then boost record size
> to 16KB
> -- if last write timestamp > now - {configurable time threshold} then reset
> sent byte count
> 
> In other words, start with a small record size to optimize for delivery
> of small/interactive objects (the bulk of HTTP traffic). Then, if a large
> file is being transferred, bump the record size to 16KB and continue
> using that until the connection goes idle. When communication resumes,
> start with a small record size and repeat. Overall, this aims to optimize
> delivery of small files, where incremental delivery is a priority, and of
> large downloads, where overall throughput is a priority.
> 
> Both byte and time thresholds are exposed as configurable flags, and
> current defaults in GFE are 1MB and 1s.
> 
> This would require a bit more work than the current patch, but I'd love to
> see a similar strategy in nginx. Hardcoding a fixed record size will
> inevitably lead to suboptimal delivery of either interactive or bulk
> traffic. Thoughts?

While some logic like this is certainly needed to use packet-sized 
records, it looks overcomplicated and probably not needed at all 
with 4k/8k buffers.
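
For reference, a rough sketch of the logic as I understand it; 
names, types and thresholds below are illustrative only, not taken 
from GFE or from any nginx code:

    #include <stddef.h>
    #include <time.h>

    #define SMALL_RECORD_SIZE  1400           /* fits one TCP packet   */
    #define LARGE_RECORD_SIZE  16384          /* TLS maximum           */
    #define BOOST_BYTES        (1024 * 1024)  /* cf. GFE's 1MB default */
    #define IDLE_RESET_SECONDS 1              /* cf. GFE's 1s default  */

    typedef struct {
        size_t  bytes_since_reset;
        time_t  last_write;
    } record_sizer;

    /* Returns the record size to use for the next write.  The caller
     * is expected to add the bytes actually written to
     * rs->bytes_since_reset after each write. */
    static size_t
    next_record_size(record_sizer *rs, time_t now)
    {
        if (now - rs->last_write > IDLE_RESET_SECONDS) {
            /* connection went idle: fall back to small records */
            rs->bytes_since_reset = 0;
        }

        rs->last_write = now;

        return (rs->bytes_since_reset > BOOST_BYTES)
               ? LARGE_RECORD_SIZE
               : SMALL_RECORD_SIZE;
    }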

-- 
Maxim Dounin
http://nginx.org/


