[nginx] SSL: ssl_buffer_size directive.

Ilya Grigorik igrigorik at gmail.com
Tue Jan 7 23:41:48 UTC 2014


(waking up from post-holiday coma :-)) ... Happy 2014!

On Fri, Dec 20, 2013 at 12:49 PM, Alex <alex at zeitgeist.se> wrote:

> > This would require a bit more work than the current patch, but I'd
> > love to see a similar strategy in nginx. Hardcoding a fixed record
> > size will inevitably lead to suboptimal delivery of either
> > interactive or bulk traffic. Thoughts?
>
> It'd be interesting to know how difficult it'd be to implement such
> dynamic behavior of the SSL buffer size. An easier, albeit less
> optimal, solution would be to adjust the ssl_buffer_size directive
> depending on the request URI (via location blocks). Not sure if
> Maxim's patch would allow for that already? If large files are served
> from a known request URI pattern, you could then increase the SSL
> buffer size accordingly for that location.
>

No, ssl_buffer_size is a server-wide directive [1]. Further, I don't think
you want to go down this path: just because you're serving a large stream
does not mean you don't want a fast TTFB at the beginning of the stream.
For example, for video streaming you still want to optimize for the "time
to first frame" such that you can decode the stream headers and get the
video preview / first few frames on the screen as soon as possible. That
said, once you've got the first few frames on screen, then by all means,
max out the record size to decrease framing overhead. In short, for best
performance, you want dynamic behavior.

[1] http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size

On Sun, Dec 22, 2013 at 2:27 PM, Maxim Dounin <mdounin at mdounin.ru> wrote:

> > Awesome, really glad to see this! A couple of followup questions...
> >
> > (a) Is there any way to force a packet flush on record end? At the
> > moment nginx will fragment multiple records across packet boundaries,
> > which is suboptimal as it means that I need a minimum of two packets
> > to decode any record - e.g. if I set my record size to 1370 bytes,
> > the first packet will contain the first full record plus another
> > 20-50 bytes of next record.
>
> There is an OpenSSL socket layer on the way down.  It may be possible
> to achieve something with SSL_[CTX_]set_max_send_fragment() in
> OpenSSL 1.0.0+, but I haven't looked into details.  (As I already
> said, I don't think that using packet-sized records is a good idea,
> it looks like overkill and a waste of resources, both network and
> CPU.)
>
> > (b) Current NGX_SSL_BUFSIZE is set to 16KB which is effectively
> > guaranteed to overflow the CWND of a new connection and introduce
> > another RTT for interactive traffic - e.g. HTTP pages. I would love
> > to see a lower starting record size to mitigate this problem --
> > defaults matter!
>
> We are considering using 4k or 8k as the default in the future.  For
> now, the directive is mostly to simplify experimenting with various
> buffer sizes.
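
(As an aside, for anyone who wants to experiment with this: below is a
minimal sketch of what that OpenSSL call might look like, reusing my
1370-byte example from (a). Note that it only caps the plaintext
fragment size -- it does not force a flush at record boundaries -- and
the helper name is mine, not anything in nginx:)

    #include <openssl/ssl.h>

    /* Hypothetical helper: cap TLS records so each one fits in a
     * single ~1400-byte TCP payload.  The valid range for the cap is
     * 512..16384 bytes; returns 1 on success, 0 on failure
     * (OpenSSL 1.0.0+). */
    static int cap_record_size(SSL_CTX *ctx)
    {
        return SSL_CTX_set_max_send_fragment(ctx, 1370);
    }
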
> > On the subject of optimizing record size, the GFE team at Google
> > recently landed roughly the following logic:
> >
> > - new connections default to small record size
> > -- each record fits into a TCP packet
> > -- packets are flushed at record boundaries
> > - server tracks number of bytes written since reset and timestamp
> > of last write
> > -- if bytes written > {configurable byte threshold} then boost
> > record size to 16KB
> > -- if last write timestamp < now - {configurable time threshold}
> > then reset sent byte count
> >
> > In other words, start with a small record size to optimize for
> > delivery of small/interactive objects (the bulk of HTTP traffic).
> > Then, if a large file is being transferred, bump the record size to
> > 16KB and continue using it until the connection goes idle... when
> > communication resumes, start with the small record size and repeat.
> > Overall, this aims to optimize delivery of small files, where
> > incremental delivery is a priority, as well as large downloads,
> > where overall throughput is a priority.
> >
> > Both byte and time thresholds are exposed as configurable flags, and
> > the current defaults in GFE are 1MB and 1s.
> >
> > This would require a bit more work than the current patch, but I'd
> > love to see a similar strategy in nginx. Hardcoding a fixed record
> > size will inevitably lead to suboptimal delivery of either
> > interactive or bulk traffic. Thoughts?
>
> While some logic like this is certainly needed to use packet-sized
> records, it looks overcomplicated and probably not at all needed
> with 4k/8k buffers.

This logic is not at all specific to packet-sized records -- that said,
yes, it delivers the most benefit when you start the session with a
packet-sized record. For the sake of example, let's say we set the new
default to 4k:

+ all records fit into a minimum CWND (IW4 and IW10)
- packet loss is still a factor and may affect TTFB, but the impact is
much smaller than with the current 16KB record
- we incur a fixed 4x overhead (bytes and CPU cycles) on large streams
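
To put the overhead point in perspective, here's a quick
back-of-the-envelope calculation -- the ~29 bytes of framing per record
assumes TLS 1.2 with AES-128-GCM (5-byte header + 8-byte explicit
nonce + 16-byte tag); treat the numbers as illustrative:

    #include <stdio.h>

    /* Framing cost for a 1 MB response at 4KB vs 16KB records,
     * assuming ~29 bytes of TLS framing per record. */
    int main(void)
    {
        const long stream  = 1024 * 1024;   /* 1 MB payload */
        const long framing = 5 + 8 + 16;    /* bytes/record */
        const long sizes[] = { 4096, 16384 };

        for (int i = 0; i < 2; i++) {
            long records = (stream + sizes[i] - 1) / sizes[i];
            printf("%5ld-byte records: %4ld records, %5ld framing bytes\n",
                   sizes[i], records, records * framing);
        }
        return 0;
    }

That works out to 7424 bytes of framing at 4KB records vs 1856 bytes at
16KB records for the same megabyte -- 4x the bytes, plus 4x the
per-record MAC/cipher invocations to go with it.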

The "dynamic" implementation simply addresses the last two shortcomings:
(a) using a packet-size record guarantees that we deliver the best TTFB,
and (b) we minimize the CPU/byte overhead costs of smaller records by
raising record size once connection is "warmed up". Further, I think it's
misleading to say that "for large streams, just use a larger default
record"... as I noted above, even large streams (e.g. video) need a fast
time to first byte/frame.
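
To make the shape of that logic concrete, here's a rough sketch -- all
of the names, types, and thresholds below are mine (mirroring the GFE
defaults quoted above), not code from nginx or GFE:

    #include <stddef.h>
    #include <time.h>

    #define SMALL_RECORD    1370          /* fits one TCP packet  */
    #define LARGE_RECORD    16384         /* maximum TLS record   */
    #define BOOST_BYTES     (1024 * 1024) /* boost after 1MB sent */
    #define IDLE_RESET_SEC  1             /* revert after 1s idle */

    typedef struct {
        size_t  bytes_since_reset;
        time_t  last_write;
    } record_state_t;

    /* Pick the record size for the next write; if the connection has
     * been idle past the threshold, fall back to small records. */
    static size_t next_record_size(record_state_t *s, time_t now)
    {
        if (now - s->last_write > IDLE_RESET_SEC)
            s->bytes_since_reset = 0;

        s->last_write = now;

        return (s->bytes_since_reset > BOOST_BYTES) ? LARGE_RECORD
                                                    : SMALL_RECORD;
    }

    /* Call after each successful write of n payload bytes. */
    static void record_bytes_written(record_state_t *s, size_t n)
    {
        s->bytes_since_reset += n;
    }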

I think it's important that we optimize for the out-of-the-box
performance and experience: your average web developer or system admin
won't know what record size to set for their use case, and they'll have
a mix of payloads which don't lend themselves to any one record size.
Besides, any static value inevitably trades TTFB against throughput --
an unnecessary trade-off to begin with.

If we want configuration knobs, then as advanced options we could offer
the ability to customize the "boost threshold" (in KB) and the
inactivity timeout (to revert to the smaller size). Finally, if you
want, the definition of "small record" could be a flag as well --
setting it to 16KB would effectively disable the logic and give you the
current behavior... Yes, this is more complex than just setting a
static record size, but the performance gains are significant in both
throughput and latency -- and after all, isn't that what nginx is all
about? :)

ig