Is sendfile_max_chunk needed when using aio on FreeBSD 9?

Maxim Khitrov max at
Wed Feb 8 14:48:24 UTC 2012

On Wed, Feb 8, 2012 at 8:46 AM, Maxim Dounin <mdounin at> wrote:
> Hello!
> On Wed, Feb 08, 2012 at 07:49:29AM -0500, Maxim Khitrov wrote:
>> Hi all,
>> Following Igor's advice [1], I'm using the following configuration for
>> file handling on FreeBSD 9.0 amd64:
>> sendfile on;
>> aio sendfile;
>> tcp_nopush on;
>> read_ahead 256K;
>> My understanding of this setup is that sendfile, which is a
>> synchronous operation, is restricted to sending only the bytes that
>> are already buffered in memory. Once the data has to be read from
>> disk, sendfile returns and nginx issues a 1-byte aio_read operation to
>> buffer an additional 256 KB of data.
>> The question is whether it is beneficial to use the sendfile_max_chunk
>> option in this configuration as well? Since sendfile is guaranteed to
>> return as soon as it runs out of buffered data, is there any real
>> advantage to further restricting how much it can send in a single
>> call?
> It may make sense, as in extreme conditions (i.e. if
> sendfile(SF_NODISKIO) fails to send anything right after an aio preread)
> nginx will fall back to normal sendfile() (without SF_NODISKIO).
> On the other hand, if the above happens, it means you have a
> problem anyway.

I see. So there shouldn't be any harm in specifying something like
'sendfile_max_chunk 512K', since that limit would almost never come
into play.
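Concretely, the combined configuration would then look like this (all directive names are real nginx directives; the 512k value is just the figure discussed above, not a tuned recommendation):

```nginx
sendfile           on;       # use sendfile() for file responses
aio                sendfile; # FreeBSD: 1-byte aio_read preloads data on a cache miss
tcp_nopush         on;       # TCP_NOPUSH: avoid transmitting partial packets
read_ahead         256K;     # amount preread into the buffer cache
sendfile_max_chunk 512k;     # cap on a single sendfile() call; rarely hit in this setup
```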

Would I see anything in the log files when the fallback to the normal
sendfile happens?
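To make the fallback path concrete, here is a toy simulation of the logic as I understand it, not nginx's actual code (all names are invented; the real implementation lives in the nginx FreeBSD sendfile source): sendfile with SF_NODISKIO fails with EBUSY on a buffer-cache miss, nginx schedules an aio preread, and only if the retry still misses does it fall back to plain blocking sendfile().

```python
import errno

class FileCache:
    """Pretend buffer cache: tracks which blocks are resident in memory."""
    def __init__(self):
        self.resident = set()

    def preread(self, block):
        """Stands in for the 1-byte aio_read that pulls data into the cache."""
        self.resident.add(block)

def sendfile_nodiskio(cache, block):
    """Mimics sendfile(SF_NODISKIO): returns EBUSY on a cache miss."""
    if block not in cache.resident:
        return -1, errno.EBUSY
    return 1, 0                    # "sent" one block without disk I/O

def send_block(cache, block, log):
    """Send one block, following the preread-then-fallback sequence."""
    sent, err = sendfile_nodiskio(cache, block)
    if err == errno.EBUSY:
        cache.preread(block)       # schedule the aio preread
        sent, err = sendfile_nodiskio(cache, block)
        if err == errno.EBUSY:     # the rare case Maxim describes
            log.append("fallback to plain sendfile()")
            sent = 1               # blocking sendfile reads from disk
    return sent

log = []
send_block(FileCache(), 0, log)    # miss -> preread -> retry succeeds
```

In this toy model the preread always succeeds, so the fallback branch never fires; the point is only to show where in the sequence sendfile_max_chunk would matter, namely the final plain-sendfile() branch.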

>> By the way, is tcp_nopush needed here just to make sure that the first
>> packet, which contains headers, doesn't get sent out without any data
>> in it? I think this would also prevent transmission of partial packets
>> when sendfile runs out of data to send and nginx has to wait for the
>> aio_read to finish. Wouldn't it be better in this case to send the
>> packet without waiting for disk I/O?
> The tcp_nopush is expected to prevent transmission of incomplete
> packets.  I see no problem here.

The way I was thinking about it is that if the system has a packet
that is half-full from the last sendfile call, and is now going to
spend some number of milliseconds buffering the next chunk, then for
interactivity and throughput reasons it may be better to send the
packet now.

It would consume a bit more bandwidth for the TCP/IP headers, but as
long as the entire packet is sent before the aio_read call is
finished, you win on the throughput front. This might be completely
insignificant, I'm not sure.
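Some back-of-envelope arithmetic on that header overhead (the MSS and header sizes are illustrative assumptions, not measured values):

```python
# Rough wire-bytes cost of flushing a half-full segment early instead of
# waiting for the aio preread. All figures are illustrative assumptions.

MSS = 1448          # typical TCP payload on a 1500-byte MTU with timestamps
HEADERS = 40 + 12   # IPv4 + TCP headers plus the timestamp option, in bytes

def overhead_ratio(payload_bytes):
    """Extra wire bytes per payload byte when a segment ships with this payload."""
    return HEADERS / payload_bytes

half = overhead_ratio(MSS // 2)   # ~7.2% overhead for a half-full segment
full = overhead_ratio(MSS)        # ~3.6% for a full one
```

So sending a half-full segment early roughly doubles the per-byte header overhead, from about 3.6% to about 7%, which supports the intuition that the cost is small if the early flush actually overlaps with the disk wait.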

- Max
