[PATCH 1 of 4] Switched to using posted next events after sendfile_max_chunk
Maxim Dounin
mdounin at mdounin.ru
Tue Oct 26 15:27:39 UTC 2021
Hello!
On Tue, Oct 26, 2021 at 02:35:01PM +0300, Sergey Kandaurov wrote:
>
> > On 11 Oct 2021, at 21:58, Maxim Dounin <mdounin at mdounin.ru> wrote:
> >
> > # HG changeset patch
> > # User Maxim Dounin <mdounin at mdounin.ru>
> > # Date 1633978533 -10800
> > # Mon Oct 11 21:55:33 2021 +0300
> > # Node ID d175cd09ac9d2bab7f7226eac3bfce196a296cc0
> > # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89
> > Switched to using posted next events after sendfile_max_chunk.
> >
> > Previously, 1 millisecond delay was used instead. In certain edge cases
> > this might result in noticeable performance degradation though, notably on
> > Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms),
> > sendfile_max_chunk 2m, and link speed above 2.5 Gbps.
> >
> > Using posted next events removes the artificial delay and makes processing
> > fast in all cases.
> >
> > diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c
> > --- a/src/http/ngx_http_write_filter_module.c
> > +++ b/src/http/ngx_http_write_filter_module.c
> > @@ -331,8 +331,7 @@ ngx_http_write_filter(ngx_http_request_t
> > && c->write->ready
> > && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))
> > {
> > - c->write->delayed = 1;
> > - ngx_add_timer(c->write, 1);
> > + ngx_post_event(c->write, &ngx_posted_next_events);
> > }
> >
> > for (cl = r->out; cl && cl != chain; /* void */) {
> >
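For the record, a rough back-of-the-envelope check of the numbers in
the commit message above (my illustration here, not part of the
original message): with sendfile_max_chunk 2m and the 1 ms timer
effectively becoming 4 ms with CONFIG_HZ=250,

    2 MiB / 4 ms              = ~4.2 Gbit/s (ceiling from the delay alone)
    2 MiB at 2.5 Gbit/s       = ~6.7 ms to send one chunk
    6.7 ms + up to 4 ms delay = ~10.7 ms per chunk, i.e. ~1.6 Gbit/s

so at link speeds around 2.5 Gbit/s and above the delay becomes a
noticeable fraction of the per-chunk time.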
>
> A side note: removing c->write->delayed no longer prevents further
> writing from ngx_http_write_filter() within one worker cycle.
> For a (somewhat degenerate) example, if we hit the limit in
> ngx_http_send_header(), a subsequent ngx_http_output_filter() will
> still write some more. Specifically, with sendfile_max_chunk 256k:
[...]
> This can also be achieved with multiple explicit $r->flush() in Perl,
> for simplicity, but that is assumed to be a controlled environment.
> For a less controlled environment, it could be a large response proxied
> from upstream, but this requires proxy buffers large enough that the
> data read into them exceeds (in total) the configured limit.
>
> Although the data is not transferred in a single write operation,
> it still tends to monopolize a worker process; still, I don't
> think this should really cause harm in real use cases.
As currently implemented, sendfile_max_chunk cannot prevent all
worker monopolization cases. Notably, when sendfile is not used,
but output_buffers smaller than sendfile_max_chunk are used
instead, it simply cannot prevent infinite sending if the network
is faster than the disk.
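
For illustration, the check in question after this patch (simplified
from the diff above, with my annotation):

    /*
     * The switch to the next event loop iteration only happens when a
     * single ngx_http_write_filter() call managed to send (almost) the
     * whole limit.  With output_buffers smaller than sendfile_max_chunk,
     * each call passes only a few small buffers, so c->sent - sent stays
     * well below the limit, the branch never fires, and ngx_output_chain()
     * keeps reading from disk and sending as long as the socket is
     * writable.
     */

    if (limit
        && c->write->ready
        && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))
    {
        ngx_post_event(c->write, &ngx_posted_next_events);
    }
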
This change does not seem to make worker monopolization possible
in cases where it previously wasn't possible. In particular, when
using output_buffers, as long as the buffer size is smaller than
sendfile_max_chunk, worker monopolization is already possible
(regardless of the number of buffers), and this change doesn't
make things worse. When the buffer size is larger than
sendfile_max_chunk, the exact behaviour and the amount of data
sent might change, since ngx_output_chain() will attempt further
reading and sending as long as it has free buffers - yet
eventually it will exhaust all buffers anyway and stop until the
next event loop iteration.
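
To make the two cases concrete, an illustrative configuration sketch
(example values only, not a recommendation):

    sendfile           off;
    sendfile_max_chunk 2m;

    # buffers smaller than the chunk limit: the limit is never reached
    # within a single write filter call, so a fast enough client can
    # keep the worker busy (both before and after this patch)
    output_buffers     2 32k;

    # buffers larger than the chunk limit: the limit is reached, and the
    # write event is now posted to the next event loop iteration instead
    # of being delayed by the 1ms timer
    # output_buffers   2 4m;
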
A proper solution for all possible cases of worker monopolization
when reading from disk and writing to the network would probably be
to implement something similar to c->read->available on c->write,
counting the total number of bytes sent within a particular event
loop iteration. This looks like a separate non-trivial task, though.
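
Purely as a hypothetical sketch of that idea (the reuse of the
"available" field, the reset point and the budget value are all made
up for illustration here, not an actual or proposed API):

    /* a per-iteration byte budget on the write event, similar in
       spirit to c->read->available */

    ssize_t  n;

    n = c->send(c, buf, size);      /* buf/size: whatever the caller
                                       is currently sending; any other
                                       send path would count as well */

    if (n > 0 && c->write->available > 0) {
        c->write->available -= n;

        if (c->write->available <= 0) {
            /* the budget for this event loop iteration is exhausted:
               stop writing, resume on the next iteration */
            c->write->ready = 0;
            ngx_post_event(c->write, &ngx_posted_next_events);
        }
    }

    /* the budget would have to be reset (e.g. to sendfile_max_chunk)
       once per event loop iteration for every affected connection,
       which is the non-trivial part */
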
--
Maxim Dounin
http://mdounin.ru/