[PATCH 1 of 4] Switched to using posted next events after sendfile_max_chunk
Sergey Kandaurov
pluknet at nginx.com
Tue Oct 26 11:35:01 UTC 2021
> On 11 Oct 2021, at 21:58, Maxim Dounin <mdounin at mdounin.ru> wrote:
>
> # HG changeset patch
> # User Maxim Dounin <mdounin at mdounin.ru>
> # Date 1633978533 -10800
> # Mon Oct 11 21:55:33 2021 +0300
> # Node ID d175cd09ac9d2bab7f7226eac3bfce196a296cc0
> # Parent ae7c767aa491fa55d3168dfc028a22f43ac8cf89
> Switched to using posted next events after sendfile_max_chunk.
>
> Previously, a 1 millisecond delay was used instead. In certain edge cases
> this might result in noticeable performance degradation though, notably on
> Linux with typical CONFIG_HZ=250 (so 1ms delay becomes 4ms),
> sendfile_max_chunk 2m, and link speed above 2.5 Gbps.
>
> Using posted next events removes the artificial delay and makes processing
> fast in all cases.
>
> diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c
> --- a/src/http/ngx_http_write_filter_module.c
> +++ b/src/http/ngx_http_write_filter_module.c
> @@ -331,8 +331,7 @@ ngx_http_write_filter(ngx_http_request_t
> && c->write->ready
> && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))
> {
> - c->write->delayed = 1;
> - ngx_add_timer(c->write, 1);
> + ngx_post_event(c->write, &ngx_posted_next_events);
> }
>
> for (cl = r->out; cl && cl != chain; /* void */) {
>
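For reference, with the patch applied the block reads roughly as follows
(the enclosing condition is reconstructed from the diff context, so take
this as a sketch rather than an exact quote of the source):

    if (limit
        && c->write->ready
        && c->sent - sent >= limit - (off_t) (2 * ngx_pagesize))
    {
        /* re-run the write event on the next event loop iteration
         * instead of setting c->write->delayed and arming a 1ms timer
         */
        ngx_post_event(c->write, &ngx_posted_next_events);
    }
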
A side note: with c->write->delayed no longer set, nothing prevents
further writing from ngx_http_write_filter() within one worker cycle
(see the sketch after the log below). For a (somewhat degenerate)
example, if we hit the limit in ngx_http_send_header(), a subsequent
ngx_http_output_filter() call will write some more. Specifically, with
sendfile_max_chunk 256k:
: *1 write new buf t:1 f:0 00007F3CBA77F010, pos 00007F3CBA77F010, size: 1147353 file: 0, size: 0
: *1 http write filter: l:0 f:0 s:1147353
: *1 http write filter limit 262144
: *1 writev: 262144 of 262144
: *1 http write filter 000055596A8D1220
: *1 post event 000055596AC04970
: *1 add cleanup: 000055596A8D1368
: *1 http output filter "/file?"
: *1 http copy filter: "/file?"
: *1 image filter
: *1 xslt filter body
: *1 http postpone filter "/file?" 00007FFC91E2E090
: *1 write old buf t:1 f:0 00007F3CBA77F010, pos 00007F3CBA7BF010, size: 885209 file: 0, size: 0
: *1 write new buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 1048576
: *1 http write filter: l:0 f:0 s:1933785
: *1 http write filter limit 262144
: *1 writev: 262144 of 262144
: *1 http write filter 000055596A8D1220
: *1 update posted event 000055596AC04970
: *1 http copy filter: -2 "/file?"
: *1 call_sv: 0
: *1 perl handler done: 0
: *1 http output filter "/file?"
: *1 http copy filter: "/file?"
: *1 image filter
: *1 xslt filter body
: *1 http postpone filter "/file?" 00007FFC91E2E470
: *1 write old buf t:1 f:0 00007F3CBA77F010, pos 00007F3CBA7FF010, size: 623065 file: 0, size: 0
: *1 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 1048576
: *1 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0
: *1 http write filter: l:1 f:0 s:1671641
: *1 http write filter limit 262144
: *1 writev: 262144 of 262144
: *1 http write filter 000055596A8D1220
: *1 update posted event 000055596AC04970
: *1 http copy filter: -2 "/file?"
: *1 http finalize request: 0, "/file?" a:1, c:2
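For completeness, the early return that the delayed flag used to trigger
in ngx_http_write_filter() looks roughly like this (quoted from memory,
so treat it as an approximation):

    /* with c->write->delayed set by the old code, a subsequent call
     * within the same worker cycle returned early and only buffered
     * the output; without the flag this path is no longer taken
     */
    if (c->write->delayed) {
        c->buffered |= NGX_HTTP_WRITE_BUFFERED;
        return NGX_AGAIN;
    }
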
This can also be reproduced, more simply, with multiple explicit
$r->flush() calls in Perl, but that is assumed to be a controlled
environment. In a less controlled environment, it could be a large
response proxied from upstream, though that requires proxy buffers
large enough that the data read in would exceed (in total) the
configured limit. Although the data is not transferred with a single
write operation, it still tends to monopolize a worker process; still,
I don't think this should really hurt in real use cases.
--
Sergey Kandaurov