[PATCH] Add io_uring support in AIO(async io) module

Vadim Fedorenko vadimjunk at gmail.com
Sun Feb 21 21:55:10 UTC 2021


Hi!
Looks like this small fix doesn't work when the total size of the file is
less than the size of the buffer and the file was only partly read.
In my case the size of the file is 16384 bytes and only one page of the
file was in the page cache. With this patch the calculation produces
size = 8192 bytes, the next call reads 12288 bytes, and errors like the one
below are generated:
"[alert] 28441#28441: *20855 pread() read only 12288 of 8192 from
<filename>"
Changing the calculation to
size = ngx_min(size, dst->end - dst->last);
fixes the problem.
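In context, the suggested change would look roughly like this (a sketch of
the relevant lines of ngx_output_chain_copy_buf() in
src/core/ngx_output_chain.c with the change applied; the explanatory
comment is mine):

    size = ngx_buf_size(src);

    /*
     * clamp to the free space actually left in dst: dst->last marks the
     * end of the data a previous partial read has already placed there,
     * so measuring from dst->last instead of dst->pos avoids requesting
     * more bytes than the buffer can still hold
     */
    size = ngx_min(size, dst->end - dst->last);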
Thanks,
Vadim

Mon, 25 Jan 2021 at 08:25, Zhao, Ping <ping.zhao at intel.com>:

> Hello, here is a small update to correct the length calculation when part
> of the request was already received by a previous read.
> This case may happen when using io_uring, where throughput is increased.
>
> # HG changeset patch
> # User Ping Zhao <ping.zhao at intel.com>
> # Date 1611566408 18000
> #      Mon Jan 25 04:20:08 2021 -0500
> # Node ID f2c91860b7ac4b374fff4353a830cd9427e1d027
> # Parent  1372f9ee2e829b5de5d12c05713c307e325e0369
> Correct length calculation when part of request received.
>
> diff -r 1372f9ee2e82 -r f2c91860b7ac src/core/ngx_output_chain.c
> --- a/src/core/ngx_output_chain.c       Wed Jan 13 11:10:05 2021 -0500
> +++ b/src/core/ngx_output_chain.c       Mon Jan 25 04:20:08 2021 -0500
> @@ -531,6 +531,14 @@
>
>      size = ngx_buf_size(src);
>      size = ngx_min(size, dst->end - dst->pos);
> +#if (NGX_HAVE_FILE_IOURING)
> +    /*
> +     * if part of the request was already received by a previous
> +     * read, only the remaining length needs to be requested
> +     */
> +    if (dst->last > dst->pos && size > (dst->last - dst->pos))
> +        size = size - (dst->last - dst->pos);
> +#endif
>
>      sendfile = ctx->sendfile && !ctx->directio;
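>
> To make the intent of the hunk concrete, here is a hypothetical trace of
> the patched calculation (illustrative numbers, not from a real run):
>
>     /*
>      * suppose dst can hold 16384 bytes (dst->end - dst->pos == 16384),
>      * src still has 16384 bytes pending, and a previous partial
>      * io_uring read already delivered 4096 bytes, so
>      * dst->last - dst->pos == 4096:
>      *
>      *     size = ngx_buf_size(src);                    16384
>      *     size = ngx_min(size, dst->end - dst->pos);   16384
>      *
>      * without the hunk, the next read would be issued for all 16384
>      * bytes even though only 12288 bytes of free space remain; with
>      * it, size > (dst->last - dst->pos) holds, so
>      *
>      *     size = size - (dst->last - dst->pos);        12288
>      *
>      * and only the remaining part of the request is read
>      */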
>
> -----Original Message-----
> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Zhao, Ping
> Sent: Thursday, January 21, 2021 9:44 AM
> To: nginx-devel at nginx.org
> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
>
> Hi Vladimir,
>
> No special or extra configuration is needed, but please check that
> 'aio on' and 'sendfile off' are set correctly. This is my Nginx config for
> reference:
>
> user nobody;
> daemon off;
> worker_processes 1;
> error_log error.log ;
> events {
>     worker_connections 65535;
>     use epoll;
> }
>
> http {
>     include mime.types;
>     default_type application/octet-stream;
>     access_log on;
>     aio on;
>     sendfile off;
>     directio 2k;
>
>     # Cache Configurations
>     proxy_cache_path /mnt/cache0 levels=2 keys_zone=nginx-cache0:400m
> max_size=1400g inactive=4d use_temp_path=off; ......
>
>
> To better measure the disk io performance, I take the following steps:
> 1. Exclude other impacts and focus on the disk io part (this patch only
> affects the disk aio read path): use cgroups to limit Nginx memory usage.
> Otherwise Nginx may also use memory as cache storage, which skews the test
> results (most cache hits are then served from memory and the disk io
> bandwidth stays low, as in my previous mail, which didn't exclude the
> memory cache impact).
>      echo 2G > memory.limit_in_bytes
>      use 'cgexec -g memory:nginx' to start Nginx.
>
> 2. Use wrk -t 100 -c 1000, with 25000 random http requests.
>      My previous test used -c 200 connections; compared with -c 1000,
> libaio performance drops more as the number of connections increases from
> 200 to 1000, while io_uring's does not. That is another advantage of
> io_uring.
>
> 3. First clean the cache disk and run the test for 30 minutes to let
> Nginx store as many cache files as possible on the nvme disk.
>
> 4. Rerun the test; this time Nginx will use ngx_file_aio_read to read the
> cache files from the nvme cache disk. Use iostat to track the io data. The
> data should align with the NIC bandwidth, since all data should come from
> the cache disk (the memory-as-cache-storage impact must be excluded).
>
> Following are the test results:
>
> Nginx worker_processes 1:
>                 4k          100k        1M
> io_uring        220MB/s     1GB/s       1.3GB/s
> libaio          70MB/s      250MB/s     600MB/s (with -c 200: 1.0GB/s)
>
>
> Nginx worker_processes 4:
>                 4k          100k        1M
> io_uring        800MB/s     2.5GB/s     2.6GB/s (my nvme disk's maximum io bw)
> libaio          250MB/s     900MB/s     2.0GB/s
>
> So for small requests, io_uring shows a huge improvement over libaio. In
> my previous mail, because I didn't exclude the memory cache storage
> impact, most cache files were served from memory and very few came from
> disk in the 4k/100k cases, so that data was not correct (for 1M, the cache
> was too big to fit in memory, so it was on disk). I also enabled the
> directio option "directio 2k" this time to avoid this.
>
> Regards,
> Ping
>
> -----Original Message-----
> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Vladimir
> Homutov
> Sent: Wednesday, January 20, 2021 12:43 AM
> To: nginx-devel at nginx.org
> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>
> On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote:
> > It depends on if disk io is the performance hot spot or not. If yes,
> > io_uring shows improvement than libaio. With 4KB/100KB length 1 Nginx
> > thread it's hard to see performance difference because iostat is only
> > around ~10MB/100MB per second. Disk io is not the performance bottle
> > neck, both libaio and io_uring have the same performance. If you
> > increase request size or Nginx threads number, for example 1MB length
> > or Nginx thread number 4. In this case, disk io became the performance
> > bottle neck, you will see io_uring performance improvement.
>
> Can you please provide full test results with the specific nginx
> configuration?
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>