[PATCH] Add io_uring support in AIO(async io) module

Zhao, Ping ping.zhao at intel.com
Tue Feb 9 08:36:58 UTC 2021


Hi Mikhail,

The performance improvement of io_uring vs. libaio lies at the disk I/O interface, so other factors need to be excluded when testing, such as the memory page cache, which is much faster than disk I/O.

Without a memory limit, libaio and io_uring show very similar network bandwidth, because both serve cached files from memory and the disk I/O difference is invisible. In the following data, for example, 17G of memory is used as cache: the network speed is the same for io_uring and libaio, and both have very little disk I/O load, which means very low io_uring/libaio usage.

Memory:
free -lh
              total        used        free      shared  buff/cache   available
Mem:          376Gi       3.2Gi       356Gi       209Mi        17Gi       370Gi

libaio:
----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  1   1  99   0   0|4097B   80k|4554k  104M|   0     0 |  77k 1344
  1   1  98   0   0|8192B  104k|9955k  236M|   0     0 | 151k 1449
  1   1  97   0   0|  56k   32k|  10M  241M|   0     0 | 148k 1652
  2   1  97   0   0|  16k   16k|9552k  223M|   0     0 | 142k 1366
  1   1  97   0   0|  16k   24k|9959k  234M|   0     0 | 146k 1570
  1   1  97   0   0|   0  1064k|  10M  237M|   0     0 | 150k 1472
  2   1  97   0   0|  16k   48k|9650k  227M|   0     0 | 143k 1555
  2   1  97   0   0|  12k   16k|9185k  216M|   0     0 | 139k 1304

io_uring:
----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  2   1  97   0   0|   0     0 |9866k  232M|   0     0 | 148k 1286
  2   1  97   0   0|   0     0 |9388k  220M|   0     0 | 144k 1345
  2   1  97   0   0|   0     0 |9080k  213M|   0     0 | 137k 1388
  2   1  97   0   0|   0     0 |9611k  226M|   0     0 | 144k 1615
  1   1  97   0   0|   0   232k|9830k  231M|   0     0 | 147k 1524

I used an Intel Xeon Platinum 8280L server CPU @ 2.70GHz, with 376G memory and a 50G network. If I limit nginx memory to 2GB, buff/cache stays at about 2.6G and doesn't grow during the test, and the disk I/O speed is close to the network speed, so the numbers reflect the disk I/O difference between libaio and io_uring. This is where the io_uring performance improvement shows. My previous data is based on this configuration.
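
For reference, this is roughly how the limit is set (a cgroup-v1 sketch; the memory controller mount point and the nginx binary path are examples and may differ on your system -- see also the steps quoted further down this thread):

mkdir /sys/fs/cgroup/memory/nginx
echo 2G > /sys/fs/cgroup/memory/nginx/memory.limit_in_bytes
cgexec -g memory:nginx /usr/local/nginx/sbin/nginx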

Memory:
free -lh
              total        used        free      shared  buff/cache   available
Mem:          376Gi       3.2Gi       370Gi       141Mi       2.6Gi       370Gi

libaio:
----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  1   0  98   1   0|  60M    0 |2925k   68M|   0     0 |  50k   16k
  1   0  98   1   0|  60M 8192B|2923k   68M|   0     0 |  50k   16k
  1   0  98   1   0|  61M    0 |2923k   68M|   0     0 |  50k   16k
  0   0  98   1   0|  60M    0 |2929k   68M|   0     0 |  50k   16k
  1   0  98   1   0|  60M  264k|2984k   69M|   0     0 |  51k   16k

io_uring:
----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  1   2  93   4   0| 192M 8192B|7951k  187M|   0     0 | 146k   90k
  1   2  93   4   0| 196M    0 |7953k  187M|   0     0 | 144k   89k
  1   2  93   4   0| 191M  300k|7854k  185M|   0     0 | 145k   87k
  1   2  94   3   0| 186M 8192B|7861k  185M|   0     0 | 143k   86k
  1   2  94   3   0| 180M   16k|7995k  188M|   0     0 | 146k   86k
  2   1  94   3   0| 163M   16k|7273k  171M|   0     0 | 133k   80k
  1   1  94   3   0| 173M 1308k|7995k  188M|   0     0 | 144k   83k

In practice, server memory won't always be large enough for cache storage as traffic grows, and nginx will then use disk as cache storage. In that case, io_uring shows a big performance improvement over libaio at the disk I/O interface. This is the value of this patch.

BR,
Ping

-----Original Message-----
From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Mikhail Isachenkov
Sent: Tuesday, February 9, 2021 1:17 AM
To: nginx-devel at nginx.org
Subject: Re: [PATCH] Add io_uring support in AIO(async io) module

Hi Zhao Ping,

First of all, thank you for pointing me to the AWS patch -- on Fedora 33 with the 5.10 kernel I don't see any errors now.

I've tested the patch on Amazon EC2 with an NVMe SSD (and found this drive pretty fast!). The server is an i3en.xlarge and the client a c5n.2xlarge instance, with up to 25 Gigabit networking.

As in the previous test, I created a number of 100kb files, but this time requested them via proxy_cache, as on your stand. After warming up the disk cache, I got the following results:

a) with 4 worker processes, I've got 3Gb/sec in all tests regardless of sendfile/libaio/io_uring.

b) with 1 worker process, sendfile is faster (up to 1.9 Gb/sec) than libaio (1.40 Gb/sec) and io_uring (up to 1.45 Gb/sec).

I didn't use any memory limitations, but I ran 'echo 3 > /proc/sys/vm/drop_caches' before each pass. When I tried to limit memory to 2G with cgroups, the results were generally the same. Maybe 2G is not enough?

Could you please run the test for ~60 seconds and run 'dstat' in another console? I'd like to check disk and network bandwidth at the same timestamps and compare them to mine.
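
For reference, plain 'dstat' with no options already prints the cpu/disk/net/paging/system columns, so one-second samples for sixty seconds would be:

dstat 1 60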

Thanks in advance!

07.02.2021 05:16, Zhao, Ping wrote:
> Hi Mikhail,
> 
> I reproduced your problem with kernel 5.8.0-1010-aws. I also tried
> kernel 5.8.0, which doesn't have this problem, so I can confirm there's
> a regression in the AWS patch (linux-aws_5.8.0-1010.10.diff).
> 
> Updated the 'sendfile on' & 'aio off' test results with 4KB data; sendfile is almost the same as libaio:
> 
> Nginx worker_processes 1:
>                  4k         100k        1M
> io_uring    220MB/s        1GB/s    1.3GB/s
> libaio       70MB/s      250MB/s    600MB/s (with -c 200, 1.0GB/s)
> sendfile     70MB/s      260MB/s    700MB/s
> 
> 
> Nginx worker_processes 4:
>                  4k         100k        1M
> io_uring    800MB/s      2.5GB/s    2.6GB/s (my nvme disk io maximum bw)
> libaio      250MB/s      900MB/s    2.0GB/s
> sendfile    250MB/s      900MB/s    1.6GB/s
> 
> BR,
> Ping
> 
> -----Original Message-----
> From: Zhao, Ping
> Sent: Friday, February 5, 2021 2:43 PM
> To: nginx-devel at nginx.org
> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
> 
> Hi Mikhail,
> 
> Added 'sendfile on' & 'aio off' test results to the previous table:
> 
> Following are the test results with 100KB and 1MB (4KB still to be tested):
> 
> Nginx worker_processes 1:
>                  4k         100k        1M
> io_uring    220MB/s        1GB/s    1.3GB/s
> libaio       70MB/s      250MB/s    600MB/s (with -c 200, 1.0GB/s)
> sendfile    tbt          260MB/s    700MB/s
> 
> 
> Nginx worker_processes 4:
>                  4k         100k        1M
> io_uring    800MB/s      2.5GB/s    2.6GB/s (my nvme disk io maximum bw)
> libaio      250MB/s      900MB/s    2.0GB/s
> sendfile    tbt          900MB/s    1.6GB/s
> 
> Regards,
> Ping
> 
> -----Original Message-----
> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Mikhail 
> Isachenkov
> Sent: Thursday, February 4, 2021 4:55 PM
> To: nginx-devel at nginx.org
> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
> 
> Hi Zhao Ping,
> 
> My test is much simpler than yours. I created
> /usr/local/html/(11111...99999) files on SSD (100 kb size) and wrote a
> small lua script for wrk that appends 5 random digits to the request.
> There are no such errors without the patch with aio enabled.
> These files do not change during the test.
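> 
> For reference, a minimal sketch of that setup (the script body and the
> wrk parameters here are illustrative, not the exact ones I used):
> 
> cat > random.lua <<'EOF'
> -- request one of the 11111..99999 files at random
> request = function()
>     return wrk.format("GET", "/" .. math.random(11111, 99999))
> end
> EOF
> wrk -t 4 -c 100 -d 60s -s random.lua http://localhost/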
> 
> I'll try to reproduce this on CentOS 8 -- which repository did you use to install the 5.x kernel?
> 
> Also, could you please run the test with 'sendfile on' and 'aio off' to get reference numbers for sendfile too?
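> 
> That is, with the two directives flipped in the config:
> 
> sendfile on;
> aio off;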
> 
> Thanks in advance!
> 
> 04.02.2021 10:08, Zhao, Ping wrote:
>> Another possible cause is that "/usr/local/html/64746" was changed/removed while another user tried to read it.
>>
>> -----Original Message-----
>> From: Zhao, Ping
>> Sent: Thursday, February 4, 2021 10:33 AM
>> To: nginx-devel at nginx.org
>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
>>
>> Hi Mikhail,
>>
>> I didn't see this error in my log. These are my OS/kernel versions:
>> CentOS:  8.1.1911
>> Kernel:    5.7.19
>> Liburing: liburing-1.0.7-3.el8.x86_64,
>> liburing-devel-1.0.7-3.el8.x86_64 (from yum repo)
>>
>> Regarding the error '11: Resource temporarily unavailable': it's probably that too many requests read "/usr/local/html/64746" at one time while it was still locked by a previous read. I tried to reproduce this error with a single file, but it seems nginx automatically keeps the single file in memory, so I don't see the error. How do you perform the test? I want to reproduce this if possible.
>>
>> My nginx reported this error before:
>> 2021/01/04 05:04:29 [alert] 50769#50769: *11498 pread() read only 7101 of 15530 from "/mnt/cache1/17/68aae9d816ec02340ee617b7ee52a117", client: 11.11.11.3, server: _, request: "GET /_100kobject?version=cdn003191&thread=64 HTTP/1.1", host: "11.11.11.1:8080"
>> This is already fixed by my 2nd patch (Jan 25).
>>
>> BR,
>> Ping
>>
>> -----Original Message-----
>> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of 
>> Mikhail Isachenkov
>> Sent: Wednesday, February 3, 2021 10:11 PM
>> To: nginx-devel at nginx.org
>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>>
>> Hi Ping Zhao,
>>
>> When I tried to repeat this test, I got a huge number of these errors:
>>
>> 2021/02/03 10:22:48 [crit] 30018#30018: *2 aio read 
>> "/usr/local/html/64746" failed (11: Resource temporarily unavailable) 
>> while sending response to client, client: 127.0.0.1, server:
>> localhost,
>> request: "GET /64746 HTTP/1.1", host: "localhost"
>>
>> I tested this patch on Ubuntu 20.10 (5.8.0-1010-aws kernel version) and Fedora 33 (5.10.11-200.fc33.x86_64) with the same result.
>>
>> Did you get any errors in the error log with the patch applied? Which OS/kernel did you use for testing? Did you perform any specific tuning before running?
>>
>> 25.01.2021 11:24, Zhao, Ping wrote:
>>> Hello, adding a small update to correct the length when part of the request was already received previously.
>>> This case may happen when io_uring is used and throughput increases.
>>>
>>> # HG changeset patch
>>> # User Ping Zhao <ping.zhao at intel.com>
>>> # Date 1611566408 18000
>>> #      Mon Jan 25 04:20:08 2021 -0500
>>> # Node ID f2c91860b7ac4b374fff4353a830cd9427e1d027
>>> # Parent  1372f9ee2e829b5de5d12c05713c307e325e0369
>>> Correct length calculation when part of request received.
>>>
>>> diff -r 1372f9ee2e82 -r f2c91860b7ac src/core/ngx_output_chain.c
>>> --- a/src/core/ngx_output_chain.c	Wed Jan 13 11:10:05 2021 -0500
>>> +++ b/src/core/ngx_output_chain.c	Mon Jan 25 04:20:08 2021 -0500
>>> @@ -531,6 +531,14 @@
>>>     
>>>         size = ngx_buf_size(src);
>>>         size = ngx_min(size, dst->end - dst->pos);
>>> +#if (NGX_HAVE_FILE_IOURING)
>>> +    /*
>>> +     * check if already received part of the request in previous,
>>> +     * calculate the remain length
>>> +     */
>>> +    if(dst->last > dst->pos && size > (dst->last - dst->pos))
>>> +        size = size - (dst->last - dst->pos);
>>> +#endif
>>>     
>>>         sendfile = ctx->sendfile && !ctx->directio;
>>>
>>> -----Original Message-----
>>> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Zhao, 
>>> Ping
>>> Sent: Thursday, January 21, 2021 9:44 AM
>>> To: nginx-devel at nginx.org
>>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
>>>
>>> Hi Vladimir,
>>>
>>> No special/extra configuration is needed, but do check that 'aio on' and 'sendfile off' are set correctly. This is my nginx config for reference:
>>>
>>> user nobody;
>>> daemon off;
>>> worker_processes 1;
>>> error_log error.log ;
>>> events {
>>>        worker_connections 65535;
>>>        use epoll;
>>> }
>>>
>>> http {
>>>        include mime.types;
>>>        default_type application/octet-stream;
>>>        access_log on;
>>>        aio on;
>>>        sendfile off;
>>>        directio 2k;
>>>
>>>        # Cache Configurations
>>>        proxy_cache_path /mnt/cache0 levels=2 keys_zone=nginx-cache0:400m max_size=1400g inactive=4d use_temp_path=off; ......
>>>
>>>
>>> To better measure the disk I/O performance, I take the following steps:
>>> 1. To exclude other impacts and focus on the disk I/O part (this patch only affects the disk aio read path), use a cgroup to limit nginx memory usage. Otherwise nginx may also use memory as cache storage, which makes the result harder to read (most cache hits are served from memory and disk I/O bandwidth stays low, as in my previous mail, which didn't exclude the memory cache impact):
>>>         echo 2G > memory.limit_in_bytes
>>>         use 'cgexec -g memory:nginx' to start Nginx.
>>>
>>> 2. Use wrk -t 100 -c 1000, with 25000 random http requests (a combined command sketch follows this list).
>>>         My previous test used -c 200 connections; compared with -c 1000, libaio performance drops more when the number of connections increases from 200 to 1000, while io_uring's doesn't. That's another advantage of io_uring.
>>>
>>> 3. First clean the cache disk, then run the test for 30 minutes so that nginx stores as many cache files on the nvme disk as possible.
>>>
>>> 4. Rerun the test. This time nginx will use ngx_file_aio_read to
>>> read the cache files from the nvme cache disk. Use iostat to track
>>> the I/O data; it should align with the NIC bandwidth, since all data
>>> should come from the cache disk (with the memory cache impact excluded).
>>>
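>>> Putting steps 2-4 together, the client side looks roughly like this (durations, the script name and the server address here are examples; nginx itself is started under the cgroup from step 1):
>>>
>>>         # step 3: warm the nvme cache disk
>>>         wrk -t 100 -c 1000 -d 30m -s random.lua http://11.11.11.1:8080/
>>>         # step 4: measured pass, with iostat in a second console
>>>         wrk -t 100 -c 1000 -d 60s -s random.lua http://11.11.11.1:8080/
>>>         iostat -xm 1
>>>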
>>> Following is the test result:
>>>
>>> Nginx worker_processes 1:
>>>                  4k         100k        1M
>>> io_uring    220MB/s        1GB/s    1.3GB/s
>>> libaio       70MB/s      250MB/s    600MB/s (with -c 200, 1.0GB/s)
>>>
>>>
>>> Nginx worker_processes 4:
>>>                  4k         100k        1M
>>> io_uring    800MB/s      2.5GB/s    2.6GB/s (my nvme disk io maximum bw)
>>> libaio      250MB/s      900MB/s    2.0GB/s
>>>
>>> So for small requests, io_uring is a huge improvement over libaio. In the previous mail, because I didn't exclude the memory cache impact, most cache files were served from memory and very few from disk in the 4k/100k cases, so that data is not correct (for 1M, the cache was too big to fit in memory, so it was on disk). I also enabled the directio option "directio 2k" this time to avoid this.
>>>
>>> Regards,
>>> Ping
>>>
>>> -----Original Message-----
>>> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of 
>>> Vladimir Homutov
>>> Sent: Wednesday, January 20, 2021 12:43 AM
>>> To: nginx-devel at nginx.org
>>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>>>
>>> On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote:
>>>> It depends on if disk io is the performance hot spot or not. If 
>>>> yes, io_uring shows improvement than libaio. With 4KB/100KB length 
>>>> 1 Nginx thread it's hard to see performance difference because 
>>>> iostat is only around ~10MB/100MB per second. Disk io is not the 
>>>> performance bottle neck, both libaio and io_uring have the same 
>>>> performance. If you increase request size or Nginx threads number, 
>>>> for example 1MB length or Nginx thread number 4. In this case, disk 
>>>> io became the performance bottle neck, you will see io_uring performance improvement.
>>>
>>> Can you please provide full test results with specific nginx configuration?
>>>
>>>
>>
>> --
>> Best regards,
>> Mikhail Isachenkov
>> NGINX Professional Services
>>
> 
> --
> Best regards,
> Mikhail Isachenkov
> NGINX Professional Services
> 

--
Best regards,
Mikhail Isachenkov
NGINX Professional Services
_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

