[PATCH] Add io_uring support in AIO(async io) module

Vadim Fedorenko vadimjunk at gmail.com
Mon Feb 15 11:05:40 UTC 2021


Hi!
A regression was detected in io_uring in kernel version 5.7.16, in
TWA_SIGNAL handling. It looks like the fix was not queued for the stable
branch, but it could have been backported to the Fedora 33 kernel. This
could potentially lead to some differences in metrics.
Thanks,
Vadim

On Mon, Feb 15, 2021, 8:11 Mikhail Isachenkov <mikhail.isachenkov at nginx.com
> wrote:

> Hi Zhao Ping,
>
> First of all, happy Chinese New Year!
>
> Yes, I checked this first. The nginx binary is linked with liburing, and
> without 'ulimit -l unlimited' there was an 'io_uring_queue_init_params()
> failed (12: Cannot allocate memory)' error in error.log. I believe that
> io_uring is working correctly.
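>
> For reference, here is a minimal standalone sketch of the failing call
> (an illustration, not nginx's code). As far as I know, on these pre-5.12
> kernels the io_uring rings are accounted against RLIMIT_MEMLOCK, which
> is why a small 'ulimit -l' produces error 12 here:
>
>     #include <liburing.h>
>     #include <stdio.h>
>     #include <string.h>
>
>     int main(void)
>     {
>         struct io_uring         ring;
>         struct io_uring_params  params;
>         int                     rc;
>
>         memset(&params, 0, sizeof(params));
>
>         /* 1024 entries is an arbitrary queue size for this sketch */
>         rc = io_uring_queue_init_params(1024, &ring, &params);
>         if (rc < 0) {
>             /* liburing returns -errno; -ENOMEM (12) is what ends up
>              * in error.log when the locked-memory limit is too low */
>             fprintf(stderr, "io_uring_queue_init_params() failed (%d: %s)\n",
>                     -rc, strerror(-rc));
>             return 1;
>         }
>
>         io_uring_queue_exit(&ring);
>         return 0;
>     }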
>
> But, as far as I can see, the numbers in the last two columns of the
> dstat output (interrupts and context switches) are much larger in your
> case than in mine. So, could you please re-test (following my steps)
> when you return to the office?
>
> Thanks in advance!
>
> On 15.02.2021 09:08, Zhao, Ping wrote:
> > Hi Mikhail,
> >
> > Sorry for the late reply, I'm on Chinese New Year holiday leave. I
> > can't see any problem with your steps, but the result is different from
> > mine. It's strange; I'll retry when I'm back in the office next week.
> > Could you first check the io_uring nginx binary with ldd to see if it's
> > linked with liburing?
> >
> > # ldd /path/to/nginx
> >          linux-vdso.so.1 (0x00007ffce4ff8000)
> >          libdl.so.2 => /lib64/libdl.so.2 (0x00007f72aa689000)
> >          liburing.so.1 => /lib64/liburing.so.1 (0x00007f72aa485000)
> > ....
> >
> > Or you can enable the debug log option in the config and check for
> > 'io_uring_peek_cqe' entries in the log:
> >
> > [debug] 53391#53391: io_uring_peek_cqe: START
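> >
> > For context, a rough sketch of what that debug line corresponds to: a
> > non-blocking poll of the io_uring completion queue. poll_completions()
> > here is only an illustration of the liburing calls, not the patch's
> > exact code:
> >
> >     #include <liburing.h>
> >
> >     static void
> >     poll_completions(struct io_uring *ring)
> >     {
> >         struct io_uring_cqe  *cqe;
> >
> >         /* io_uring_peek_cqe() does not block: it returns 0 and fills
> >          * in cqe when a completion is ready, or -EAGAIN when the
> >          * queue is empty */
> >         while (io_uring_peek_cqe(ring, &cqe) == 0) {
> >             /* cqe->res is the number of bytes read, or -errno */
> >             io_uring_cqe_seen(ring, cqe);
> >         }
> >     }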
> >
> > BR,
> > Ping
> >
> > -----Original Message-----
> > From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Mikhail
> Isachenkov
> > Sent: Tuesday, February 9, 2021 9:31 PM
> > To: nginx-devel at nginx.org
> > Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
> >
> > Hi Zhao Ping,
> >
> > Unfortunately, I still couldn't reproduce these results. Maybe you
> > could point out where I'm wrong? Please find my steps below, and the
> > configuration/lua script for wrk attached.
> >
> > 1. Create 90k files on SSD on an Amazon EC2 instance. I created 1k,
> > 100k and 1M files.
> > 2. Create a separate cgroup 'nginx': mkdir /sys/fs/cgroup/memory/nginx
> > 3. Limit memory to 80 Mb, for example:
> >    echo 80M > /sys/fs/cgroup/memory/nginx/memory.limit_in_bytes
> > 4. Disable the limit for locked memory: ulimit -l unlimited
> > 5. Start nginx: cgexec -g memory:nginx /usr/local/sbin/nginx
> > 6. Run wrk on the client:
> >    ./wrk -d 30 -t 100 -c 1000 -s add_random.lua http://...
> >
> > I tried different values for limit_in_bytes (from 80M to 2G) and
> > different file sizes -- 1k, 100k, 1M. In fact, the maximum bandwidth is
> > the same with libaio and io_uring.
> >
> > For example, with 100kb files and 1 worker process:
> >
> > free -lh
> >               total        used        free      shared  buff/cache   available
> > Mem:           15Gi       212Mi        14Gi        13Mi       318Mi        14Gi
> >
> > dstat/libaio
> > ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
> > usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
> >     5   6  73  17   0| 251M    0 |1253k  265M|   0     0 |  33k 1721
> >     4   4  73  17   0| 250M    0 |1267k  264M|   0     0 |  33k 1739
> >     6   5  72  16   0| 250M  924k|1308k  270M|   0     0 |  34k 2017
> >     5   5  72  17   0| 250M    0 |1277k  258M|   0     0 |  34k 1945
> >     5   5  73  17   0| 250M    0 |1215k  263M|   0     0 |  33k 1720
> >     5   5  72  16   0| 250M    0 |1311k  267M|   0     0 |  34k 1721
> >     5   5  73  16   0| 250M    0 |1280k  264M|   0     0 |  34k 1718
> >     6   6  72  16   0| 250M   24k|1362k  268M|   0     0 |  35k 1825
> >     5   5  73  17   0| 250M    0 |1342k  262M|   0     0 |  34k 1726
> > dstat/io_uring
> > ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
> > usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
> >     5   6  60  29   0| 250M    0 |1079k  226M|   0     0 |  36k   10k
> >     5   6  64  25   0| 251M    0 | 906k  204M|   0     0 |  32k 8607
> >     4   6  62  27   0| 250M    0 |1034k  221M|   0     0 |  35k   10k
> >     5   6  63  26   0| 250M   20k| 909k  209M|   0     0 |  32k 8595
> >     4   6  62  27   0| 250M    0 |1003k  217M|   0     0 |  35k   10k
> >     4   5  61  28   0| 250M    0 |1019k  226M|   0     0 |  35k 9700
> >     4   5  62  27   0| 250M    0 | 948k  210M|   0     0 |  32k 8433
> >     4   6  61  28   0| 250M    0 |1094k  216M|   0     0 |  35k 9811
> >     5   6  62  26   0| 250M    0 |1083k  226M|   0     0 |  35k 9479
> >
> > As you can see, libaio is even a bit faster.
> >
> > On 09.02.2021 11:36, Zhao, Ping wrote:
> >> Hi Mikhail,
> >>
> >> The performance improvement of io_uring vs. libaio lies at the disk io
> >> interface, so other factors need to be excluded when testing, such as
> >> the memory cache, which is much faster than disk io.
> >>
> >> If I don't use a memory limitation, libaio and io_uring network
> >> bandwidth is very close, because both of them use memory as the cache
> >> file location, so we can't see the disk io difference. In the
> >> following data, as an example, 17G of memory was used as cache; the
> >> network speed is the same for io_uring and libaio, and both of them
> >> have very little disk io load, which means very low io_uring/libaio
> >> usage.
> >>
> >> memory
> >> free -lh
> >>               total        used        free      shared  buff/cache   available
> >> Mem:          376Gi       3.2Gi       356Gi       209Mi        17Gi       370Gi
> >>
> >> libaio:
> >> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
> >> usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
> >>     1   1  99   0   0|4097B   80k|4554k  104M|   0     0 |  77k 1344
> >>     1   1  98   0   0|8192B  104k|9955k  236M|   0     0 | 151k 1449
> >>     1   1  97   0   0|  56k   32k|  10M  241M|   0     0 | 148k 1652
> >>     2   1  97   0   0|  16k   16k|9552k  223M|   0     0 | 142k 1366
> >>     1   1  97   0   0|  16k   24k|9959k  234M|   0     0 | 146k 1570
> >>     1   1  97   0   0|   0  1064k|  10M  237M|   0     0 | 150k 1472
> >>     2   1  97   0   0|  16k   48k|9650k  227M|   0     0 | 143k 1555
> >>     2   1  97   0   0|  12k   16k|9185k  216M|   0     0 | 139k 1304
> >>
> >> Io_uring:
> >> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
> >> usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
> >>     2   1  97   0   0|   0     0 |9866k  232M|   0     0 | 148k 1286
> >>     2   1  97   0   0|   0     0 |9388k  220M|   0     0 | 144k 1345
> >>     2   1  97   0   0|   0     0 |9080k  213M|   0     0 | 137k 1388
> >>     2   1  97   0   0|   0     0 |9611k  226M|   0     0 | 144k 1615
> >>     1   1  97   0   0|   0   232k|9830k  231M|   0     0 | 147k 1524
> >>
> >> I used an Intel Xeon Platinum 8280L CPU @ 2.70GHz server, with 376G
> >> memory and a 50G network. If I limit Nginx memory to 2GB, the cache
> >> memory will be about 2.6G and won't increase during the test, and the
> >> disk io speed is close to the network speed, which means this setup
> >> shows the disk io difference of libaio vs. io_uring -- and it shows
> >> the io_uring performance improvement. My previous data is based on
> >> this configuration.
> >>
> >> Memory:
> >> free -lh
> >>               total        used        free      shared  buff/cache   available
> >> Mem:          376Gi       3.2Gi       370Gi       141Mi       2.6Gi       370Gi
> >>
> >> Libaio:
> >> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
> >> usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
> >>     1   0  98   1   0|  60M    0 |2925k   68M|   0     0 |  50k   16k
> >>     1   0  98   1   0|  60M 8192B|2923k   68M|   0     0 |  50k   16k
> >>     1   0  98   1   0|  61M    0 |2923k   68M|   0     0 |  50k   16k
> >>     0   0  98   1   0|  60M    0 |2929k   68M|   0     0 |  50k   16k
> >>     1   0  98   1   0|  60M  264k|2984k   69M|   0     0 |  51k   16k
> >>
> >> Io_uring:
> >> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
> >> usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
> >>     1   2  93   4   0| 192M 8192B|7951k  187M|   0     0 | 146k   90k
> >>     1   2  93   4   0| 196M    0 |7953k  187M|   0     0 | 144k   89k
> >>     1   2  93   4   0| 191M  300k|7854k  185M|   0     0 | 145k   87k
> >>     1   2  94   3   0| 186M 8192B|7861k  185M|   0     0 | 143k   86k
> >>     1   2  94   3   0| 180M   16k|7995k  188M|   0     0 | 146k   86k
> >>     2   1  94   3   0| 163M   16k|7273k  171M|   0     0 | 133k   80k
> >>     1   1  94   3   0| 173M 1308k|7995k  188M|   0     0 | 144k   83k
> >>
> >> Server memory won't always be enough for cache storage when traffic
> >> increases, and Nginx will then use disk as cache storage. In this
> >> case, io_uring shows a big performance improvement over libaio at the
> >> disk io interface. This is the value of this patch.
> >>
> >> BR,
> >> Ping
> >>
> >> -----Original Message-----
> >> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of Mikhail
> >> Isachenkov
> >> Sent: Tuesday, February 9, 2021 1:17 AM
> >> To: nginx-devel at nginx.org
> >> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
> >>
> >> Hi Zhao Ping,
> >>
> >> First of all, thank you for pointing me to the AWS patch -- on Fedora
> >> 33 with the 5.10 kernel I don't see any errors now.
> >>
> >> I've tested the patch on an Amazon EC2 NVMe SSD (and found this drive
> >> pretty fast!). The server is an i3en.xlarge and the client a
> >> c5n.2xlarge instance type, with up to 25 Gigabit networking.
> >>
> >> As in the previous test, I created a number of 100kb files, but tried
> >> to reach them via proxy_cache, as on your stand. After warming up the
> >> disk cache, I got the following results:
> >>
> >> a) with 4 worker processes, I've got 3Gb/sec in all tests regardless of
> sendfile/libaio/io_uring.
> >>
> >> b) with 1 worker process, sendfile is faster (up to 1.9 Gb/sec) than
> libaio (1.40 Gb/sec) and io_uring (up to 1.45 Gb/sec).
> >>
> >> I didn't use any memory limitations, but I ran 'echo 3 >
> >> /proc/sys/vm/drop_caches' before each pass. When I tried to limit
> >> memory to 2G with cgroups, the results were generally the same. Maybe
> >> 2G is not enough?
> >>
> >> Could you please run the test for ~60 seconds, and run 'dstat' on
> >> another console? I'd like to check the disk and network bandwidth at
> >> the same timestamps and compare them to mine.
> >>
> >> Thanks in advance!
> >>
> >> On 07.02.2021 05:16, Zhao, Ping wrote:
> >>> Hi Mikhail,
> >>>
> >>> I reproduced your problem with kernel 5.8.0-1010-aws. I also tried
> >>> kernel 5.8.0, which doesn't have this problem. I can confirm there's
> >>> a regression in the AWS patch (linux-aws_5.8.0-1010.10.diff).
> >>>
> >>> Updated the table with the 'sendfile on' & 'aio off' test result for
> >>> 4KB data, which is almost the same as libaio:
> >>>
> >>> Nginx worker_processes 1:
> >>>             4k         100k        1M
> >>> io_uring    220MB/s    1GB/s       1.3GB/s
> >>> libaio      70MB/s     250MB/s     600MB/s (with -c 200, 1.0GB/s)
> >>> sendfile    70MB/s     260MB/s     700MB/s
> >>>
> >>>
> >>> Nginx worker_processes 4:
> >>>             4k         100k        1M
> >>> io_uring    800MB/s    2.5GB/s     2.6GB/s (my nvme disk io maximum bw)
> >>> libaio      250MB/s    900MB/s     2.0GB/s
> >>> sendfile    250MB/s    900MB/s     1.6GB/s
> >>>
> >>> BR,
> >>> Ping
> >>>
> >>> -----Original Message-----
> >>> From: Zhao, Ping
> >>> Sent: Friday, February 5, 2021 2:43 PM
> >>> To: nginx-devel at nginx.org
> >>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
> >>>
> >>> Hi Mikhail,
> >>>
> >>> Added the 'sendfile on' & 'aio off' test result to the previous table:
> >>>
> >>> The following is the test result with 100KB and 1MB (4KB still to be
> >>> tested):
> >>>
> >>> Nginx worker_processes 1:
> >>>             4k         100k        1M
> >>> io_uring    220MB/s    1GB/s       1.3GB/s
> >>> libaio      70MB/s     250MB/s     600MB/s (with -c 200, 1.0GB/s)
> >>> sendfile    tbt        260MB/s     700MB/s
> >>>
> >>>
> >>> Nginx worker_processes 4:
> >>>             4k         100k        1M
> >>> io_uring    800MB/s    2.5GB/s     2.6GB/s (my nvme disk io maximum bw)
> >>> libaio      250MB/s    900MB/s     2.0GB/s
> >>> sendfile    tbt        900MB/s     1.6GB/s
> >>>
> >>> Regards,
> >>> Ping
> >>>
> >>> -----Original Message-----
> >>> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of
> >>> Mikhail Isachenkov
> >>> Sent: Thursday, February 4, 2021 4:55 PM
> >>> To: nginx-devel at nginx.org
> >>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
> >>>
> >>> Hi Zhao Ping,
> >>>
> >>> My test is much simpler than yours. I created
> >>> /usr/local/html/(11111...99999) files on SSD (100 kb size) and wrote
> >>> a small lua script for wrk that adds 5 random digits to the request.
> >>> There are no such errors without the patch with aio enabled.
> >>> These files do not change during the test.
> >>>
> >>> I'll try to reproduce this on CentOS 8 -- which repository do you use
> >>> to install a 5.x kernel?
> >>>
> >>> Also, could you please run the test with 'sendfile on' and 'aio off'
> >>> to get reference numbers for sendfile too?
> >>>
> >>> Thanks in advance!
> >>>
> >>> 04.02.2021 10:08, Zhao, Ping пишет:
> >>>> Another possible cause is that "/usr/local/html/64746" was
> >>>> changed/removed when another user tried to read it.
> >>>>
> >>>> -----Original Message-----
> >>>> From: Zhao, Ping
> >>>> Sent: Thursday, February 4, 2021 10:33 AM
> >>>> To: nginx-devel at nginx.org
> >>>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
> >>>>
> >>>> Hi Mikhail,
> >>>>
> >>>> I didn't see this error in my log. Following is my OS/Kernel:
> >>>> CentOS:  8.1.1911
> >>>> Kernel:    5.7.19
> >>>> Liburing: liburing-1.0.7-3.el8.x86_64,
> >>>> liburing-devel-1.0.7-3.el8.x86_64 (from yum repo)
> >>>>
> >>>> Regarding the error '11: Resource temporarily unavailable': it's
> >>>> probably that too many clients read "/usr/local/html/64746" at one
> >>>> time while it was still locked by a previous read. I tried to
> >>>> reproduce this error with a single file, but it seems nginx
> >>>> automatically stores the single file in memory and I don't see the
> >>>> error. How do you perform the test? I want to reproduce this if
> >>>> possible.
> >>>>
> >>>> My nginx reported this error before:
> >>>> 2021/01/04 05:04:29 [alert] 50769#50769: *11498 pread() read only
> >>>> 7101 of 15530 from "/mnt/cache1/17/68aae9d816ec02340ee617b7ee52a117",
> >>>> client: 11.11.11.3, server: _, request: "GET
> >>>> /_100kobject?version=cdn003191&thread=64 HTTP/1.1", host: "11.11.11.1:8080"
> >>>> That was fixed by my 2nd patch (Jan 25) already.
> >>>>
> >>>> BR,
> >>>> Ping
> >>>>
> >>>> -----Original Message-----
> >>>> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of
> >>>> Mikhail Isachenkov
> >>>> Sent: Wednesday, February 3, 2021 10:11 PM
> >>>> To: nginx-devel at nginx.org
> >>>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
> >>>>
> >>>> Hi Ping Zhao,
> >>>>
> >>>> When I tried to repeat this test, I got a huge number of these
> >>>> errors:
> >>>>
> >>>> 2021/02/03 10:22:48 [crit] 30018#30018: *2 aio read
> >>>> "/usr/local/html/64746" failed (11: Resource temporarily
> >>>> unavailable) while sending response to client, client: 127.0.0.1,
> >>>> server: localhost, request: "GET /64746 HTTP/1.1", host: "localhost"
> >>>>
> >>>> I tested this patch on Ubuntu 20.10 (5.8.0-1010-aws kernel version)
> >>>> and Fedora 33 (5.10.11-200.fc33.x86_64) with the same result.
> >>>>
> >>>> Did you get any errors in the error log with the patch applied?
> >>>> Which OS/kernel did you use for testing? Did you perform any specific
> >>>> tuning before running?
> >>>>
> >>>> On 25.01.2021 11:24, Zhao, Ping wrote:
> >>>>> Hello, here is a small update to correct the length when part of the
> >>>>> request was already received previously.
> >>>>> This case may happen when using io_uring and throughput is increased.
> >>>>>
> >>>>> # HG changeset patch
> >>>>> # User Ping Zhao <ping.zhao at intel.com>
> >>>>> # Date 1611566408 18000
> >>>>> #      Mon Jan 25 04:20:08 2021 -0500
> >>>>> # Node ID f2c91860b7ac4b374fff4353a830cd9427e1d027
> >>>>> # Parent  1372f9ee2e829b5de5d12c05713c307e325e0369
> >>>>> Correct length calculation when part of request received.
> >>>>>
> >>>>> diff -r 1372f9ee2e82 -r f2c91860b7ac src/core/ngx_output_chain.c
> >>>>> --- a/src/core/ngx_output_chain.c Wed Jan 13 11:10:05 2021 -0500
> >>>>> +++ b/src/core/ngx_output_chain.c Mon Jan 25 04:20:08 2021 -0500
> >>>>> @@ -531,6 +531,14 @@
> >>>>>
> >>>>>           size = ngx_buf_size(src);
> >>>>>           size = ngx_min(size, dst->end - dst->pos);
> >>>>> +#if (NGX_HAVE_FILE_IOURING)
> >>>>> +    /*
> >>>>> +     * check if already received part of the request in previous,
> >>>>> +     * calculate the remain length
> >>>>> +     */
> >>>>> +    if(dst->last > dst->pos && size > (dst->last - dst->pos))
> >>>>> +        size = size - (dst->last - dst->pos);
> >>>>> +#endif
> >>>>>
> >>>>>           sendfile = ctx->sendfile && !ctx->directio;
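> >>>>>
> >>>>> To illustrate the case the hunk handles -- a read may complete
> >>>>> short, leaving dst->last ahead of dst->pos, so the next submission
> >>>>> must only ask for the remainder -- here is a rough standalone
> >>>>> liburing sketch of a short-read resubmit loop. read_full() is a
> >>>>> hypothetical helper for illustration, not nginx's actual read path:
> >>>>>
> >>>>>     #include <sys/types.h>
> >>>>>     #include <liburing.h>
> >>>>>
> >>>>>     static ssize_t
> >>>>>     read_full(struct io_uring *ring, int fd, char *buf, size_t len,
> >>>>>         off_t offset)
> >>>>>     {
> >>>>>         struct io_uring_sqe  *sqe;
> >>>>>         struct io_uring_cqe  *cqe;
> >>>>>         size_t                done = 0;
> >>>>>
> >>>>>         while (done < len) {
> >>>>>             sqe = io_uring_get_sqe(ring);
> >>>>>             if (sqe == NULL) {
> >>>>>                 return -1;
> >>>>>             }
> >>>>>
> >>>>>             /* ask only for the remainder, like the hunk's
> >>>>>              * "size -= dst->last - dst->pos" adjustment */
> >>>>>             io_uring_prep_read(sqe, fd, buf + done, len - done,
> >>>>>                                offset + done);
> >>>>>             io_uring_submit(ring);
> >>>>>
> >>>>>             if (io_uring_wait_cqe(ring, &cqe) < 0) {
> >>>>>                 return -1;
> >>>>>             }
> >>>>>
> >>>>>             if (cqe->res <= 0) {            /* -errno or EOF */
> >>>>>                 io_uring_cqe_seen(ring, cqe);
> >>>>>                 return -1;
> >>>>>             }
> >>>>>
> >>>>>             done += cqe->res;               /* bytes received so far */
> >>>>>             io_uring_cqe_seen(ring, cqe);
> >>>>>         }
> >>>>>
> >>>>>         return done;
> >>>>>     }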
> >>>>>
> >>>>> -----Original Message-----
> >>>>> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of
> >>>>> Zhao, Ping
> >>>>> Sent: Thursday, January 21, 2021 9:44 AM
> >>>>> To: nginx-devel at nginx.org
> >>>>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
> >>>>>
> >>>>> Hi Vladimir,
> >>>>>
> >>>>> No special/extra configuration is needed, but check that 'aio on'
> >>>>> and 'sendfile off' are set correctly. This is my Nginx config for
> >>>>> reference:
> >>>>>
> >>>>> user nobody;
> >>>>> daemon off;
> >>>>> worker_processes 1;
> >>>>> error_log error.log ;
> >>>>> events {
> >>>>>          worker_connections 65535;
> >>>>>          use epoll;
> >>>>> }
> >>>>>
> >>>>> http {
> >>>>>          include mime.types;
> >>>>>          default_type application/octet-stream;
> >>>>>          access_log on;
> >>>>>          aio on;
> >>>>>          sendfile off;
> >>>>>          directio 2k;
> >>>>>
> >>>>>          # Cache Configurations
> >>>>>          proxy_cache_path /mnt/cache0 levels=2 keys_zone=nginx-cache0:400m
> >>>>>                           max_size=1400g inactive=4d use_temp_path=off;
> >>>>>          ......
> >>>>>
> >>>>>
> >>>>> To better measure the disk io performance data, I do the following
> >>>>> steps:
> >>>>> 1. To exclude other impact and focus on the disk io part (this patch
> >>>>> only impacts the disk aio read process), use cgroup to limit Nginx
> >>>>> memory usage. Otherwise Nginx may also use memory as cache storage,
> >>>>> which makes the result less clear-cut (most cache hits are then
> >>>>> served from memory and disk io bw stays low, as in my previous mail,
> >>>>> which didn't exclude the memory cache impact).
> >>>>>           echo 2G > memory.limit_in_bytes
> >>>>>           use 'cgexec -g memory:nginx' to start Nginx.
> >>>>>
> >>>>> 2. Use wrk -t 100 -c 1000, with 25000 random http requests.
> >>>>>           My previous test used -c 200 connections; comparing 200
> >>>>> with 1000 connections, libaio performance drops more as the
> >>>>> connection number increases, but io_uring's doesn't. It's another
> >>>>> advantage of io_uring.
> >>>>>
> >>>>> 3. First clean the cache disk and run the test for 30 minutes to
> >>>>> let Nginx store the cache files on the nvme disk as much as possible.
> >>>>>
> >>>>> 4. Rerun the test; this time Nginx will use ngx_file_aio_read to
> >>>>> read the cache files from the nvme cache disk. Use iostat to track
> >>>>> the io data. The data should align with the NIC bw, since all data
> >>>>> should come from the cache disk (with the memory-as-cache impact
> >>>>> excluded).
> >>>>>
> >>>>> Following is the test result:
> >>>>>
> >>>>> Nginx worker_processes 1:
> >>>>>             4k         100k        1M
> >>>>> io_uring    220MB/s    1GB/s       1.3GB/s
> >>>>> libaio      70MB/s     250MB/s     600MB/s (with -c 200, 1.0GB/s)
> >>>>>
> >>>>>
> >>>>> Nginx worker_processes 4:
> >>>>>             4k         100k        1M
> >>>>> io_uring    800MB/s    2.5GB/s     2.6GB/s (my nvme disk io maximum bw)
> >>>>> libaio      250MB/s    900MB/s     2.0GB/s
> >>>>>
> >>>>> So for small requests, io_uring has a huge improvement over libaio.
> >>>>> In the previous mail, because I didn't exclude the memory cache
> >>>>> impact, most cache files were stored in memory and very few came
> >>>>> from disk in the 4k/100k cases, so that data was not correct. (For
> >>>>> 1M, the cache was too big to store in memory, so it was on disk.)
> >>>>> I also enabled the directio option "directio 2k" this time to avoid
> >>>>> this.
> >>>>>
> >>>>> Regards,
> >>>>> Ping
> >>>>>
> >>>>> -----Original Message-----
> >>>>> From: nginx-devel <nginx-devel-bounces at nginx.org> On Behalf Of
> >>>>> Vladimir Homutov
> >>>>> Sent: Wednesday, January 20, 2021 12:43 AM
> >>>>> To: nginx-devel at nginx.org
> >>>>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
> >>>>>
> >>>>> On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote:
> >>>>>> It depends on whether disk io is the performance hot spot or not.
> >>>>>> If yes, io_uring shows an improvement over libaio. With 4KB/100KB
> >>>>>> length and 1 Nginx thread it's hard to see a performance
> >>>>>> difference, because iostat shows only around ~10MB/100MB per
> >>>>>> second; disk io is not the performance bottleneck, and both libaio
> >>>>>> and io_uring have the same performance. If you increase the request
> >>>>>> size or the Nginx thread number, for example to 1MB length or 4
> >>>>>> Nginx threads, disk io becomes the performance bottleneck and you
> >>>>>> will see the io_uring performance improvement.
> >>>>>
> >>>>> Can you please provide full test results with the specific nginx
> >>>>> configuration?
> >>>>>
> >>>>>
> >>>>
> >>>> --
> >>>> Best regards,
> >>>> Mikhail Isachenkov
> >>>> NGINX Professional Services
> >>>>
> >>>
> >>> --
> >>> Best regards,
> >>> Mikhail Isachenkov
> >>> NGINX Professional Services
> >>>
> >>
> >> --
> >> Best regards,
> >> Mikhail Isachenkov
> >> NGINX Professional Services
> >>
> >
> > --
> > Best regards,
> > Mikhail Isachenkov
> > NGINX Professional Services
> >
>
> --
> Best regards,
> Mikhail Isachenkov
> NGINX Professional Services
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel