nginx KTLS and HTTP/2 performance degradation
Lyuben Stoev
lyuben at stoev.eu
Thu Dec 2 12:05:52 UTC 2021
Hello,
I have tested nginx with the patch
https://hg.nginx.org/nginx/rev/65946a191197 ("SSL: SSL_sendfile() support
with kernel TLS."), following the nginx blog article
https://www.nginx.com/blog/improving-nginx-performance-with-kernel-tls/
It mostly works, but performance is poor when making HTTP/2 requests. With
an HTTP/1.1 request I see the 30-35% increase in performance that the nginx
blog article describes, but when I switch the same request to HTTP/2 it is
about 40% slower than an ordinary nginx without KTLS enabled. Does anyone
else see such performance degradation with nginx KTLS and HTTP/2? I am
using a generic setup: Ubuntu 20.04.3 LTS with kernel 5.8.0-63-generic (the
results are the same with 5.4.0-91-generic). The nginx virtual host is the
same as in the blog article, except that http2 is added to the listen
directive. OpenSSL 3.0.0 and nginx 1.21.4 are used.
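For completeness, the virtual host looks roughly like this (a sketch; the
certificate paths and document root are placeholders, the rest follows the
blog article):

    server {
        listen 443 ssl http2;                      # http2 is the only addition to the blog's example
        server_name mytests.local;

        ssl_certificate     /etc/ssl/mytests.local.crt;    # placeholder paths
        ssl_certificate_key /etc/ssl/mytests.local.key;
        ssl_protocols       TLSv1.3;
        ssl_conf_command    Options KTLS;          # enable kernel TLS with OpenSSL 3.0.0

        sendfile on;                               # required for SSL_sendfile()

        root /var/www/html;                        # the 1G test file lives here
    }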
KTLS itself seems to work, because the strace and debug logs show it. The
strange thing is that when using HTTP/2, the sendfile syscalls look like this:
write(39, "\0 \0\0\0\0\0\0\1", 9) = 9
sendfile(39, 131, [1418218] => [1426410], 8192) = 8192
write(39, "\0 \0\0\0\0\0\0\1", 9) = 9
sendfile(39, 131, [1426410] => [1434602], 8192) = 8192
write(39, "\0 \0\0\0\0\0\0\1", 9) = 9
sendfile(39, 131, [1434602] => [1442794], 8192) = 8192
write(39, "\0 \0\0\0\0\0\0\1", 9) = 9
sendfile(39, 131, [1442794] => [1450986], 8192) = 8192
write(39, "\0 \0\0\0\0\0\0\1", 9) = 9
sendfile(39, 131, [1450986] => [1459178], 8192) = 8192
write(39, "\0 \0\0\0\0\0\0\1", 9) = 9
sendfile(39, 131, [1459178] => [1467370], 8192) = 8192
It is always 8K per call, and there are thousands of these sendfile syscalls...
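If I decode the 9-byte write correctly, "\0 \0\0\0\0\0\0\1" is an HTTP/2
frame header (length 0x2000 = 8192, type DATA, stream 1), so each sendfile()
pushes exactly one DATA frame, which matches the default http2_chunk_size of
8k. As an experiment one could try larger DATA frames, for example (untested,
just a sketch):

    # http{}, server{} or location{} context of the test vhost
    http2_chunk_size 128k;    # default is 8k; larger frames mean fewer write()+sendfile() pairs

but even then every frame still needs a separate write() for its header, so
I am not sure this can reach the HTTP/1.1 numbers.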
When HTTP/1.1 is used, sendfile is called with a much larger count instead:
sendfile(32, 33, [586170368] => [586694656], 487571456) = 524288
sendfile(32, 33, [586694656], 487047168) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(8, [{EPOLLOUT, {u32=1602391248, u64=94426458393808}}], 512, 59711) = 1
sendfile(32, 33, [586694656] => [594673664], 487047168) = 7979008
sendfile(32, 33, [594673664] => [595001344], 479068160) = 327680
sendfile(32, 33, [595001344], 478740480) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(8, [{EPOLLOUT, {u32=1602391248, u64=94426458393808}}], 512, 60000) = 1
sendfile(32, 33, [595001344] => [604028928], 478740480) = 9027584
sendfile(32, 33, [604028928] => [604356608], 469712896) = 327680
sendfile(32, 33, [604356608] => [604749824], 469385216) = 393216
sendfile(32, 33, [604749824] => [605192192], 468992000) = 442368
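For anyone who wants to check their own setup, something along these lines
should show the same picture (attaching to a single worker; the pgrep
selection is just an example):

    strace -tt -e trace=write,sendfile,epoll_wait -p "$(pgrep -f 'nginx: worker' | head -n 1)"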
Does anyone see similar results? For reference, here are my curl and h2load runs:
root@srv-tests:~# curl --http1.1 --tls13-ciphers TLS_AES_256_GCM_SHA384 https://mytests.local/1G --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0  1170M      0 --:--:-- --:--:-- --:--:-- 1168M

root@srv-tests:~# curl --http1.1 --tls13-ciphers TLS_AES_256_GCM_SHA384 https://mytests.local/1G --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0  1197M      0 --:--:-- --:--:-- --:--:-- 1196M

root@srv-tests:~# curl --http2 --tls13-ciphers TLS_AES_256_GCM_SHA384 https://mytests.local/1G --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0   340M      0  0:00:03  0:00:03 --:--:--  340M

root@srv-tests:~# curl --http2 --tls13-ciphers TLS_AES_256_GCM_SHA384 https://mytests.local/1G --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0   338M      0  0:00:03  0:00:03 --:--:--  338M
srv-prod ~ # h2load -n 1 -c 1 -t 1 https://mytests.local/1G
starting benchmark...
spawning thread #0: 1 total client(s). 1 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_256_GCM_SHA384
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 100% done
finished in 7.64s, 0.13 req/s, 134.11MB/s
requests: 1 total, 1 started, 1 done, 1 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.00GB (1074922031) total, 492B (492) headers (space savings 21.66%), 1.00GB (1073741824) data
                     min         max         mean         sd        +/- sd
time for request:      7.63s       7.63s       7.63s          0us   100.00%
time for connect:    13.46ms     13.46ms     13.46ms         0us   100.00%
time to 1st byte:    15.05ms     15.05ms     15.05ms         0us   100.00%
req/s           :       0.13        0.13        0.13        0.00   100.00%
srv-prod ~ # h2load --h1 -n 1 -c 1 -t 1 https://mytests.local/1G
starting benchmark...
spawning thread #0: 1 total client(s). 1 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_256_GCM_SHA384
Server Temp Key: X25519 253 bits
Application protocol: http/1.1
progress: 100% done
finished in 2.65s, 0.38 req/s, 386.41MB/s
requests: 1 total, 1 started, 1 done, 1 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.00GB (1073742546) total, 631B (631) headers (space savings 0.00%), 1.00GB (1073741824) data
                     min         max         mean         sd        +/- sd
time for request:      2.64s       2.64s       2.64s          0us   100.00%
time for connect:    13.72ms     13.72ms     13.72ms         0us   100.00%
time to 1st byte:    14.86ms     14.86ms     14.86ms         0us   100.00%
req/s           :       0.38        0.38        0.38        0.00   100.00%
With regards,
Lyuben Stoev