From hongyi.zhao at gmail.com  Tue Jan 5 12:08:23 2021
From: hongyi.zhao at gmail.com (Hongyi Zhao)
Date: Tue, 5 Jan 2021 20:08:23 +0800
Subject: About the native rtmps protocol support in nginx.
Message-ID: 

Currently, on Ubuntu 20.10, I've compiled the latest git master version
of FFmpeg with native TLS/SSL support through the following
configuration option:

$ ./configure --enable-openssl

$ ffmpeg -protocols |& egrep -i 'in|out|rtmps'
 Input: rtmps
 Output: rtmps

At this moment, I want to use Nginx as the media streaming server for
the rtmps protocol, and I've noticed that rtmp protocol support in
nginx is provided by this module: .

But I'm not sure whether the module mentioned above supports the rtmps
protocol natively. Any hints will be greatly appreciated.

Regards,
-- 
Assoc. Prof. Hongyi Zhao
Theory and Simulation of Materials
Hebei Polytechnic University of Science and Technology Engineering
NO. 552 North Gangtie Road, Xingtai, China

From cnewton at netflix.com  Tue Jan 5 13:24:04 2021
From: cnewton at netflix.com (Chris Newton)
Date: Tue, 5 Jan 2021 13:24:04 +0000
Subject: Remove unnecessary check in ngx_http_stub_status_handler()
Message-ID: 

I was desk-checking the return codes generated in handlers following
calls to ngx_http_send_header(), and noticed what appears to be an
unnecessary test in ngx_http_stub_status_handler() -- or rather, I think
the test should always evaluate as true, and if somehow it isn't, odd
things could occur - at least an additional ALERT message would be
logged, as well as some unnecessary work performed.

As such, I'd like to propose the following change:

--- a/src/http/modules/ngx_http_stub_status_module.c
+++ b/src/http/modules/ngx_http_stub_status_module.c
@@ -106,11 +106,7 @@ ngx_http_stub_status_handler(ngx_http_request_t *r)

     if (r->method == NGX_HTTP_HEAD) {
         r->headers_out.status = NGX_HTTP_OK;

-        rc = ngx_http_send_header(r);
-
-        if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
-            return rc;
-        }
+        return ngx_http_send_header(r);
     }

     size = sizeof("Active connections: \n") + NGX_ATOMIC_T_LEN

On a successful call to ngx_http_send_header() I believe that
r->header_only will be set true, and otherwise I'd expect one of those
error checks to evaluate true, so unconditionally returning the value
from ngx_http_send_header() seems 'cleaner'. If the test were to somehow
fail, then processing would fall through and try the
ngx_http_send_header() call again (resulting in the ALERT message), as
well as performing other additional work that should be unnecessary when
making a HEAD request.

That test seems to be SOP after calling ngx_http_send_header(), but it
seems inappropriate when that function is called within an
"r->method == NGX_HTTP_HEAD" block.

TIA

Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ping.zhao at intel.com  Mon Jan 11 07:05:28 2021
From: ping.zhao at intel.com (Zhao, Ping)
Date: Mon, 11 Jan 2021 07:05:28 +0000
Subject: [PATCH] Use io_uring for async io access
In-Reply-To: 
References: 
Message-ID: 

Hello Nginx Developers,

This is a patch adding io_uring for async io access in Nginx. Would like
to receive your comments.

Thanks,
Ping

# HG changeset patch
# User Ping Zhao >
# Date 1610370434 18000
# Mon Jan 11 08:07:14 2021 -0500
# Node ID 3677cf19b98b054614030b80f73728b02fdda832
# Parent 82228f955153527fba12211f52bf102c90f38dfb
Use io_uring for async io access.

Replace aio with io_uring in async disk io access. io_uring is a new
kernel feature for async io access.
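For reference, the basic liburing read cycle that the patch maps onto
nginx's event loop looks roughly like the minimal sketch below (plain
public liburing calls; the file name and buffer size are placeholders,
and error handling is mostly omitted):

#include <fcntl.h>
#include <stdio.h>
#include <liburing.h>

int
main(void)
{
    int                   fd;
    char                  buf[4096];
    struct io_uring       ring;
    struct io_uring_sqe  *sqe;
    struct io_uring_cqe  *cqe;

    /* create the submission/completion rings (64 entries) */
    if (io_uring_queue_init(64, &ring, 0) < 0) {
        return 1;
    }

    fd = open("testfile", O_RDONLY);
    if (fd < 0) {
        return 1;
    }

    sqe = io_uring_get_sqe(&ring);             /* take a free submission entry */
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);  /* read at offset 0 */
    io_uring_sqe_set_data(sqe, buf);           /* opaque pointer echoed in the cqe */

    io_uring_submit(&ring);                    /* one syscall submits the batch */

    if (io_uring_wait_cqe(&ring, &cqe) == 0) {
        /* cqe->res is the byte count on success, or -errno on failure */
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);         /* mark the completion consumed */
    }

    io_uring_queue_exit(&ring);
    return 0;
}

Note that the patch never blocks in io_uring_wait_cqe(): the ring fd is
registered with epoll instead, and completions are drained in
ngx_epoll_io_uring_handler() below.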
Nginx can use io_uring for legacy disk aio access (for example, disk
cache file access).

Checking with iostat shows that nvme disk io has a 30%+ performance
improvement with 1 thread. Tested with wrk with 100 threads, 200
connections (-t 100 -c 1000), and 25000 random requests.

iostat(B/s)
libaio      1.0 GB/s
io_uring    1.3+ GB/s

Patch contributors: Carter Li, Ping Zhao

diff -r 82228f955153 -r 3677cf19b98b auto/unix
--- a/auto/unix Tue Dec 15 17:41:39 2020 +0300
+++ b/auto/unix Mon Jan 11 08:07:14 2021 -0500
@@ -532,44 +532,23 @@

     if [ $ngx_found = no ]; then

-        ngx_feature="Linux AIO support"
+        ngx_feature="Linux io_uring support (liburing)"
         ngx_feature_name="NGX_HAVE_FILE_AIO"
         ngx_feature_run=no
-        ngx_feature_incs="#include 
-                          #include "
+        ngx_feature_incs="#include "
         ngx_feature_path=
-        ngx_feature_libs=
-        ngx_feature_test="struct iocb  iocb;
-                          iocb.aio_lio_opcode = IOCB_CMD_PREAD;
-                          iocb.aio_flags = IOCB_FLAG_RESFD;
-                          iocb.aio_resfd = -1;
-                          (void) iocb;
-                          (void) eventfd(0, 0)"
+        ngx_feature_libs="-luring"
+        ngx_feature_test="struct io_uring ring;
+                          int ret = io_uring_queue_init(64, &ring, 0);
+                          if (ret < 0) return 1;
+                          io_uring_queue_exit(&ring);"
        . auto/feature

        if [ $ngx_found = yes ]; then
            have=NGX_HAVE_EVENTFD . auto/have
            have=NGX_HAVE_SYS_EVENTFD_H . auto/have
            CORE_SRCS="$CORE_SRCS $LINUX_AIO_SRCS"
-        fi
-    fi
-
-    if [ $ngx_found = no ]; then
-
-        ngx_feature="Linux AIO support (SYS_eventfd)"
-        ngx_feature_incs="#include 
-                          #include "
-        ngx_feature_test="struct iocb  iocb;
-                          iocb.aio_lio_opcode = IOCB_CMD_PREAD;
-                          iocb.aio_flags = IOCB_FLAG_RESFD;
-                          iocb.aio_resfd = -1;
-                          (void) iocb;
-                          (void) SYS_eventfd"
-        . auto/feature
-
-        if [ $ngx_found = yes ]; then
-            have=NGX_HAVE_EVENTFD . auto/have
-            CORE_SRCS="$CORE_SRCS $LINUX_AIO_SRCS"
+            CORE_LIBS="$CORE_LIBS -luring"
        fi
    fi

@@ -577,7 +556,7 @@
cat << END

$0: no supported file AIO was found
-Currently file AIO is supported on FreeBSD 4.3+ and Linux 2.6.22+ only
+Currently file AIO is supported on FreeBSD 4.3+ and Linux 5.1.0+ (requires liburing) only

END
        exit 1
diff -r 82228f955153 -r 3677cf19b98b src/core/ngx_open_file_cache.c
--- a/src/core/ngx_open_file_cache.c Tue Dec 15 17:41:39 2020 +0300
+++ b/src/core/ngx_open_file_cache.c Mon Jan 11 08:07:14 2021 -0500
@@ -869,8 +869,8 @@
     if (!of->log) {

         /*
-         * Use non-blocking open() not to hang on FIFO files, etc.
-         * This flag has no effect on a regular files.
+         * Differs from plain read, IORING_OP_READV with O_NONBLOCK
+         * will return -EAGAIN if the operation may block.
*/ fd = ngx_open_file_wrapper(name, of, NGX_FILE_RDONLY|NGX_FILE_NONBLOCK, diff -r 82228f955153 -r 3677cf19b98b src/core/ngx_output_chain.c --- a/src/core/ngx_output_chain.c Tue Dec 15 17:41:39 2020 +0300 +++ b/src/core/ngx_output_chain.c Mon Jan 11 08:07:14 2021 -0500 @@ -589,6 +589,20 @@ if (ctx->aio_handler) { n = ngx_file_aio_read(src->file, dst->pos, (size_t) size, src->file_pos, ctx->pool); + + if (n > 0 && n < size) { + ngx_log_error(NGX_LOG_INFO, ctx->pool->log, 0, + ngx_read_file_n " Try again, read only %z of %O from \"%s\"", + n, size, src->file->name.data); + + src->file_pos += n; + dst->last += n; + + n = ngx_file_aio_read(src->file, dst->pos+n, (size_t) size-n, + src->file_pos, ctx->pool); + + } + if (n == NGX_AGAIN) { ctx->aio_handler(ctx, src->file); return NGX_AGAIN; diff -r 82228f955153 -r 3677cf19b98b src/event/modules/ngx_epoll_module.c --- a/src/event/modules/ngx_epoll_module.c Tue Dec 15 17:41:39 2020 +0300 +++ b/src/event/modules/ngx_epoll_module.c Mon Jan 11 08:07:14 2021 -0500 @@ -9,6 +9,10 @@ #include #include +#if (NGX_HAVE_FILE_AIO) +#include +#endif + #if (NGX_TEST_BUILD_EPOLL) @@ -75,23 +79,6 @@ #define SYS_eventfd 323 #endif -#if (NGX_HAVE_FILE_AIO) - -#define SYS_io_setup 245 -#define SYS_io_destroy 246 -#define SYS_io_getevents 247 - -typedef u_int aio_context_t; - -struct io_event { - uint64_t data; /* the data field from the iocb */ - uint64_t obj; /* what iocb this event came from */ - int64_t res; /* result code for this event */ - int64_t res2; /* secondary result */ -}; - - -#endif #endif /* NGX_TEST_BUILD_EPOLL */ @@ -124,7 +111,7 @@ ngx_uint_t flags); #if (NGX_HAVE_FILE_AIO) -static void ngx_epoll_eventfd_handler(ngx_event_t *ev); +static void ngx_epoll_io_uring_handler(ngx_event_t *ev); #endif static void *ngx_epoll_create_conf(ngx_cycle_t *cycle); @@ -141,13 +128,11 @@ #endif #if (NGX_HAVE_FILE_AIO) - -int ngx_eventfd = -1; -aio_context_t ngx_aio_ctx = 0; +struct io_uring ngx_ring; +struct io_uring_params ngx_ring_params; -static ngx_event_t ngx_eventfd_event; -static ngx_connection_t ngx_eventfd_conn; - +static ngx_event_t ngx_ring_event; +static ngx_connection_t ngx_ring_conn; #endif #if (NGX_HAVE_EPOLLRDHUP) @@ -217,102 +202,40 @@ #if (NGX_HAVE_FILE_AIO) -/* - * We call io_setup(), io_destroy() io_submit(), and io_getevents() directly - * as syscalls instead of libaio usage, because the library header file - * supports eventfd() since 0.3.107 version only. 
- */ - -static int -io_setup(u_int nr_reqs, aio_context_t *ctx) -{ - return syscall(SYS_io_setup, nr_reqs, ctx); -} - - -static int -io_destroy(aio_context_t ctx) -{ - return syscall(SYS_io_destroy, ctx); -} - - -static int -io_getevents(aio_context_t ctx, long min_nr, long nr, struct io_event *events, - struct timespec *tmo) -{ - return syscall(SYS_io_getevents, ctx, min_nr, nr, events, tmo); -} - - static void ngx_epoll_aio_init(ngx_cycle_t *cycle, ngx_epoll_conf_t *epcf) { - int n; struct epoll_event ee; -#if (NGX_HAVE_SYS_EVENTFD_H) - ngx_eventfd = eventfd(0, 0); -#else - ngx_eventfd = syscall(SYS_eventfd, 0); -#endif - - if (ngx_eventfd == -1) { + if (io_uring_queue_init_params(32763, &ngx_ring, &ngx_ring_params) < 0) { ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, - "eventfd() failed"); - ngx_file_aio = 0; - return; - } - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, - "eventfd: %d", ngx_eventfd); - - n = 1; - - if (ioctl(ngx_eventfd, FIONBIO, &n) == -1) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, - "ioctl(eventfd, FIONBIO) failed"); + "io_uring_queue_init_params() failed"); goto failed; } - if (io_setup(epcf->aio_requests, &ngx_aio_ctx) == -1) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, - "io_setup() failed"); - goto failed; - } - - ngx_eventfd_event.data = &ngx_eventfd_conn; - ngx_eventfd_event.handler = ngx_epoll_eventfd_handler; - ngx_eventfd_event.log = cycle->log; - ngx_eventfd_event.active = 1; - ngx_eventfd_conn.fd = ngx_eventfd; - ngx_eventfd_conn.read = &ngx_eventfd_event; - ngx_eventfd_conn.log = cycle->log; + ngx_ring_event.data = &ngx_ring_conn; + ngx_ring_event.handler = ngx_epoll_io_uring_handler; + ngx_ring_event.log = cycle->log; + ngx_ring_event.active = 1; + ngx_ring_conn.fd = ngx_ring.ring_fd; + ngx_ring_conn.read = &ngx_ring_event; + ngx_ring_conn.log = cycle->log; ee.events = EPOLLIN|EPOLLET; - ee.data.ptr = &ngx_eventfd_conn; + ee.data.ptr = &ngx_ring_conn; - if (epoll_ctl(ep, EPOLL_CTL_ADD, ngx_eventfd, &ee) != -1) { + if (epoll_ctl(ep, EPOLL_CTL_ADD, ngx_ring.ring_fd, &ee) != -1) { return; } ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, "epoll_ctl(EPOLL_CTL_ADD, eventfd) failed"); - if (io_destroy(ngx_aio_ctx) == -1) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "io_destroy() failed"); - } + io_uring_queue_exit(&ngx_ring); failed: - if (close(ngx_eventfd) == -1) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "eventfd close() failed"); - } - - ngx_eventfd = -1; - ngx_aio_ctx = 0; + ngx_ring.ring_fd = 0; ngx_file_aio = 0; } @@ -549,23 +472,11 @@ #if (NGX_HAVE_FILE_AIO) - if (ngx_eventfd != -1) { - - if (io_destroy(ngx_aio_ctx) == -1) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "io_destroy() failed"); - } - - if (close(ngx_eventfd) == -1) { - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, - "eventfd close() failed"); - } - - ngx_eventfd = -1; + if (ngx_ring.ring_fd != 0) { + io_uring_queue_exit(&ngx_ring); + ngx_ring.ring_fd = 0; } - ngx_aio_ctx = 0; - #endif ngx_free(event_list); @@ -939,84 +850,36 @@ #if (NGX_HAVE_FILE_AIO) static void -ngx_epoll_eventfd_handler(ngx_event_t *ev) +ngx_epoll_io_uring_handler(ngx_event_t *ev) { - int n, events; - long i; - uint64_t ready; - ngx_err_t err; ngx_event_t *e; + struct io_uring_cqe *cqe; + unsigned head; + unsigned cqe_count = 0; ngx_event_aio_t *aio; - struct io_event event[64]; - struct timespec ts; - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, ev->log, 0, "eventfd handler"); - - n = read(ngx_eventfd, &ready, 8); + 
ngx_log_debug(NGX_LOG_DEBUG_EVENT, ev->log, 0, + "io_uring_peek_cqe: START"); - err = ngx_errno; - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ev->log, 0, "eventfd: %d", n); + io_uring_for_each_cqe(&ngx_ring, head, cqe) { + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, + "io_event: %p %d %d", + cqe->user_data, cqe->res, cqe->flags); - if (n != 8) { - if (n == -1) { - if (err == NGX_EAGAIN) { - return; - } + e = (ngx_event_t *) io_uring_cqe_get_data(cqe); + e->complete = 1; + e->active = 0; + e->ready = 1; - ngx_log_error(NGX_LOG_ALERT, ev->log, err, "read(eventfd) failed"); - return; - } + aio = e->data; + aio->res = cqe->res; - ngx_log_error(NGX_LOG_ALERT, ev->log, 0, - "read(eventfd) returned only %d bytes", n); - return; + ++cqe_count; + + ngx_post_event(e, &ngx_posted_events); } - ts.tv_sec = 0; - ts.tv_nsec = 0; - - while (ready) { - - events = io_getevents(ngx_aio_ctx, 1, 64, event, &ts); - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ev->log, 0, - "io_getevents: %d", events); - - if (events > 0) { - ready -= events; - - for (i = 0; i < events; i++) { - - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, ev->log, 0, - "io_event: %XL %XL %L %L", - event[i].data, event[i].obj, - event[i].res, event[i].res2); - - e = (ngx_event_t *) (uintptr_t) event[i].data; - - e->complete = 1; - e->active = 0; - e->ready = 1; - - aio = e->data; - aio->res = event[i].res; - - ngx_post_event(e, &ngx_posted_events); - } - - continue; - } - - if (events == 0) { - return; - } - - /* events == -1 */ - ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_errno, - "io_getevents() failed"); - return; - } + io_uring_cq_advance(&ngx_ring, cqe_count); } #endif diff -r 82228f955153 -r 3677cf19b98b src/event/ngx_event.h --- a/src/event/ngx_event.h Tue Dec 15 17:41:39 2020 +0300 +++ b/src/event/ngx_event.h Mon Jan 11 08:07:14 2021 -0500 @@ -160,7 +160,9 @@ size_t nbytes; #endif - ngx_aiocb_t aiocb; + /* Make sure that this iov has the same lifecycle with its associated aio event */ + struct iovec iov; + ngx_event_t event; }; diff -r 82228f955153 -r 3677cf19b98b src/os/unix/ngx_linux_aio_read.c --- a/src/os/unix/ngx_linux_aio_read.c Tue Dec 15 17:41:39 2020 +0300 +++ b/src/os/unix/ngx_linux_aio_read.c Mon Jan 11 08:07:14 2021 -0500 @@ -9,20 +9,16 @@ #include #include +#include -extern int ngx_eventfd; -extern aio_context_t ngx_aio_ctx; + +extern struct io_uring ngx_ring; +extern struct io_uring_params ngx_ring_params; static void ngx_file_aio_event_handler(ngx_event_t *ev); -static int -io_submit(aio_context_t ctx, long n, struct iocb **paiocb) -{ - return syscall(SYS_io_submit, ctx, n, paiocb); -} - ngx_int_t ngx_file_aio_init(ngx_file_t *file, ngx_pool_t *pool) @@ -50,10 +46,10 @@ ngx_file_aio_read(ngx_file_t *file, u_char *buf, size_t size, off_t offset, ngx_pool_t *pool) { - ngx_err_t err; - struct iocb *piocb[1]; - ngx_event_t *ev; - ngx_event_aio_t *aio; + ngx_err_t err; + ngx_event_t *ev; + ngx_event_aio_t *aio; + struct io_uring_sqe *sqe; if (!ngx_file_aio) { return ngx_read_file(file, buf, size, offset); @@ -93,22 +89,41 @@ return NGX_ERROR; } - ngx_memzero(&aio->aiocb, sizeof(struct iocb)); + sqe = io_uring_get_sqe(&ngx_ring); + + if (!sqe) { + ngx_log_debug4(NGX_LOG_DEBUG_CORE, file->log, 0, + "aio no sqe left:%d @%O:%uz %V", + ev->complete, offset, size, &file->name); + return ngx_read_file(file, buf, size, offset); + } - aio->aiocb.aio_data = (uint64_t) (uintptr_t) ev; - aio->aiocb.aio_lio_opcode = IOCB_CMD_PREAD; - aio->aiocb.aio_fildes = file->fd; - aio->aiocb.aio_buf = (uint64_t) (uintptr_t) buf; - aio->aiocb.aio_nbytes = size; - 
aio->aiocb.aio_offset = offset; - aio->aiocb.aio_flags = IOCB_FLAG_RESFD; - aio->aiocb.aio_resfd = ngx_eventfd; + if (__builtin_expect(!!(ngx_ring_params.features & IORING_FEAT_CUR_PERSONALITY), 1)) { + /* + * `io_uring_prep_read` is faster than `io_uring_prep_readv`, because the kernel + * doesn't need to import iovecs in advance. + * + * If the kernel supports `IORING_FEAT_CUR_PERSONALITY`, it should support + * non-vectored read/write commands too. + * + * It's not perfect, but avoids an extra feature-test syscall. + */ + io_uring_prep_read(sqe, file->fd, buf, size, offset); + } else { + /* + * We must store iov into heap to prevent kernel from returning -EFAULT + * in case `IORING_FEAT_SUBMIT_STABLE` is not supported + */ + aio->iov.iov_base = buf; + aio->iov.iov_len = size; + io_uring_prep_readv(sqe, file->fd, &aio->iov, 1, offset); + } + io_uring_sqe_set_data(sqe, ev); + ev->handler = ngx_file_aio_event_handler; - piocb[0] = &aio->aiocb; - - if (io_submit(ngx_aio_ctx, 1, piocb) == 1) { + if (io_uring_submit(&ngx_ring) == 1) { ev->active = 1; ev->ready = 0; ev->complete = 0; diff -r 82228f955153 -r 3677cf19b98b src/os/unix/ngx_linux_config.h --- a/src/os/unix/ngx_linux_config.h Tue Dec 15 17:41:39 2020 +0300 +++ b/src/os/unix/ngx_linux_config.h Mon Jan 11 08:07:14 2021 -0500 @@ -93,10 +93,6 @@ #include #endif #include -#if (NGX_HAVE_FILE_AIO) -#include -typedef struct iocb ngx_aiocb_t; -#endif #if (NGX_HAVE_CAPABILITIES) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 11 19:30:17 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Jan 2021 19:30:17 +0000 Subject: [nginx] Version bump. Message-ID: details: https://hg.nginx.org/nginx/rev/b055bb6ef87e branches: changeset: 7758:b055bb6ef87e user: Maxim Dounin date: Mon Jan 11 22:06:27 2021 +0300 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 82228f955153 -r b055bb6ef87e src/core/nginx.h --- a/src/core/nginx.h Tue Dec 15 17:41:39 2020 +0300 +++ b/src/core/nginx.h Mon Jan 11 22:06:27 2021 +0300 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1019006 -#define NGINX_VERSION "1.19.6" +#define nginx_version 1019007 +#define NGINX_VERSION "1.19.7" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From mdounin at mdounin.ru Mon Jan 11 19:30:20 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Jan 2021 19:30:20 +0000 Subject: [nginx] Contrib: vim syntax, update core and 3rd party module directives. Message-ID: details: https://hg.nginx.org/nginx/rev/a20eef9a1df2 branches: changeset: 7759:a20eef9a1df2 user: Gena Makhomed date: Tue Dec 29 13:13:57 2020 +0200 description: Contrib: vim syntax, update core and 3rd party module directives. 
diffstat: contrib/vim/syntax/nginx.vim | 43 +++++++++++++++++++++++++++++++++++++------ 1 files changed, 37 insertions(+), 6 deletions(-) diffs (151 lines): diff -r b055bb6ef87e -r a20eef9a1df2 contrib/vim/syntax/nginx.vim --- a/contrib/vim/syntax/nginx.vim Mon Jan 11 22:06:27 2021 +0300 +++ b/contrib/vim/syntax/nginx.vim Tue Dec 29 13:13:57 2020 +0200 @@ -268,6 +268,7 @@ syn keyword ngxDirective contained grpc_ syn keyword ngxDirective contained grpc_ssl_certificate syn keyword ngxDirective contained grpc_ssl_certificate_key syn keyword ngxDirective contained grpc_ssl_ciphers +syn keyword ngxDirective contained grpc_ssl_conf_command syn keyword ngxDirective contained grpc_ssl_crl syn keyword ngxDirective contained grpc_ssl_name syn keyword ngxDirective contained grpc_ssl_password_file @@ -447,6 +448,7 @@ syn keyword ngxDirective contained proxy syn keyword ngxDirective contained proxy_cache_valid syn keyword ngxDirective contained proxy_connect_timeout syn keyword ngxDirective contained proxy_cookie_domain +syn keyword ngxDirective contained proxy_cookie_flags syn keyword ngxDirective contained proxy_cookie_path syn keyword ngxDirective contained proxy_download_rate syn keyword ngxDirective contained proxy_force_ranges @@ -480,11 +482,13 @@ syn keyword ngxDirective contained proxy syn keyword ngxDirective contained proxy_session_drop syn keyword ngxDirective contained proxy_set_body syn keyword ngxDirective contained proxy_set_header +syn keyword ngxDirective contained proxy_smtp_auth syn keyword ngxDirective contained proxy_socket_keepalive syn keyword ngxDirective contained proxy_ssl syn keyword ngxDirective contained proxy_ssl_certificate syn keyword ngxDirective contained proxy_ssl_certificate_key syn keyword ngxDirective contained proxy_ssl_ciphers +syn keyword ngxDirective contained proxy_ssl_conf_command syn keyword ngxDirective contained proxy_ssl_crl syn keyword ngxDirective contained proxy_ssl_name syn keyword ngxDirective contained proxy_ssl_password_file @@ -592,6 +596,7 @@ syn keyword ngxDirective contained ssl_c syn keyword ngxDirective contained ssl_certificate_key syn keyword ngxDirective contained ssl_ciphers syn keyword ngxDirective contained ssl_client_certificate +syn keyword ngxDirective contained ssl_conf_command syn keyword ngxDirective contained ssl_crl syn keyword ngxDirective contained ssl_dhparam syn keyword ngxDirective contained ssl_early_data @@ -605,6 +610,7 @@ syn keyword ngxDirective contained ssl_p syn keyword ngxDirective contained ssl_prefer_server_ciphers syn keyword ngxDirective contained ssl_preread syn keyword ngxDirective contained ssl_protocols +syn keyword ngxDirective contained ssl_reject_handshake syn keyword ngxDirective contained ssl_session_cache syn keyword ngxDirective contained ssl_session_ticket_key syn keyword ngxDirective contained ssl_session_tickets @@ -643,6 +649,7 @@ syn keyword ngxDirective contained user syn keyword ngxDirective contained userid syn keyword ngxDirective contained userid_domain syn keyword ngxDirective contained userid_expires +syn keyword ngxDirective contained userid_flags syn keyword ngxDirective contained userid_mark syn keyword ngxDirective contained userid_name syn keyword ngxDirective contained userid_p3p @@ -693,6 +700,7 @@ syn keyword ngxDirective contained uwsgi syn keyword ngxDirective contained uwsgi_ssl_certificate syn keyword ngxDirective contained uwsgi_ssl_certificate_key syn keyword ngxDirective contained uwsgi_ssl_ciphers +syn keyword ngxDirective contained uwsgi_ssl_conf_command syn keyword 
ngxDirective contained uwsgi_ssl_crl syn keyword ngxDirective contained uwsgi_ssl_name syn keyword ngxDirective contained uwsgi_ssl_password_file @@ -738,6 +746,7 @@ syn keyword ngxDirective contained zone_ syn keyword ngxDirective contained zone_sync_ssl_certificate syn keyword ngxDirective contained zone_sync_ssl_certificate_key syn keyword ngxDirective contained zone_sync_ssl_ciphers +syn keyword ngxDirective contained zone_sync_ssl_conf_command syn keyword ngxDirective contained zone_sync_ssl_crl syn keyword ngxDirective contained zone_sync_ssl_name syn keyword ngxDirective contained zone_sync_ssl_password_file @@ -1329,6 +1338,8 @@ syn keyword ngxDirectiveThirdParty conta syn keyword ngxDirectiveThirdParty contained content_by_lua syn keyword ngxDirectiveThirdParty contained content_by_lua_block syn keyword ngxDirectiveThirdParty contained content_by_lua_file +syn keyword ngxDirectiveThirdParty contained exit_worker_by_lua_block +syn keyword ngxDirectiveThirdParty contained exit_worker_by_lua_file syn keyword ngxDirectiveThirdParty contained header_filter_by_lua syn keyword ngxDirectiveThirdParty contained header_filter_by_lua_block syn keyword ngxDirectiveThirdParty contained header_filter_by_lua_file @@ -1370,6 +1381,7 @@ syn keyword ngxDirectiveThirdParty conta syn keyword ngxDirectiveThirdParty contained lua_ssl_protocols syn keyword ngxDirectiveThirdParty contained lua_ssl_trusted_certificate syn keyword ngxDirectiveThirdParty contained lua_ssl_verify_depth +syn keyword ngxDirectiveThirdParty contained lua_thread_cache_max_entries syn keyword ngxDirectiveThirdParty contained lua_transform_underscores_in_response_headers syn keyword ngxDirectiveThirdParty contained lua_use_default_type syn keyword ngxDirectiveThirdParty contained rewrite_by_lua @@ -2285,6 +2297,7 @@ syn keyword ngxDirectiveThirdParty conta syn keyword ngxDirectiveThirdParty contained testcookie_refresh_encrypt_cookie_key syn keyword ngxDirectiveThirdParty contained testcookie_refresh_status syn keyword ngxDirectiveThirdParty contained testcookie_refresh_template +syn keyword ngxDirectiveThirdParty contained testcookie_samesite syn keyword ngxDirectiveThirdParty contained testcookie_secret syn keyword ngxDirectiveThirdParty contained testcookie_secure_flag syn keyword ngxDirectiveThirdParty contained testcookie_session @@ -2355,15 +2368,31 @@ syn keyword ngxDirectiveThirdParty conta " IP2Location Nginx " https://github.com/ip2location/ip2location-nginx -syn keyword ngxDirectiveThirdParty contained ip2location -syn keyword ngxDirectiveThirdParty contained ip2location_access_type syn keyword ngxDirectiveThirdParty contained ip2location_proxy syn keyword ngxDirectiveThirdParty contained ip2location_proxy_recursive +syn keyword ngxDirectiveThirdParty contained ip2location_areacode +syn keyword ngxDirectiveThirdParty contained ip2location_city +syn keyword ngxDirectiveThirdParty contained ip2location_country_long +syn keyword ngxDirectiveThirdParty contained ip2location_country_short +syn keyword ngxDirectiveThirdParty contained ip2location_domain +syn keyword ngxDirectiveThirdParty contained ip2location_elevation +syn keyword ngxDirectiveThirdParty contained ip2location_iddcode +syn keyword ngxDirectiveThirdParty contained ip2location_isp +syn keyword ngxDirectiveThirdParty contained ip2location_latitude +syn keyword ngxDirectiveThirdParty contained ip2location_longitude +syn keyword ngxDirectiveThirdParty contained ip2location_mcc +syn keyword ngxDirectiveThirdParty contained ip2location_mnc +syn keyword 
ngxDirectiveThirdParty contained ip2location_mobilebrand +syn keyword ngxDirectiveThirdParty contained ip2location_netspeed +syn keyword ngxDirectiveThirdParty contained ip2location_region +syn keyword ngxDirectiveThirdParty contained ip2location_timezone +syn keyword ngxDirectiveThirdParty contained ip2location_usagetype +syn keyword ngxDirectiveThirdParty contained ip2location_weatherstationcode +syn keyword ngxDirectiveThirdParty contained ip2location_weatherstationname +syn keyword ngxDirectiveThirdParty contained ip2location_zipcode " IP2Proxy module for Nginx " https://github.com/ip2location/ip2proxy-nginx -syn keyword ngxDirectiveThirdParty contained ip2proxy -syn keyword ngxDirectiveThirdParty contained ip2proxy_access_type syn keyword ngxDirectiveThirdParty contained ip2proxy_as syn keyword ngxDirectiveThirdParty contained ip2proxy_asn syn keyword ngxDirectiveThirdParty contained ip2proxy_city @@ -2371,12 +2400,14 @@ syn keyword ngxDirectiveThirdParty conta syn keyword ngxDirectiveThirdParty contained ip2proxy_country_short syn keyword ngxDirectiveThirdParty contained ip2proxy_database syn keyword ngxDirectiveThirdParty contained ip2proxy_domain +syn keyword ngxDirectiveThirdParty contained ip2proxy_isp syn keyword ngxDirectiveThirdParty contained ip2proxy_is_proxy -syn keyword ngxDirectiveThirdParty contained ip2proxy_isp syn keyword ngxDirectiveThirdParty contained ip2proxy_last_seen +syn keyword ngxDirectiveThirdParty contained ip2proxy_proxy +syn keyword ngxDirectiveThirdParty contained ip2proxy_proxy_recursive syn keyword ngxDirectiveThirdParty contained ip2proxy_proxy_type syn keyword ngxDirectiveThirdParty contained ip2proxy_region -syn keyword ngxDirectiveThirdParty contained ip2proxy_reverse_proxy +syn keyword ngxDirectiveThirdParty contained ip2proxy_threat syn keyword ngxDirectiveThirdParty contained ip2proxy_usage_type From mdounin at mdounin.ru Mon Jan 11 19:30:46 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 Jan 2021 22:30:46 +0300 Subject: [PATCH] Contrib: vim syntax, update core and 3rd party module directives. In-Reply-To: References: Message-ID: <20210111193046.GV1147@mdounin.ru> Hello! On Tue, Dec 29, 2020 at 01:18:44PM +0200, Gena Makhomed wrote: > # HG changeset patch > # User Gena Makhomed > # Date 1609240437 -7200 > # Tue Dec 29 13:13:57 2020 +0200 > # Node ID ed5770c4a49f969949a9b7480af6f75d3aa2eaa0 > # Parent 82228f955153527fba12211f52bf102c90f38dfb > Contrib: vim syntax, update core and 3rd party module directives. Committed, thnx. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Mon Jan 11 19:53:56 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 11 Jan 2021 19:53:56 +0000 Subject: [njs] 2021 year. Message-ID: details: https://hg.nginx.org/njs/rev/d7ef83814374 branches: changeset: 1587:d7ef83814374 user: Dmitry Volyntsev date: Mon Jan 11 19:53:05 2021 +0000 description: 2021 year. diffstat: LICENSE | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diffs (16 lines): diff -r 40dc1818a485 -r d7ef83814374 LICENSE --- a/LICENSE Thu Dec 24 18:35:18 2020 +0000 +++ b/LICENSE Mon Jan 11 19:53:05 2021 +0000 @@ -1,8 +1,8 @@ /* - * Copyright (C) 2015-2020 NGINX, Inc. - * Copyright (C) 2015-2020 Igor Sysoev - * Copyright (C) 2017-2020 Dmitry Volyntsev - * Copyright (C) 2019-2020 Alexander Borisov + * Copyright (C) 2015-2021 NGINX, Inc. + * Copyright (C) 2015-2021 Igor Sysoev + * Copyright (C) 2017-2021 Dmitry Volyntsev + * Copyright (C) 2019-2021 Alexander Borisov * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without From xeioex at nginx.com Mon Jan 11 19:53:58 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 11 Jan 2021 19:53:58 +0000 Subject: [njs] Added njs_vm_value_array_buffer_set(). Message-ID: details: https://hg.nginx.org/njs/rev/59ab52c9700b branches: changeset: 1588:59ab52c9700b user: Dmitry Volyntsev date: Mon Jan 11 19:53:08 2021 +0000 description: Added njs_vm_value_array_buffer_set(). diffstat: src/njs.h | 3 +++ src/njs_vm.c | 20 ++++++++++++++++++++ 2 files changed, 23 insertions(+), 0 deletions(-) diffs (43 lines): diff -r d7ef83814374 -r 59ab52c9700b src/njs.h --- a/src/njs.h Mon Jan 11 19:53:05 2021 +0000 +++ b/src/njs.h Mon Jan 11 19:53:08 2021 +0000 @@ -331,6 +331,9 @@ NJS_EXPORT u_char *njs_vm_value_string_a NJS_EXPORT njs_int_t njs_vm_value_string_copy(njs_vm_t *vm, njs_str_t *retval, njs_value_t *value, uintptr_t *next); +NJS_EXPORT njs_int_t njs_vm_value_array_buffer_set(njs_vm_t *vm, + njs_value_t *value, const u_char *start, uint32_t size); + /* * Sets a Buffer value. * start data is not copied and should not be freed. diff -r d7ef83814374 -r 59ab52c9700b src/njs_vm.c --- a/src/njs_vm.c Mon Jan 11 19:53:05 2021 +0000 +++ b/src/njs_vm.c Mon Jan 11 19:53:08 2021 +0000 @@ -733,6 +733,26 @@ njs_vm_value_string_set(njs_vm_t *vm, nj njs_int_t +njs_vm_value_array_buffer_set(njs_vm_t *vm, njs_value_t *value, + const u_char *start, uint32_t size) +{ + njs_array_buffer_t *array; + + array = njs_array_buffer_alloc(vm, 0, 0); + if (njs_slow_path(array == NULL)) { + return NJS_ERROR; + } + + array->u.data = (u_char *) start; + array->size = size; + + njs_set_array_buffer(value, array); + + return NJS_OK; +} + + +njs_int_t njs_vm_value_buffer_set(njs_vm_t *vm, njs_value_t *value, const u_char *start, uint32_t size) { From xeioex at nginx.com Mon Jan 11 19:54:00 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 11 Jan 2021 19:54:00 +0000 Subject: [njs] Exposing chb API. Message-ID: details: https://hg.nginx.org/njs/rev/6d285a23fcbb branches: changeset: 1589:6d285a23fcbb user: Dmitry Volyntsev date: Mon Jan 11 19:53:08 2021 +0000 description: Exposing chb API. diffstat: src/njs.h | 5 +++++ src/njs_vm.c | 7 +++++++ 2 files changed, 12 insertions(+), 0 deletions(-) diffs (39 lines): diff -r 59ab52c9700b -r 6d285a23fcbb src/njs.h --- a/src/njs.h Mon Jan 11 19:53:08 2021 +0000 +++ b/src/njs.h Mon Jan 11 19:53:08 2021 +0000 @@ -18,6 +18,10 @@ #include #include #include +#include +#include +#include +#include #include #include @@ -317,6 +321,7 @@ NJS_EXPORT njs_function_t *njs_vm_functi NJS_EXPORT njs_value_t *njs_vm_retval(njs_vm_t *vm); NJS_EXPORT void njs_vm_retval_set(njs_vm_t *vm, const njs_value_t *value); +NJS_EXPORT njs_mp_t *njs_vm_memory_pool(njs_vm_t *vm); /* Gets string value, no copy. */ NJS_EXPORT void njs_value_string_get(njs_value_t *value, njs_str_t *dst); diff -r 59ab52c9700b -r 6d285a23fcbb src/njs_vm.c --- a/src/njs_vm.c Mon Jan 11 19:53:08 2021 +0000 +++ b/src/njs_vm.c Mon Jan 11 19:53:08 2021 +0000 @@ -612,6 +612,13 @@ njs_vm_retval(njs_vm_t *vm) } +njs_mp_t * +njs_vm_memory_pool(njs_vm_t *vm) +{ + return vm->mem_pool; +} + + uintptr_t njs_vm_meta(njs_vm_t *vm, njs_uint_t index) { From xeioex at nginx.com Mon Jan 11 19:54:02 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 11 Jan 2021 19:54:02 +0000 Subject: [njs] Allowing to reserve 0 bytes in njs_chb_reserve() for consistency. 
Message-ID: details: https://hg.nginx.org/njs/rev/dbc81c9d4e46 branches: changeset: 1590:dbc81c9d4e46 user: Dmitry Volyntsev date: Mon Jan 11 19:53:09 2021 +0000 description: Allowing to reserve 0 bytes in njs_chb_reserve() for consistency. diffstat: src/njs_chb.c | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diffs (14 lines): diff -r 6d285a23fcbb -r dbc81c9d4e46 src/njs_chb.c --- a/src/njs_chb.c Mon Jan 11 19:53:08 2021 +0000 +++ b/src/njs_chb.c Mon Jan 11 19:53:09 2021 +0000 @@ -34,10 +34,6 @@ njs_chb_reserve(njs_chb_t *chain, size_t { njs_chb_node_t *n; - if (njs_slow_path(size == 0)) { - return NULL; - } - n = chain->last; if (njs_fast_path(n != NULL && njs_chb_node_room(n) >= size)) { From xeioex at nginx.com Mon Jan 11 19:54:04 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 11 Jan 2021 19:54:04 +0000 Subject: [njs] Added njs_value_null_set(). Message-ID: details: https://hg.nginx.org/njs/rev/1b99785e0711 branches: changeset: 1591:1b99785e0711 user: Dmitry Volyntsev date: Mon Jan 11 19:53:09 2021 +0000 description: Added njs_value_null_set(). diffstat: src/njs.h | 1 + src/njs_value.c | 7 +++++++ 2 files changed, 8 insertions(+), 0 deletions(-) diffs (28 lines): diff -r dbc81c9d4e46 -r 1b99785e0711 src/njs.h --- a/src/njs.h Mon Jan 11 19:53:09 2021 +0000 +++ b/src/njs.h Mon Jan 11 19:53:09 2021 +0000 @@ -375,6 +375,7 @@ NJS_EXPORT void njs_vm_value_error_set(n NJS_EXPORT void njs_vm_memory_error(njs_vm_t *vm); NJS_EXPORT void njs_value_undefined_set(njs_value_t *value); +NJS_EXPORT void njs_value_null_set(njs_value_t *value); NJS_EXPORT void njs_value_boolean_set(njs_value_t *value, int yn); NJS_EXPORT void njs_value_number_set(njs_value_t *value, double num); diff -r dbc81c9d4e46 -r 1b99785e0711 src/njs_value.c --- a/src/njs_value.c Mon Jan 11 19:53:09 2021 +0000 +++ b/src/njs_value.c Mon Jan 11 19:53:09 2021 +0000 @@ -386,6 +386,13 @@ njs_value_undefined_set(njs_value_t *val void +njs_value_null_set(njs_value_t *value) +{ + njs_set_null(value); +} + + +void njs_value_boolean_set(njs_value_t *value, int yn) { njs_set_boolean(value, yn); From xeioex at nginx.com Mon Jan 11 19:54:06 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 11 Jan 2021 19:54:06 +0000 Subject: [njs] Added njs_vm_object_keys(). Message-ID: details: https://hg.nginx.org/njs/rev/dc7d94c05669 branches: changeset: 1592:dc7d94c05669 user: Dmitry Volyntsev date: Mon Jan 11 19:53:10 2021 +0000 description: Added njs_vm_object_keys(). 
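A minimal host-side usage sketch (illustrative only: the helper name,
and the use of njs_opaque_value_t with njs_value_arg(), are assumptions
about typical embedding code, not part of this commit):

#include <njs.h>

/* hypothetical embedding helper, not part of the commit */
static njs_int_t
host_object_keys(njs_vm_t *vm, njs_value_t *obj)
{
    njs_value_t         *keys;
    njs_opaque_value_t   list;

    keys = njs_vm_object_keys(vm, obj, njs_value_arg(&list));
    if (keys == NULL) {
        return NJS_ERROR;   /* allocation of the keys array failed */
    }

    /* 'keys' (== njs_value_arg(&list)) now holds an array of the
     * object's own enumerable string keys */
    return NJS_OK;
}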
diffstat:

 src/njs.h    |   2 ++
 src/njs_vm.c |  17 +++++++++++++++++

2 files changed, 19 insertions(+), 0 deletions(-)

diffs (39 lines):

diff -r 1b99785e0711 -r dc7d94c05669 src/njs.h
--- a/src/njs.h Mon Jan 11 19:53:09 2021 +0000
+++ b/src/njs.h Mon Jan 11 19:53:10 2021 +0000
@@ -402,6 +402,8 @@
 NJS_EXPORT njs_int_t njs_vm_object_alloc(njs_vm_t *vm, njs_value_t *retval,
     ...);
+NJS_EXPORT njs_value_t *njs_vm_object_keys(njs_vm_t *vm, njs_value_t *value,
+    njs_value_t *retval);
 NJS_EXPORT njs_value_t *njs_vm_object_prop(njs_vm_t *vm, njs_value_t *value,
     const njs_str_t *key, njs_opaque_value_t *retval);

diff -r 1b99785e0711 -r dc7d94c05669 src/njs_vm.c
--- a/src/njs_vm.c Mon Jan 11 19:53:09 2021 +0000
+++ b/src/njs_vm.c Mon Jan 11 19:53:10 2021 +0000
@@ -976,6 +976,23 @@
 done:
 }

+njs_value_t *
+njs_vm_object_keys(njs_vm_t *vm, njs_value_t *value, njs_value_t *retval)
+{
+    njs_array_t  *keys;
+
+    keys = njs_value_own_enumerate(vm, value, NJS_ENUM_KEYS,
+                                   NJS_ENUM_STRING, 0);
+    if (njs_slow_path(keys == NULL)) {
+        return NULL;
+    }
+
+    njs_set_array(retval, keys);
+
+    return retval;
+}
+
+
 njs_int_t
 njs_vm_array_alloc(njs_vm_t *vm, njs_value_t *retval, uint32_t spare)
 {

From cnewton at netflix.com  Tue Jan 12 00:40:06 2021
From: cnewton at netflix.com (Chris Newton)
Date: Tue, 12 Jan 2021 00:40:06 +0000
Subject: Unexpected structure of macros in ngx_string.h
Message-ID: 

Hello

I just came across a problem with an innocuous-looking line of code in a
new module I have been working on:

    if (tmp_str.len == 0)
        ngx_str_set(&tmp_str, "/");

The modification of tmp_str.data to '/' was always being made, but the
length wasn't if the test failed. This turns out to be caused by the
style used in the definition of the ngx_str_set macro: it expands to two
statements, so only the first falls under the if.

Altering this macro definition (and ngx_str_null, which is similar)
would protect against this:

--- a/ports/netflix/nginx/files/nginx/src/core/ngx_string.h
+++ b/ports/netflix/nginx/files/nginx/src/core/ngx_string.h
@@ -40,8 +40,9 @@ typedef struct {
 #define ngx_string(str)     { sizeof(str) - 1, (u_char *) str }
 #define ngx_null_string     { 0, NULL }
 #define ngx_str_set(str, text)                                               \
-    (str)->len = sizeof(text) - 1; (str)->data = (u_char *) text
-#define ngx_str_null(str)   (str)->len = 0; (str)->data = NULL
+    do { (str)->len = sizeof(text) - 1; (str)->data = (u_char *) text; } while (0)
+#define ngx_str_null(str)                                                    \
+    do { (str)->len = 0; (str)->data = NULL; } while (0)

 #define ngx_tolower(c)      (u_char) ((c >= 'A' && c <= 'Z') ? (c | 0x20) : c)

I haven't looked further to see if others would benefit from such a
change; figured I would run it by you first

TIA

Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ping.zhao at intel.com  Tue Jan 12 01:32:53 2021
From: ping.zhao at intel.com (Zhao, Ping)
Date: Tue, 12 Jan 2021 01:32:53 +0000
Subject: [PATCH] Use io_uring for async io access
In-Reply-To: 
References: 
Message-ID: 

There's a typo in the mail:

Tested with wrk with 100 threads, 200 connections (-t 100 -c 1000), and
25000 random requests.

Should be "-c 200". In fact, with -c 1000 the io_uring performance gain
is even more significant, because libaio performance drops more with
1000 connections than with 200 connections.

Regards,
Ping

From: nginx-devel  On Behalf Of Zhao, Ping
Sent: Monday, January 11, 2021 3:05 PM
To: nginx-devel at nginx.org
Subject: [PATCH] Use io_uring for async io access

Hello Nginx Developers,

This is a patch adding io_uring for async io access in Nginx. Would like
to receive your comments.
Thanks,
Ping
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Tue Jan 12 12:56:08 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 12 Jan 2021 15:56:08 +0300
Subject: Unexpected structure of macros in ngx_string.h
In-Reply-To: 
References: 
Message-ID: <20210112125608.GW1147 at mdounin.ru>

Hello!

On Tue, Jan 12, 2021 at 12:40:06AM +0000, Chris Newton wrote:

> I just came across a problem with an innocuous-looking line of code in a
> new module I have been working on:
>
>     if (tmp_str.len == 0)
>         ngx_str_set(&tmp_str, "/");
>
> The modification of tmp_str.data to '/' was always being made, but the
> length wasn't if the test failed. This turns out to be caused by the
> style used in the definition of the ngx_str_set macro.

I always wonder why people intentionally ignore nginx style and
then complain something doesn't work for them.

Following nginx style and always using curly brackets is how it
is expected to work.

-- 
Maxim Dounin
http://mdounin.ru/

From vl at nginx.com  Tue Jan 12 13:45:35 2021
From: vl at nginx.com (Vladimir Homutov)
Date: Tue, 12 Jan 2021 16:45:35 +0300
Subject: [PATCH] Use io_uring for async io access
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jan 11, 2021 at 07:05:28AM +0000, Zhao, Ping wrote:
> Hello Nginx Developers,
>
> This is a patch adding io_uring for async io access in Nginx. Would like
> to receive your comments.
>
> Thanks,
> Ping

Hi Zhao Ping,

Unfortunately I was not able to apply the patch properly, it shows a lot
of rejections. But nevertheless I took a look, and from what I see you
are trying to completely replace the AIO implementation with some new
code. It would be nice to see some modular approach that adds a new aio
method without breaking existing code.

Thank you for sharing!
ngx_event_t *e; > + struct io_uring_cqe *cqe; > + unsigned head; > + unsigned cqe_count = 0; > ngx_event_aio_t *aio; > - struct io_event event[64]; > - struct timespec ts; > > - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, ev->log, 0, "eventfd handler"); > - > - n = read(ngx_eventfd, &ready, 8); > + ngx_log_debug(NGX_LOG_DEBUG_EVENT, ev->log, 0, > + "io_uring_peek_cqe: START"); > > - err = ngx_errno; > - > - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ev->log, 0, "eventfd: %d", n); > + io_uring_for_each_cqe(&ngx_ring, head, cqe) { > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, > + "io_event: %p %d %d", > + cqe->user_data, cqe->res, cqe->flags); > > - if (n != 8) { > - if (n == -1) { > - if (err == NGX_EAGAIN) { > - return; > - } > + e = (ngx_event_t *) io_uring_cqe_get_data(cqe); > + e->complete = 1; > + e->active = 0; > + e->ready = 1; > > - ngx_log_error(NGX_LOG_ALERT, ev->log, err, "read(eventfd) failed"); > - return; > - } > + aio = e->data; > + aio->res = cqe->res; > > - ngx_log_error(NGX_LOG_ALERT, ev->log, 0, > - "read(eventfd) returned only %d bytes", n); > - return; > + ++cqe_count; > + > + ngx_post_event(e, &ngx_posted_events); > } > > - ts.tv_sec = 0; > - ts.tv_nsec = 0; > - > - while (ready) { > - > - events = io_getevents(ngx_aio_ctx, 1, 64, event, &ts); > - > - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ev->log, 0, > - "io_getevents: %d", events); > - > - if (events > 0) { > - ready -= events; > - > - for (i = 0; i < events; i++) { > - > - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, ev->log, 0, > - "io_event: %XL %XL %L %L", > - event[i].data, event[i].obj, > - event[i].res, event[i].res2); > - > - e = (ngx_event_t *) (uintptr_t) event[i].data; > - > - e->complete = 1; > - e->active = 0; > - e->ready = 1; > - > - aio = e->data; > - aio->res = event[i].res; > - > - ngx_post_event(e, &ngx_posted_events); > - } > - > - continue; > - } > - > - if (events == 0) { > - return; > - } > - > - /* events == -1 */ > - ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_errno, > - "io_getevents() failed"); > - return; > - } > + io_uring_cq_advance(&ngx_ring, cqe_count); > } > > #endif > diff -r 82228f955153 -r 3677cf19b98b src/event/ngx_event.h > --- a/src/event/ngx_event.h Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/event/ngx_event.h Mon Jan 11 08:07:14 2021 -0500 > @@ -160,7 +160,9 @@ > size_t nbytes; > #endif > > - ngx_aiocb_t aiocb; > + /* Make sure that this iov has the same lifecycle with its associated aio event */ > + struct iovec iov; > + > ngx_event_t event; > }; > > diff -r 82228f955153 -r 3677cf19b98b src/os/unix/ngx_linux_aio_read.c > --- a/src/os/unix/ngx_linux_aio_read.c Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/os/unix/ngx_linux_aio_read.c Mon Jan 11 08:07:14 2021 -0500 > @@ -9,20 +9,16 @@ > #include > #include > > +#include > > -extern int ngx_eventfd; > -extern aio_context_t ngx_aio_ctx; > + > +extern struct io_uring ngx_ring; > +extern struct io_uring_params ngx_ring_params; > > > static void ngx_file_aio_event_handler(ngx_event_t *ev); > > > -static int > -io_submit(aio_context_t ctx, long n, struct iocb **paiocb) > -{ > - return syscall(SYS_io_submit, ctx, n, paiocb); > -} > - > > ngx_int_t > ngx_file_aio_init(ngx_file_t *file, ngx_pool_t *pool) > @@ -50,10 +46,10 @@ > ngx_file_aio_read(ngx_file_t *file, u_char *buf, size_t size, off_t offset, > ngx_pool_t *pool) > { > - ngx_err_t err; > - struct iocb *piocb[1]; > - ngx_event_t *ev; > - ngx_event_aio_t *aio; > + ngx_err_t err; > + ngx_event_t *ev; > + ngx_event_aio_t *aio; > + struct io_uring_sqe *sqe; > > if (!ngx_file_aio) { > return 
ngx_read_file(file, buf, size, offset); > @@ -93,22 +89,41 @@ > return NGX_ERROR; > } > > - ngx_memzero(&aio->aiocb, sizeof(struct iocb)); > + sqe = io_uring_get_sqe(&ngx_ring); > + > + if (!sqe) { > + ngx_log_debug4(NGX_LOG_DEBUG_CORE, file->log, 0, > + "aio no sqe left:%d @%O:%uz %V", > + ev->complete, offset, size, &file->name); > + return ngx_read_file(file, buf, size, offset); > + } > > - aio->aiocb.aio_data = (uint64_t) (uintptr_t) ev; > - aio->aiocb.aio_lio_opcode = IOCB_CMD_PREAD; > - aio->aiocb.aio_fildes = file->fd; > - aio->aiocb.aio_buf = (uint64_t) (uintptr_t) buf; > - aio->aiocb.aio_nbytes = size; > - aio->aiocb.aio_offset = offset; > - aio->aiocb.aio_flags = IOCB_FLAG_RESFD; > - aio->aiocb.aio_resfd = ngx_eventfd; > + if (__builtin_expect(!!(ngx_ring_params.features & IORING_FEAT_CUR_PERSONALITY), 1)) { > + /* > + * `io_uring_prep_read` is faster than `io_uring_prep_readv`, because the kernel > + * doesn't need to import iovecs in advance. > + * > + * If the kernel supports `IORING_FEAT_CUR_PERSONALITY`, it should support > + * non-vectored read/write commands too. > + * > + * It's not perfect, but avoids an extra feature-test syscall. > + */ > + io_uring_prep_read(sqe, file->fd, buf, size, offset); > + } else { > + /* > + * We must store iov into heap to prevent kernel from returning -EFAULT > + * in case `IORING_FEAT_SUBMIT_STABLE` is not supported > + */ > + aio->iov.iov_base = buf; > + aio->iov.iov_len = size; > + io_uring_prep_readv(sqe, file->fd, &aio->iov, 1, offset); > + } > + io_uring_sqe_set_data(sqe, ev); > + > > ev->handler = ngx_file_aio_event_handler; > > - piocb[0] = &aio->aiocb; > - > - if (io_submit(ngx_aio_ctx, 1, piocb) == 1) { > + if (io_uring_submit(&ngx_ring) == 1) { > ev->active = 1; > ev->ready = 0; > ev->complete = 0; > diff -r 82228f955153 -r 3677cf19b98b src/os/unix/ngx_linux_config.h > --- a/src/os/unix/ngx_linux_config.h Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/os/unix/ngx_linux_config.h Mon Jan 11 08:07:14 2021 -0500 > @@ -93,10 +93,6 @@ > #include > #endif > #include > -#if (NGX_HAVE_FILE_AIO) > -#include > -typedef struct iocb ngx_aiocb_t; > -#endif > > > #if (NGX_HAVE_CAPABILITIES) > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Tue Jan 12 18:00:57 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Jan 2021 18:00:57 +0000 Subject: [nginx] Upstream: fixed zero size buf alerts on extra data (ticket #2117). Message-ID: details: https://hg.nginx.org/nginx/rev/83c4622053b0 branches: changeset: 7760:83c4622053b0 user: Maxim Dounin date: Tue Jan 12 16:59:31 2021 +0300 description: Upstream: fixed zero size buf alerts on extra data (ticket #2117). After 7675:9afa45068b8f and 7678:bffcc5af1d72 (1.19.1), during non-buffered simple proxying, responses with extra data might result in zero size buffers being generated and "zero size buf" alerts in writer. This bug is similar to the one with FastCGI proxying fixed in 7689:da8d758aabeb. In non-buffered mode, normally the filter function is not called if u->length is already 0, since u->length is checked after each call of the filter function. There is a case when this can happen though: if the response length is 0, and there are pre-read response body data left after reading response headers. As such, a check for u->length is needed at the start of non-buffered filter functions, similar to the one for p->length present in buffered filter functions. 
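To make the triggering case concrete, an illustrative exchange (an editorial sketch, not part of the committed description): the upstream declares a zero-length body but appends extra bytes that arrive in the same read as the headers, e.g.

    HTTP/1.1 200 OK
    Content-Length: 0

    extra-bytes-that-should-not-be-here

Here u->length is already 0 by the time the pre-read data reaches the non-buffered filter, which is exactly the path that used to generate zero size buffers.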
Appropriate checks added to the existing non-buffered copy filters
in the upstream (used by scgi and uwsgi proxying) and proxy modules.

diffstat:

 src/http/modules/ngx_http_proxy_module.c |  7 +++++++
 src/http/ngx_http_upstream.c             |  7 +++++++
 2 files changed, 14 insertions(+), 0 deletions(-)

diffs (34 lines):

diff -r a20eef9a1df2 -r 83c4622053b0 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c	Tue Dec 29 13:13:57 2020 +0200
+++ b/src/http/modules/ngx_http_proxy_module.c	Tue Jan 12 16:59:31 2021 +0300
@@ -2334,6 +2334,13 @@ ngx_http_proxy_non_buffered_copy_filter(

     u = r->upstream;

+    if (u->length == 0) {
+        ngx_log_error(NGX_LOG_WARN, r->connection->log, 0,
+                      "upstream sent more data than specified in "
+                      "\"Content-Length\" header");
+        return NGX_OK;
+    }
+
     for (cl = u->out_bufs, ll = &u->out_bufs; cl; cl = cl->next) {
         ll = &cl->next;
     }
diff -r a20eef9a1df2 -r 83c4622053b0 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Tue Dec 29 13:13:57 2020 +0200
+++ b/src/http/ngx_http_upstream.c	Tue Jan 12 16:59:31 2021 +0300
@@ -3721,6 +3721,13 @@ ngx_http_upstream_non_buffered_filter(vo

     u = r->upstream;

+    if (u->length == 0) {
+        ngx_log_error(NGX_LOG_WARN, r->connection->log, 0,
+                      "upstream sent more data than specified in "
+                      "\"Content-Length\" header");
+        return NGX_OK;
+    }
+
     for (cl = u->out_bufs, ll = &u->out_bufs; cl; cl = cl->next) {
         ll = &cl->next;
     }

From ping.zhao at intel.com Wed Jan 13 04:46:29 2021
From: ping.zhao at intel.com (Zhao, Ping)
Date: Wed, 13 Jan 2021 04:46:29 +0000
Subject: [PATCH] Use io_uring for async io access
In-Reply-To: 
References: 
Message-ID: 

Hi Vladimir,

It could be because of some new commits on the tree; I developed the patch based on the 1.19.6 tag (-r 83c4622053b0), and I saw several new patches merged on Jan 12th. It's ok, I'll submit a new patch with both legacy libaio and io_uring, so users can still work with libaio if they want. Hope you can review the new one when it's ready.

Thanks,
Ping

-----Original Message-----
From: nginx-devel On Behalf Of Vladimir Homutov
Sent: Tuesday, January 12, 2021 9:46 PM
To: nginx-devel at nginx.org
Subject: Re: [PATCH] Use io_uring for async io access

On Mon, Jan 11, 2021 at 07:05:28AM +0000, Zhao, Ping wrote:
> Hello Nginx Developers,
>
> This is a patch of Nginx io_uring for async io access. Would like to receive your comments.
>
> Thanks,
> Ping

Hi Zhao Ping,

Unfortunately I was not able to apply the patch properly, it shows a lot of rejections. But nevertheless I took a look and from what I see you are trying to completely replace AIO implementation with some new code. It would be nice to see some modular approach that adds new aio method without breaking existing code.

Thank you for sharing!

>
> # HG changeset patch
> # User Ping Zhao >
> # Date 1610370434 18000
> #      Mon Jan 11 08:07:14 2021 -0500
> # Node ID 3677cf19b98b054614030b80f73728b02fdda832
> # Parent  82228f955153527fba12211f52bf102c90f38dfb
> Use io_uring for async io access.
>
> Replace aio with io_uring in async disk io access.
>
> Io_uring is a new kernel feature to async io access. Nginx can use it
> for legacy disk aio access(for example, disk cache file access)
>
> Check with iostat that shows nvme disk io has 30%+ performance improvement with 1 thread.
> Test with wrk with 100 threads 200 connections(-t 100 -c 1000) with 25000 random requests.
> > iostat(B/s) > libaio 1.0 GB/s > io_uring 1.3+ GB/s > > Patch contributor: Carter Li, Ping Zhao > > diff -r 82228f955153 -r 3677cf19b98b auto/unix > --- a/auto/unix Tue Dec 15 17:41:39 2020 +0300 > +++ b/auto/unix Mon Jan 11 08:07:14 2021 -0500 > @@ -532,44 +532,23 @@ > > if [ $ngx_found = no ]; then > > - ngx_feature="Linux AIO support" > + ngx_feature="Linux io_uring support (liburing)" > ngx_feature_name="NGX_HAVE_FILE_AIO" > ngx_feature_run=no > - ngx_feature_incs="#include > - #include " > + ngx_feature_incs="#include " > ngx_feature_path= > - ngx_feature_libs= > - ngx_feature_test="struct iocb iocb; > - iocb.aio_lio_opcode = IOCB_CMD_PREAD; > - iocb.aio_flags = IOCB_FLAG_RESFD; > - iocb.aio_resfd = -1; > - (void) iocb; > - (void) eventfd(0, 0)" > + ngx_feature_libs="-luring" > + ngx_feature_test="struct io_uring ring; > + int ret = io_uring_queue_init(64, &ring, 0); > + if (ret < 0) return 1; > + io_uring_queue_exit(&ring);" > . auto/feature > > if [ $ngx_found = yes ]; then > have=NGX_HAVE_EVENTFD . auto/have > have=NGX_HAVE_SYS_EVENTFD_H . auto/have > CORE_SRCS="$CORE_SRCS $LINUX_AIO_SRCS" > - fi > - fi > - > - if [ $ngx_found = no ]; then > - > - ngx_feature="Linux AIO support (SYS_eventfd)" > - ngx_feature_incs="#include > - #include " > - ngx_feature_test="struct iocb iocb; > - iocb.aio_lio_opcode = IOCB_CMD_PREAD; > - iocb.aio_flags = IOCB_FLAG_RESFD; > - iocb.aio_resfd = -1; > - (void) iocb; > - (void) SYS_eventfd" > - . auto/feature > - > - if [ $ngx_found = yes ]; then > - have=NGX_HAVE_EVENTFD . auto/have > - CORE_SRCS="$CORE_SRCS $LINUX_AIO_SRCS" > + CORE_LIBS="$CORE_LIBS -luring" > fi > fi > > @@ -577,7 +556,7 @@ > cat << END > > $0: no supported file AIO was found > -Currently file AIO is supported on FreeBSD 4.3+ and Linux 2.6.22+ > only > +Currently file AIO is supported on FreeBSD 4.3+ and Linux 5.1.0+ > +(requires liburing) only > > END > exit 1 > diff -r 82228f955153 -r 3677cf19b98b src/core/ngx_open_file_cache.c > --- a/src/core/ngx_open_file_cache.c Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/core/ngx_open_file_cache.c Mon Jan 11 08:07:14 2021 -0500 > @@ -869,8 +869,8 @@ > if (!of->log) { > > /* > - * Use non-blocking open() not to hang on FIFO files, etc. > - * This flag has no effect on a regular files. > + * Differs from plain read, IORING_OP_READV with O_NONBLOCK > + * will return -EAGAIN if the operation may block. 
> */ > > fd = ngx_open_file_wrapper(name, of, > NGX_FILE_RDONLY|NGX_FILE_NONBLOCK, > diff -r 82228f955153 -r 3677cf19b98b src/core/ngx_output_chain.c > --- a/src/core/ngx_output_chain.c Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/core/ngx_output_chain.c Mon Jan 11 08:07:14 2021 -0500 > @@ -589,6 +589,20 @@ > if (ctx->aio_handler) { > n = ngx_file_aio_read(src->file, dst->pos, (size_t) size, > src->file_pos, ctx->pool); > + > + if (n > 0 && n < size) { > + ngx_log_error(NGX_LOG_INFO, ctx->pool->log, 0, > + ngx_read_file_n " Try again, read only %z of %O from \"%s\"", > + n, size, src->file->name.data); > + > + src->file_pos += n; > + dst->last += n; > + > + n = ngx_file_aio_read(src->file, dst->pos+n, (size_t) size-n, > + src->file_pos, ctx->pool); > + > + } > + > if (n == NGX_AGAIN) { > ctx->aio_handler(ctx, src->file); > return NGX_AGAIN; > diff -r 82228f955153 -r 3677cf19b98b src/event/modules/ngx_epoll_module.c > --- a/src/event/modules/ngx_epoll_module.c Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/event/modules/ngx_epoll_module.c Mon Jan 11 08:07:14 2021 > +++ -0500 > @@ -9,6 +9,10 @@ > #include > #include > > +#if (NGX_HAVE_FILE_AIO) > +#include > +#endif > + > > #if (NGX_TEST_BUILD_EPOLL) > > @@ -75,23 +79,6 @@ > #define SYS_eventfd 323 > #endif > > -#if (NGX_HAVE_FILE_AIO) > - > -#define SYS_io_setup 245 > -#define SYS_io_destroy 246 > -#define SYS_io_getevents 247 > - > -typedef u_int aio_context_t; > - > -struct io_event { > - uint64_t data; /* the data field from the iocb */ > - uint64_t obj; /* what iocb this event came from */ > - int64_t res; /* result code for this event */ > - int64_t res2; /* secondary result */ > -}; > - > - > -#endif > #endif /* NGX_TEST_BUILD_EPOLL */ > > > @@ -124,7 +111,7 @@ > ngx_uint_t flags); > > #if (NGX_HAVE_FILE_AIO) > -static void ngx_epoll_eventfd_handler(ngx_event_t *ev); > +static void ngx_epoll_io_uring_handler(ngx_event_t *ev); > #endif > > static void *ngx_epoll_create_conf(ngx_cycle_t *cycle); @@ -141,13 > +128,11 @@ #endif > > #if (NGX_HAVE_FILE_AIO) > - > -int ngx_eventfd = -1; > -aio_context_t ngx_aio_ctx = 0; > +struct io_uring ngx_ring; > +struct io_uring_params ngx_ring_params; > > -static ngx_event_t ngx_eventfd_event; > -static ngx_connection_t ngx_eventfd_conn; > - > +static ngx_event_t ngx_ring_event; > +static ngx_connection_t ngx_ring_conn; > #endif > > #if (NGX_HAVE_EPOLLRDHUP) > @@ -217,102 +202,40 @@ > > #if (NGX_HAVE_FILE_AIO) > > -/* > - * We call io_setup(), io_destroy() io_submit(), and io_getevents() > directly > - * as syscalls instead of libaio usage, because the library header > file > - * supports eventfd() since 0.3.107 version only. 
> - */ > - > -static int > -io_setup(u_int nr_reqs, aio_context_t *ctx) -{ > - return syscall(SYS_io_setup, nr_reqs, ctx); > -} > - > - > -static int > -io_destroy(aio_context_t ctx) > -{ > - return syscall(SYS_io_destroy, ctx); > -} > - > - > -static int > -io_getevents(aio_context_t ctx, long min_nr, long nr, struct io_event *events, > - struct timespec *tmo) > -{ > - return syscall(SYS_io_getevents, ctx, min_nr, nr, events, tmo); > -} > - > - > static void > ngx_epoll_aio_init(ngx_cycle_t *cycle, ngx_epoll_conf_t *epcf) { > - int n; > struct epoll_event ee; > > -#if (NGX_HAVE_SYS_EVENTFD_H) > - ngx_eventfd = eventfd(0, 0); > -#else > - ngx_eventfd = syscall(SYS_eventfd, 0); > -#endif > - > - if (ngx_eventfd == -1) { > + if (io_uring_queue_init_params(32763, &ngx_ring, > + &ngx_ring_params) < 0) { > ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, > - "eventfd() failed"); > - ngx_file_aio = 0; > - return; > - } > - > - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, > - "eventfd: %d", ngx_eventfd); > - > - n = 1; > - > - if (ioctl(ngx_eventfd, FIONBIO, &n) == -1) { > - ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, > - "ioctl(eventfd, FIONBIO) failed"); > + "io_uring_queue_init_params() failed"); > goto failed; > } > > - if (io_setup(epcf->aio_requests, &ngx_aio_ctx) == -1) { > - ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, > - "io_setup() failed"); > - goto failed; > - } > - > - ngx_eventfd_event.data = &ngx_eventfd_conn; > - ngx_eventfd_event.handler = ngx_epoll_eventfd_handler; > - ngx_eventfd_event.log = cycle->log; > - ngx_eventfd_event.active = 1; > - ngx_eventfd_conn.fd = ngx_eventfd; > - ngx_eventfd_conn.read = &ngx_eventfd_event; > - ngx_eventfd_conn.log = cycle->log; > + ngx_ring_event.data = &ngx_ring_conn; > + ngx_ring_event.handler = ngx_epoll_io_uring_handler; > + ngx_ring_event.log = cycle->log; > + ngx_ring_event.active = 1; > + ngx_ring_conn.fd = ngx_ring.ring_fd; > + ngx_ring_conn.read = &ngx_ring_event; > + ngx_ring_conn.log = cycle->log; > > ee.events = EPOLLIN|EPOLLET; > - ee.data.ptr = &ngx_eventfd_conn; > + ee.data.ptr = &ngx_ring_conn; > > - if (epoll_ctl(ep, EPOLL_CTL_ADD, ngx_eventfd, &ee) != -1) { > + if (epoll_ctl(ep, EPOLL_CTL_ADD, ngx_ring.ring_fd, &ee) != -1) { > return; > } > > ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, > "epoll_ctl(EPOLL_CTL_ADD, eventfd) failed"); > > - if (io_destroy(ngx_aio_ctx) == -1) { > - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, > - "io_destroy() failed"); > - } > + io_uring_queue_exit(&ngx_ring); > > failed: > > - if (close(ngx_eventfd) == -1) { > - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, > - "eventfd close() failed"); > - } > - > - ngx_eventfd = -1; > - ngx_aio_ctx = 0; > + ngx_ring.ring_fd = 0; > ngx_file_aio = 0; > } > > @@ -549,23 +472,11 @@ > > #if (NGX_HAVE_FILE_AIO) > > - if (ngx_eventfd != -1) { > - > - if (io_destroy(ngx_aio_ctx) == -1) { > - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, > - "io_destroy() failed"); > - } > - > - if (close(ngx_eventfd) == -1) { > - ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, > - "eventfd close() failed"); > - } > - > - ngx_eventfd = -1; > + if (ngx_ring.ring_fd != 0) { > + io_uring_queue_exit(&ngx_ring); > + ngx_ring.ring_fd = 0; > } > > - ngx_aio_ctx = 0; > - > #endif > > ngx_free(event_list); > @@ -939,84 +850,36 @@ > #if (NGX_HAVE_FILE_AIO) > > static void > -ngx_epoll_eventfd_handler(ngx_event_t *ev) > +ngx_epoll_io_uring_handler(ngx_event_t *ev) > { > - int n, events; > - long i; > - uint64_t ready; > - ngx_err_t err; > 
ngx_event_t *e; > + struct io_uring_cqe *cqe; > + unsigned head; > + unsigned cqe_count = 0; > ngx_event_aio_t *aio; > - struct io_event event[64]; > - struct timespec ts; > > - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, ev->log, 0, "eventfd handler"); > - > - n = read(ngx_eventfd, &ready, 8); > + ngx_log_debug(NGX_LOG_DEBUG_EVENT, ev->log, 0, > + "io_uring_peek_cqe: START"); > > - err = ngx_errno; > - > - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ev->log, 0, "eventfd: %d", n); > + io_uring_for_each_cqe(&ngx_ring, head, cqe) { > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, > + "io_event: %p %d %d", > + cqe->user_data, cqe->res, cqe->flags); > > - if (n != 8) { > - if (n == -1) { > - if (err == NGX_EAGAIN) { > - return; > - } > + e = (ngx_event_t *) io_uring_cqe_get_data(cqe); > + e->complete = 1; > + e->active = 0; > + e->ready = 1; > > - ngx_log_error(NGX_LOG_ALERT, ev->log, err, "read(eventfd) failed"); > - return; > - } > + aio = e->data; > + aio->res = cqe->res; > > - ngx_log_error(NGX_LOG_ALERT, ev->log, 0, > - "read(eventfd) returned only %d bytes", n); > - return; > + ++cqe_count; > + > + ngx_post_event(e, &ngx_posted_events); > } > > - ts.tv_sec = 0; > - ts.tv_nsec = 0; > - > - while (ready) { > - > - events = io_getevents(ngx_aio_ctx, 1, 64, event, &ts); > - > - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, ev->log, 0, > - "io_getevents: %d", events); > - > - if (events > 0) { > - ready -= events; > - > - for (i = 0; i < events; i++) { > - > - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, ev->log, 0, > - "io_event: %XL %XL %L %L", > - event[i].data, event[i].obj, > - event[i].res, event[i].res2); > - > - e = (ngx_event_t *) (uintptr_t) event[i].data; > - > - e->complete = 1; > - e->active = 0; > - e->ready = 1; > - > - aio = e->data; > - aio->res = event[i].res; > - > - ngx_post_event(e, &ngx_posted_events); > - } > - > - continue; > - } > - > - if (events == 0) { > - return; > - } > - > - /* events == -1 */ > - ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_errno, > - "io_getevents() failed"); > - return; > - } > + io_uring_cq_advance(&ngx_ring, cqe_count); > } > > #endif > diff -r 82228f955153 -r 3677cf19b98b src/event/ngx_event.h > --- a/src/event/ngx_event.h Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/event/ngx_event.h Mon Jan 11 08:07:14 2021 -0500 > @@ -160,7 +160,9 @@ > size_t nbytes; > #endif > > - ngx_aiocb_t aiocb; > + /* Make sure that this iov has the same lifecycle with its associated aio event */ > + struct iovec iov; > + > ngx_event_t event; > }; > > diff -r 82228f955153 -r 3677cf19b98b src/os/unix/ngx_linux_aio_read.c > --- a/src/os/unix/ngx_linux_aio_read.c Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/os/unix/ngx_linux_aio_read.c Mon Jan 11 08:07:14 2021 -0500 > @@ -9,20 +9,16 @@ > #include > #include > > +#include > > -extern int ngx_eventfd; > -extern aio_context_t ngx_aio_ctx; > + > +extern struct io_uring ngx_ring; > +extern struct io_uring_params ngx_ring_params; > > > static void ngx_file_aio_event_handler(ngx_event_t *ev); > > > -static int > -io_submit(aio_context_t ctx, long n, struct iocb **paiocb) -{ > - return syscall(SYS_io_submit, ctx, n, paiocb); > -} > - > > ngx_int_t > ngx_file_aio_init(ngx_file_t *file, ngx_pool_t *pool) @@ -50,10 +46,10 > @@ ngx_file_aio_read(ngx_file_t *file, u_char *buf, size_t size, off_t > offset, > ngx_pool_t *pool) > { > - ngx_err_t err; > - struct iocb *piocb[1]; > - ngx_event_t *ev; > - ngx_event_aio_t *aio; > + ngx_err_t err; > + ngx_event_t *ev; > + ngx_event_aio_t *aio; > + struct io_uring_sqe *sqe; > > if (!ngx_file_aio) { > return ngx_read_file(file, 
buf, size, offset); @@ -93,22 > +89,41 @@ > return NGX_ERROR; > } > > - ngx_memzero(&aio->aiocb, sizeof(struct iocb)); > + sqe = io_uring_get_sqe(&ngx_ring); > + > + if (!sqe) { > + ngx_log_debug4(NGX_LOG_DEBUG_CORE, file->log, 0, > + "aio no sqe left:%d @%O:%uz %V", > + ev->complete, offset, size, &file->name); > + return ngx_read_file(file, buf, size, offset); > + } > > - aio->aiocb.aio_data = (uint64_t) (uintptr_t) ev; > - aio->aiocb.aio_lio_opcode = IOCB_CMD_PREAD; > - aio->aiocb.aio_fildes = file->fd; > - aio->aiocb.aio_buf = (uint64_t) (uintptr_t) buf; > - aio->aiocb.aio_nbytes = size; > - aio->aiocb.aio_offset = offset; > - aio->aiocb.aio_flags = IOCB_FLAG_RESFD; > - aio->aiocb.aio_resfd = ngx_eventfd; > + if (__builtin_expect(!!(ngx_ring_params.features & IORING_FEAT_CUR_PERSONALITY), 1)) { > + /* > + * `io_uring_prep_read` is faster than `io_uring_prep_readv`, because the kernel > + * doesn't need to import iovecs in advance. > + * > + * If the kernel supports `IORING_FEAT_CUR_PERSONALITY`, it should support > + * non-vectored read/write commands too. > + * > + * It's not perfect, but avoids an extra feature-test syscall. > + */ > + io_uring_prep_read(sqe, file->fd, buf, size, offset); > + } else { > + /* > + * We must store iov into heap to prevent kernel from returning -EFAULT > + * in case `IORING_FEAT_SUBMIT_STABLE` is not supported > + */ > + aio->iov.iov_base = buf; > + aio->iov.iov_len = size; > + io_uring_prep_readv(sqe, file->fd, &aio->iov, 1, offset); > + } > + io_uring_sqe_set_data(sqe, ev); > + > > ev->handler = ngx_file_aio_event_handler; > > - piocb[0] = &aio->aiocb; > - > - if (io_submit(ngx_aio_ctx, 1, piocb) == 1) { > + if (io_uring_submit(&ngx_ring) == 1) { > ev->active = 1; > ev->ready = 0; > ev->complete = 0; > diff -r 82228f955153 -r 3677cf19b98b src/os/unix/ngx_linux_config.h > --- a/src/os/unix/ngx_linux_config.h Tue Dec 15 17:41:39 2020 +0300 > +++ b/src/os/unix/ngx_linux_config.h Mon Jan 11 08:07:14 2021 -0500 > @@ -93,10 +93,6 @@ > #include > #endif > #include > -#if (NGX_HAVE_FILE_AIO) > -#include > -typedef struct iocb ngx_aiocb_t; > -#endif > > > #if (NGX_HAVE_CAPABILITIES) > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From hans at guardianproject.info Wed Jan 13 09:27:42 2021 From: hans at guardianproject.info (Hans-Christoph Steiner) Date: Wed, 13 Jan 2021 10:27:42 +0100 Subject: [PATCH] conf/nginx.conf: add example "privacy" log_format Message-ID: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info> # HG changeset patch # User Hans-Christoph Steiner # Date 1609333908 -3600 # Wed Dec 30 14:11:48 2020 +0100 # Node ID 0e6fb2161806a4c4e3df54e2ed6523aca7c70e23 # Parent 82228f955153527fba12211f52bf102c90f38dfb conf/nginx.conf: add example "privacy" log_format The standard log_formats store detailed information which falls under data regulations like the EU's GDPR and California's CCPA. This merge request adds a suggested "privacy" log_format that generates logs that cannot be used to identify users. This has been developed and used by Tor Project, Guardian Project, and F-Droid. 
* https://guardianproject.info/2017/06/08/tracking-usage-without-tracking-people
* https://gitweb.torproject.org/webstats.git/tree/src/sanitize.py
* https://f-droid.org/2019/04/15/privacy-preserving-analytics.html

diff -r 82228f955153 -r 0e6fb2161806 conf/nginx.conf
--- a/conf/nginx.conf	Tue Dec 15 17:41:39 2020 +0300
+++ b/conf/nginx.conf	Wed Dec 30 14:11:48 2020 +0100
@@ -21,6 +21,8 @@
     #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
     #                  '$status $body_bytes_sent "$http_referer" '
     #                  '"$http_user_agent" "$http_x_forwarded_for"';
+    #log_format  privacy  '0.0.0.0 - - [$time_local] "$request" '
+    #                  '$status $body_bytes_sent "$http_referer" "-"';

     #access_log  logs/access.log  main;

-- 
PGP fingerprint: EE66 20C7 136B 0D2C 456C 0A4D E9E2 8DEA 00AA 5556
https://pgp.mit.edu/pks/lookup?op=vindex&search=0xE9E28DEA00AA5556

From hans at guardianproject.info Wed Jan 13 11:06:14 2021
From: hans at guardianproject.info (Hans-Christoph Steiner)
Date: Wed, 13 Jan 2021 12:06:14 +0100
Subject: [PATCH] conf/nginx.conf: add example "privacy" log_format
In-Reply-To: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info>
References: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info>
Message-ID: 

Quick update: I now realize that this proposed format matches the Apache standard format, but the nginx "main" format is different in that it has one extra column for "$http_x_forwarded_for". I updated the patch to make the "privacy" format have the same number of columns as the "main" format. This makes it possible to freely switch between "main" and "privacy" and the logs will retain the same columns/format. The downside is that this means nginx's "privacy" format is not strictly compatible with Apache's "privacy" format.

# HG changeset patch
# User Hans-Christoph Steiner 
# Date 1609333908 -3600
#      Wed Jan 13 14:11:48 2021 +0100
# Node ID 0e6fb2161806a4c4e3df54e2ed6523aca7c70e23
# Parent  82228f955153527fba12211f52bf102c90f38dfb
conf/nginx.conf: add example "privacy" log_format

The standard log_formats store detailed information which falls under
data regulations like the EU's GDPR and California's CCPA. This merge
request adds a suggested "privacy" log_format that generates logs that
cannot be used to identify users. This has been developed and used by
Tor Project, Guardian Project, and F-Droid.
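A usage sketch for context (not part of the patch; the directives are standard nginx, while the paths and server block are illustrative):

    http {
        log_format  privacy  '0.0.0.0 - - [$time_local] "$request" '
                             '$status $body_bytes_sent "$http_referer" '
                             '"-" "-"';

        server {
            listen      80;
            access_log  logs/access.log  privacy;
        }
    }

Lines logged this way keep hit counts, status codes, and referers usable for analytics, while the client address, user, user agent, and X-Forwarded-For columns are blanked.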
* https://guardianproject.info/2017/06/08/tracking-usage-without-tracking-people
* https://gitweb.torproject.org/webstats.git/tree/src/sanitize.py
* https://f-droid.org/2019/04/15/privacy-preserving-analytics.html

diff -r 82228f955153 -r 0e6fb2161806 conf/nginx.conf
--- a/conf/nginx.conf	Tue Dec 15 17:41:39 2020 +0300
+++ b/conf/nginx.conf	Wed Dec 30 14:11:48 2020 +0100
@@ -21,6 +21,9 @@
     #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
     #                  '$status $body_bytes_sent "$http_referer" '
     #                  '"$http_user_agent" "$http_x_forwarded_for"';
+    #log_format  privacy  '0.0.0.0 - - [$time_local] "$request" '
+    #                  '$status $body_bytes_sent "$http_referer" '
+    #                  '"-" "-"';

     #access_log  logs/access.log  main;

From anton at sijanec.eu Wed Jan 13 11:37:36 2021
From: anton at sijanec.eu (Anton Luka =?UTF-8?Q?=C5=A0ijanec?=)
Date: Wed, 13 Jan 2021 12:37:36 +0100
Subject: [PATCH] conf/nginx.conf: add example "privacy" log_format
In-Reply-To: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info>
References: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info>
Message-ID: <20210113123736.b8b3aa9793d50608ca572bfd@sijanec.eu>

Hans-Christoph Steiner @ Wed, 13 Jan 2021 10:27:42 +0100:
> The standard log_formats store detailed information which falls under
> data regulations like the EU's GDPR and California's CCPA. This merge
> request adds a suggested "privacy" log_format that generates logs that
> cannot be used to identify users. This has been developed and used by
> Tor Project, Guardian Project, and F-Droid.

IANAL, so: Are there any exceptions in EU's GDPR that allow short-stored logs of user-identifiable information? That would seem useful, as *some* logging is useful when detecting and reporting fraudulent activities and for detecting spam. Logs are rotated and are sometimes useful when a data breach happens.

I've also seen some examples of ISPs having to store info, that would be classified as user data, for 6 months for detecting illegal activities. See [1].

Again, IANAL, but [0] describes some allowances regarding log data. I agree with adding the privacy option, but is that really a must when dealing with EU customers?

Regards!

[0] https://www.termsfeed.com/blog/gdpr-log-data/#Storage_Limitation
[1] https://en.wikipedia.org/wiki/Data_retention#European_Union

From hans at guardianproject.info Wed Jan 13 11:50:31 2021
From: hans at guardianproject.info (Hans-Christoph Steiner)
Date: Wed, 13 Jan 2021 12:50:31 +0100
Subject: [PATCH] conf/nginx.conf: add example "privacy" log_format
In-Reply-To: <20210113123736.b8b3aa9793d50608ca572bfd@sijanec.eu>
References: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info>
 <20210113123736.b8b3aa9793d50608ca572bfd@sijanec.eu>
Message-ID: 

Anton Luka Šijanec:
> Hans-Christoph Steiner @ Wed, 13 Jan 2021 10:27:42 +0100:
>> The standard log_formats store detailed information which falls under
>> data regulations like the EU's GDPR and California's CCPA. This merge
>> request adds a suggested "privacy" log_format that generates logs that
>> cannot be used to identify users. This has been developed and used by
>> Tor Project, Guardian Project, and F-Droid.
>
> IANAL, so: Are there any exceptions in EU's GDPR that allow short-stored logs of user-identifiable information? That would seem useful, as *some* logging is useful when detecting and reporting fraudulent activities and for detecting spam. Logs are rotated and are sometimes useful when a data breach happens.
> > I've also seen some examples of ISPs having to store info, that would be classified as user data, for 6 months for detecting illegal activities. See [1]. > > Again, IANAL, but [0] describes some allowances regarding log data. I agree with adding the privacy option, but is that really a must when dealing with EU customers? Both GDPR and CCPA allow log data to be gathered, stored, and used. Those are regulated though, that means they must be considered when a user requests you give them their data, to delete all references to a user, etc. You must also consider the legal definition of "for no longer than is necessary for the purposes for which the personal data are processed" in the context of your business activities and data you're gathering. These are all non-trivial. The goal of the "privacy" log mode is to guarantee that the log files do not fall under GPDR/CCPA regulation, but still provide useful information. Then those log files can remain outside of GDPR/CCPA reviews. IANAL, I am a researcher focused on privacy and metadata. Those log files do not contain Personally Identifying Information (PII) and also do not contain enough info to identify someone. They might contain enough data to identify someone in combination with other large data sets, like all of a user's browsing data. .hc -- PGP fingerprint: EE66 20C7 136B 0D2C 456C 0A4D E9E2 8DEA 00AA 5556 https://pgp.mit.edu/pks/lookup?op=vindex&search=0xE9E28DEA00AA5556 From mdounin at mdounin.ru Wed Jan 13 17:47:03 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jan 2021 20:47:03 +0300 Subject: [PATCH] conf/nginx.conf: add example "privacy" log_format In-Reply-To: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info> References: <5c73eebe-eda5-c223-8d4f-791fa2d0c813@guardianproject.info> Message-ID: <20210113174703.GE1147@mdounin.ru> Hello! On Wed, Jan 13, 2021 at 10:27:42AM +0100, Hans-Christoph Steiner wrote: > # HG changeset patch > # User Hans-Christoph Steiner > # Date 1609333908 -3600 > # Wed Dec 30 14:11:48 2020 +0100 > # Node ID 0e6fb2161806a4c4e3df54e2ed6523aca7c70e23 > # Parent 82228f955153527fba12211f52bf102c90f38dfb > conf/nginx.conf: add example "privacy" log_format > > The standard log_formats store detailed information which falls under > data regulations like the EU's GDPR and California's CCPA. This merge > request adds a suggested "privacy" log_format that generates logs that > cannot be used to identify users. This has been developed and used by > Tor Project, Guardian Project, and F-Droid. > > * > https://guardianproject.info/2017/06/08/tracking-usage-without-tracking-people > * https://gitweb.torproject.org/webstats.git/tree/src/sanitize.py > * https://f-droid.org/2019/04/15/privacy-preserving-analytics.html > > diff -r 82228f955153 -r 0e6fb2161806 conf/nginx.conf > --- a/conf/nginx.conf Tue Dec 15 17:41:39 2020 +0300 > +++ b/conf/nginx.conf Wed Dec 30 14:11:48 2020 +0100 > @@ -21,6 +21,8 @@ > #log_format main '$remote_addr - $remote_user [$time_local] > "$request" ' > # '$status $body_bytes_sent "$http_referer" ' > # '"$http_user_agent" "$http_x_forwarded_for"'; > + #log_format privacy '0.0.0.0 - - [$time_local] "$request" ' > + # '$status $body_bytes_sent "$http_referer" "-"'; > > #access_log logs/access.log main; Thank you for your suggestion. It is believed that existing examples on how to configure log format are enough. 
-- 
Maxim Dounin
http://mdounin.ru/

From ping.zhao at intel.com Thu Jan 14 05:53:17 2021
From: ping.zhao at intel.com (Zhao, Ping)
Date: Thu, 14 Jan 2021 05:53:17 +0000
Subject: [PATCH] Add io_uring support in AIO(async io) module
In-Reply-To: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com>
References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com>
Message-ID: 

# HG changeset patch
# User Ping Zhao 
# Date 1610554205 18000
#      Wed Jan 13 11:10:05 2021 -0500
# Node ID 95886c3353dc80a3da215027c1e0f2141e47e911
# Parent  b055bb6ef87e49232a7fcb4e5334b8efda3b6499
Add io_uring support in AIO(async io) module.

Hello,

This is a patch to support io_uring in the AIO (async io) module. Basically you don't need to change your configuration. If you're using a new kernel (v5.1 or above) which supports io_uring, and you have "aio on" in your configuration, Nginx will use io_uring for FILE_AIO access, which can achieve a performance improvement over legacy libaio.

Checked with iostat, which shows nvme disk io has a 30%+ performance improvement with 1 thread. Tested with wrk with 100 threads, 200 connections (-t 100 -c 200) and 25000 random requests.

iostat(B/s)
libaio    ~1.0 GB/s
io_uring  1.3+ GB/s

diff -r b055bb6ef87e -r 95886c3353dc auto/unix
--- a/auto/unix	Mon Jan 11 22:06:27 2021 +0300
+++ b/auto/unix	Wed Jan 13 11:10:05 2021 -0500
@@ -531,6 +531,30 @@
     fi

     if [ $ngx_found = no ]; then

+        ngx_feature="Linux AIO support(IO_URING)"
+        ngx_feature_name="NGX_HAVE_FILE_AIO"
+        ngx_feature_incs="#include "
+        ngx_feature_path=
+        ngx_feature_libs="-luring"
+        ngx_feature_test="struct io_uring ring;
+                          struct io_uring_params params;
+                          int ret;
+                          memset(&params, 0, sizeof(params));
+                          ret = io_uring_queue_init_params(64, &ring, &params);
+                          if (ret < 0) return 1;
+                          if (!(params.features & IORING_FEAT_FAST_POLL)) return 1;
+                          io_uring_queue_exit(&ring)"
+        . auto/feature
+
+        if [ $ngx_found = yes ]; then
+            have=NGX_HAVE_EVENTFD . auto/have
+            have=NGX_HAVE_FILE_IOURING . 
auto/have + CORE_LIBS="$CORE_LIBS -luring" + CORE_SRCS="$CORE_SRCS $LINUX_AIO_SRCS" + fi + fi + + if [ $ngx_found = no ]; then ngx_feature="Linux AIO support" ngx_feature_name="NGX_HAVE_FILE_AIO" diff -r b055bb6ef87e -r 95886c3353dc src/core/ngx_output_chain.c --- a/src/core/ngx_output_chain.c Mon Jan 11 22:06:27 2021 +0300 +++ b/src/core/ngx_output_chain.c Wed Jan 13 11:10:05 2021 -0500 @@ -589,6 +589,20 @@ if (ctx->aio_handler) { n = ngx_file_aio_read(src->file, dst->pos, (size_t) size, src->file_pos, ctx->pool); +#if (NGX_HAVE_FILE_IOURING) + if (n > 0 && n < size) { + ngx_log_error(NGX_LOG_INFO, ctx->pool->log, 0, + ngx_read_file_n " Try again, only read %z of %O from \"%s\"", + n, size, src->file->name.data); + + src->file_pos += n; + dst->last += n; + + n = ngx_file_aio_read(src->file, dst->pos+n, (size_t) size-n, + src->file_pos, ctx->pool); + + } +#endif if (n == NGX_AGAIN) { ctx->aio_handler(ctx, src->file); return NGX_AGAIN; diff -r b055bb6ef87e -r 95886c3353dc src/event/modules/ngx_epoll_module.c --- a/src/event/modules/ngx_epoll_module.c Mon Jan 11 22:06:27 2021 +0300 +++ b/src/event/modules/ngx_epoll_module.c Wed Jan 13 11:10:05 2021 -0500 @@ -9,6 +9,9 @@ #include #include +#if (NGX_HAVE_FILE_IOURING) +#include +#endif #if (NGX_TEST_BUILD_EPOLL) @@ -77,6 +80,9 @@ #if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_IOURING) +#else + #define SYS_io_setup 245 #define SYS_io_destroy 246 #define SYS_io_getevents 247 @@ -89,9 +95,9 @@ int64_t res; /* result code for this event */ int64_t res2; /* secondary result */ }; - +#endif /* NGX_HAVE_FILE_IOURING */ +#endif /* NGX_HAVE_FILE_AIO */ -#endif #endif /* NGX_TEST_BUILD_EPOLL */ @@ -124,8 +130,25 @@ ngx_uint_t flags); #if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_IOURING) +static void ngx_epoll_io_uring_handler(ngx_event_t *ev); + +struct io_uring ngx_ring; +struct io_uring_params ngx_ring_params; + +static ngx_event_t ngx_ring_event; +static ngx_connection_t ngx_ring_conn; + +#else static void ngx_epoll_eventfd_handler(ngx_event_t *ev); -#endif + +int ngx_eventfd = -1; +aio_context_t ngx_aio_ctx = 0; + +static ngx_event_t ngx_eventfd_event; +static ngx_connection_t ngx_eventfd_conn; +#endif /* NGX_HAVE_FILE_IOURING */ +#endif /* NGX_HAVE_FILE_AIO */ static void *ngx_epoll_create_conf(ngx_cycle_t *cycle); static char *ngx_epoll_init_conf(ngx_cycle_t *cycle, void *conf); @@ -140,16 +163,6 @@ static ngx_connection_t notify_conn; #endif -#if (NGX_HAVE_FILE_AIO) - -int ngx_eventfd = -1; -aio_context_t ngx_aio_ctx = 0; - -static ngx_event_t ngx_eventfd_event; -static ngx_connection_t ngx_eventfd_conn; - -#endif - #if (NGX_HAVE_EPOLLRDHUP) ngx_uint_t ngx_use_epoll_rdhup; #endif @@ -217,6 +230,47 @@ #if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_IOURING) + +static void +ngx_epoll_aio_init(ngx_cycle_t *cycle, ngx_epoll_conf_t *epcf) +{ + struct epoll_event ee; + + if (io_uring_queue_init_params(32763, &ngx_ring, &ngx_ring_params) < 0) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "io_uring_queue_init_params() failed"); + goto failed; + } + + ngx_ring_event.data = &ngx_ring_conn; + ngx_ring_event.handler = ngx_epoll_io_uring_handler; + ngx_ring_event.log = cycle->log; + ngx_ring_event.active = 1; + ngx_ring_conn.fd = ngx_ring.ring_fd; + ngx_ring_conn.read = &ngx_ring_event; + ngx_ring_conn.log = cycle->log; + + ee.events = EPOLLIN|EPOLLET; + ee.data.ptr = &ngx_ring_conn; + + if (epoll_ctl(ep, EPOLL_CTL_ADD, ngx_ring.ring_fd, &ee) != -1) { + return; + } + + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "epoll_ctl(EPOLL_CTL_ADD, 
eventfd) failed"); + + io_uring_queue_exit(&ngx_ring); + +failed: + + ngx_ring.ring_fd = 0; + ngx_file_aio = 0; +} + +#else + /* * We call io_setup(), io_destroy() io_submit(), and io_getevents() directly * as syscalls instead of libaio usage, because the library header file @@ -316,8 +370,8 @@ ngx_file_aio = 0; } -#endif - +#endif /*NGX_HAVE_FILE_IOURING*/ +#endif /*NGX_HAVE_FILE_AIO*/ static ngx_int_t ngx_epoll_init(ngx_cycle_t *cycle, ngx_msec_t timer) @@ -548,6 +602,13 @@ #endif #if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_IOURING) + if (ngx_ring.ring_fd != 0) { + io_uring_queue_exit(&ngx_ring); + ngx_ring.ring_fd = 0; + } + +#else if (ngx_eventfd != -1) { @@ -566,7 +627,8 @@ ngx_aio_ctx = 0; -#endif +#endif /*NGX_HAVE_FILE_IOURING*/ +#endif /*NGX_HAVE_FILE_AIO*/ ngx_free(event_list); @@ -935,8 +997,42 @@ return NGX_OK; } +#if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_IOURING) +static void +ngx_epoll_io_uring_handler(ngx_event_t *ev) +{ + ngx_event_t *e; + struct io_uring_cqe *cqe; + unsigned head; + unsigned cqe_count = 0; + ngx_event_aio_t *aio; -#if (NGX_HAVE_FILE_AIO) + ngx_log_debug(NGX_LOG_DEBUG_EVENT, ev->log, 0, + "io_uring_peek_cqe: START"); + + io_uring_for_each_cqe(&ngx_ring, head, cqe) { + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, + "io_event: %p %d %d", + cqe->user_data, cqe->res, cqe->flags); + + e = (ngx_event_t *) io_uring_cqe_get_data(cqe); + e->complete = 1; + e->active = 0; + e->ready = 1; + + aio = e->data; + aio->res = cqe->res; + + ++cqe_count; + + ngx_post_event(e, &ngx_posted_events); + } + + io_uring_cq_advance(&ngx_ring, cqe_count); +} + +#else static void ngx_epoll_eventfd_handler(ngx_event_t *ev) @@ -1019,8 +1115,8 @@ } } -#endif - +#endif /*NGX_HAVE_FILE_IOURING*/ +#endif /*NGX_HAVE_FILE_AIO*/ static void * ngx_epoll_create_conf(ngx_cycle_t *cycle) diff -r b055bb6ef87e -r 95886c3353dc src/event/ngx_event.h --- a/src/event/ngx_event.h Mon Jan 11 22:06:27 2021 +0300 +++ b/src/event/ngx_event.h Wed Jan 13 11:10:05 2021 -0500 @@ -160,7 +160,11 @@ size_t nbytes; #endif +#if (NGX_HAVE_FILE_IOURING) + struct iovec iov; +#else ngx_aiocb_t aiocb; +#endif ngx_event_t event; }; diff -r b055bb6ef87e -r 95886c3353dc src/os/unix/ngx_linux_aio_read.c --- a/src/os/unix/ngx_linux_aio_read.c Mon Jan 11 22:06:27 2021 +0300 +++ b/src/os/unix/ngx_linux_aio_read.c Wed Jan 13 11:10:05 2021 -0500 @@ -9,20 +9,24 @@ #include #include +#if (NGX_HAVE_FILE_IOURING) +#include +extern struct io_uring ngx_ring; +extern struct io_uring_params ngx_ring_params; + +#else extern int ngx_eventfd; extern aio_context_t ngx_aio_ctx; - -static void ngx_file_aio_event_handler(ngx_event_t *ev); - - static int io_submit(aio_context_t ctx, long n, struct iocb **paiocb) { return syscall(SYS_io_submit, ctx, n, paiocb); } +#endif +static void ngx_file_aio_event_handler(ngx_event_t *ev); ngx_int_t ngx_file_aio_init(ngx_file_t *file, ngx_pool_t *pool) @@ -45,7 +49,114 @@ return NGX_OK; } +#if (NGX_HAVE_FILE_IOURING) +ssize_t +ngx_file_aio_read(ngx_file_t *file, u_char *buf, size_t size, off_t offset, + ngx_pool_t *pool) +{ + ngx_err_t err; + ngx_event_t *ev; + ngx_event_aio_t *aio; + struct io_uring_sqe *sqe; + if (!ngx_file_aio) { + return ngx_read_file(file, buf, size, offset); + } + + if (file->aio == NULL && ngx_file_aio_init(file, pool) != NGX_OK) { + return NGX_ERROR; + } + + aio = file->aio; + ev = &aio->event; + + if (!ev->ready) { + ngx_log_error(NGX_LOG_ALERT, file->log, 0, + "second aio post for \"%V\"", &file->name); + return NGX_AGAIN; + } + + ngx_log_debug4(NGX_LOG_DEBUG_CORE, file->log, 0, 
+ "aio complete:%d @%O:%uz %V", + ev->complete, offset, size, &file->name); + + if (ev->complete) { + ev->active = 0; + ev->complete = 0; + + if (aio->res >= 0) { + ngx_set_errno(0); + return aio->res; + } + + ngx_set_errno(-aio->res); + + ngx_log_error(NGX_LOG_CRIT, file->log, ngx_errno, + "aio read \"%s\" failed", file->name.data); + + return NGX_ERROR; + } + + sqe = io_uring_get_sqe(&ngx_ring); + + if (!sqe) { + ngx_log_debug4(NGX_LOG_DEBUG_CORE, file->log, 0, + "aio no sqe left:%d @%O:%uz %V", + ev->complete, offset, size, &file->name); + return ngx_read_file(file, buf, size, offset); + } + + if (__builtin_expect(!!(ngx_ring_params.features & IORING_FEAT_CUR_PERSONALITY), 1)) { + /* + * `io_uring_prep_read` is faster than `io_uring_prep_readv`, because the kernel + * doesn't need to import iovecs in advance. + * + * If the kernel supports `IORING_FEAT_CUR_PERSONALITY`, it should support + * non-vectored read/write commands too. + * + * It's not perfect, but avoids an extra feature-test syscall. + */ + io_uring_prep_read(sqe, file->fd, buf, size, offset); + } else { + /* + * We must store iov into heap to prevent kernel from returning -EFAULT + * in case `IORING_FEAT_SUBMIT_STABLE` is not supported + */ + aio->iov.iov_base = buf; + aio->iov.iov_len = size; + io_uring_prep_readv(sqe, file->fd, &aio->iov, 1, offset); + } + io_uring_sqe_set_data(sqe, ev); + + + ev->handler = ngx_file_aio_event_handler; + + if (io_uring_submit(&ngx_ring) == 1) { + ev->active = 1; + ev->ready = 0; + ev->complete = 0; + + return NGX_AGAIN; + } + + err = ngx_errno; + + if (err == NGX_EAGAIN) { + return ngx_read_file(file, buf, size, offset); + } + + ngx_log_error(NGX_LOG_CRIT, file->log, err, + "io_submit(\"%V\") failed", &file->name); + + if (err == NGX_ENOSYS) { + ngx_file_aio = 0; + return ngx_read_file(file, buf, size, offset); + } + + return NGX_ERROR; +} + +#else ssize_t ngx_file_aio_read(ngx_file_t *file, u_char *buf, size_t size, off_t offset, ngx_pool_t *pool) @@ -132,7 +243,7 @@ return NGX_ERROR; } - +#endif static void ngx_file_aio_event_handler(ngx_event_t *ev) diff -r b055bb6ef87e -r 95886c3353dc src/os/unix/ngx_linux_config.h --- a/src/os/unix/ngx_linux_config.h Mon Jan 11 22:06:27 2021 +0300 +++ b/src/os/unix/ngx_linux_config.h Wed Jan 13 11:10:05 2021 -0500 @@ -93,11 +93,15 @@ #include #endif #include + #if (NGX_HAVE_FILE_AIO) +#if (NGX_HAVE_FILE_IOURING) + +#else #include typedef struct iocb ngx_aiocb_t; #endif - +#endif #if (NGX_HAVE_CAPABILITIES) #include From cnewton at netflix.com Thu Jan 14 12:43:22 2021 From: cnewton at netflix.com (Chris Newton) Date: Thu, 14 Jan 2021 12:43:22 +0000 Subject: Remove unnecessary check in ngx_http_stub_status_handler() In-Reply-To: References: Message-ID: any thoughts on this? TIA Chris On Tue, Jan 5, 2021 at 1:24 PM Chris Newton wrote: > > I was desk checking return codes generated in handlers following calls to > ngx_http_send_header(), and noticed what appears to be an unnecessary test > in ngx_http_stub_status_handler() -- or rather, I think the test should > always evaluate as true, and if somehow it isn't odd things could occur - > at least an additional ALERT message would be logged, as well as some > unnecessary work performed. 
> As such, I'd like to propose the following change:
>
> --- a/src/http/modules/ngx_http_stub_status_module.c
> +++ b/src/http/modules/ngx_http_stub_status_module.c
> @@ -106,11 +106,7 @@ ngx_http_stub_status_handler(ngx_http_request_t *r)
>      if (r->method == NGX_HTTP_HEAD) {
>          r->headers_out.status = NGX_HTTP_OK;
>
> -        rc = ngx_http_send_header(r);
> -
> -        if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
> -            return rc;
> -        }
> +        return ngx_http_send_header(r);
>      }
>
>      size = sizeof("Active connections:  \n") + NGX_ATOMIC_T_LEN
>
> On a successful call to ngx_http_send_header() I believe that
> r->header_only will be set true and otherwise I'd expect one of those error
> checks to evaluate true, so unconditionally returning the value from
> ngx_http_send_header() seems 'cleaner'.
>
> If the test were to somehow fail, then processing would fall through and
> try the ngx_http_send_header() call again (resulting in the ALERT message),
> as well as performing other additional work that should be unnecessary when
> making a HEAD request.
>
> That test seems to be SOP after calling ngx_http_send_header(), but it
> seems inappropriate when that function is called within an "r->method ==
> NGX_HTTP_HEAD" block.
>
> TIA
>
> Chris

-- 
Chris Newton
Director Of Engineering | Content Delivery Architecture
M: 805.444.0573 | cnewton at netflix.com
111 Albright Way | Los Gatos, CA 95032
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pluknet at nginx.com Thu Jan 14 16:28:44 2021
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 14 Jan 2021 19:28:44 +0300
Subject: Remove unnecessary check in ngx_http_stub_status_handler()
In-Reply-To: 
References: 
Message-ID: <996A8174-1914-469C-8410-153D01A0A2BB@nginx.com>

> On 5 Jan 2021, at 16:24, Chris Newton wrote:
>
> I was desk checking return codes generated in handlers following calls to ngx_http_send_header(), and noticed what appears to be an unnecessary test in ngx_http_stub_status_handler() -- or rather, I think the test should always evaluate as true, and if somehow it isn't odd things could occur - at least an additional ALERT message would be logged, as well as some unnecessary work performed.
>
> As such, I'd like to propose the following change:
>
> --- a/src/http/modules/ngx_http_stub_status_module.c
> +++ b/src/http/modules/ngx_http_stub_status_module.c
> @@ -106,11 +106,7 @@ ngx_http_stub_status_handler(ngx_http_request_t *r)
>      if (r->method == NGX_HTTP_HEAD) {
>          r->headers_out.status = NGX_HTTP_OK;
>
> -        rc = ngx_http_send_header(r);
> -
> -        if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
> -            return rc;
> -        }
> +        return ngx_http_send_header(r);
>      }
>
>      size = sizeof("Active connections:  \n") + NGX_ATOMIC_T_LEN
>
> On a successful call to ngx_http_send_header() I believe that r->header_only will be set true and otherwise I'd expect one of those error checks to evaluate true, so unconditionally returning the value from ngx_http_send_header() seems 'cleaner'.
>

Your analysis looks correct to me.
Noteworthy, empty_gif got such a change in fc73de3e8df0, for a similar reason.
> If the test were to somehow fail, then processing would fall through and try the ngx_http_send_header() call again (resulting in the ALERT message), as well as performing other additional work that should be unnecessary when making a HEAD request.
>
> That test seems to be SOP after calling ngx_http_send_header(), but it seems inappropriate when that function is called within an "r->method == NGX_HTTP_HEAD" block.

-- 
Sergey Kandaurov

From mail at muradm.net Sat Jan 16 11:38:03 2021
From: mail at muradm.net (Murad Mamedov)
Date: Sat, 16 Jan 2021 14:38:03 +0300
Subject: [PATCH] Mail: added PROXY PROTOCOL support
Message-ID: <20210116113803.v3efrkabkbkecbyl@muradm-aln1>

Hi,

We are trying to use Nginx as a frontend for SMTP and IMAP servers inside a docker swarm cluster. Nginx allows us to validate clients and TLS client certificates confidently. Moreover, we are able to post-validate client certificates with the "auth_http" and "auth_http_pass_client_cert" options. Everything works fine, except for the fact that Nginx itself resides behind a load balancer. The only reliable way to acquire the client and real server addresses is to support the PROXY PROTOCOL header coming from the load balancer.

The patch below adds PROXY PROTOCOL support for both downstream (header received from the load balancer) and upstream (header sent to real servers like Postfix, Exim, Dovecot).

Why this is done:

- while there seems to be support for "XCLIENT" in SMTP, it first works only for the SMTP protocol and not for others, and second it is meaningless when Nginx itself is within a docker swarm container and network mesh. Moreover, even configuring "xclient" for SMTP requires using the "trusted" mode of configuration, in Postfix at least, to accept XCLIENT, which complicates things.

- Mail software like Postfix, Exim, Dovecot (and probably others) support PROXY PROTOCOL out of the box.

How it is done:

- using the existing Nginx core functions "ngx_proxy_protocol_read" and "ngx_proxy_protocol_write".

- if configured, the first thing done is header reading; the remaining processing is untouched. For upstream, if configured, the first thing done is header sending; the remaining processing is untouched.

Tests are passing, and in reply to this email I will be sending another patch with more tests that cover this change.

-- 
muradm

# HG changeset patch
# User muradm
# Date 1610795056 -10800
#      Sat Jan 16 14:04:16 2021 +0300
# Node ID 5f6f4a627b889e5c6438601d8ff07a940f12df3c
# Parent  83c4622053b02821a12d522d08eaff3ac27e65e3
Mail: added PROXY PROTOCOL support.

This implements proxy protocol support for both upstream and downstream.

Downstream proxy protocol support:

mail {
    server {
        listen [ssl] proxy_protocol;
        protocol ;
    }
}

This will properly handle incoming connections from a load balancer sending the PROXY protocol header. Without this, it is impossible to run the nginx mail proxy behind such a balancer. Header reading is done with the existing function "ngx_proxy_protocol_read", so it should support both v1 and v2 headers. This will also set the "sockaddr" and "local_sockaddr" addresses from the received header, mimicking "set_realip". While "realip_module" deals with variables etc., which is necessary for the HTTP protocol, mail protocols are pretty strict, so there is no need for flexible handling of the real addresses received.

Upstream proxy protocol support:

mail {
    server {
        listen [ssl];
        protocol ;
        proxy_protocol on;
    }
}

With this, the upstream server (like Postfix, Exim, Dovecot) will receive the PROXY protocol header. The mentioned programs do support proxy protocol out of the box.
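For reference (an editorial note, not part of the submission): PROXY protocol v1 is a single plain-text line sent before the proxied protocol begins, so with this change an IMAP upstream would first read something like

    PROXY TCP4 203.0.113.5 192.0.2.10 52811 143

followed by CRLF; the addresses and ports above are illustrative.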
Header is written with existing function "ngx_proxy_protocol_write" which supports only v1 header writing. Contents of header are written from "sockaddr" and "local_sockaddr". Downstream and upstream proxy protocol support: mail { server { listen [ssl] proxy_protocol; protocol ; proxy_protocol on; } } This will combine both receiving PROXY header and sending PROXY header. With this, upstream server (like Postfix, Exim, Dovecot) will receive the same header as was sent by downstream load balancer. Above configurations work for SSL as well and should be transparent to other mail related configurations. diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail.c --- a/src/mail/ngx_mail.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.c Sat Jan 16 14:04:16 2021 +0300 @@ -402,6 +402,7 @@ addrs[i].addr = sin->sin_addr.s_addr; addrs[i].conf.ctx = addr[i].opt.ctx; + addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs[i].conf.ssl = addr[i].opt.ssl; #endif @@ -436,6 +437,7 @@ addrs6[i].addr6 = sin6->sin6_addr; addrs6[i].conf.ctx = addr[i].opt.ctx; + addrs6[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs6[i].conf.ssl = addr[i].opt.ssl; #endif diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.h Sat Jan 16 14:04:16 2021 +0300 @@ -37,6 +37,7 @@ unsigned bind:1; unsigned wildcard:1; unsigned ssl:1; + unsigned proxy_protocol:1; #if (NGX_HAVE_INET6) unsigned ipv6only:1; #endif @@ -56,6 +57,7 @@ ngx_mail_conf_ctx_t *ctx; ngx_str_t addr_text; ngx_uint_t ssl; /* unsigned ssl:1; */ + unsigned proxy_protocol:1; } ngx_mail_addr_conf_t; typedef struct { @@ -190,6 +192,7 @@ void **ctx; void **main_conf; void **srv_conf; + ngx_mail_addr_conf_t *addr_conf; ngx_resolver_ctx_t *resolver_ctx; @@ -197,6 +200,7 @@ ngx_uint_t mail_state; + unsigned proxy_protocol:1; unsigned protocol:3; unsigned blocked:1; unsigned quit:1; diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail_core_module.c --- a/src/mail/ngx_mail_core_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_core_module.c Sat Jan 16 14:04:16 2021 +0300 @@ -548,6 +548,11 @@ #endif } + if (ngx_strcmp(value[i].data, "proxy_protocol") == 0) { + ls->proxy_protocol = 1; + continue; + } + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "the invalid \"%V\" parameter", &value[i]); return NGX_CONF_ERROR; diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_handler.c Sat Jan 16 14:04:16 2021 +0300 @@ -12,6 +12,8 @@ static void ngx_mail_init_session(ngx_connection_t *c); +static void ngx_mail_init_connection_complete(ngx_connection_t *c); +static void ngx_mail_proxy_protocol_handler(ngx_event_t *rev); #if (NGX_MAIL_SSL) static void ngx_mail_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c); @@ -128,6 +130,7 @@ s->main_conf = addr_conf->ctx->main_conf; s->srv_conf = addr_conf->ctx->srv_conf; + s->addr_conf = addr_conf; s->addr_text = &addr_conf->addr_text; @@ -159,13 +162,161 @@ c->log_error = NGX_ERROR_INFO; + /* + * Before all process proxy protocol + */ + + if (addr_conf->proxy_protocol) { + s->proxy_protocol = 1; + c->log->action = "reading PROXY protocol header"; + c->read->handler = ngx_mail_proxy_protocol_handler; + + ngx_add_timer(c->read, cscf->timeout); + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + ngx_mail_close_connection(c); + } + + return; + } + + ngx_mail_init_connection_complete(c); 
+} + + +ngx_int_t +ngx_mail_proxy_protoco_set_addrs(ngx_connection_t *c) +{ + ngx_addr_t addr_peer, addr_local; + u_char *p, text[NGX_SOCKADDR_STRLEN]; + size_t len; + + if (ngx_parse_addr(c->pool, &addr_peer, + c->proxy_protocol->src_addr.data, + c->proxy_protocol->src_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + ngx_inet_set_port(addr_peer.sockaddr, c->proxy_protocol->src_port); + + if (ngx_parse_addr(c->pool, &addr_local, + c->proxy_protocol->dst_addr.data, + c->proxy_protocol->dst_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + ngx_inet_set_port(addr_local.sockaddr, c->proxy_protocol->dst_port); + + len = ngx_sock_ntop(addr_peer.sockaddr, addr_peer.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->sockaddr = addr_peer.sockaddr; + c->socklen = addr_peer.socklen; + c->addr_text.len = len; + c->addr_text.data = p; + + len = ngx_sock_ntop(addr_local.sockaddr, addr_local.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->local_sockaddr = addr_local.sockaddr; + c->local_socklen = addr_local.socklen; + + return NGX_OK; +} + + +void +ngx_mail_proxy_protocol_handler(ngx_event_t *rev) +{ + ngx_connection_t *c; + u_char *p, buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + size_t size; + ssize_t n; + + c = rev->data; + + if (rev->timedout) { + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, + "mail PROXY protocol header timed out"); + c->timedout = 1; + ngx_mail_close_connection(c); + return; + } + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail PROXY protocol handler"); + + size = NGX_PROXY_PROTOCOL_MAX_HEADER; + + n = recv(c->fd, (char *) buf, size, MSG_PEEK); + + ngx_log_debug1(NGX_LOG_DEBUG, c->log, 0, "mail recv(): %z", n); + + p = ngx_proxy_protocol_read(c, buf, buf + n); + + if (p == NULL) { + ngx_mail_close_connection(c); + return; + } + + ngx_log_error(NGX_LOG_NOTICE, c->log, 0, + "PROXY protocol %V:%d => %V:%d", + &c->proxy_protocol->src_addr, + c->proxy_protocol->src_port, + &c->proxy_protocol->dst_addr, + c->proxy_protocol->dst_port); + + size = p - buf; + + if (c->recv(c, buf, size) != (ssize_t) size) { + ngx_mail_close_connection(c); + return; + } + + if (ngx_mail_proxy_protoco_set_addrs(c) != NGX_OK) { + ngx_mail_close_connection(c); + return; + } + + ngx_mail_init_connection_complete(c); +} + + +void +ngx_mail_init_connection_complete(ngx_connection_t *c) +{ #if (NGX_MAIL_SSL) { - ngx_mail_ssl_conf_t *sslcf; + ngx_mail_session_t *s; + ngx_mail_ssl_conf_t *sslcf; + + s = c->data; sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); - if (sslcf->enable || addr_conf->ssl) { + if (sslcf->enable || s->addr_conf->ssl) { c->log->action = "SSL handshaking"; ngx_mail_ssl_init_connection(&sslcf->ssl, c); @@ -348,6 +499,7 @@ return; } + c->log->action = "sending client greeting line"; c->write->handler = ngx_mail_send; cscf->protocol->init_session(s, c); diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_proxy_module.c Sat Jan 16 14:04:16 2021 +0300 @@ -19,6 +19,7 @@ ngx_flag_t smtp_auth; size_t buffer_size; ngx_msec_t timeout; + ngx_flag_t proxy_protocol; } ngx_mail_proxy_conf_t; @@ -36,7 +37,7 @@ static void *ngx_mail_proxy_create_conf(ngx_conf_t *cf); static char 
*ngx_mail_proxy_merge_conf(ngx_conf_t *cf, void *parent, void *child); - +static ngx_int_t ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s); static ngx_command_t ngx_mail_proxy_commands[] = { @@ -82,6 +83,13 @@ offsetof(ngx_mail_proxy_conf_t, smtp_auth), NULL }, + { ngx_string("proxy_protocol"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_proxy_conf_t, proxy_protocol), + NULL }, + ngx_null_command }; @@ -169,6 +177,12 @@ s->out.len = 0; + if (pcf->proxy_protocol == 1) { + if (ngx_mail_proxy_send_proxy_protocol(s) != NGX_OK) { + ngx_mail_proxy_internal_server_error(s); + } + } + switch (s->protocol) { case NGX_MAIL_POP3_PROTOCOL: @@ -189,6 +203,60 @@ } +ngx_int_t +ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s) +{ + u_char *p; + ssize_t n, size; + ngx_connection_t *c, *pc; + ngx_peer_connection_t *u; + u_char buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + + c = s->connection; + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail proxy send PROXY protocol header"); + + p = ngx_proxy_protocol_write(c, buf, buf + NGX_PROXY_PROTOCOL_MAX_HEADER); + if (p == NULL) { + ngx_mail_proxy_internal_server_error(s); + return NGX_ERROR; + } + + u = &s->proxy->upstream; + + pc = u->connection; + + size = p - buf; + + n = pc->send(pc, buf, size); + + if (n < NGX_OK) { + ngx_mail_proxy_internal_server_error(s); + return NGX_ERROR; + } + + if (n != size) { + + /* + * PROXY protocol specification: + * The sender must always ensure that the header + * is sent at once, so that the transport layer + * maintains atomicity along the path to the receiver. + */ + + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "could not send PROXY protocol header at once"); + + ngx_mail_proxy_internal_server_error(s); + + return NGX_ERROR; + } + + return NGX_OK; +} + + static void ngx_mail_proxy_block_read(ngx_event_t *rev) { @@ -1184,6 +1252,7 @@ pcf->smtp_auth = NGX_CONF_UNSET; pcf->buffer_size = NGX_CONF_UNSET_SIZE; pcf->timeout = NGX_CONF_UNSET_MSEC; + pcf->proxy_protocol = NGX_CONF_UNSET; return pcf; }

From mail at muradm.net Sat Jan 16 11:56:16 2021 From: mail at muradm.net (=?iso-8859-1?q?muradm?=) Date: Sat, 16 Jan 2021 14:56:16 +0300 Subject: [PATCH] Mail: added PROXY PROTOCOL support Message-ID: <5f6f4a627b889e5c6438.1610798176@muradm-aln1>

# HG changeset patch # User muradm # Date 1610795056 -10800 # Sat Jan 16 14:04:16 2021 +0300 # Node ID 5f6f4a627b889e5c6438601d8ff07a940f12df3c # Parent 83c4622053b02821a12d522d08eaff3ac27e65e3 Mail: added PROXY PROTOCOL support. This implements proxy protocol support for both upstream and downstream. Downstream proxy protocol support: mail { server { listen [ssl] proxy_protocol; protocol ; } } This will properly handle incoming connections from load balancer sending PROXY protocol header. Without this, it is impossible to run nginx mail proxy behind such balancer. Header reading is done with existing function "ngx_proxy_protocol_read", so it should support both v1 and v2 headers. This will also set "sockaddr" and "local_sockaddr" addresses from received header, mimicking "set_realip". While "realip_module" deals with variables etc., which is necessary for HTTP protocol, mail protocols are pretty strict, so there is no need for flexible handling of real addresses received. Upstream proxy protocol support: mail { server { listen [ssl]; protocol ; proxy_protocol on; } } With this, upstream server (like Postfix, Exim, Dovecot) will have PROXY protocol header.
Mentioned programs do support proxy protocol out of the box. Header is written with existing function "ngx_proxy_protocol_write" which supports only v1 header writing. Contents of header are written from "sockaddr" and "local_sockaddr". Downstream and upstream proxy protocol support: mail { server { listen [ssl] proxy_protocol; protocol ; proxy_protocol on; } } This will combine both receiving PROXY header and sending PROXY header. With this, upstream server (like Postfix, Exim, Dovecot) will receive the same header as was sent by downstream load balancer. Above configurations work for SSL as well and should be transparent to other mail related configurations. diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail.c --- a/src/mail/ngx_mail.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.c Sat Jan 16 14:04:16 2021 +0300 @@ -402,6 +402,7 @@ addrs[i].addr = sin->sin_addr.s_addr; addrs[i].conf.ctx = addr[i].opt.ctx; + addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs[i].conf.ssl = addr[i].opt.ssl; #endif @@ -436,6 +437,7 @@ addrs6[i].addr6 = sin6->sin6_addr; addrs6[i].conf.ctx = addr[i].opt.ctx; + addrs6[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs6[i].conf.ssl = addr[i].opt.ssl; #endif diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.h Sat Jan 16 14:04:16 2021 +0300 @@ -37,6 +37,7 @@ unsigned bind:1; unsigned wildcard:1; unsigned ssl:1; + unsigned proxy_protocol:1; #if (NGX_HAVE_INET6) unsigned ipv6only:1; #endif @@ -56,6 +57,7 @@ ngx_mail_conf_ctx_t *ctx; ngx_str_t addr_text; ngx_uint_t ssl; /* unsigned ssl:1; */ + unsigned proxy_protocol:1; } ngx_mail_addr_conf_t; typedef struct { @@ -190,6 +192,7 @@ void **ctx; void **main_conf; void **srv_conf; + ngx_mail_addr_conf_t *addr_conf; ngx_resolver_ctx_t *resolver_ctx; @@ -197,6 +200,7 @@ ngx_uint_t mail_state; + unsigned proxy_protocol:1; unsigned protocol:3; unsigned blocked:1; unsigned quit:1; diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail_core_module.c --- a/src/mail/ngx_mail_core_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_core_module.c Sat Jan 16 14:04:16 2021 +0300 @@ -548,6 +548,11 @@ #endif } + if (ngx_strcmp(value[i].data, "proxy_protocol") == 0) { + ls->proxy_protocol = 1; + continue; + } + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "the invalid \"%V\" parameter", &value[i]); return NGX_CONF_ERROR; diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_handler.c Sat Jan 16 14:04:16 2021 +0300 @@ -12,6 +12,8 @@ static void ngx_mail_init_session(ngx_connection_t *c); +static void ngx_mail_init_connection_complete(ngx_connection_t *c); +static void ngx_mail_proxy_protocol_handler(ngx_event_t *rev); #if (NGX_MAIL_SSL) static void ngx_mail_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c); @@ -128,6 +130,7 @@ s->main_conf = addr_conf->ctx->main_conf; s->srv_conf = addr_conf->ctx->srv_conf; + s->addr_conf = addr_conf; s->addr_text = &addr_conf->addr_text; @@ -159,13 +162,161 @@ c->log_error = NGX_ERROR_INFO; + /* + * Before all process proxy protocol + */ + + if (addr_conf->proxy_protocol) { + s->proxy_protocol = 1; + c->log->action = "reading PROXY protocol header"; + c->read->handler = ngx_mail_proxy_protocol_handler; + + ngx_add_timer(c->read, cscf->timeout); + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + ngx_mail_close_connection(c); + 
} + + return; + } + + ngx_mail_init_connection_complete(c); +} + + +ngx_int_t +ngx_mail_proxy_protoco_set_addrs(ngx_connection_t *c) +{ + ngx_addr_t addr_peer, addr_local; + u_char *p, text[NGX_SOCKADDR_STRLEN]; + size_t len; + + if (ngx_parse_addr(c->pool, &addr_peer, + c->proxy_protocol->src_addr.data, + c->proxy_protocol->src_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + ngx_inet_set_port(addr_peer.sockaddr, c->proxy_protocol->src_port); + + if (ngx_parse_addr(c->pool, &addr_local, + c->proxy_protocol->dst_addr.data, + c->proxy_protocol->dst_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + ngx_inet_set_port(addr_local.sockaddr, c->proxy_protocol->dst_port); + + len = ngx_sock_ntop(addr_peer.sockaddr, addr_peer.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->sockaddr = addr_peer.sockaddr; + c->socklen = addr_peer.socklen; + c->addr_text.len = len; + c->addr_text.data = p; + + len = ngx_sock_ntop(addr_local.sockaddr, addr_local.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->local_sockaddr = addr_local.sockaddr; + c->local_socklen = addr_local.socklen; + + return NGX_OK; +} + + +void +ngx_mail_proxy_protocol_handler(ngx_event_t *rev) +{ + ngx_connection_t *c; + u_char *p, buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + size_t size; + ssize_t n; + + c = rev->data; + + if (rev->timedout) { + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, + "mail PROXY protocol header timed out"); + c->timedout = 1; + ngx_mail_close_connection(c); + return; + } + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail PROXY protocol handler"); + + size = NGX_PROXY_PROTOCOL_MAX_HEADER; + + n = recv(c->fd, (char *) buf, size, MSG_PEEK); + + ngx_log_debug1(NGX_LOG_DEBUG, c->log, 0, "mail recv(): %z", n); + + p = ngx_proxy_protocol_read(c, buf, buf + n); + + if (p == NULL) { + ngx_mail_close_connection(c); + return; + } + + ngx_log_error(NGX_LOG_NOTICE, c->log, 0, + "PROXY protocol %V:%d => %V:%d", + &c->proxy_protocol->src_addr, + c->proxy_protocol->src_port, + &c->proxy_protocol->dst_addr, + c->proxy_protocol->dst_port); + + size = p - buf; + + if (c->recv(c, buf, size) != (ssize_t) size) { + ngx_mail_close_connection(c); + return; + } + + if (ngx_mail_proxy_protoco_set_addrs(c) != NGX_OK) { + ngx_mail_close_connection(c); + return; + } + + ngx_mail_init_connection_complete(c); +} + + +void +ngx_mail_init_connection_complete(ngx_connection_t *c) +{ #if (NGX_MAIL_SSL) { - ngx_mail_ssl_conf_t *sslcf; + ngx_mail_session_t *s; + ngx_mail_ssl_conf_t *sslcf; + + s = c->data; sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); - if (sslcf->enable || addr_conf->ssl) { + if (sslcf->enable || s->addr_conf->ssl) { c->log->action = "SSL handshaking"; ngx_mail_ssl_init_connection(&sslcf->ssl, c); @@ -348,6 +499,7 @@ return; } + c->log->action = "sending client greeting line"; c->write->handler = ngx_mail_send; cscf->protocol->init_session(s, c); diff -r 83c4622053b0 -r 5f6f4a627b88 src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_proxy_module.c Sat Jan 16 14:04:16 2021 +0300 @@ -19,6 +19,7 @@ ngx_flag_t smtp_auth; size_t buffer_size; ngx_msec_t timeout; + ngx_flag_t proxy_protocol; } ngx_mail_proxy_conf_t; @@ -36,7 +37,7 @@ static void 
*ngx_mail_proxy_create_conf(ngx_conf_t *cf); static char *ngx_mail_proxy_merge_conf(ngx_conf_t *cf, void *parent, void *child); - +static ngx_int_t ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s); static ngx_command_t ngx_mail_proxy_commands[] = { @@ -82,6 +83,13 @@ offsetof(ngx_mail_proxy_conf_t, smtp_auth), NULL }, + { ngx_string("proxy_protocol"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_proxy_conf_t, proxy_protocol), + NULL }, + ngx_null_command }; @@ -169,6 +177,12 @@ s->out.len = 0; + if (pcf->proxy_protocol == 1) { + if (ngx_mail_proxy_send_proxy_protocol(s) != NGX_OK) { + ngx_mail_proxy_internal_server_error(s); + } + } + switch (s->protocol) { case NGX_MAIL_POP3_PROTOCOL: @@ -189,6 +203,60 @@ } +ngx_int_t +ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s) +{ + u_char *p; + ssize_t n, size; + ngx_connection_t *c, *pc; + ngx_peer_connection_t *u; + u_char buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + + c = s->connection; + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail proxy send PROXY protocol header"); + + p = ngx_proxy_protocol_write(c, buf, buf + NGX_PROXY_PROTOCOL_MAX_HEADER); + if (p == NULL) { + ngx_mail_proxy_internal_server_error(s); + return NGX_ERROR; + } + + u = &s->proxy->upstream; + + pc = u->connection; + + size = p - buf; + + n = pc->send(pc, buf, size); + + if (n < NGX_OK) { + ngx_mail_proxy_internal_server_error(s); + return NGX_ERROR; + } + + if (n != size) { + + /* + * PROXY protocol specification: + * The sender must always ensure that the header + * is sent at once, so that the transport layer + * maintains atomicity along the path to the receiver. + */ + + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "could not send PROXY protocol header at once"); + + ngx_mail_proxy_internal_server_error(s); + + return NGX_ERROR; + } + + return NGX_OK; +} + + static void ngx_mail_proxy_block_read(ngx_event_t *rev) { @@ -1184,6 +1252,7 @@ pcf->smtp_auth = NGX_CONF_UNSET; pcf->buffer_size = NGX_CONF_UNSET_SIZE; pcf->timeout = NGX_CONF_UNSET_MSEC; + pcf->proxy_protocol = NGX_CONF_UNSET; return pcf; } From mail at muradm.net Sat Jan 16 11:58:07 2021 From: mail at muradm.net (=?iso-8859-1?q?muradm?=) Date: Sat, 16 Jan 2021 14:58:07 +0300 Subject: [PATCH] Mail: tests for PROXY PROTOCOL support Message-ID: <76e7def783657962088e.1610798287@muradm-aln1> # HG changeset patch # User muradm # Date 1610797338 -10800 # Sat Jan 16 14:42:18 2021 +0300 # Node ID 76e7def783657962088ec2e8346b839cda744efb # Parent 6c323c672a8678b7cff4c0ccc7b303ef2e477f7c Mail: tests for PROXY PROTOCOL support diff -r 6c323c672a86 -r 76e7def78365 mail_proxy_protocol_handle.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/mail_proxy_protocol_handle.t Sat Jan 16 14:42:18 2021 +0300 @@ -0,0 +1,112 @@ +#!/usr/bin/perl + +# Tests for imap/pop3/smtp proxy protocol handling. 
+# Note: testing only v1 protocol here with hope that v2 is tested by core + +############################################################################### + +use warnings; +use strict; + +use Test::More; + +BEGIN { use FindBin; chdir($FindBin::Bin); } + +use lib 'lib'; +use Test::Nginx; +use Test::Nginx::IMAP; +use Test::Nginx::POP3; +use Test::Nginx::SMTP; + +############################################################################### + +select STDERR; $| = 1; +select STDOUT; $| = 1; + +my $t = Test::Nginx->new()->has(qw/mail imap pop3 smtp/)->plan(7); + +$t->write_file_expand('nginx.conf', <<'EOF'); + +%%TEST_GLOBALS%% + +daemon off; + +events { +} + +mail { + auth_http http://127.0.0.1:8080; # unused + + server { + listen 127.0.0.1:8143 proxy_protocol; + protocol imap; + } + + server { + listen 127.0.0.1:8110 proxy_protocol; + protocol pop3; + } + + server { + listen 127.0.0.1:8025 proxy_protocol; + protocol smtp; + } +} + +EOF + +$t->run(); + +############################################################################### + +# imap, proxy protocol handler + +my $s = Test::Nginx::IMAP->new(PeerAddr => '127.0.0.1:' . port(8143)); +$s->send('PROXY TCP4 192.168.1.10 192.168.1.1 18143 8143'); +$s->read(); + +$s->send('1 CAPABILITY'); +$s->check(qr/^\* CAPABILITY IMAP4 IMAP4rev1 UIDPLUS AUTH=PLAIN/, 'imap proxy protocol'); +$s->ok('imap proxy protocol handler'); + +############################################################################### + +# pop3, proxy protocol handler + +$s = Test::Nginx::POP3->new(PeerAddr => '127.0.0.1:' . port(8110)); +$s->send('PROXY TCP4 192.168.1.10 192.168.1.1 18143 8110'); +$s->read(); + +$s->send('CAPA'); +$s->ok('pop3 capa'); + +my $caps = get_auth_caps($s); +like($caps, qr/USER/, 'pop3 - user'); +like($caps, qr/TOP:USER:UIDL:SASL PLAIN LOGIN/, 'pop3 - methods'); +unlike($caps, qr/STLS/, 'pop3 - no stls'); + +############################################################################### + +# smtp, proxy protocol handler + +$s = Test::Nginx::SMTP->new(PeerAddr => '127.0.0.1:' . port(8025)); +$s->send('PROXY TCP4 192.168.1.10 192.168.1.1 18143 8110'); +$s->read(); + +$s->send('EHLO example.com'); +$s->check(qr/^250 AUTH PLAIN LOGIN\x0d\x0a?/, 'smtp ehlo'); + +############################################################################### + +sub get_auth_caps { + my ($s) = @_; + my @meth; + + while ($s->read()) { + last if /^\./; + push @meth, $1 if /(.*?)\x0d\x0a?/ms; + } + join ':', @meth; +} + +############################################################################### diff -r 6c323c672a86 -r 76e7def78365 mail_proxy_protocol_handle_ssl.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/mail_proxy_protocol_handle_ssl.t Sat Jan 16 14:42:18 2021 +0300 @@ -0,0 +1,156 @@ +#!/usr/bin/perl + +# Tests for mail proxy protocol handler with ssl. 
+# Note: testing only v1 protocol here with hope that v2 is tested by core + +############################################################################### + +use warnings; +use strict; + +use Socket qw/ CRLF /; + +use Test::More; + +BEGIN { use FindBin; chdir($FindBin::Bin); } + +use lib 'lib'; +use Test::Nginx; + +############################################################################### + +select STDERR; $| = 1; +select STDOUT; $| = 1; + +eval { + require Net::SSLeay; + Net::SSLeay::load_error_strings(); + Net::SSLeay::SSLeay_add_ssl_algorithms(); + Net::SSLeay::randomize(); +}; +plan(skip_all => 'Net::SSLeay not installed') if $@; + +my $t = Test::Nginx->new()->has(qw/mail mail_ssl imap pop3 smtp/) + ->has_daemon('openssl')->plan(6); + +$t->write_file_expand('nginx.conf', <<'EOF'); + +%%TEST_GLOBALS%% + +daemon off; + +events { +} + +mail { + auth_http http://127.0.0.1:8080; # unused + + ssl_certificate_key localhost.key; + ssl_certificate localhost.crt; + ssl_session_tickets off; + + ssl_password_file password; + + ssl_session_cache none; + + server { + listen 127.0.0.1:8993 ssl; + protocol imap; + } + + server { + listen 127.0.0.1:8994 ssl proxy_protocol; + protocol imap; + } + + server { + listen 127.0.0.1:8995 ssl; + protocol pop3; + } + + server { + listen 127.0.0.1:8996 ssl proxy_protocol; + protocol pop3; + } + + server { + listen 127.0.0.1:8465 ssl; + protocol smtp; + } + + server { + listen 127.0.0.1:8466 ssl proxy_protocol; + protocol smtp; + } +} + +EOF + +$t->write_file('openssl.conf', <testdir(); + +foreach my $name ('localhost', 'inherits') { + system("openssl genrsa -out $d/$name.key -passout pass:localhost " + . "-aes128 2048 >>$d/openssl.out 2>&1") == 0 + or die "Can't create private key: $!\n"; + system('openssl req -x509 -new ' + . "-config $d/openssl.conf -subj /CN=$name/ " + . "-out $d/$name.crt " + . "-key $d/$name.key -passin pass:localhost" + . ">>$d/openssl.out 2>&1") == 0 + or die "Can't create certificate for $name: $!\n"; +} + +my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); +$t->write_file('password', 'localhost'); + +open OLDERR, ">&", \*STDERR; close STDERR; +$t->run(); +open STDERR, ">&", \*OLDERR; + +############################################################################### + +my @list = (qw(8993 8994 8995 8996 8465 8466)); + +while (my ($p1, $p2) = splice (@list,0,2)) { + my ($s, $ssl, $ses); + + $s = get_socket($p1); + + $ssl = make_ssl_socket($s); + $ses = Net::SSLeay::get_session($ssl); + like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=localhost/, 'CN'); + + $s = get_socket($p2); + $s->print('PROXY TCP4 192.168.1.10 192.168.1.1 18143 8110' . CRLF); + + $ssl = make_ssl_socket($s); + $ses = Net::SSLeay::get_session($ssl); + like(Net::SSLeay::dump_peer_certificate($ssl), qr/CN=localhost/, 'CN'); +} + +############################################################################### + +sub get_socket { + my ($port) = @_; + return IO::Socket::INET->new('127.0.0.1:' . 
port($port)); +} + +sub make_ssl_socket { + my ($socket, $ses) = @_; + + my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); + Net::SSLeay::set_session($ssl, $ses) if defined $ses; + Net::SSLeay::set_fd($ssl, fileno($socket)); + Net::SSLeay::connect($ssl) or die("ssl connect"); + return $ssl; +} + +############################################################################### diff -r 6c323c672a86 -r 76e7def78365 mail_proxy_proxy_protocol.t --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/mail_proxy_proxy_protocol.t Sat Jan 16 14:42:18 2021 +0300 @@ -0,0 +1,199 @@ +#!/usr/bin/perl + +# Tests for nginx mail proxy module, the proxy_protocol directive. + +############################################################################### + +use warnings; +use strict; + +use Socket qw/ CRLF /; + +use Test::More; + +use MIME::Base64; + +BEGIN { use FindBin; chdir($FindBin::Bin); } + +use lib 'lib'; +use Test::Nginx; +use Test::Nginx::SMTP; + +############################################################################### + +select STDERR; $| = 1; +select STDOUT; $| = 1; + +local $SIG{PIPE} = 'IGNORE'; + +my $t = Test::Nginx->new()->has(qw/mail smtp http rewrite/)->plan(9); + +$t->write_file_expand('nginx.conf', <<'EOF'); + +%%TEST_GLOBALS%% + +daemon off; + +events { + worker_connections 48; +} + +mail { + auth_http http://127.0.0.1:8080/mail/auth; + smtp_auth login plain external; + + server { + listen 127.0.0.1:8025; + protocol smtp; + xclient off; + } + + server { + listen 127.0.0.1:8027; + protocol smtp; + xclient off; + proxy_protocol on; + } + + server { + listen 127.0.0.1:8029 proxy_protocol; + protocol smtp; + xclient off; + proxy_protocol on; + proxy_smtp_auth on; + } +} + +http { + %%TEST_GLOBALS_HTTP%% + + server { + listen 127.0.0.1:8080; + server_name localhost; + + location = /mail/auth { + add_header Auth-Status OK; + add_header Auth-Server 127.0.0.1; + add_header Auth-Port %%PORT_8026%%; + add_header Auth-User test at example.com; + add_header Auth-Pass test at example.com; + return 204; + } + } +} + +EOF + +$t->run(); + +############################################################################### + +my ($s, $pp_data); + +# no proxy_protocol in or out + +$t->run_daemon(\&smtp_test_listener, port(8026)); +$t->waitforsocket('127.0.0.1:' . port(8026)); + +$s = Test::Nginx::SMTP->new(PeerAddr => '127.0.0.1:' . port(8025)); +$s->check(qr/ESMTP ready/); +$s->send('EHLO example.com'); +$s->check(qr/250 AUTH PLAIN LOGIN EXTERNAL/); +$s->send('AUTH PLAIN ' . encode_base64("\0test\@example.com\0secret", '')); +$s->authok('ehlo, auth'); +$t->stop_daemons(); + +# proxy_protocol only out + +$pp_data = 'PROXY TCP4 192.168.1.10 192.168.1.11'; +$t->run_daemon(\&smtp_test_listener, port(8026), $pp_data); +$t->waitforsocket('127.0.0.1:' . port(8026)); + +$s = Test::Nginx::SMTP->new(PeerAddr => '127.0.0.1:' . port(8027)); +$s->check(qr/ESMTP ready/); +$s->send('EHLO example.com'); +$s->check(qr/250 AUTH PLAIN LOGIN EXTERNAL/); +$s->send('AUTH PLAIN ' . encode_base64("\0test\@example.com\0secret", '')); +$s->authok('ehlo, auth'); +$t->stop_daemons(); + +# proxy_protocol only out and in +$pp_data = 'PROXY TCP4 192.168.1.10 192.168.1.11'; +$t->run_daemon(\&smtp_test_listener, port(8026), $pp_data); +$t->waitforsocket('127.0.0.1:' . port(8026)); + +$s = Test::Nginx::SMTP->new(PeerAddr => '127.0.0.1:' . port(8029)); +$s->send($pp_data . ' 51298 8027'); +$s->check(qr/ESMTP ready/); +$s->send('EHLO example.com'); +$s->check(qr/250 AUTH PLAIN LOGIN EXTERNAL/); +$s->send('AUTH PLAIN ' . 
encode_base64("\0test\@example.com\0secret", '')); +$s->authok('ehlo, auth'); +$t->stop_daemons(); + + +############################################################################### + +sub smtp_test_listener { + my ($port, $expected) = @_; + my $server = IO::Socket::INET->new( + Proto => 'tcp', + LocalAddr => '127.0.0.1:' . ($port || port(8026)), + Listen => 5, + Reuse => 1 + ) + or die "Can't create listening socket: $!\n"; + + while (my $client = $server->accept()) { + $client->autoflush(1); + + if (defined($expected)) { + $expected = $expected . CRLF; + while (<$client>) { + if (/^proxy/i) { + Test::Nginx::log_core('||>>', $_); + last; + } + } + } + + sub send_client { + my ($c, $d) = @_; + Test::Nginx::log_core('||<<', $d); + print $c $d . CRLF; + } + + print $client "220 fake esmtp server ready" . CRLF; + + while (<$client>) { + Test::Nginx::log_core('||>>', $_); + + my $res = ''; + + if (/^quit/i) { + send_client($client, '221 quit ok'); + } elsif (/^(ehlo|helo)/i) { + send_client($client, '250-ok'); + send_client($client, '250 AUTH PLAIN LOGIN EXTERNAL'); + } elsif (/^rset/i) { + send_client($client, '250 rset ok'); + } elsif (/^auth plain/i) { + send_client($client, '235 auth ok'); + } elsif (/^mail from:[^@]+$/i) { + send_client($client, '500 mail from error'); + } elsif (/^mail from:/i) { + send_client($client, '250 mail from ok'); + } elsif (/^rcpt to:[^@]+$/i) { + send_client($client, '500 rcpt to error'); + } elsif (/^rcpt to:/i) { + send_client($client, '250 rcpt to ok'); + } else { + send_client($client, '500 unknown command'); + } + } + + close $client; + } } + +###############################################################################

From mail at muradm.net Sat Jan 16 16:38:49 2021 From: mail at muradm.net (Murad Mamedov) Date: Sat, 16 Jan 2021 19:38:49 +0300 Subject: [PATCH] Mail: added PROXY PROTOCOL support Message-ID: <20210116163849.cmyhguhgmj5e2rtf@muradm-aln1>

First of all, ignore the patch in the first mail; I don't use mercurial on a daily basis, and my neomutt screwed up the patch. The second mail in the thread contains just the patch and it seems to be correct.

I wanted to address a few other things on the subject. I started my way from http://mailman.nginx.org/pipermail/nginx-devel/2016-November/009083.html, however I found the decisions made there incorrect. The author tried to jump right from PROXY PROTOCOL into XCLIENT. In the proposed implementation, the proxy protocol is handled at the beginning of the connection. Since XCLIENT gets its address from ngx_connection_s, it will automatically get the downstream-provided client address.

In the same thread, there were questions on how to deal with "real_ip_header" and "set_real_ip_from". As I mentioned in the original description of the patch, one may need these in the case of the HTTP protocol, which is very flexible, with tons of applications behind it that may demand the presence of the real IP address in different places/headers. For the ancient mail protocols, this is not the case. They are very strict, and very few applications implement them; Postfix, Exim and Dovecot are probably the only practical implementations. And they do support the proxy protocol out of the box. So I could not find a real reason to apply the "real_ip" thing. With the proposed implementation, it just worked out of the box, with minimal configuration. The only thing which could be added, if needed, is overriding the "destination address" of the proxy protocol (i.e. the address which the client reached). For now I don't see where it could be useful in the mail applications mentioned above.
Client address, yes, we do pass; server address ¯\_(ツ)_/¯, who cares. -- muradm

From triptothefuture.cs at gmail.com Sat Jan 16 17:12:57 2021 From: triptothefuture.cs at gmail.com (M L) Date: Sat, 16 Jan 2021 23:12:57 +0600 Subject: Request Counter Clarification In-Reply-To: <20201225144716.GM1147@mdounin.ru> References: <20201225144716.GM1147@mdounin.ru> Message-ID:

Dear NGINX community, I had some questions regarding module development. The module I am developing processes the request, and sometimes (especially when there is a large body) the request processing takes long enough to disrupt the Nginx lifecycle. To handle this problem I've added a feature of posting an event if the processing exceeds a given time. To read the body of the request, I use the "read client request body" function, and after its execution, I was advised to finalize the request with ngx_http_finalize_request(r, NGX_DONE). But won't it disrupt the working of the posted event? I post it with the ngx_post_event function. Or maybe there is some other solution, and I don't even need to add a posted event? For example, I could make something like a for loop in the post handler of ngx_http_read_client_request_body which would check if the request processing was finished by my handler, and exit if it did or if some error occurred. With best regards, doughnut

On Fri, Dec 25, 2020 at 8:47 PM Maxim Dounin wrote: > Hello! > > On Mon, Dec 21, 2020 at 08:54:54PM +0600, M L wrote: > > > I am developing an NGINX module which would check the contents of the > > request and, if the key components are found, would block it. Currently, it > > seems to be working correctly, but I would like to clarify some parts and > > make sure that I am not hard-coding anything. So, the question is mainly > > about the request counter. > > During the execution of the request handler (which is registered on the > > HTTP_REWRITE_PHASE), the request counter is kept as it is. But once the > > handler finishes the request processing, the counter is changed to 1. But > > changing the counter to 1 does not seem like the right decision, as many > > other modules more often decrease it in the post_handler or call the > > "finalize request" function. However, the use of "finalize" cannot be > > implemented, as neither the connection nor the request should be finalized > > after the handler execution. Instead, the request needs to be handed over > > to the other phase handlers (return NGX_DECLINED). As for the decrementing > > in the post_handler of the ngx_http_read_client_request_body function, > > under heavy load it results in segfaults. Finally, leaving the counter > > unchanged throughout the process leads to memory leaks. Therefore, the > > above-described value assignment was implemented, but perhaps there are > > better ways of handling the request counter issue? And why can a change in > > the request counter cause a segfault in the first place? > > In general, you shouldn't touch the request counter yourself > unless you really understand what you are doing. Instead, you > should correctly call ngx_http_finalize_request() to decrease it > (or make sure to return the correct return code if the phase handler > does this for you; this will properly decrement it). Increasing the > request counter is in most cases handled by the nginx core as well. > > In no case are you expected to set the request counter to a > specific value. It is something to be done only during forced > request termination. Any attempt to do this in your own module is > certainly a bug.
> > Incorrectly adjusting the request counter can lead to segfaults or to > connection/memory leaks, depending on the exact code path. > > In the particular module you've described it looks like the > problem is that you are trying to read the request body from early > request processing phases (that is, before the content phase), and > do this incorrectly. For a correct example see the mirror module > ( > http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_mirror_module.c > ). > > In particular, to start reading the request body, do something > like this (note the ngx_http_finalize_request(NGX_DONE) call to > decrement the reference counter, and the NGX_DONE return code to stop > further processing of the request with the phase handlers): > > rc = ngx_http_read_client_request_body(r, > ngx_http_mirror_body_handler); > if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { > return rc; > } > > ngx_http_finalize_request(r, NGX_DONE); > return NGX_DONE; > > And to continue processing with the other phase handlers you should > do something like this in the body handler: > > r->write_event_handler = ngx_http_core_run_phases; > ngx_http_core_run_phases(r); > > This ensures that the appropriate write event handler is set (as it is > removed by the request body reading code) and resumes phase > handling by calling ngx_http_core_run_phases(). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From mail at muradm.net Sun Jan 17 17:02:45 2021 From: mail at muradm.net (=?iso-8859-1?q?muradm?=) Date: Sun, 17 Jan 2021 20:02:45 +0300 Subject: [PATCH] Mail: added PROXY PROTOCOL support Message-ID: <74562c5f22f8e03be55d.1610902965@muradm-aln1>

# HG changeset patch # User muradm # Date 1610902507 -10800 # Sun Jan 17 19:55:07 2021 +0300 # Node ID 74562c5f22f8e03be55d412649c00d1ef2ca6811 # Parent 83c4622053b02821a12d522d08eaff3ac27e65e3 Mail: added PROXY PROTOCOL support. This implements proxy protocol support for both upstream and downstream. Downstream proxy protocol support: mail { server { listen [ssl] proxy_protocol; protocol ; } } This will properly handle incoming connections from load balancer sending PROXY protocol header. Without this, it is impossible to run nginx mail proxy behind such balancer. Header reading is done with existing function "ngx_proxy_protocol_read", so it should support both v1 and v2 headers. This will also set "sockaddr" and "local_sockaddr" addresses from received header, mimicking "set_realip". While "realip_module" deals with variables etc., which is necessary for HTTP protocol, mail protocols are pretty strict, so there is no need for flexible handling of real addresses received. Upstream proxy protocol support: mail { server { listen [ssl]; protocol ; proxy_protocol on; } } With this, upstream server (like Postfix, Exim, Dovecot) will have PROXY protocol header. Mentioned programs do support proxy protocol out of the box. Header is written with existing function "ngx_proxy_protocol_write" which supports only v1 header writing. Contents of header are written from "sockaddr" and "local_sockaddr". Downstream and upstream proxy protocol support: mail { server { listen [ssl] proxy_protocol; protocol ; proxy_protocol on; } } This will combine both receiving PROXY header and sending PROXY header.
With this, upstream server (like Postfix, Exim, Dovecot) will receive the same header as was sent by downstream load balancer. Above configurations work for SSL as well and should be transparent to other mail related configurations. Added "connect_timeout". diff -r 83c4622053b0 -r 74562c5f22f8 src/mail/ngx_mail.c --- a/src/mail/ngx_mail.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.c Sun Jan 17 19:55:07 2021 +0300 @@ -402,6 +402,7 @@ addrs[i].addr = sin->sin_addr.s_addr; addrs[i].conf.ctx = addr[i].opt.ctx; + addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs[i].conf.ssl = addr[i].opt.ssl; #endif @@ -436,6 +437,7 @@ addrs6[i].addr6 = sin6->sin6_addr; addrs6[i].conf.ctx = addr[i].opt.ctx; + addrs6[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs6[i].conf.ssl = addr[i].opt.ssl; #endif diff -r 83c4622053b0 -r 74562c5f22f8 src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.h Sun Jan 17 19:55:07 2021 +0300 @@ -37,6 +37,7 @@ unsigned bind:1; unsigned wildcard:1; unsigned ssl:1; + unsigned proxy_protocol:1; #if (NGX_HAVE_INET6) unsigned ipv6only:1; #endif @@ -56,6 +57,7 @@ ngx_mail_conf_ctx_t *ctx; ngx_str_t addr_text; ngx_uint_t ssl; /* unsigned ssl:1; */ + unsigned proxy_protocol:1; } ngx_mail_addr_conf_t; typedef struct { @@ -190,6 +192,7 @@ void **ctx; void **main_conf; void **srv_conf; + ngx_mail_addr_conf_t *addr_conf; ngx_resolver_ctx_t *resolver_ctx; @@ -197,6 +200,7 @@ ngx_uint_t mail_state; + unsigned proxy_protocol:1; unsigned protocol:3; unsigned blocked:1; unsigned quit:1; diff -r 83c4622053b0 -r 74562c5f22f8 src/mail/ngx_mail_core_module.c --- a/src/mail/ngx_mail_core_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_core_module.c Sun Jan 17 19:55:07 2021 +0300 @@ -548,6 +548,11 @@ #endif } + if (ngx_strcmp(value[i].data, "proxy_protocol") == 0) { + ls->proxy_protocol = 1; + continue; + } + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "the invalid \"%V\" parameter", &value[i]); return NGX_CONF_ERROR; diff -r 83c4622053b0 -r 74562c5f22f8 src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_handler.c Sun Jan 17 19:55:07 2021 +0300 @@ -12,6 +12,8 @@ static void ngx_mail_init_session(ngx_connection_t *c); +static void ngx_mail_init_connection_complete(ngx_connection_t *c); +static void ngx_mail_proxy_protocol_handler(ngx_event_t *rev); #if (NGX_MAIL_SSL) static void ngx_mail_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c); @@ -128,6 +130,7 @@ s->main_conf = addr_conf->ctx->main_conf; s->srv_conf = addr_conf->ctx->srv_conf; + s->addr_conf = addr_conf; s->addr_text = &addr_conf->addr_text; @@ -159,13 +162,161 @@ c->log_error = NGX_ERROR_INFO; + /* + * Before all process proxy protocol + */ + + if (addr_conf->proxy_protocol) { + s->proxy_protocol = 1; + c->log->action = "reading PROXY protocol header"; + c->read->handler = ngx_mail_proxy_protocol_handler; + + ngx_add_timer(c->read, cscf->timeout); + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + ngx_mail_close_connection(c); + } + + return; + } + + ngx_mail_init_connection_complete(c); +} + + +ngx_int_t +ngx_mail_proxy_protoco_set_addrs(ngx_connection_t *c) +{ + ngx_addr_t addr_peer, addr_local; + u_char *p, text[NGX_SOCKADDR_STRLEN]; + size_t len; + + if (ngx_parse_addr(c->pool, &addr_peer, + c->proxy_protocol->src_addr.data, + c->proxy_protocol->src_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + 
ngx_inet_set_port(addr_peer.sockaddr, c->proxy_protocol->src_port); + + if (ngx_parse_addr(c->pool, &addr_local, + c->proxy_protocol->dst_addr.data, + c->proxy_protocol->dst_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + ngx_inet_set_port(addr_local.sockaddr, c->proxy_protocol->dst_port); + + len = ngx_sock_ntop(addr_peer.sockaddr, addr_peer.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->sockaddr = addr_peer.sockaddr; + c->socklen = addr_peer.socklen; + c->addr_text.len = len; + c->addr_text.data = p; + + len = ngx_sock_ntop(addr_local.sockaddr, addr_local.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->local_sockaddr = addr_local.sockaddr; + c->local_socklen = addr_local.socklen; + + return NGX_OK; +} + + +void +ngx_mail_proxy_protocol_handler(ngx_event_t *rev) +{ + ngx_connection_t *c; + u_char *p, buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + size_t size; + ssize_t n; + + c = rev->data; + + if (rev->timedout) { + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, + "mail PROXY protocol header timed out"); + c->timedout = 1; + ngx_mail_close_connection(c); + return; + } + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail PROXY protocol handler"); + + size = NGX_PROXY_PROTOCOL_MAX_HEADER; + + n = recv(c->fd, (char *) buf, size, MSG_PEEK); + + ngx_log_debug1(NGX_LOG_DEBUG, c->log, 0, "mail recv(): %z", n); + + p = ngx_proxy_protocol_read(c, buf, buf + n); + + if (p == NULL) { + ngx_mail_close_connection(c); + return; + } + + ngx_log_error(NGX_LOG_NOTICE, c->log, 0, + "PROXY protocol %V:%d => %V:%d", + &c->proxy_protocol->src_addr, + c->proxy_protocol->src_port, + &c->proxy_protocol->dst_addr, + c->proxy_protocol->dst_port); + + size = p - buf; + + if (c->recv(c, buf, size) != (ssize_t) size) { + ngx_mail_close_connection(c); + return; + } + + if (ngx_mail_proxy_protoco_set_addrs(c) != NGX_OK) { + ngx_mail_close_connection(c); + return; + } + + ngx_mail_init_connection_complete(c); +} + + +void +ngx_mail_init_connection_complete(ngx_connection_t *c) +{ #if (NGX_MAIL_SSL) { - ngx_mail_ssl_conf_t *sslcf; + ngx_mail_session_t *s; + ngx_mail_ssl_conf_t *sslcf; + + s = c->data; sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); - if (sslcf->enable || addr_conf->ssl) { + if (sslcf->enable || s->addr_conf->ssl) { c->log->action = "SSL handshaking"; ngx_mail_ssl_init_connection(&sslcf->ssl, c); @@ -348,6 +499,7 @@ return; } + c->log->action = "sending client greeting line"; c->write->handler = ngx_mail_send; cscf->protocol->init_session(s, c); diff -r 83c4622053b0 -r 74562c5f22f8 src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_proxy_module.c Sun Jan 17 19:55:07 2021 +0300 @@ -19,6 +19,8 @@ ngx_flag_t smtp_auth; size_t buffer_size; ngx_msec_t timeout; + ngx_msec_t connect_timeout; + ngx_flag_t proxy_protocol; } ngx_mail_proxy_conf_t; @@ -36,7 +38,9 @@ static void *ngx_mail_proxy_create_conf(ngx_conf_t *cf); static char *ngx_mail_proxy_merge_conf(ngx_conf_t *cf, void *parent, void *child); - +static void ngx_mail_proxy_connect_handler(ngx_event_t *ev); +static void ngx_mail_proxy_start(ngx_mail_session_t *s); +static void ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s); static ngx_command_t 
ngx_mail_proxy_commands[] = { @@ -61,6 +65,13 @@ offsetof(ngx_mail_proxy_conf_t, timeout), NULL }, + { ngx_string("connect_timeout"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_proxy_conf_t, connect_timeout), + NULL }, + { ngx_string("proxy_pass_error_message"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -82,6 +93,13 @@ offsetof(ngx_mail_proxy_conf_t, smtp_auth), NULL }, + { ngx_string("proxy_protocol"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_proxy_conf_t, proxy_protocol), + NULL }, + ngx_null_command }; @@ -156,7 +174,6 @@ p->upstream.connection->pool = s->connection->pool; s->connection->read->handler = ngx_mail_proxy_block_read; - p->upstream.connection->write->handler = ngx_mail_proxy_dummy_handler; pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); @@ -169,23 +186,139 @@ s->out.len = 0; + if (rc == NGX_AGAIN) { + p->upstream.connection->write->handler = ngx_mail_proxy_connect_handler; + p->upstream.connection->read->handler = ngx_mail_proxy_connect_handler; + + ngx_add_timer(p->upstream.connection->write, pcf->connect_timeout); + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, "mail proxy delay connect"); + return; + } + + if (pcf->proxy_protocol) { + ngx_mail_proxy_send_proxy_protocol(s); + return; + } + + ngx_mail_proxy_start(s); +} + + +void +ngx_mail_proxy_connect_handler(ngx_event_t *ev) +{ + ngx_connection_t *c; + ngx_mail_session_t *s; + ngx_mail_proxy_conf_t *pcf; + + c = ev->data; + s = c->data; + + if (ev->timedout) { + ngx_log_error(NGX_LOG_ERR, c->log, NGX_ETIMEDOUT, "upstream timed out"); + ngx_mail_session_internal_server_error(s); + return; + } + + ngx_del_timer(c->write); + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail proxy connect upstream"); + + pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); + + if (pcf->proxy_protocol) { + ngx_mail_proxy_send_proxy_protocol(s); + return; + } + + ngx_mail_proxy_start(s); +} + + +void +ngx_mail_proxy_start(ngx_mail_session_t *s) +{ + ngx_connection_t *pc; + + pc = s->proxy->upstream.connection; + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, + "mail proxy starting"); + + pc->write->handler = ngx_mail_proxy_dummy_handler; + switch (s->protocol) { case NGX_MAIL_POP3_PROTOCOL: - p->upstream.connection->read->handler = ngx_mail_proxy_pop3_handler; + pc->read->handler = ngx_mail_proxy_pop3_handler; s->mail_state = ngx_pop3_start; break; case NGX_MAIL_IMAP_PROTOCOL: - p->upstream.connection->read->handler = ngx_mail_proxy_imap_handler; + pc->read->handler = ngx_mail_proxy_imap_handler; s->mail_state = ngx_imap_start; break; default: /* NGX_MAIL_SMTP_PROTOCOL */ - p->upstream.connection->read->handler = ngx_mail_proxy_smtp_handler; + pc->read->handler = ngx_mail_proxy_smtp_handler; s->mail_state = ngx_smtp_start; break; } + + if (pc->read->ready) { + ngx_post_event(pc->read, &ngx_posted_events); + } +} + + +void +ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s) +{ + u_char *p; + ssize_t n, size; + ngx_connection_t *c, *pc; + ngx_peer_connection_t *u; + u_char buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + + c = s->connection; + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail proxy send PROXY protocol header"); + + p = ngx_proxy_protocol_write(c, buf, buf + NGX_PROXY_PROTOCOL_MAX_HEADER); + if (p == NULL) { + ngx_mail_proxy_internal_server_error(s); + return; + } + 
+ u = &s->proxy->upstream; + + pc = u->connection; + + size = p - buf; + + n = pc->send(pc, buf, size); + + if (n != size) { + + /* + * PROXY protocol specification: + * The sender must always ensure that the header + * is sent at once, so that the transport layer + * maintains atomicity along the path to the receiver. + */ + + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "could not send PROXY protocol header at once (%z)", n); + + ngx_mail_proxy_internal_server_error(s); + + return; + } + + ngx_mail_proxy_start(s); } @@ -1184,6 +1317,8 @@ pcf->smtp_auth = NGX_CONF_UNSET; pcf->buffer_size = NGX_CONF_UNSET_SIZE; pcf->timeout = NGX_CONF_UNSET_MSEC; + pcf->connect_timeout = NGX_CONF_UNSET_MSEC; + pcf->proxy_protocol = NGX_CONF_UNSET; return pcf; } @@ -1202,6 +1337,8 @@ ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, (size_t) ngx_pagesize); ngx_conf_merge_msec_value(conf->timeout, prev->timeout, 24 * 60 * 60000); + ngx_conf_merge_msec_value(conf->connect_timeout, prev->connect_timeout, 1000); + ngx_conf_merge_value(conf->proxy_protocol, prev->proxy_protocol, 0); return NGX_CONF_OK; } -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.patch Type: text/x-patch Size: 15801 bytes Desc: not available URL:

From mail at muradm.net Sun Jan 17 17:07:07 2021 From: mail at muradm.net (Murad Mamedov) Date: Sun, 17 Jan 2021 20:07:07 +0300 Subject: [PATCH] Mail: added PROXY PROTOCOL support Message-ID: <20210117170707.uwvgozkcz6pojuu7@muradm-aln1>

The updated patch includes the "connect_timeout" configuration and adds a connect phase for NGX_AGAIN. -- muradm

From vl at nginx.com Mon Jan 18 07:28:12 2021 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 18 Jan 2021 10:28:12 +0300 Subject: [PATCH] Add io_uring support in AIO(async io) module In-Reply-To: References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com> Message-ID:

On Thu, Jan 14, 2021 at 05:53:17AM +0000, Zhao, Ping wrote: > # HG changeset patch > # User Ping Zhao > # Date 1610554205 18000 > # Wed Jan 13 11:10:05 2021 -0500 > # Node ID 95886c3353dc80a3da215027c1e0f2141e47e911 > # Parent b055bb6ef87e49232a7fcb4e5334b8efda3b6499 > Add io_uring support in AIO(async io) module. > > Hello, This is a patch to support io_uring in the AIO(async io) module. > Basically you don't need to change your configurations. If you're using a new kernel (above v5.1) which supports io_uring, and you have "aio on" in your configuration, Nginx will use io_uring for FILE_AIO access, which can achieve a performance improvement over legacy libaio. > > Checked with iostat which shows nvme disk io has 30%+ performance improvement with 1 thread. > Use wrk with 100 threads 200 connections(-t 100 -c 200) with 25000 random requests. > > iostat(B/s) > libaio ~1.0 GB/s > io_uring 1.3+ GB/s

Hello, what size of request did you use in your testing? The previous attempt to use uring (http://mailman.nginx.org/pipermail/nginx-devel/2020-November/013632.html) seems to have issues with big requests and falls back to sendfile in such cases. Note that from the standpoint of an HTTP server, most requests are usually larger than 4Kb.

From ping.zhao at intel.com Mon Jan 18 08:24:58 2021 From: ping.zhao at intel.com (Zhao, Ping) Date: Mon, 18 Jan 2021 08:24:58 +0000 Subject: [PATCH] Add io_uring support in AIO(async io) module In-Reply-To: References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com> Message-ID:

Hi Vladimir, I tested with responses from 4KB to 1MB in length, which are OK.
The procedure is to first store all the cache files on an NVMe disk (~20T), then check iostat & NIC bandwidth, since Nginx will then use the cache files on disk with io_uring or libaio. So my patch doesn't affect the sendfile procedure; it provides an alternative implementation of legacy libaio.

Regards, Ping

-----Original Message----- From: nginx-devel On Behalf Of Vladimir Homutov Sent: Monday, January 18, 2021 3:28 PM To: nginx-devel at nginx.org Subject: Re: [PATCH] Add io_uring support in AIO(async io) module

On Thu, Jan 14, 2021 at 05:53:17AM +0000, Zhao, Ping wrote: > # HG changeset patch > # User Ping Zhao # Date 1610554205 18000 > # Wed Jan 13 11:10:05 2021 -0500 > # Node ID 95886c3353dc80a3da215027c1e0f2141e47e911 > # Parent b055bb6ef87e49232a7fcb4e5334b8efda3b6499 > Add io_uring support in AIO(async io) module. > > Hello, This is a patch to support io_uring in the AIO(async io) module. > Basically you don't need to change your configurations. If you're using a new kernel (above v5.1) which supports io_uring, and you have "aio on" in your configuration, Nginx will use io_uring for FILE_AIO access, which can achieve a performance improvement over legacy libaio. > > Checked with iostat which shows nvme disk io has 30%+ performance improvement with 1 thread. > Use wrk with 100 threads 200 connections(-t 100 -c 200) with 25000 random requests. > > iostat(B/s) > libaio ~1.0 GB/s > io_uring 1.3+ GB/s

Hello, what size of request did you use in your testing? The previous attempt to use uring (http://mailman.nginx.org/pipermail/nginx-devel/2020-November/013632.html) seems to have issues with big requests and falls back to sendfile in such cases. Note that from the standpoint of an HTTP server, most requests are usually larger than 4Kb. _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel

From vl at nginx.com Mon Jan 18 14:10:58 2021 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 18 Jan 2021 17:10:58 +0300 Subject: [PATCH] Add io_uring support in AIO(async io) module In-Reply-To: References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com> Message-ID: <7463caa5-76d0-f6f8-e9b6-0c0b3fe1077c@nginx.com>

18.01.2021 11:24, Zhao, Ping writes: > Hi Vladimir, > > I tested with responses from 4KB to 1MB in length, which are OK. The procedure is to first store all the cache files on an NVMe disk (~20T), then check iostat & NIC bandwidth, since Nginx will then use the cache files on disk with io_uring or libaio. So my patch doesn't affect the sendfile procedure; it provides an alternative implementation of legacy libaio. > > Regards, > Ping

Yes, I see that your implementation is different. I wonder if you can see any difference in performance depending on request size? Is it always constant?

> > -----Original Message----- > From: nginx-devel On Behalf Of Vladimir Homutov > Sent: Monday, January 18, 2021 3:28 PM > To: nginx-devel at nginx.org > Subject: Re: [PATCH] Add io_uring support in AIO(async io) module > > On Thu, Jan 14, 2021 at 05:53:17AM +0000, Zhao, Ping wrote: >> # HG changeset patch >> # User Ping Zhao # Date 1610554205 18000 >> # Wed Jan 13 11:10:05 2021 -0500 >> # Node ID 95886c3353dc80a3da215027c1e0f2141e47e911 >> # Parent b055bb6ef87e49232a7fcb4e5334b8efda3b6499 >> Add io_uring support in AIO(async io) module. >> >> Hello, This is a patch to support io_uring in the AIO(async io) module. >> Basically you don't need to change your configurations.
>> If you're using new kernel(above v5.1) which supports io_uring, and you have "aio on" in your configuration. Nginx will use io_uring for FILE_AIO access which can achieve performance improvement than legacy libaio.
>>
>> Checked with iostat which shows nvme disk io has 30%+ performance improvement with 1 thread.
>> Use wrk with 100 threads 200 connections(-t 100 -c 200) with 25000 random requests.
>>
>> iostat(B/s)
>> libaio     ~1.0 GB/s
>> io_uring   1.3+ GB/s
>
> Hello,
>
> what size of request did you use in your testing? The previous attempt to use uring (http://mailman.nginx.org/pipermail/nginx-devel/2020-November/013632.html) seems to have issues with big requests and falls back to sendfile in such cases. Note that from the standpoint of an HTTP server, most requests are usually larger than 4Kb.
> _______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

From mdounin at mdounin.ru Mon Jan 18 16:01:28 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 18 Jan 2021 19:01:28 +0300
Subject: [PATCH] Mail: added PROXY PROTOCOL support
In-Reply-To: <20210116163849.cmyhguhgmj5e2rtf@muradm-aln1>
References: <20210116163849.cmyhguhgmj5e2rtf@muradm-aln1>
Message-ID: <20210118160128.GO1147@mdounin.ru>

Hello!

On Sat, Jan 16, 2021 at 07:38:49PM +0300, Murad Mamedov wrote:

> First of all, ignore patch in first mail, I don't use mercurial on
> daily basis, and my neomutt screwed the patch. Second mail in thread
> contains just patch and it seems to be correct.
>
> I wanted to address few other things on the subject. I started my way
> from
> http://mailman.nginx.org/pipermail/nginx-devel/2016-November/009083.html,
> however decisions done there I found incorrect. Author tried to jump
> right into XCLIENT from PROXY PROTOCOL. In proposed implementation proxy
> protocol is handled in the begining of connections. Since XCLIENT gets
> its address from ngx_connection_s, it will get automatically downstream
> provided address of client.
>
> In the same thread, there was questions on how to deal with
> "real_ip_header" and "set_real_ip_from". As I mentioned in the original
> description to the patch, one may need these in case of HTTP protocol,
> which is very flexible, with tons of applications behind that may demand
> presense of real ip address in different places/headers. For ancient mail
> protocols, it is not the case. They are very strict, very few
> applications that implement it, probably Postfix, Exim and Dovecot be
> the only practical implementations. And they do support proxy protocol
> out of the box. So I could not find real reason to apply "real_ip"
> thing. With proposed implementation, it just worked out of the box, with
> minimum configuration. The only thing which could be added if need is
> the overriding of "destination address" of proxy protocol (i.e. address
> which client reached). For now I didn't see where it could be useful in
> mentioned above mail applications. Client address, yes, we do pass,
> server address ¯\_(ツ)_/¯, who cares.

The main problem the "real_ip thing" is expected to address is that without a set_real_ip_from-restricted set of addresses to accept the client IP address from, you are open to IP spoofing.

Most recent relevant discussion about "why one should never trust X-Forwarded-For addresses and what to do with the fact that many don't understand it" can be found in the thread here:

http://mailman.nginx.org/pipermail/nginx/2021-January/060319.html

Mail proxying is indeed much simpler than http, yet the question is still here: how to ensure that "listen ... proxy_protocol;" is only configured to accept client IP addresses from trusted sources.

Blindly assuming that anyone who used "listen ... proxy_protocol;" in the configuration took care and restricted access to the listening socket in question on the network level might not be a good idea. Especially given it works differently in both http and stream modules. Rather, I would like to see it equivalent to how it is implemented in the stream module with "set_real_ip_from" (http://nginx.org/en/docs/stream/ngx_stream_realip_module.html).

--
Maxim Dounin
http://mdounin.ru/
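What is being asked for is, in essence, the following gate before any PROXY protocol data is trusted. A minimal sketch, assuming a "realip_from" array of ngx_cidr_t filled from set_real_ip_from as in the stream realip module; the function name is hypothetical and this is not code from the patch:

    /* Accept a PROXY protocol header only when the physical peer
     * matches a configured "set_real_ip_from" CIDR. */
    static ngx_int_t
    ngx_mail_proxy_protocol_trusted(ngx_connection_t *c,
        ngx_array_t *realip_from)
    {
        if (realip_from == NULL) {
            /* no trusted sources configured: never trust the header */
            return NGX_DECLINED;
        }

        /* ngx_cidr_match() compares c->sockaddr against each CIDR */
        return ngx_cidr_match(c->sockaddr, realip_from);
    }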
From mail at muradm.net Mon Jan 18 18:08:36 2021
From: mail at muradm.net (Murad Mamedov)
Date: Mon, 18 Jan 2021 21:08:36 +0300
Subject: [PATCH] Mail: added PROXY PROTOCOL support
In-Reply-To: <20210118160128.GO1147@mdounin.ru>
References: <20210116163849.cmyhguhgmj5e2rtf@muradm-aln1> <20210118160128.GO1147@mdounin.ru>
Message-ID: <20210118180836.7ftvm7ts3rpdf3ym@muradm-aln1>

Hi,

Yes, as a security requirement one has to think about where the PROXY header comes from. HAProxy clearly states in the spec that if a receiver is configured to expect a PROXY header, it has to require it; if such a header is missing, the connection should be closed immediately.

When such a configuration is introduced, one generally puts an application behind a load balancer providing the PROXY header, i.e. the PROXY header provider is not just any system in the wild. Being part of the network configuration, it should generally be handled by the network configuration; the PROXY header was introduced in the first place to mitigate network routing issues. For the network configuration it is as simple as having a private network between the PROXY header provider and consumer (the most common case, I believe), or at worst iptables rules. I.e. one who introduces PROXY header support naturally thinks about why it is done and what the consequences are.

On the other hand, for dynamic environments like Kubernetes or Docker Swarm: a) tons of security is outsourced to them already; b) it is really hard to know addresses in advance; c) even if they are somehow queried, there is no guarantee that the dynamic infrastructure will not change at any time (imagine the PROXY header provider service is upgraded, and as a result all containers are recreated with new IP addresses, even new networks; does one then have to update/reconfigure/restart all other parts of the application? This is definitely far from the microservices approach :) ).

Does the above make sense? If not, considering all of the above, I can add support as follows:

1) I think that the "real_ip thing" can be implemented as an optional feature, i.e. if "set_real_ip_from" is present, then it has to be consulted; otherwise just let proxy_protocol do its thing.

2) ngx_*_realip_module is very heavy; it also requires ngx_*_variables. If implemented in mail, I see it as a simple array directive of addresses checked as described in 1) above, i.e. without variables etc.

On 2021.01.18 19:01, Maxim Dounin wrote:
>Hello!
>
>On Sat, Jan 16, 2021 at 07:38:49PM +0300, Murad Mamedov wrote:
>
>> First of all, ignore patch in first mail, I don't use mercurial on
>> daily basis, and my neomutt screwed the patch.
>> Second mail in thread contains just patch and it seems to be correct.
>>
>> I wanted to address few other things on the subject. I started my way
>> from
>> http://mailman.nginx.org/pipermail/nginx-devel/2016-November/009083.html,
>> however decisions done there I found incorrect. Author tried to jump
>> right into XCLIENT from PROXY PROTOCOL. In proposed implementation proxy
>> protocol is handled in the begining of connections. Since XCLIENT gets
>> its address from ngx_connection_s, it will get automatically downstream
>> provided address of client.
>>
>> In the same thread, there was questions on how to deal with
>> "real_ip_header" and "set_real_ip_from". As I mentioned in the original
>> description to the patch, one may need these in case of HTTP protocol,
>> which is very flexible, with tons of applications behind that may demand
>> presense of real ip address in different places/headers. For ancient mail
>> protocols, it is not the case. They are very strict, very few
>> applications that implement it, probably Postfix, Exim and Dovecot be
>> the only practical implementations. And they do support proxy protocol
>> out of the box. So I could not find real reason to apply "real_ip"
>> thing. With proposed implementation, it just worked out of the box, with
>> minimum configuration. The only thing which could be added if need is
>> the overriding of "destination address" of proxy protocol (i.e. address
>> which client reached). For now I didn't see where it could be useful in
>> mentioned above mail applications. Client address, yes, we do pass,
>> server address ¯\_(ツ)_/¯, who cares.
>
>The main problem the "real_ip thing" is expected to address is
>that without a set_real_ip_from-restricted set of addresses to
>accept the client IP address from, you are open to IP spoofing.
>
>Most recent relevant discussion about "why one should never trust
>X-Forwarded-For addresses and what to do with the fact that many
>don't understand it" can be found in the thread here:
>
>http://mailman.nginx.org/pipermail/nginx/2021-January/060319.html
>
>Mail proxying is indeed much simpler than http, yet the question
>is still here: how to ensure that "listen ... proxy_protocol;" is
>only configured to accept client IP addresses from trusted
>sources.
>
>Blindly assuming that anyone who used "listen ... proxy_protocol;"
>in the configuration took care and restricted access to the
>listening socket in question on the network level might not be a
>good idea. Especially given it works differently in both http and
>stream modules. Rather, I would like to see it equivalent to how
>it is implemented in the stream module with "set_real_ip_from"
>(http://nginx.org/en/docs/stream/ngx_stream_realip_module.html).
>
>--
>Maxim Dounin
>http://mdounin.ru/
>_______________________________________________
>nginx-devel mailing list
>nginx-devel at nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx-devel

--
muradm

From ping.zhao at intel.com Tue Jan 19 03:32:30 2021
From: ping.zhao at intel.com (Zhao, Ping)
Date: Tue, 19 Jan 2021 03:32:30 +0000
Subject: [PATCH] Add io_uring support in AIO(async io) module
In-Reply-To: <7463caa5-76d0-f6f8-e9b6-0c0b3fe1077c@nginx.com>
References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com> <7463caa5-76d0-f6f8-e9b6-0c0b3fe1077c@nginx.com>
Message-ID: 

It depends on whether disk I/O is the performance hot spot or not. If it is, io_uring shows an improvement over libaio. With 4KB/100KB lengths and 1 Nginx thread it's hard to see a performance difference, because iostat only shows around ~10MB/100MB per second; disk I/O is not the performance bottleneck, and both libaio and io_uring have the same performance. If you increase the request size or the number of Nginx threads (for example, 1MB length or 4 Nginx threads), disk I/O becomes the performance bottleneck and you will see the io_uring performance improvement.

-----Original Message-----
From: nginx-devel On Behalf Of Vladimir Homutov
Sent: Monday, January 18, 2021 10:11 PM
To: nginx-devel at nginx.org
Subject: Re: [PATCH] Add io_uring support in AIO(async io) module

18.01.2021 11:24, Zhao, Ping wrote:
> Hi Vladimir,
>
> I tested with responses from 4KB to 1MB in length, which are OK. The procedure is to first store all the cache files on an NVMe disk (~20T), then check iostat and the NIC bandwidth, since Nginx will then serve the cache files from disk with io_uring or libaio. So my patch does not affect the sendfile path; it provides an alternative implementation of the legacy libaio AIO.
>
> Regards,
> Ping

yes, I see that your implementation is different. I wonder if you can see any difference in performance depending on request size? Is it always constant?

>
> -----Original Message-----
> From: nginx-devel On Behalf Of
> Vladimir Homutov
> Sent: Monday, January 18, 2021 3:28 PM
> To: nginx-devel at nginx.org
> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>
> On Thu, Jan 14, 2021 at 05:53:17AM +0000, Zhao, Ping wrote:
>> # HG changeset patch
>> # User Ping Zhao
>> # Date 1610554205 18000
>> #      Wed Jan 13 11:10:05 2021 -0500
>> # Node ID 95886c3353dc80a3da215027c1e0f2141e47e911
>> # Parent  b055bb6ef87e49232a7fcb4e5334b8efda3b6499
>> Add io_uring support in AIO(async io) module.
>>
>> Hello, This is a patch to support io_uring in AIO(async io) module.
>> Basically you don't need change your configurations. If you're using new kernel(above v5.1) which supports io_uring, and you have "aio on" in your configuration. Nginx will use io_uring for FILE_AIO access which can achieve performance improvement than legacy libaio.
>>
>> Checked with iostat which shows nvme disk io has 30%+ performance improvement with 1 thread.
>> Use wrk with 100 threads 200 connections(-t 100 -c 200) with 25000 random requests.
>>
>> iostat(B/s)
>> libaio     ~1.0 GB/s
>> io_uring   1.3+ GB/s
>
> Hello,
>
> what size of request did you use in your testing? The previous attempt to use uring (http://mailman.nginx.org/pipermail/nginx-devel/2020-November/013632.html) seems to have issues with big requests and falls back to sendfile in such cases. Note that from the standpoint of an HTTP server, most requests are usually larger than 4Kb.
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

From mail at muradm.net Tue Jan 19 15:33:31 2021
From: mail at muradm.net (=?iso-8859-1?q?muradm?=)
Date: Tue, 19 Jan 2021 18:33:31 +0300
Subject: [PATCH] Mail: added PROXY PROTOCOL support
In-Reply-To: <20210118160128.GO1147@mdounin.ru>
References: <20210118160128.GO1147@mdounin.ru>
Message-ID: <4618e767b84c5b3a7712.1611070411@muradm-aln1>

A non-text attachment was scrubbed...
Name: nginx.patch
Type: text/x-patch
Size: 21412 bytes
Desc: not available
URL: 

From mail at muradm.net Tue Jan 19 15:34:30 2021
From: mail at muradm.net (=?iso-8859-1?q?muradm?=)
Date: Tue, 19 Jan 2021 18:34:30 +0300
Subject: [PATCH] Mail: added PROXY PROTOCOL support
In-Reply-To: <20210118160128.GO1147@mdounin.ru>
References: <20210118160128.GO1147@mdounin.ru>
Message-ID: <4618e767b84c5b3a7712.1611070470@muradm-aln1>

# HG changeset patch
# User muradm
# Date 1611069863 -10800
#      Tue Jan 19 18:24:23 2021 +0300
# Node ID 4618e767b84c5b3a7712466edb5bf37e3f0294ed
# Parent  83c4622053b02821a12d522d08eaff3ac27e65e3
Mail: added PROXY PROTOCOL support.

This implements proxy protocol support for both upstream and downstream.

Downstream proxy protocol support:

mail {
    server {
        listen [ssl] proxy_protocol;
        protocol ;
    }
}

This will properly handle incoming connections from a load balancer sending the PROXY protocol header. Without this, it is impossible to run nginx mail proxy behind such a balancer. Header reading is done with the existing function "ngx_proxy_protocol_read", so it should support both v1 and v2 headers. This will also set the "sockaddr" and "local_sockaddr" addresses from the received header, mimicking "set_realip". While "realip_module" deals with variables etc., which is necessary for the HTTP protocol, mail protocols are pretty strict, so there is no need for flexible handling of the real addresses received.

Upstream proxy protocol support:

mail {
    server {
        listen [ssl];
        protocol ;
        proxy_protocol on;
    }
}

With this, the upstream server (like Postfix, Exim, Dovecot) will receive the PROXY protocol header. The mentioned programs support proxy protocol out of the box. The header is written with the existing function "ngx_proxy_protocol_write", which supports only v1 header writing. The contents of the header are written from "sockaddr" and "local_sockaddr".

Downstream and upstream proxy protocol support:

mail {
    server {
        listen [ssl] proxy_protocol;
        protocol ;
        proxy_protocol on;
    }
}

This combines receiving the PROXY header and sending the PROXY header. With this, the upstream server (like Postfix, Exim, Dovecot) will receive the same header as was sent by the downstream load balancer.

The above configurations work for SSL as well and should be transparent to other mail related configurations.

Added upstream server "connect_timeout", which defaults to 1 second.

Server configurations enabling proxy_protocol in the listen directive require a "set_real_ip_from" configuration, like the following:

mail {
    # ...
server { listen 587 proxy_protocol; set_real_ip_from "192.168.1.1"; set_real_ip_from "10.10.0.0/16"; set_real_ip_from "0.0.0.0/0"; } } With enabled "proxy_protocol" and missing at least one "set_real_ip_from", all connections will be dropped and at startup user will see in error_log: using PROXY protocol without set_real_ip_from \ while reading PROXY protocol header When "set_real_ip_from" is provided, but remote address on physical connection does not satisfy any address criteria, at "notice" level, in error_log, user will see: UNTRUSTED PROXY protocol provider: 127.0.0.1 \ while reading PROXY protocol header, \ client: 127.0.0.1, server: 127.0.0.1:8143 diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail.c --- a/src/mail/ngx_mail.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.c Tue Jan 19 18:24:23 2021 +0300 @@ -402,6 +402,7 @@ addrs[i].addr = sin->sin_addr.s_addr; addrs[i].conf.ctx = addr[i].opt.ctx; + addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs[i].conf.ssl = addr[i].opt.ssl; #endif @@ -436,6 +437,7 @@ addrs6[i].addr6 = sin6->sin6_addr; addrs6[i].conf.ctx = addr[i].opt.ctx; + addrs6[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; #if (NGX_MAIL_SSL) addrs6[i].conf.ssl = addr[i].opt.ssl; #endif diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail.h --- a/src/mail/ngx_mail.h Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail.h Tue Jan 19 18:24:23 2021 +0300 @@ -37,6 +37,7 @@ unsigned bind:1; unsigned wildcard:1; unsigned ssl:1; + unsigned proxy_protocol:1; #if (NGX_HAVE_INET6) unsigned ipv6only:1; #endif @@ -56,6 +57,7 @@ ngx_mail_conf_ctx_t *ctx; ngx_str_t addr_text; ngx_uint_t ssl; /* unsigned ssl:1; */ + unsigned proxy_protocol:1; } ngx_mail_addr_conf_t; typedef struct { @@ -125,6 +127,8 @@ ngx_mail_conf_ctx_t *ctx; ngx_uint_t listen; /* unsigned listen:1; */ + + ngx_array_t *realip_from; /* array of ngx_cidr_t */ } ngx_mail_core_srv_conf_t; @@ -190,6 +194,7 @@ void **ctx; void **main_conf; void **srv_conf; + ngx_mail_addr_conf_t *addr_conf; ngx_resolver_ctx_t *resolver_ctx; @@ -197,6 +202,7 @@ ngx_uint_t mail_state; + unsigned proxy_protocol:1; unsigned protocol:3; unsigned blocked:1; unsigned quit:1; diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail_core_module.c --- a/src/mail/ngx_mail_core_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_core_module.c Tue Jan 19 18:24:23 2021 +0300 @@ -25,7 +25,7 @@ void *conf); static char *ngx_mail_core_resolver(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); - +static char *ngx_mail_core_realip_from(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static ngx_command_t ngx_mail_core_commands[] = { @@ -85,6 +85,13 @@ offsetof(ngx_mail_core_srv_conf_t, resolver_timeout), NULL }, + { ngx_string("set_real_ip_from"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_mail_core_realip_from, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_core_srv_conf_t, realip_from), + NULL }, + ngx_null_command }; @@ -165,6 +172,8 @@ cscf->resolver = NGX_CONF_UNSET_PTR; + cscf->realip_from = NGX_CONF_UNSET_PTR; + cscf->file_name = cf->conf_file->file.name.data; cscf->line = cf->conf_file->line; @@ -206,6 +215,10 @@ ngx_conf_merge_ptr_value(conf->resolver, prev->resolver, NULL); + ngx_conf_merge_ptr_value(conf->realip_from, + prev->realip_from, + NGX_CONF_UNSET_PTR); + return NGX_CONF_OK; } @@ -548,6 +561,11 @@ #endif } + if (ngx_strcmp(value[i].data, "proxy_protocol") == 0) { + ls->proxy_protocol = 1; + continue; + } + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "the invalid 
\"%V\" parameter", &value[i]); return NGX_CONF_ERROR; @@ -676,3 +694,104 @@ return NGX_CONF_OK; } + +char * +ngx_mail_core_realip_from(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_mail_core_srv_conf_t *cscf = conf; + + ngx_int_t rc; + ngx_str_t *value; + ngx_url_t u; + ngx_cidr_t c, *cidr; + ngx_uint_t i; + struct sockaddr_in *sin; +#if (NGX_HAVE_INET6) + struct sockaddr_in6 *sin6; +#endif + + value = cf->args->elts; + + if (cscf->realip_from == NGX_CONF_UNSET_PTR) { + cscf->realip_from = ngx_array_create(cf->pool, 2, sizeof(ngx_cidr_t)); + if (cscf->realip_from == NULL) { + return NGX_CONF_ERROR; + } + } + +#if (NGX_HAVE_UNIX_DOMAIN) + + if (ngx_strcmp(value[1].data, "unix:") == 0) { + cidr = ngx_array_push(cscf->realip_from); + if (cidr == NULL) { + return NGX_CONF_ERROR; + } + + cidr->family = AF_UNIX; + return NGX_CONF_OK; + } + +#endif + + rc = ngx_ptocidr(&value[1], &c); + + if (rc != NGX_ERROR) { + if (rc == NGX_DONE) { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "low address bits of %V are meaningless", + &value[1]); + } + + cidr = ngx_array_push(cscf->realip_from); + if (cidr == NULL) { + return NGX_CONF_ERROR; + } + + *cidr = c; + + return NGX_CONF_OK; + } + + ngx_memzero(&u, sizeof(ngx_url_t)); + u.host = value[1]; + + if (ngx_inet_resolve_host(cf->pool, &u) != NGX_OK) { + if (u.err) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "%s in set_real_ip_from \"%V\"", + u.err, &u.host); + } + + return NGX_CONF_ERROR; + } + + cidr = ngx_array_push_n(cscf->realip_from, u.naddrs); + if (cidr == NULL) { + return NGX_CONF_ERROR; + } + + ngx_memzero(cidr, u.naddrs * sizeof(ngx_cidr_t)); + + for (i = 0; i < u.naddrs; i++) { + cidr[i].family = u.addrs[i].sockaddr->sa_family; + + switch (cidr[i].family) { + +#if (NGX_HAVE_INET6) + case AF_INET6: + sin6 = (struct sockaddr_in6 *) u.addrs[i].sockaddr; + cidr[i].u.in6.addr = sin6->sin6_addr; + ngx_memset(cidr[i].u.in6.mask.s6_addr, 0xff, 16); + break; +#endif + + default: /* AF_INET */ + sin = (struct sockaddr_in *) u.addrs[i].sockaddr; + cidr[i].u.in.addr = sin->sin_addr.s_addr; + cidr[i].u.in.mask = 0xffffffff; + break; + } + } + + return NGX_CONF_OK; +} diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail_handler.c --- a/src/mail/ngx_mail_handler.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/mail/ngx_mail_handler.c Tue Jan 19 18:24:23 2021 +0300 @@ -12,6 +12,8 @@ static void ngx_mail_init_session(ngx_connection_t *c); +static void ngx_mail_init_connection_complete(ngx_connection_t *c); +static void ngx_mail_proxy_protocol_handler(ngx_event_t *rev); #if (NGX_MAIL_SSL) static void ngx_mail_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c); @@ -128,6 +130,7 @@ s->main_conf = addr_conf->ctx->main_conf; s->srv_conf = addr_conf->ctx->srv_conf; + s->addr_conf = addr_conf; s->addr_text = &addr_conf->addr_text; @@ -159,13 +162,181 @@ c->log_error = NGX_ERROR_INFO; + /* + * Before all process proxy protocol + */ + + if (addr_conf->proxy_protocol) { + s->proxy_protocol = 1; + c->log->action = "reading PROXY protocol header"; + c->read->handler = ngx_mail_proxy_protocol_handler; + + ngx_add_timer(c->read, cscf->timeout); + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + ngx_mail_close_connection(c); + } + + return; + } + + ngx_mail_init_connection_complete(c); +} + + +ngx_int_t +ngx_mail_proxy_protoco_set_addrs(ngx_connection_t *c) +{ + ngx_addr_t addr_peer, addr_local; + u_char *p, text[NGX_SOCKADDR_STRLEN]; + size_t len; + + if (ngx_parse_addr(c->pool, &addr_peer, + c->proxy_protocol->src_addr.data, + 
c->proxy_protocol->src_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + ngx_inet_set_port(addr_peer.sockaddr, c->proxy_protocol->src_port); + + if (ngx_parse_addr(c->pool, &addr_local, + c->proxy_protocol->dst_addr.data, + c->proxy_protocol->dst_addr.len) != NGX_OK) + { + return NGX_ERROR; + } + + ngx_inet_set_port(addr_local.sockaddr, c->proxy_protocol->dst_port); + + len = ngx_sock_ntop(addr_peer.sockaddr, addr_peer.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->sockaddr = addr_peer.sockaddr; + c->socklen = addr_peer.socklen; + c->addr_text.len = len; + c->addr_text.data = p; + + len = ngx_sock_ntop(addr_local.sockaddr, addr_local.socklen, text, + NGX_SOCKADDR_STRLEN, 0); + if (len == 0) { + return NGX_ERROR; + } + + p = ngx_pnalloc(c->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + ngx_memcpy(p, text, len); + + c->local_sockaddr = addr_local.sockaddr; + c->local_socklen = addr_local.socklen; + + return NGX_OK; +} + + +void +ngx_mail_proxy_protocol_handler(ngx_event_t *rev) +{ + ngx_mail_core_srv_conf_t *cscf; + ngx_mail_session_t *s; + ngx_connection_t *c; + u_char *p, buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + size_t size; + ssize_t n; + + c = rev->data; + s = c->data; + + if (rev->timedout) { + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, + "mail PROXY protocol header timed out"); + c->timedout = 1; + ngx_mail_close_connection(c); + return; + } + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail PROXY protocol handler"); + + cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); + + if (cscf->realip_from == NGX_CONF_UNSET_PTR) { + ngx_log_error(NGX_LOG_WARN, c->log, 0, + "using PROXY protocol without set_real_ip_from"); + ngx_mail_close_connection(c); + return; + } + + if (ngx_cidr_match(c->sockaddr, cscf->realip_from) != NGX_OK) { + ngx_log_error(NGX_LOG_NOTICE, c->log, 0, + "UNTRUSTED PROXY protocol provider: %V", + &c->addr_text); + ngx_mail_close_connection(c); + return; + } + + size = NGX_PROXY_PROTOCOL_MAX_HEADER; + + n = recv(c->fd, (char *) buf, size, MSG_PEEK); + + ngx_log_debug1(NGX_LOG_DEBUG, c->log, 0, "mail recv(): %z", n); + + p = ngx_proxy_protocol_read(c, buf, buf + n); + + if (p == NULL) { + ngx_mail_close_connection(c); + return; + } + + ngx_log_error(NGX_LOG_NOTICE, c->log, 0, + "PROXY protocol %V:%d => %V:%d", + &c->proxy_protocol->src_addr, + c->proxy_protocol->src_port, + &c->proxy_protocol->dst_addr, + c->proxy_protocol->dst_port); + + size = p - buf; + + if (c->recv(c, buf, size) != (ssize_t) size) { + ngx_mail_close_connection(c); + return; + } + + if (ngx_mail_proxy_protoco_set_addrs(c) != NGX_OK) { + ngx_mail_close_connection(c); + return; + } + + ngx_mail_init_connection_complete(c); +} + + +void +ngx_mail_init_connection_complete(ngx_connection_t *c) +{ #if (NGX_MAIL_SSL) { - ngx_mail_ssl_conf_t *sslcf; + ngx_mail_session_t *s; + ngx_mail_ssl_conf_t *sslcf; + + s = c->data; sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); - if (sslcf->enable || addr_conf->ssl) { + if (sslcf->enable || s->addr_conf->ssl) { c->log->action = "SSL handshaking"; ngx_mail_ssl_init_connection(&sslcf->ssl, c); @@ -348,6 +519,7 @@ return; } + c->log->action = "sending client greeting line"; c->write->handler = ngx_mail_send; cscf->protocol->init_session(s, c); diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail_proxy_module.c --- a/src/mail/ngx_mail_proxy_module.c Tue Jan 12 16:59:31 2021 
+0300 +++ b/src/mail/ngx_mail_proxy_module.c Tue Jan 19 18:24:23 2021 +0300 @@ -19,6 +19,8 @@ ngx_flag_t smtp_auth; size_t buffer_size; ngx_msec_t timeout; + ngx_msec_t connect_timeout; + ngx_flag_t proxy_protocol; } ngx_mail_proxy_conf_t; @@ -36,7 +38,9 @@ static void *ngx_mail_proxy_create_conf(ngx_conf_t *cf); static char *ngx_mail_proxy_merge_conf(ngx_conf_t *cf, void *parent, void *child); - +static void ngx_mail_proxy_connect_handler(ngx_event_t *ev); +static void ngx_mail_proxy_start(ngx_mail_session_t *s); +static void ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s); static ngx_command_t ngx_mail_proxy_commands[] = { @@ -61,6 +65,13 @@ offsetof(ngx_mail_proxy_conf_t, timeout), NULL }, + { ngx_string("connect_timeout"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_proxy_conf_t, connect_timeout), + NULL }, + { ngx_string("proxy_pass_error_message"), NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -82,6 +93,13 @@ offsetof(ngx_mail_proxy_conf_t, smtp_auth), NULL }, + { ngx_string("proxy_protocol"), + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_MAIL_SRV_CONF_OFFSET, + offsetof(ngx_mail_proxy_conf_t, proxy_protocol), + NULL }, + ngx_null_command }; @@ -156,7 +174,6 @@ p->upstream.connection->pool = s->connection->pool; s->connection->read->handler = ngx_mail_proxy_block_read; - p->upstream.connection->write->handler = ngx_mail_proxy_dummy_handler; pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); @@ -169,23 +186,139 @@ s->out.len = 0; + if (rc == NGX_AGAIN) { + p->upstream.connection->write->handler = ngx_mail_proxy_connect_handler; + p->upstream.connection->read->handler = ngx_mail_proxy_connect_handler; + + ngx_add_timer(p->upstream.connection->write, pcf->connect_timeout); + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, "mail proxy delay connect"); + return; + } + + if (pcf->proxy_protocol) { + ngx_mail_proxy_send_proxy_protocol(s); + return; + } + + ngx_mail_proxy_start(s); +} + + +void +ngx_mail_proxy_connect_handler(ngx_event_t *ev) +{ + ngx_connection_t *c; + ngx_mail_session_t *s; + ngx_mail_proxy_conf_t *pcf; + + c = ev->data; + s = c->data; + + if (ev->timedout) { + ngx_log_error(NGX_LOG_ERR, c->log, NGX_ETIMEDOUT, "upstream timed out"); + ngx_mail_session_internal_server_error(s); + return; + } + + ngx_del_timer(c->write); + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail proxy connect upstream"); + + pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); + + if (pcf->proxy_protocol) { + ngx_mail_proxy_send_proxy_protocol(s); + return; + } + + ngx_mail_proxy_start(s); +} + + +void +ngx_mail_proxy_start(ngx_mail_session_t *s) +{ + ngx_connection_t *pc; + + pc = s->proxy->upstream.connection; + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, + "mail proxy starting"); + + pc->write->handler = ngx_mail_proxy_dummy_handler; + switch (s->protocol) { case NGX_MAIL_POP3_PROTOCOL: - p->upstream.connection->read->handler = ngx_mail_proxy_pop3_handler; + pc->read->handler = ngx_mail_proxy_pop3_handler; s->mail_state = ngx_pop3_start; break; case NGX_MAIL_IMAP_PROTOCOL: - p->upstream.connection->read->handler = ngx_mail_proxy_imap_handler; + pc->read->handler = ngx_mail_proxy_imap_handler; s->mail_state = ngx_imap_start; break; default: /* NGX_MAIL_SMTP_PROTOCOL */ - p->upstream.connection->read->handler = ngx_mail_proxy_smtp_handler; + pc->read->handler = 
ngx_mail_proxy_smtp_handler; s->mail_state = ngx_smtp_start; break; } + + if (pc->read->ready) { + ngx_post_event(pc->read, &ngx_posted_events); + } +} + + +void +ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s) +{ + u_char *p; + ssize_t n, size; + ngx_connection_t *c, *pc; + ngx_peer_connection_t *u; + u_char buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; + + c = s->connection; + + ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, + "mail proxy send PROXY protocol header"); + + p = ngx_proxy_protocol_write(c, buf, buf + NGX_PROXY_PROTOCOL_MAX_HEADER); + if (p == NULL) { + ngx_mail_proxy_internal_server_error(s); + return; + } + + u = &s->proxy->upstream; + + pc = u->connection; + + size = p - buf; + + n = pc->send(pc, buf, size); + + if (n != size) { + + /* + * PROXY protocol specification: + * The sender must always ensure that the header + * is sent at once, so that the transport layer + * maintains atomicity along the path to the receiver. + */ + + ngx_log_error(NGX_LOG_ERR, c->log, 0, + "could not send PROXY protocol header at once (%z)", n); + + ngx_mail_proxy_internal_server_error(s); + + return; + } + + ngx_mail_proxy_start(s); } @@ -1184,6 +1317,8 @@ pcf->smtp_auth = NGX_CONF_UNSET; pcf->buffer_size = NGX_CONF_UNSET_SIZE; pcf->timeout = NGX_CONF_UNSET_MSEC; + pcf->connect_timeout = NGX_CONF_UNSET_MSEC; + pcf->proxy_protocol = NGX_CONF_UNSET; return pcf; } @@ -1202,6 +1337,8 @@ ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, (size_t) ngx_pagesize); ngx_conf_merge_msec_value(conf->timeout, prev->timeout, 24 * 60 * 60000); + ngx_conf_merge_msec_value(conf->connect_timeout, prev->connect_timeout, 1000); + ngx_conf_merge_value(conf->proxy_protocol, prev->proxy_protocol, 0); return NGX_CONF_OK; } From mail at muradm.net Tue Jan 19 15:38:05 2021 From: mail at muradm.net (muradm) Date: Tue, 19 Jan 2021 18:38:05 +0300 Subject: [PATCH] Mail: added PROXY PROTOCOL support In-Reply-To: <4618e767b84c5b3a7712.1611070470@muradm-aln1> References: <20210118160128.GO1147@mdounin.ru> <4618e767b84c5b3a7712.1611070470@muradm-aln1> Message-ID: <20210119153805.yqj2zcklydahefkf@muradm-aln1> Sorry, my mercurial skills sucks. Last patch includes "set_real_ip_from" support in the same way it is supported by "ngx_streams" without variables implementation, i.e. it just configures the "real_ip thing", but addresses cannot be used as variables else where in configuration. On 2021.01.19 18:34, muradm wrote: ># HG changeset patch ># User muradm ># Date 1611069863 -10800 ># Tue Jan 19 18:24:23 2021 +0300 ># Node ID 4618e767b84c5b3a7712466edb5bf37e3f0294ed ># Parent 83c4622053b02821a12d522d08eaff3ac27e65e3 >Mail: added PROXY PROTOCOL support. > >This implements propxy protocol support for both upstream and downstream. > >Downstream proxy protocol support: > >mail { > server { > listen [ssl] proxy_protocol; > protocol ; > } >} > >This will properly handle incoming connections from load balancer sending >PROXY protocol header. Without this, it is impossible to run nginx mail >proxy behind such balancer. Header reading is done with existing function >"ngx_proxy_protocol_read", so it should support both v1 and v2 headers. >This will also set "sockaddr" and "local_sockaddr" addresses from received >header, mimicing "set_realip". While "realip_module" deals with variables >etc., which is necessary for HTTP protocol, mail protocols are pretty >strict, so there is no need for flexible handling of real addresses >received. 
> >Upstream proxy protocol support: > >mail { > server { > listen [ssl]; > protocol ; > proxy_protocol on; > } >} > >With this, upstream server (like Postfix, Exim, Dovecot) will have PROXY >protocol header. Mentioned programs do support proxy protocol out of the >box. Header is written with existing function "ngx_proxy_protocol_write" >which supports only v1 header writing. Contents of header are written >from "sockaddr" and "local_sockaddr". > >Downstream and upstream proxy protocol support: > >mail { > server { > listen [ssl] proxy_protocol; > protocol ; > proxy_protocol on; > } >} > >This will combine both receiving PROXY header and sending PROXY header. With >this, upstream server (like Postfix, Exim, Dovecot) will receive the same >header as was sent by downstream load balancer. > >Above configurations work for SSL as well and should be transparent to other >mail related configurations. > >Added upstream server "connect_timeout" which defaults to 1 second. > >Server configurations enabling proxy_protocol in listen directive, require >"set_real_ip_from" configuration. Like the following: > >mail { > # ... > server { > listen 587 proxy_protocol; > set_real_ip_from "192.168.1.1"; > set_real_ip_from "10.10.0.0/16"; > set_real_ip_from "0.0.0.0/0"; > } >} > >With enabled "proxy_protocol" and missing at least one "set_real_ip_from", >all connections will be dropped and at startup user will see in error_log: > > using PROXY protocol without set_real_ip_from \ > while reading PROXY protocol header > >When "set_real_ip_from" is provided, but remote address on physical connection >does not satisfy any address criteria, at "notice" level, in error_log, user >will see: > > UNTRUSTED PROXY protocol provider: 127.0.0.1 \ > while reading PROXY protocol header, \ > client: 127.0.0.1, server: 127.0.0.1:8143 > >diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail.c >--- a/src/mail/ngx_mail.c Tue Jan 12 16:59:31 2021 +0300 >+++ b/src/mail/ngx_mail.c Tue Jan 19 18:24:23 2021 +0300 >@@ -402,6 +402,7 @@ > addrs[i].addr = sin->sin_addr.s_addr; > > addrs[i].conf.ctx = addr[i].opt.ctx; >+ addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; > #if (NGX_MAIL_SSL) > addrs[i].conf.ssl = addr[i].opt.ssl; > #endif >@@ -436,6 +437,7 @@ > addrs6[i].addr6 = sin6->sin6_addr; > > addrs6[i].conf.ctx = addr[i].opt.ctx; >+ addrs6[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; > #if (NGX_MAIL_SSL) > addrs6[i].conf.ssl = addr[i].opt.ssl; > #endif >diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail.h >--- a/src/mail/ngx_mail.h Tue Jan 12 16:59:31 2021 +0300 >+++ b/src/mail/ngx_mail.h Tue Jan 19 18:24:23 2021 +0300 >@@ -37,6 +37,7 @@ > unsigned bind:1; > unsigned wildcard:1; > unsigned ssl:1; >+ unsigned proxy_protocol:1; > #if (NGX_HAVE_INET6) > unsigned ipv6only:1; > #endif >@@ -56,6 +57,7 @@ > ngx_mail_conf_ctx_t *ctx; > ngx_str_t addr_text; > ngx_uint_t ssl; /* unsigned ssl:1; */ >+ unsigned proxy_protocol:1; > } ngx_mail_addr_conf_t; > > typedef struct { >@@ -125,6 +127,8 @@ > ngx_mail_conf_ctx_t *ctx; > > ngx_uint_t listen; /* unsigned listen:1; */ >+ >+ ngx_array_t *realip_from; /* array of ngx_cidr_t */ > } ngx_mail_core_srv_conf_t; > > >@@ -190,6 +194,7 @@ > void **ctx; > void **main_conf; > void **srv_conf; >+ ngx_mail_addr_conf_t *addr_conf; > > ngx_resolver_ctx_t *resolver_ctx; > >@@ -197,6 +202,7 @@ > > ngx_uint_t mail_state; > >+ unsigned proxy_protocol:1; > unsigned protocol:3; > unsigned blocked:1; > unsigned quit:1; >diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail_core_module.c >--- 
a/src/mail/ngx_mail_core_module.c Tue Jan 12 16:59:31 2021 +0300 >+++ b/src/mail/ngx_mail_core_module.c Tue Jan 19 18:24:23 2021 +0300 >@@ -25,7 +25,7 @@ > void *conf); > static char *ngx_mail_core_resolver(ngx_conf_t *cf, ngx_command_t *cmd, > void *conf); >- >+static char *ngx_mail_core_realip_from(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); > > static ngx_command_t ngx_mail_core_commands[] = { > >@@ -85,6 +85,13 @@ > offsetof(ngx_mail_core_srv_conf_t, resolver_timeout), > NULL }, > >+ { ngx_string("set_real_ip_from"), >+ NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, >+ ngx_mail_core_realip_from, >+ NGX_MAIL_SRV_CONF_OFFSET, >+ offsetof(ngx_mail_core_srv_conf_t, realip_from), >+ NULL }, >+ > ngx_null_command > }; > >@@ -165,6 +172,8 @@ > > cscf->resolver = NGX_CONF_UNSET_PTR; > >+ cscf->realip_from = NGX_CONF_UNSET_PTR; >+ > cscf->file_name = cf->conf_file->file.name.data; > cscf->line = cf->conf_file->line; > >@@ -206,6 +215,10 @@ > > ngx_conf_merge_ptr_value(conf->resolver, prev->resolver, NULL); > >+ ngx_conf_merge_ptr_value(conf->realip_from, >+ prev->realip_from, >+ NGX_CONF_UNSET_PTR); >+ > return NGX_CONF_OK; > } > >@@ -548,6 +561,11 @@ > #endif > } > >+ if (ngx_strcmp(value[i].data, "proxy_protocol") == 0) { >+ ls->proxy_protocol = 1; >+ continue; >+ } >+ > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > "the invalid \"%V\" parameter", &value[i]); > return NGX_CONF_ERROR; >@@ -676,3 +694,104 @@ > > return NGX_CONF_OK; > } >+ >+char * >+ngx_mail_core_realip_from(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) >+{ >+ ngx_mail_core_srv_conf_t *cscf = conf; >+ >+ ngx_int_t rc; >+ ngx_str_t *value; >+ ngx_url_t u; >+ ngx_cidr_t c, *cidr; >+ ngx_uint_t i; >+ struct sockaddr_in *sin; >+#if (NGX_HAVE_INET6) >+ struct sockaddr_in6 *sin6; >+#endif >+ >+ value = cf->args->elts; >+ >+ if (cscf->realip_from == NGX_CONF_UNSET_PTR) { >+ cscf->realip_from = ngx_array_create(cf->pool, 2, sizeof(ngx_cidr_t)); >+ if (cscf->realip_from == NULL) { >+ return NGX_CONF_ERROR; >+ } >+ } >+ >+#if (NGX_HAVE_UNIX_DOMAIN) >+ >+ if (ngx_strcmp(value[1].data, "unix:") == 0) { >+ cidr = ngx_array_push(cscf->realip_from); >+ if (cidr == NULL) { >+ return NGX_CONF_ERROR; >+ } >+ >+ cidr->family = AF_UNIX; >+ return NGX_CONF_OK; >+ } >+ >+#endif >+ >+ rc = ngx_ptocidr(&value[1], &c); >+ >+ if (rc != NGX_ERROR) { >+ if (rc == NGX_DONE) { >+ ngx_conf_log_error(NGX_LOG_WARN, cf, 0, >+ "low address bits of %V are meaningless", >+ &value[1]); >+ } >+ >+ cidr = ngx_array_push(cscf->realip_from); >+ if (cidr == NULL) { >+ return NGX_CONF_ERROR; >+ } >+ >+ *cidr = c; >+ >+ return NGX_CONF_OK; >+ } >+ >+ ngx_memzero(&u, sizeof(ngx_url_t)); >+ u.host = value[1]; >+ >+ if (ngx_inet_resolve_host(cf->pool, &u) != NGX_OK) { >+ if (u.err) { >+ ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, >+ "%s in set_real_ip_from \"%V\"", >+ u.err, &u.host); >+ } >+ >+ return NGX_CONF_ERROR; >+ } >+ >+ cidr = ngx_array_push_n(cscf->realip_from, u.naddrs); >+ if (cidr == NULL) { >+ return NGX_CONF_ERROR; >+ } >+ >+ ngx_memzero(cidr, u.naddrs * sizeof(ngx_cidr_t)); >+ >+ for (i = 0; i < u.naddrs; i++) { >+ cidr[i].family = u.addrs[i].sockaddr->sa_family; >+ >+ switch (cidr[i].family) { >+ >+#if (NGX_HAVE_INET6) >+ case AF_INET6: >+ sin6 = (struct sockaddr_in6 *) u.addrs[i].sockaddr; >+ cidr[i].u.in6.addr = sin6->sin6_addr; >+ ngx_memset(cidr[i].u.in6.mask.s6_addr, 0xff, 16); >+ break; >+#endif >+ >+ default: /* AF_INET */ >+ sin = (struct sockaddr_in *) u.addrs[i].sockaddr; >+ cidr[i].u.in.addr = sin->sin_addr.s_addr; >+ cidr[i].u.in.mask = 
0xffffffff; >+ break; >+ } >+ } >+ >+ return NGX_CONF_OK; >+} >diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail_handler.c >--- a/src/mail/ngx_mail_handler.c Tue Jan 12 16:59:31 2021 +0300 >+++ b/src/mail/ngx_mail_handler.c Tue Jan 19 18:24:23 2021 +0300 >@@ -12,6 +12,8 @@ > > > static void ngx_mail_init_session(ngx_connection_t *c); >+static void ngx_mail_init_connection_complete(ngx_connection_t *c); >+static void ngx_mail_proxy_protocol_handler(ngx_event_t *rev); > > #if (NGX_MAIL_SSL) > static void ngx_mail_ssl_init_connection(ngx_ssl_t *ssl, ngx_connection_t *c); >@@ -128,6 +130,7 @@ > > s->main_conf = addr_conf->ctx->main_conf; > s->srv_conf = addr_conf->ctx->srv_conf; >+ s->addr_conf = addr_conf; > > s->addr_text = &addr_conf->addr_text; > >@@ -159,13 +162,181 @@ > > c->log_error = NGX_ERROR_INFO; > >+ /* >+ * Before all process proxy protocol >+ */ >+ >+ if (addr_conf->proxy_protocol) { >+ s->proxy_protocol = 1; >+ c->log->action = "reading PROXY protocol header"; >+ c->read->handler = ngx_mail_proxy_protocol_handler; >+ >+ ngx_add_timer(c->read, cscf->timeout); >+ >+ if (ngx_handle_read_event(c->read, 0) != NGX_OK) { >+ ngx_mail_close_connection(c); >+ } >+ >+ return; >+ } >+ >+ ngx_mail_init_connection_complete(c); >+} >+ >+ >+ngx_int_t >+ngx_mail_proxy_protoco_set_addrs(ngx_connection_t *c) >+{ >+ ngx_addr_t addr_peer, addr_local; >+ u_char *p, text[NGX_SOCKADDR_STRLEN]; >+ size_t len; >+ >+ if (ngx_parse_addr(c->pool, &addr_peer, >+ c->proxy_protocol->src_addr.data, >+ c->proxy_protocol->src_addr.len) != NGX_OK) >+ { >+ return NGX_ERROR; >+ } >+ >+ ngx_inet_set_port(addr_peer.sockaddr, c->proxy_protocol->src_port); >+ >+ if (ngx_parse_addr(c->pool, &addr_local, >+ c->proxy_protocol->dst_addr.data, >+ c->proxy_protocol->dst_addr.len) != NGX_OK) >+ { >+ return NGX_ERROR; >+ } >+ >+ ngx_inet_set_port(addr_local.sockaddr, c->proxy_protocol->dst_port); >+ >+ len = ngx_sock_ntop(addr_peer.sockaddr, addr_peer.socklen, text, >+ NGX_SOCKADDR_STRLEN, 0); >+ if (len == 0) { >+ return NGX_ERROR; >+ } >+ >+ p = ngx_pnalloc(c->pool, len); >+ if (p == NULL) { >+ return NGX_ERROR; >+ } >+ >+ ngx_memcpy(p, text, len); >+ >+ c->sockaddr = addr_peer.sockaddr; >+ c->socklen = addr_peer.socklen; >+ c->addr_text.len = len; >+ c->addr_text.data = p; >+ >+ len = ngx_sock_ntop(addr_local.sockaddr, addr_local.socklen, text, >+ NGX_SOCKADDR_STRLEN, 0); >+ if (len == 0) { >+ return NGX_ERROR; >+ } >+ >+ p = ngx_pnalloc(c->pool, len); >+ if (p == NULL) { >+ return NGX_ERROR; >+ } >+ >+ ngx_memcpy(p, text, len); >+ >+ c->local_sockaddr = addr_local.sockaddr; >+ c->local_socklen = addr_local.socklen; >+ >+ return NGX_OK; >+} >+ >+ >+void >+ngx_mail_proxy_protocol_handler(ngx_event_t *rev) >+{ >+ ngx_mail_core_srv_conf_t *cscf; >+ ngx_mail_session_t *s; >+ ngx_connection_t *c; >+ u_char *p, buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; >+ size_t size; >+ ssize_t n; >+ >+ c = rev->data; >+ s = c->data; >+ >+ if (rev->timedout) { >+ ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, >+ "mail PROXY protocol header timed out"); >+ c->timedout = 1; >+ ngx_mail_close_connection(c); >+ return; >+ } >+ >+ ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, >+ "mail PROXY protocol handler"); >+ >+ cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); >+ >+ if (cscf->realip_from == NGX_CONF_UNSET_PTR) { >+ ngx_log_error(NGX_LOG_WARN, c->log, 0, >+ "using PROXY protocol without set_real_ip_from"); >+ ngx_mail_close_connection(c); >+ return; >+ } >+ >+ if (ngx_cidr_match(c->sockaddr, cscf->realip_from) != NGX_OK) { >+ 
ngx_log_error(NGX_LOG_NOTICE, c->log, 0, >+ "UNTRUSTED PROXY protocol provider: %V", >+ &c->addr_text); >+ ngx_mail_close_connection(c); >+ return; >+ } >+ >+ size = NGX_PROXY_PROTOCOL_MAX_HEADER; >+ >+ n = recv(c->fd, (char *) buf, size, MSG_PEEK); >+ >+ ngx_log_debug1(NGX_LOG_DEBUG, c->log, 0, "mail recv(): %z", n); >+ >+ p = ngx_proxy_protocol_read(c, buf, buf + n); >+ >+ if (p == NULL) { >+ ngx_mail_close_connection(c); >+ return; >+ } >+ >+ ngx_log_error(NGX_LOG_NOTICE, c->log, 0, >+ "PROXY protocol %V:%d => %V:%d", >+ &c->proxy_protocol->src_addr, >+ c->proxy_protocol->src_port, >+ &c->proxy_protocol->dst_addr, >+ c->proxy_protocol->dst_port); >+ >+ size = p - buf; >+ >+ if (c->recv(c, buf, size) != (ssize_t) size) { >+ ngx_mail_close_connection(c); >+ return; >+ } >+ >+ if (ngx_mail_proxy_protoco_set_addrs(c) != NGX_OK) { >+ ngx_mail_close_connection(c); >+ return; >+ } >+ >+ ngx_mail_init_connection_complete(c); >+} >+ >+ >+void >+ngx_mail_init_connection_complete(ngx_connection_t *c) >+{ > #if (NGX_MAIL_SSL) > { >- ngx_mail_ssl_conf_t *sslcf; >+ ngx_mail_session_t *s; >+ ngx_mail_ssl_conf_t *sslcf; >+ >+ s = c->data; > > sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); > >- if (sslcf->enable || addr_conf->ssl) { >+ if (sslcf->enable || s->addr_conf->ssl) { > c->log->action = "SSL handshaking"; > > ngx_mail_ssl_init_connection(&sslcf->ssl, c); >@@ -348,6 +519,7 @@ > return; > } > >+ c->log->action = "sending client greeting line"; > c->write->handler = ngx_mail_send; > > cscf->protocol->init_session(s, c); >diff -r 83c4622053b0 -r 4618e767b84c src/mail/ngx_mail_proxy_module.c >--- a/src/mail/ngx_mail_proxy_module.c Tue Jan 12 16:59:31 2021 +0300 >+++ b/src/mail/ngx_mail_proxy_module.c Tue Jan 19 18:24:23 2021 +0300 >@@ -19,6 +19,8 @@ > ngx_flag_t smtp_auth; > size_t buffer_size; > ngx_msec_t timeout; >+ ngx_msec_t connect_timeout; >+ ngx_flag_t proxy_protocol; > } ngx_mail_proxy_conf_t; > > >@@ -36,7 +38,9 @@ > static void *ngx_mail_proxy_create_conf(ngx_conf_t *cf); > static char *ngx_mail_proxy_merge_conf(ngx_conf_t *cf, void *parent, > void *child); >- >+static void ngx_mail_proxy_connect_handler(ngx_event_t *ev); >+static void ngx_mail_proxy_start(ngx_mail_session_t *s); >+static void ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s); > > static ngx_command_t ngx_mail_proxy_commands[] = { > >@@ -61,6 +65,13 @@ > offsetof(ngx_mail_proxy_conf_t, timeout), > NULL }, > >+ { ngx_string("connect_timeout"), >+ NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, >+ ngx_conf_set_msec_slot, >+ NGX_MAIL_SRV_CONF_OFFSET, >+ offsetof(ngx_mail_proxy_conf_t, connect_timeout), >+ NULL }, >+ > { ngx_string("proxy_pass_error_message"), > NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, > ngx_conf_set_flag_slot, >@@ -82,6 +93,13 @@ > offsetof(ngx_mail_proxy_conf_t, smtp_auth), > NULL }, > >+ { ngx_string("proxy_protocol"), >+ NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, >+ ngx_conf_set_flag_slot, >+ NGX_MAIL_SRV_CONF_OFFSET, >+ offsetof(ngx_mail_proxy_conf_t, proxy_protocol), >+ NULL }, >+ > ngx_null_command > }; > >@@ -156,7 +174,6 @@ > p->upstream.connection->pool = s->connection->pool; > > s->connection->read->handler = ngx_mail_proxy_block_read; >- p->upstream.connection->write->handler = ngx_mail_proxy_dummy_handler; > > pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); > >@@ -169,23 +186,139 @@ > > s->out.len = 0; > >+ if (rc == NGX_AGAIN) { >+ p->upstream.connection->write->handler = ngx_mail_proxy_connect_handler; >+ 
p->upstream.connection->read->handler = ngx_mail_proxy_connect_handler; >+ >+ ngx_add_timer(p->upstream.connection->write, pcf->connect_timeout); >+ >+ ngx_log_debug0(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, "mail proxy delay connect"); >+ return; >+ } >+ >+ if (pcf->proxy_protocol) { >+ ngx_mail_proxy_send_proxy_protocol(s); >+ return; >+ } >+ >+ ngx_mail_proxy_start(s); >+} >+ >+ >+void >+ngx_mail_proxy_connect_handler(ngx_event_t *ev) >+{ >+ ngx_connection_t *c; >+ ngx_mail_session_t *s; >+ ngx_mail_proxy_conf_t *pcf; >+ >+ c = ev->data; >+ s = c->data; >+ >+ if (ev->timedout) { >+ ngx_log_error(NGX_LOG_ERR, c->log, NGX_ETIMEDOUT, "upstream timed out"); >+ ngx_mail_session_internal_server_error(s); >+ return; >+ } >+ >+ ngx_del_timer(c->write); >+ >+ ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, >+ "mail proxy connect upstream"); >+ >+ pcf = ngx_mail_get_module_srv_conf(s, ngx_mail_proxy_module); >+ >+ if (pcf->proxy_protocol) { >+ ngx_mail_proxy_send_proxy_protocol(s); >+ return; >+ } >+ >+ ngx_mail_proxy_start(s); >+} >+ >+ >+void >+ngx_mail_proxy_start(ngx_mail_session_t *s) >+{ >+ ngx_connection_t *pc; >+ >+ pc = s->proxy->upstream.connection; >+ >+ ngx_log_debug0(NGX_LOG_DEBUG_MAIL, s->connection->log, 0, >+ "mail proxy starting"); >+ >+ pc->write->handler = ngx_mail_proxy_dummy_handler; >+ > switch (s->protocol) { > > case NGX_MAIL_POP3_PROTOCOL: >- p->upstream.connection->read->handler = ngx_mail_proxy_pop3_handler; >+ pc->read->handler = ngx_mail_proxy_pop3_handler; > s->mail_state = ngx_pop3_start; > break; > > case NGX_MAIL_IMAP_PROTOCOL: >- p->upstream.connection->read->handler = ngx_mail_proxy_imap_handler; >+ pc->read->handler = ngx_mail_proxy_imap_handler; > s->mail_state = ngx_imap_start; > break; > > default: /* NGX_MAIL_SMTP_PROTOCOL */ >- p->upstream.connection->read->handler = ngx_mail_proxy_smtp_handler; >+ pc->read->handler = ngx_mail_proxy_smtp_handler; > s->mail_state = ngx_smtp_start; > break; > } >+ >+ if (pc->read->ready) { >+ ngx_post_event(pc->read, &ngx_posted_events); >+ } >+} >+ >+ >+void >+ngx_mail_proxy_send_proxy_protocol(ngx_mail_session_t *s) >+{ >+ u_char *p; >+ ssize_t n, size; >+ ngx_connection_t *c, *pc; >+ ngx_peer_connection_t *u; >+ u_char buf[NGX_PROXY_PROTOCOL_MAX_HEADER]; >+ >+ c = s->connection; >+ >+ ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0, >+ "mail proxy send PROXY protocol header"); >+ >+ p = ngx_proxy_protocol_write(c, buf, buf + NGX_PROXY_PROTOCOL_MAX_HEADER); >+ if (p == NULL) { >+ ngx_mail_proxy_internal_server_error(s); >+ return; >+ } >+ >+ u = &s->proxy->upstream; >+ >+ pc = u->connection; >+ >+ size = p - buf; >+ >+ n = pc->send(pc, buf, size); >+ >+ if (n != size) { >+ >+ /* >+ * PROXY protocol specification: >+ * The sender must always ensure that the header >+ * is sent at once, so that the transport layer >+ * maintains atomicity along the path to the receiver. 
>+ */ >+ >+ ngx_log_error(NGX_LOG_ERR, c->log, 0, >+ "could not send PROXY protocol header at once (%z)", n); >+ >+ ngx_mail_proxy_internal_server_error(s); >+ >+ return; >+ } >+ >+ ngx_mail_proxy_start(s); > } > > >@@ -1184,6 +1317,8 @@ > pcf->smtp_auth = NGX_CONF_UNSET; > pcf->buffer_size = NGX_CONF_UNSET_SIZE; > pcf->timeout = NGX_CONF_UNSET_MSEC; >+ pcf->connect_timeout = NGX_CONF_UNSET_MSEC; >+ pcf->proxy_protocol = NGX_CONF_UNSET; > > return pcf; > } >@@ -1202,6 +1337,8 @@ > ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, > (size_t) ngx_pagesize); > ngx_conf_merge_msec_value(conf->timeout, prev->timeout, 24 * 60 * 60000); >+ ngx_conf_merge_msec_value(conf->connect_timeout, prev->connect_timeout, 1000); >+ ngx_conf_merge_value(conf->proxy_protocol, prev->proxy_protocol, 0); > > return NGX_CONF_OK; > } -- Murad M(tr): +90 (533) 4874329 M(az): +994 (50) 2219909 From vl at nginx.com Tue Jan 19 16:42:50 2021 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 19 Jan 2021 19:42:50 +0300 Subject: [PATCH] Add io_uring support in AIO(async io) module In-Reply-To: References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com> <7463caa5-76d0-f6f8-e9b6-0c0b3fe1077c@nginx.com> Message-ID: On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote: > It depends on if disk io is the performance hot spot or not. If yes, > io_uring shows improvement than libaio. With 4KB/100KB length 1 Nginx > thread it's hard to see performance difference because iostat is only > around ~10MB/100MB per second. Disk io is not the performance bottle > neck, both libaio and io_uring have the same performance. If you > increase request size or Nginx threads number, for example 1MB length > or Nginx thread number 4. In this case, disk io became the performance > bottle neck, you will see io_uring performance improvement. Can you please provide full test results with specific nginx configuration? From mdounin at mdounin.ru Tue Jan 19 17:21:38 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jan 2021 17:21:38 +0000 Subject: [nginx] Removed incorrect optimization of HEAD requests. Message-ID: details: https://hg.nginx.org/nginx/rev/43a0a9e988be branches: changeset: 7761:43a0a9e988be user: Maxim Dounin date: Tue Jan 19 20:21:12 2021 +0300 description: Removed incorrect optimization of HEAD requests. The stub status module and ngx_http_send_response() (used by the empty gif module and the "return" directive) incorrectly assumed that responding to HEAD requests always results in r->header_only being set. This is not true, and results in incorrect behaviour, for example, in the following configuration: location / { image_filter size; return 200 test; } Fix is to remove this incorrect micro-optimization from both stub status module and ngx_http_send_response(). Reported by Chris Newton. 
diffstat: src/http/modules/ngx_http_stub_status_module.c | 10 ---------- src/http/ngx_http_core_module.c | 2 +- 2 files changed, 1 insertions(+), 11 deletions(-) diffs (32 lines): diff -r 83c4622053b0 -r 43a0a9e988be src/http/modules/ngx_http_stub_status_module.c --- a/src/http/modules/ngx_http_stub_status_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/http/modules/ngx_http_stub_status_module.c Tue Jan 19 20:21:12 2021 +0300 @@ -103,16 +103,6 @@ ngx_http_stub_status_handler(ngx_http_re ngx_str_set(&r->headers_out.content_type, "text/plain"); r->headers_out.content_type_lowcase = NULL; - if (r->method == NGX_HTTP_HEAD) { - r->headers_out.status = NGX_HTTP_OK; - - rc = ngx_http_send_header(r); - - if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { - return rc; - } - } - size = sizeof("Active connections: \n") + NGX_ATOMIC_T_LEN + sizeof("server accepts handled requests\n") - 1 + 6 + 3 * NGX_ATOMIC_T_LEN diff -r 83c4622053b0 -r 43a0a9e988be src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Tue Jan 12 16:59:31 2021 +0300 +++ b/src/http/ngx_http_core_module.c Tue Jan 19 20:21:12 2021 +0300 @@ -1782,7 +1782,7 @@ ngx_http_send_response(ngx_http_request_ } } - if (r->method == NGX_HTTP_HEAD || (r != r->main && val.len == 0)) { + if (r != r->main && val.len == 0) { return ngx_http_send_header(r); } From mdounin at mdounin.ru Tue Jan 19 17:28:02 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jan 2021 20:28:02 +0300 Subject: Remove unnecessary check in ngx_http_stub_status_handler() In-Reply-To: References: Message-ID: <20210119172802.GS1147@mdounin.ru> Hello! On Tue, Jan 05, 2021 at 01:24:04PM +0000, Chris Newton wrote: > I was desk checking return codes generated in handlers following calls to > ngx_http_send_header(), and noticed what appears to be an unnecessary test > in ngx_http_stub_status_handler() -- or rather, I think the test should > always evaluate as true, and if somehow it isn't odd things could occur - > at least an additional ALERT message would be logged, as well as some > unnecessary work performed. > > As such, I'd like to propose the following change: > > *--- a/src/http/modules/ngx_http_stub_status_module.c* > *+++ b/src/http/modules/ngx_http_stub_status_module.c* > @@ -106,11 +106,7 @@ ngx_http_stub_status_handler(ngx_http_request_t *r) > if (r->method == NGX_HTTP_HEAD) { > r->headers_out.status = NGX_HTTP_OK; > > - rc = ngx_http_send_header(r); > - > - if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { > - return rc; > - } > + return ngx_http_send_header(r); > } > > size = sizeof("Active connections: \n") + NGX_ATOMIC_T_LEN > > > On a successful call to ngx_http_send_header() I believe that > r->header_only will be set true and otherwise I'd expect one of those error > checks to evaluate true, so unconditionally returning the value from > ngx_http_send_header() seems 'cleaner'. > > If the test were to somehow fail, then processing would fall through and > try the ngx_http_send_header() call again (resulting in the ALERT message), > as well as performing other additional work that should be unnecessary when > making a HEAD request > > That test seems to be SOP after calling ngx_http_send_header(), but it > seems inappropriate when that function is called within an "r->method == > NGX_HTTP_HEAD" block. After looking at this I tend to think that this optimization is simply wrong, and, for example, image_filter or xslt filter make this obvious. 
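For reference, the conventional shape of a content handler is to send the header and then fall through to body generation unless there is nothing more to send. A generic sketch, not the committed change; the handler name, the extra arguments and the prepared buffer "b" and chain "out" are assumptions:

    /* Generic handler tail (illustrative only): let the checks after
     * ngx_http_send_header() decide whether a body follows. */
    static ngx_int_t
    ngx_http_example_handler_tail(ngx_http_request_t *r, ngx_buf_t *b,
        ngx_chain_t *out)
    {
        ngx_int_t  rc;

        r->headers_out.status = NGX_HTTP_OK;
        r->headers_out.content_length_n = b->last - b->pos;

        rc = ngx_http_send_header(r);

        if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
            /* error, special response, or header-only: stop here */
            return rc;
        }

        return ngx_http_output_filter(r, out);
    }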
I've committed a patch which completely removes this optimization from the stub_status, as well as similar optimization in ngx_http_send_response() (used by "return" and "empty_gif"): https://hg.nginx.org/nginx/rev/43a0a9e988be Thanks for reporting this. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jan 19 17:32:32 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jan 2021 17:32:32 +0000 Subject: [nginx] Core: removed post_accept_timeout. Message-ID: details: https://hg.nginx.org/nginx/rev/4e141d0816d4 branches: changeset: 7762:4e141d0816d4 user: Maxim Dounin date: Tue Jan 19 20:32:00 2021 +0300 description: Core: removed post_accept_timeout. Keeping post_accept_timeout in ngx_listening_t is no longer needed since we've switched to 1 second timeout for deferred accept in 5541:fdb67cfc957d. Further, using it in HTTP code can result in client_header_timeout being used from an incorrect server block, notably if address-specific virtual servers are used along with a wildcard listening socket, or if we've switched to a different server block based on SNI in SSL handshake. diffstat: src/core/ngx_connection.h | 2 -- src/http/ngx_http.c | 1 - src/http/ngx_http_request.c | 34 +++++++++++++++++++++------------- 3 files changed, 21 insertions(+), 16 deletions(-) diffs (103 lines): diff -r 43a0a9e988be -r 4e141d0816d4 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Tue Jan 19 20:21:12 2021 +0300 +++ b/src/core/ngx_connection.h Tue Jan 19 20:32:00 2021 +0300 @@ -45,8 +45,6 @@ struct ngx_listening_s { size_t pool_size; /* should be here because of the AcceptEx() preread */ size_t post_accept_buffer_size; - /* should be here because of the deferred accept */ - ngx_msec_t post_accept_timeout; ngx_listening_t *previous; ngx_connection_t *connection; diff -r 43a0a9e988be -r 4e141d0816d4 src/http/ngx_http.c --- a/src/http/ngx_http.c Tue Jan 19 20:21:12 2021 +0300 +++ b/src/http/ngx_http.c Tue Jan 19 20:32:00 2021 +0300 @@ -1714,7 +1714,6 @@ ngx_http_add_listening(ngx_conf_t *cf, n cscf = addr->default_server; ls->pool_size = cscf->connection_pool_size; - ls->post_accept_timeout = cscf->client_header_timeout; clcf = cscf->ctx->loc_conf[ngx_http_core_module.ctx_index]; diff -r 43a0a9e988be -r 4e141d0816d4 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Tue Jan 19 20:21:12 2021 +0300 +++ b/src/http/ngx_http_request.c Tue Jan 19 20:32:00 2021 +0300 @@ -206,16 +206,17 @@ ngx_http_header_t ngx_http_headers_in[] void ngx_http_init_connection(ngx_connection_t *c) { - ngx_uint_t i; - ngx_event_t *rev; - struct sockaddr_in *sin; - ngx_http_port_t *port; - ngx_http_in_addr_t *addr; - ngx_http_log_ctx_t *ctx; - ngx_http_connection_t *hc; + ngx_uint_t i; + ngx_event_t *rev; + struct sockaddr_in *sin; + ngx_http_port_t *port; + ngx_http_in_addr_t *addr; + ngx_http_log_ctx_t *ctx; + ngx_http_connection_t *hc; + ngx_http_core_srv_conf_t *cscf; #if (NGX_HAVE_INET6) - struct sockaddr_in6 *sin6; - ngx_http_in6_addr_t *addr6; + struct sockaddr_in6 *sin6; + ngx_http_in6_addr_t *addr6; #endif hc = ngx_pcalloc(c->pool, sizeof(ngx_http_connection_t)); @@ -361,7 +362,9 @@ ngx_http_init_connection(ngx_connection_ return; } - ngx_add_timer(rev, c->listening->post_accept_timeout); + cscf = ngx_http_get_module_srv_conf(hc->conf_ctx, ngx_http_core_module); + + ngx_add_timer(rev, cscf->client_header_timeout); ngx_reusable_connection(c, 1); if (ngx_handle_read_event(rev, 0) != NGX_OK) { @@ -431,7 +434,7 @@ ngx_http_wait_request_handler(ngx_event_ if (n == NGX_AGAIN) { if 
(!rev->timer_set) {
-            ngx_add_timer(rev, c->listening->post_accept_timeout);
+            ngx_add_timer(rev, cscf->client_header_timeout);
             ngx_reusable_connection(c, 1);
         }
@@ -649,6 +652,7 @@ ngx_http_ssl_handshake(ngx_event_t *rev)
     ngx_http_connection_t     *hc;
     ngx_http_ssl_srv_conf_t   *sscf;
     ngx_http_core_loc_conf_t  *clcf;
+    ngx_http_core_srv_conf_t  *cscf;
 
     c = rev->data;
     hc = c->data;
@@ -680,7 +684,9 @@ ngx_http_ssl_handshake(ngx_event_t *rev)
         rev->ready = 0;
 
         if (!rev->timer_set) {
-            ngx_add_timer(rev, c->listening->post_accept_timeout);
+            cscf = ngx_http_get_module_srv_conf(hc->conf_ctx,
+                                                ngx_http_core_module);
+            ngx_add_timer(rev, cscf->client_header_timeout);
             ngx_reusable_connection(c, 1);
         }
 
@@ -755,7 +761,9 @@ ngx_http_ssl_handshake(ngx_event_t *rev)
 
     if (rc == NGX_AGAIN) {
         if (!rev->timer_set) {
-            ngx_add_timer(rev, c->listening->post_accept_timeout);
+            cscf = ngx_http_get_module_srv_conf(hc->conf_ctx,
+                                                ngx_http_core_module);
+            ngx_add_timer(rev, cscf->client_header_timeout);
         }
 
         c->ssl->handler = ngx_http_ssl_handshake_handler;

From mdounin at mdounin.ru  Tue Jan 19 17:36:32 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 Jan 2021 17:36:32 +0000
Subject: [nginx] Year 2021.
Message-ID: 

details:   https://hg.nginx.org/nginx/rev/61d0df8fcc7c
branches:  
changeset: 7763:61d0df8fcc7c
user:      Maxim Dounin 
date:      Tue Jan 19 20:35:17 2021 +0300
description:
Year 2021.

diffstat:

 docs/text/LICENSE | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (12 lines):

diff -r 4e141d0816d4 -r 61d0df8fcc7c docs/text/LICENSE
--- a/docs/text/LICENSE Tue Jan 19 20:32:00 2021 +0300
+++ b/docs/text/LICENSE Tue Jan 19 20:35:17 2021 +0300
@@ -1,6 +1,6 @@
 /*
- * Copyright (C) 2002-2019 Igor Sysoev
- * Copyright (C) 2011-2019 Nginx, Inc.
+ * Copyright (C) 2002-2021 Igor Sysoev
+ * Copyright (C) 2011-2021 Nginx, Inc.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without

From h312841925 at gmail.com  Wed Jan 20 11:59:50 2021
From: h312841925 at gmail.com (Jim T)
Date: Wed, 20 Jan 2021 19:59:50 +0800
Subject: Http: protect prefix variable when add variable
Message-ID: 

Hello!
We ran into an incident in our team: we use auth_request_set like this in
many servers, and print $upstream_http_x_auth_request_email in the log:

server {
    listen 8080 reuseport;
    server_name test.io;
    location / {
        auth_request /oauth2/auth;
        auth_request_set $email $upstream_http_x_auth_request_email;
    }
}

But when we add a bad auth_request_set like the one below:

server {
    listen 8080 reuseport;
    server_name test2.io;
    location / {
        auth_request /oauth2/auth;
        auth_request_set $upstream_http_x_auth_request_email $email;
    }
}

we lose all $upstream_http_x_auth_request_email values, even in servers that
don't use it, because there is now a regular variable named
$upstream_http_x_auth_request_email, and the prefix variable can't be read
any more.

So I think we can fix it like this, to catch the wrong configuration:

# HG changeset patch
# User Jinhua Tan <312841925 at qq.com>
# Date 1611143620 -28800
#      Wed Jan 20 19:53:40 2021 +0800
# Node ID fd7e9432a59abcfcf380ddedb1e892098a54a845
# Parent  61d0df8fcc7c630da35e832ba8e983db0061a3be
Http: protect prefix variable when add variable

diff -r 61d0df8fcc7c -r fd7e9432a59a src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c Tue Jan 19 20:35:17 2021 +0300
+++ b/src/http/ngx_http_variables.c Wed Jan 20 19:53:40 2021 +0800
@@ -393,6 +393,20 @@
 };
 
 
+static ngx_str_t ngx_http_protect_variables_prefix[] = {
+    ngx_string("arg_"),
+    ngx_string("http_"),
+    ngx_string("sent_http_"),
+    ngx_string("sent_trailer_"),
+    ngx_string("cookie_"),
+    ngx_string("arg_"),
+    ngx_string("upstream_http_"),
+    ngx_string("upstream_trailer_"),
+    ngx_string("upstream_cookie_"),
+    ngx_null_string
+};
+
+
 ngx_http_variable_value_t  ngx_http_variable_null_value =
     ngx_http_variable("");
 ngx_http_variable_value_t  ngx_http_variable_true_value =
@@ -410,6 +424,7 @@
     ngx_hash_key_t             *key;
     ngx_http_variable_t        *v;
     ngx_http_core_main_conf_t  *cmcf;
+    ngx_str_t                  *p;
 
     if (name->len == 0) {
         ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
@@ -421,6 +436,18 @@
         return ngx_http_add_prefix_variable(cf, name, flags);
     }
 
+    if (flags & NGX_HTTP_VAR_CHANGEABLE) {
+        for (p = ngx_http_protect_variables_prefix; p->len; p++) {
+            if (name->len >= p->len
+                && ngx_strncasecmp(name->data, p->data, p->len) == 0)
+            {
+                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+                                   "similar to prefix variable \"%V\"", p);
+                return NULL;
+            }
+        }
+    }
+
     cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);
 
     key = cmcf->variables_keys->keys.elts;
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From harishkumarivaturi at gmail.com  Wed Jan 20 13:34:12 2021
From: harishkumarivaturi at gmail.com (HARISH KUMAR Ivaturi)
Date: Wed, 20 Jan 2021 14:34:12 +0100
Subject: nginx conf file for downloading a text file.
Message-ID: 

Hi All,

I would like to know where I went wrong in writing the nginx.conf file for
downloading the addDevice.txt file, which is located at
/var/www/files/addDevice.txt.

I used the following curl commands:

curl -k -v --http3 "https://localhost:8443/files/addDevice.txt"
curl -k -v --http3 "https://localhost:8443/static/addDevice.txt"

But I did not get any response.
The nginx.conf file is as follows:

worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/sites-available/*;

    log_format quic '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" "$quic" "$http3"';

    access_log logs/access.log quic;

    server {
        listen 8443 ssl;
        listen 8443 http3;
        ssl_protocols TLSv1.3 TLSv1.2;
        client_max_body_size 10M;
        ssl_certificate /home/ubuntu/nginxcertsimp/cert.crt;
        ssl_certificate_key /home/ubuntu/nginxcertsimp/cert.key;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_buffering off;
            proxy_pass https://localhost;
        }

        location /static {
            root /var/www/files/addDevice.txt;
            add_header Alt-Svc '$http3=":8443"; ma=86400';
        }
    }
}

Please help me with this so that I can get the total time for downloading a
file using the curl --write-out command.

Best Regards
Harish Kumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Wed Jan 20 16:50:02 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 20 Jan 2021 19:50:02 +0300
Subject: Http: protect prefix variable when add variable
In-Reply-To: 
References: 
Message-ID: <20210120165002.GW1147 at mdounin.ru>

Hello!

On Wed, Jan 20, 2021 at 07:59:50PM +0800, Jim T wrote:

> Hello!
> 
> There is a incident occur in our team, when we use auth_request_set like
> this in many server, and print $upstream_http_x_auth_request_email in log:
> 
> server {
>     listen 8080 reuseport;
>     server_name test.io;
>     location / {
>         auth_request /oauth2/auth;
>         auth_request_set $email $upstream_http_x_auth_request_email;
>     }
> }
> 
> But when we add a bad auth_request_set like below:
> server {
>     listen 8080 reuseport;
>     server_name test2.io;
>     location / {
>         auth_request /oauth2/auth;
>         auth_request_set $upstream_http_x_auth_request_email $email;
>     }
> }
> 
> We will lost all $upstream_http_x_auth_request_email even the server
> haven't use, because there is a new variable
> $upstream_http_x_auth_request_email, and the prefix variable can't be read
> any more.
> 
> So I think we can fix it like this, to avoid the wrong configuration:

Thank you for your suggestion and patch.
See comments below.

> # HG changeset patch
> # User Jinhua Tan <312841925 at qq.com>
> # Date 1611143620 -28800
> #      Wed Jan 20 19:53:40 2021 +0800
> # Node ID fd7e9432a59abcfcf380ddedb1e892098a54a845
> # Parent 61d0df8fcc7c630da35e832ba8e983db0061a3be
> Http: protect prefix variable when add variable
> 
> diff -r 61d0df8fcc7c -r fd7e9432a59a src/http/ngx_http_variables.c
> --- a/src/http/ngx_http_variables.c Tue Jan 19 20:35:17 2021 +0300
> +++ b/src/http/ngx_http_variables.c Wed Jan 20 19:53:40 2021 +0800
> @@ -393,6 +393,20 @@
>  };
> 
> 
> +static ngx_str_t ngx_http_protect_variables_prefix[] = {
> +    ngx_string("arg_"),
> +    ngx_string("http_"),
> +    ngx_string("sent_http_"),
> +    ngx_string("sent_trailer_"),
> +    ngx_string("cookie_"),
> +    ngx_string("arg_"),
> +    ngx_string("upstream_http_"),
> +    ngx_string("upstream_trailer_"),
> +    ngx_string("upstream_cookie_"),
> +    ngx_null_string
> +};

Using a static list of prefixes is certainly wrong: there can be
arbitrary prefix variables added by various modules, and limiting
checks to a predefined list is not going to work correctly.
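To illustrate "arbitrary prefix variables added by various modules": any
module may register its own prefix variable at configuration time, so no
compiled-in list can enumerate them. A minimal sketch of such a registration
in a module's preconfiguration handler (the "example_" prefix and both
function names are invented for the example):

static ngx_int_t
ngx_http_example_add_variables(ngx_conf_t *cf)
{
    ngx_http_variable_t  *var;
    ngx_str_t             name = ngx_string("example_");

    /* NGX_HTTP_VAR_PREFIX registers a prefix variable: every
     * $example_* variable is then resolved via its get handler */
    var = ngx_http_add_variable(cf, &name,
                                NGX_HTTP_VAR_CHANGEABLE|NGX_HTTP_VAR_PREFIX);
    if (var == NULL) {
        return NGX_ERROR;
    }

    var->get_handler = ngx_http_example_variable;  /* invented handler */

    return NGX_OK;
}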
> +
> +
>  ngx_http_variable_value_t  ngx_http_variable_null_value =
>      ngx_http_variable("");
>  ngx_http_variable_value_t  ngx_http_variable_true_value =
> @@ -410,6 +424,7 @@
>      ngx_hash_key_t             *key;
>      ngx_http_variable_t        *v;
>      ngx_http_core_main_conf_t  *cmcf;
> +    ngx_str_t                  *p;
> 
>      if (name->len == 0) {
>          ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
> @@ -421,6 +436,18 @@
>          return ngx_http_add_prefix_variable(cf, name, flags);
>      }
> 
> +    if (flags & NGX_HTTP_VAR_CHANGEABLE) {
> +        for (p = ngx_http_protect_variables_prefix; p->len; p++) {
> +            if (name->len >= p.len
> +                && ngx_strncasecmp(name->data, p->data, p->len) == 0)
> +            {
> +                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
> +                                   "similar to prefix variable \"%V\"",
> *p);
> +                return NULL;
> +            }
> +        }
> +    }
> +
>      cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);
> 
>      key = cmcf->variables_keys->keys.elts;

Prefix variables are intentionally implemented in a way which
allows one to overwrite them: for example, this is used in nginx
itself to provide a custom handler for some $http_* variables, such
as $http_host (which can be retrieved efficiently, since nginx has
a pointer to the Host header explicitly stored). That is, it is
quite normal that the ngx_http_add_variable() function you modify
is called with a variable which is more specific than a registered
prefix variable. And using the NGX_HTTP_VAR_CHANGEABLE flag to
distinguish when to fail looks wrong, as this is an unrelated
flag. You probably mean to detect user-added variables, such as
those introduced by "set" or "auth_request_set". There is no way
to detect such variables except in the particular directive used
to define the variable.

Further, I tend to think there are valid use cases where one may
want to actually redefine a particular variable.

Overall, the patch certainly needs more work, and I very much
doubt we want to introduce such checks at all, and hence that this
work needs to be done. A better solution might be to avoid making
mistakes like the one you've described above.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru  Wed Jan 20 16:52:41 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 20 Jan 2021 19:52:41 +0300
Subject: nginx conf file for downloading a text file.
In-Reply-To: 
References: 
Message-ID: <20210120165241.GX1147 at mdounin.ru>

Hello!

On Wed, Jan 20, 2021 at 02:34:12PM +0100, HARISH KUMAR Ivaturi wrote:

> Hi All,
> 
> I would like to know where I went wrong on writing the nginx.conf file for
> downloading addDevice.txt file which is located at
> /var/www/files/addDevice.txt.
> 
> I used curl command as follows:
> 
> curl -k -v --http3 "https://localhost:8443/files/addDevice.txt"
> curl -k -v --http3 "https://localhost:8443/static/addDevice.txt"
> 
> But did not get any response.

[...]

This is a mailing list for nginx developers. For user-level questions,
please use the nginx@ mailing list instead. Further details can be
found here:

http://nginx.org/en/support.html

Thank you.

-- 
Maxim Dounin
http://mdounin.ru/

From ping.zhao at intel.com  Thu Jan 21 01:44:28 2021
From: ping.zhao at intel.com (Zhao, Ping)
Date: Thu, 21 Jan 2021 01:44:28 +0000
Subject: [PATCH] Add io_uring support in AIO(async io) module
In-Reply-To: 
References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com>
 <7463caa5-76d0-f6f8-e9b6-0c0b3fe1077c@nginx.com>
Message-ID: 

Hi Vladimir,

No special/extra configuration is needed, but please check that 'aio on'
and 'sendfile off' are correctly set.
This is my Nginx config for reference:

user  nobody;
daemon off;
worker_processes  1;
error_log  error.log;

events {
    worker_connections  65535;
    use epoll;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    access_log  on;

    aio on;
    sendfile off;
    directio 2k;

    # Cache Configurations
    proxy_cache_path /mnt/cache0 levels=2 keys_zone=nginx-cache0:400m
                     max_size=1400g inactive=4d use_temp_path=off;
    ......

To better measure the disk io performance, I do the following steps:

1. To exclude other impacts and focus on the disk io part (this patch only
   affects the disk aio read process), use cgroup to limit Nginx memory
   usage; otherwise Nginx may also use memory as cache storage, which makes
   the test results less clear-cut (since most cache hits are then served
   from memory and the disk io bw is low, as in my previous mail, which
   didn't exclude the memory cache impact):

   echo 2G > memory.limit_in_bytes

   Use 'cgexec -g memory:nginx' to start Nginx.

2. Use wrk -t 100 -c 1000, with 25000 random http requests. My previous
   test used 200 connections; compared with 1000 connections, libaio
   performance drops more when the connection number increases from 200 to
   1000, but io_uring's doesn't. It's another advantage of io_uring.

3. First clean the cache disk and run the test for 30 minutes to let Nginx
   store as many cache files on the nvme disk as possible.

4. Rerun the test; this time Nginx will use ngx_file_aio_read to read the
   cache files from the nvme cache disk. Use iostat to track the io data.
   The data should align with the NIC bw, since all data should come from
   the cache disk (with the memory-as-cache-storage impact excluded).

The test results follow:

Nginx worker_processes 1:
            4k        100k      1M
  io_uring  220MB/s   1GB/s     1.3GB/s
  libaio    70MB/s    250MB/s   600MB/s (with -c 200: 1.0GB/s)

Nginx worker_processes 4:
            4k        100k      1M
  io_uring  800MB/s   2.5GB/s   2.6GB/s (my nvme disk io maximum bw)
  libaio    250MB/s   900MB/s   2.0GB/s

So for small requests, io_uring shows a huge improvement over libaio. In
the previous mail, because I didn't exclude the memory cache storage
impact, most cache files were stored in memory and very few came from disk
in the 4k/100k cases, so that data was not correct (for 1M, the cache was
too big to fit in memory, so it was on disk). Also, I enabled the directio
option "directio 2k" this time to avoid this.

Regards,
Ping

-----Original Message-----
From: nginx-devel  On Behalf Of Vladimir Homutov
Sent: Wednesday, January 20, 2021 12:43 AM
To: nginx-devel at nginx.org
Subject: Re: [PATCH] Add io_uring support in AIO(async io) module

On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote:
> It depends on if disk io is the performance hot spot or not. If yes,
> io_uring shows improvement than libaio. With 4KB/100KB length 1 Nginx
> thread it's hard to see performance difference because iostat is only
> around ~10MB/100MB per second. Disk io is not the performance bottle
> neck, both libaio and io_uring have the same performance. If you
> increase request size or Nginx threads number, for example 1MB length
> or Nginx thread number 4. In this case, disk io became the performance
> bottle neck, you will see io_uring performance improvement.

Can you please provide full test results with specific nginx configuration?
_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

From dronimal at yandex-team.ru  Thu Jan 21 15:24:58 2021
From: dronimal at yandex-team.ru (Andrey Bich)
Date: Thu, 21 Jan 2021 18:24:58 +0300
Subject: Fix proxy_bind with upstreams with keepalive
Message-ID: <406111611242639@mail.yandex-team.ru>

An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Thu Jan 21 17:55:41 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 21 Jan 2021 20:55:41 +0300
Subject: Fix proxy_bind with upstreams with keepalive
In-Reply-To: <406111611242639@mail.yandex-team.ru>
References: <406111611242639@mail.yandex-team.ru>
Message-ID: <20210121175541.GZ1147@mdounin.ru>

Hello!

On Thu, Jan 21, 2021 at 06:24:58PM +0300, Andrey Bich wrote:

> There was a problem that we encountered: proxy_bind option is sometimes
> ignored when keepalive enabled in target upstream.
> In search for connection in cache the only comparison is with target
> address and local address set by proxy_bind is ignored.
> I'd like to propose the following change to fix this issue.
> Would like to receive your comments.

Thank you for your patch.

The cache of upstream connections takes into account only the address
of the server it connects to. If you want to take into account other
connection-related properties, such as a different proxy_bind,
proxy_socket_keepalive, or various SSL options such as the SNI name or
the ciphers/protocols used, you are expected to take care of this
yourself, either by using different upstream{} blocks or by not using
the keepalive cache.

Further, taking proxy_bind into account doesn't look right, at least
in some use cases. For example, consider a configuration where
connections to a backend are configured to use random source IP
addresses from a set of IP addresses available on the server (such
configurations are sometimes used to avoid hitting the 64k connections
limit). With your patch, checking whether the source address matches
the one selected for a particular request will needlessly reject some
connections.

-- 
Maxim Dounin
http://mdounin.ru/

From xeioex at nginx.com  Thu Jan 21 18:45:29 2021
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Thu, 21 Jan 2021 18:45:29 +0000
Subject: [njs] Modules: added ngx.fetch().
Message-ID: 

details:   https://hg.nginx.org/njs/rev/81040de6b085
branches:  
changeset: 1593:81040de6b085
user:      Dmitry Volyntsev 
date:      Thu Jan 21 18:44:58 2021 +0000
description:
Modules: added ngx.fetch().

This is an initial implementation of the Fetch API.

The following init options are supported: body, headers,
buffer_size (nginx specific), max_response_body_size (nginx specific),
method.

The following properties and methods of the Response object are
implemented: arrayBuffer(), bodyUsed, json(), headers, ok, redirect,
status, statusText, text(), type, url.

The following properties and methods of the Headers object are
implemented: get(), getAll(), has().

Notable limitations: only the http:// scheme is supported, redirects
are not handled.

In collaboration with Hong Zhi Dao.
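To give a feel for the API described above, a minimal njs sketch of a
js_content handler using it (the backend address, URI and header are
invented for the example; the buffer_size and max_response_body_size values
shown are just the defaults visible in the code below):

function content(r) {
    ngx.fetch('http://127.0.0.1:8080/data', {
        method: 'GET',
        headers: {'Accept': 'application/json'},
        buffer_size: 4096,
        max_response_body_size: 32768
    })
    .then(function (reply) {
        if (!reply.ok) {
            throw new Error('unexpected status: ' + reply.status);
        }
        return reply.text();
    })
    .then(function (body) {
        r.return(200, body);
    })
    .catch(function (e) {
        r.return(500, e.message);
    });
}

export default {content};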
diffstat: nginx/config | 3 +- nginx/ngx_http_js_module.c | 65 +- nginx/ngx_js.c | 28 + nginx/ngx_js.h | 24 +- nginx/ngx_js_fetch.c | 2212 ++++++++++++++++++++++++++++++++++++++++++ nginx/ngx_js_fetch.h | 18 + nginx/ngx_stream_js_module.c | 40 +- 7 files changed, 2378 insertions(+), 12 deletions(-) diffs (truncated from 2551 to 1000 lines): diff -r dc7d94c05669 -r 81040de6b085 nginx/config --- a/nginx/config Mon Jan 11 19:53:10 2021 +0000 +++ b/nginx/config Thu Jan 21 18:44:58 2021 +0000 @@ -1,7 +1,8 @@ ngx_addon_name="ngx_js_module" NJS_DEPS="$ngx_addon_dir/ngx_js.h" -NJS_SRCS="$ngx_addon_dir/ngx_js.c" +NJS_SRCS="$ngx_addon_dir/ngx_js.c \ + $ngx_addon_dir/ngx_js_fetch.c" if [ $HTTP != NO ]; then ngx_module_type=HTTP diff -r dc7d94c05669 -r 81040de6b085 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Mon Jan 11 19:53:10 2021 +0000 +++ b/nginx/ngx_http_js_module.c Thu Jan 21 18:44:58 2021 +0000 @@ -179,6 +179,13 @@ static njs_host_event_t ngx_http_js_set_ static void ngx_http_js_clear_timer(njs_external_ptr_t external, njs_host_event_t event); static void ngx_http_js_timer_handler(ngx_event_t *ev); +static ngx_pool_t *ngx_http_js_pool(njs_vm_t *vm, ngx_http_request_t *r); +static ngx_resolver_t *ngx_http_js_resolver(njs_vm_t *vm, + ngx_http_request_t *r); +static ngx_msec_t ngx_http_js_resolver_timeout(njs_vm_t *vm, + ngx_http_request_t *r); +static void ngx_http_js_handle_vm_event(ngx_http_request_t *r, + njs_vm_event_t vm_event, njs_value_t *args, njs_uint_t nargs); static void ngx_http_js_handle_event(ngx_http_request_t *r, njs_vm_event_t vm_event, njs_value_t *args, njs_uint_t nargs); @@ -576,11 +583,15 @@ static njs_vm_ops_t ngx_http_js_ops = { static uintptr_t ngx_http_js_uptr[] = { offsetof(ngx_http_request_t, connection), + (uintptr_t) ngx_http_js_pool, + (uintptr_t) ngx_http_js_resolver, + (uintptr_t) ngx_http_js_resolver_timeout, + (uintptr_t) ngx_http_js_handle_event, }; static njs_vm_meta_t ngx_http_js_metas = { - .size = 1, + .size = 5, .values = ngx_http_js_uptr }; @@ -2754,7 +2765,7 @@ ngx_http_js_subrequest_done(ngx_http_req return NGX_ERROR; } - ngx_http_js_handle_event(r->parent, vm_event, njs_value_arg(&reply), 1); + ngx_http_js_handle_vm_event(r->parent, vm_event, njs_value_arg(&reply), 1); return NGX_OK; } @@ -2895,7 +2906,6 @@ ngx_http_js_clear_timer(njs_external_ptr static void ngx_http_js_timer_handler(ngx_event_t *ev) { - ngx_connection_t *c; ngx_http_request_t *r; ngx_http_js_event_t *js_event; @@ -2903,16 +2913,41 @@ ngx_http_js_timer_handler(ngx_event_t *e r = js_event->request; - c = r->connection; - ngx_http_js_handle_event(r, js_event->vm_event, NULL, 0); - - ngx_http_run_posted_requests(c); +} + + +static ngx_pool_t * +ngx_http_js_pool(njs_vm_t *vm, ngx_http_request_t *r) +{ + return r->pool; +} + + +static ngx_resolver_t * +ngx_http_js_resolver(njs_vm_t *vm, ngx_http_request_t *r) +{ + ngx_http_core_loc_conf_t *clcf; + + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); + + return clcf->resolver; +} + + +static ngx_msec_t +ngx_http_js_resolver_timeout(njs_vm_t *vm, ngx_http_request_t *r) +{ + ngx_http_core_loc_conf_t *clcf; + + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); + + return clcf->resolver_timeout; } static void -ngx_http_js_handle_event(ngx_http_request_t *r, njs_vm_event_t vm_event, +ngx_http_js_handle_vm_event(ngx_http_request_t *r, njs_vm_event_t vm_event, njs_value_t *args, njs_uint_t nargs) { njs_int_t rc; @@ -2925,6 +2960,10 @@ ngx_http_js_handle_event(ngx_http_reques rc = njs_vm_run(ctx->vm); + 
ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "http js post event handler rc: %i event: %p", + (ngx_int_t) rc, vm_event); + if (rc == NJS_ERROR) { njs_vm_retval_string(ctx->vm, &exception); @@ -2940,6 +2979,16 @@ ngx_http_js_handle_event(ngx_http_reques } +static void +ngx_http_js_handle_event(ngx_http_request_t *r, njs_vm_event_t vm_event, + njs_value_t *args, njs_uint_t nargs) +{ + ngx_http_js_handle_vm_event(r, vm_event, args, nargs); + + ngx_http_run_posted_requests(r->connection); +} + + static char * ngx_http_js_init_main_conf(ngx_conf_t *cf, void *conf) { diff -r dc7d94c05669 -r 81040de6b085 nginx/ngx_js.c --- a/nginx/ngx_js.c Mon Jan 11 19:53:10 2021 +0000 +++ b/nginx/ngx_js.c Thu Jan 21 18:44:58 2021 +0000 @@ -9,6 +9,7 @@ #include #include #include "ngx_js.h" +#include "ngx_js_fetch.h" static njs_external_t ngx_js_ext_core[] = { @@ -50,6 +51,17 @@ static njs_external_t ngx_js_ext_core[] .magic32 = NGX_LOG_ERR, } }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("fetch"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_js_ext_fetch, + } + }, }; @@ -117,10 +129,16 @@ ngx_js_string(njs_vm_t *vm, njs_value_t ngx_int_t ngx_js_core_init(njs_vm_t *vm, ngx_log_t *log) { + ngx_int_t rc; njs_int_t ret, proto_id; njs_str_t name; njs_opaque_value_t value; + rc = ngx_js_fetch_init(vm, log); + if (rc != NGX_OK) { + return NGX_ERROR; + } + proto_id = njs_vm_external_prototype(vm, ngx_js_ext_core, njs_nitems(ngx_js_ext_core)); if (proto_id < 0) { @@ -178,6 +196,16 @@ ngx_js_ext_constant(njs_vm_t *vm, njs_ob njs_int_t +ngx_js_ext_boolean(njs_vm_t *vm, njs_object_prop_t *prop, + njs_value_t *value, njs_value_t *setval, njs_value_t *retval) +{ + njs_value_boolean_set(retval, njs_vm_prop_magic32(prop)); + + return NJS_OK; +} + + +njs_int_t ngx_js_ext_log(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t level) { diff -r dc7d94c05669 -r 81040de6b085 nginx/ngx_js.h --- a/nginx/ngx_js.h Mon Jan 11 19:53:10 2021 +0000 +++ b/nginx/ngx_js.h Thu Jan 21 18:44:58 2021 +0000 @@ -20,10 +20,28 @@ #define NGX_JS_BUFFER 2 #define NGX_JS_PROTO_MAIN 0 +#define NGX_JS_PROTO_RESPONSE 1 -#define ngx_external_connection(vm, ext) \ - (*((ngx_connection_t **) ((u_char *) ext + njs_vm_meta(vm, 0)))) +typedef ngx_pool_t *(*ngx_external_pool_pt)(njs_vm_t *vm, njs_external_ptr_t e); +typedef void (*ngx_js_event_handler_pt)(njs_external_ptr_t e, + njs_vm_event_t vm_event, njs_value_t *args, njs_uint_t nargs); +typedef ngx_resolver_t *(*ngx_external_resolver_pt)(njs_vm_t *vm, + njs_external_ptr_t e); +typedef ngx_msec_t (*ngx_external_resolver_timeout_pt)(njs_vm_t *vm, + njs_external_ptr_t e); + + +#define ngx_external_connection(vm, e) \ + (*((ngx_connection_t **) ((u_char *) (e) + njs_vm_meta(vm, 0)))) +#define ngx_external_pool(vm, e) \ + ((ngx_external_pool_pt) njs_vm_meta(vm, 1))(vm, e) +#define ngx_external_resolver(vm, e) \ + ((ngx_external_resolver_pt) njs_vm_meta(vm, 2))(vm, e) +#define ngx_external_resolver_timeout(vm, e) \ + ((ngx_external_resolver_timeout_pt) njs_vm_meta(vm, 3))(vm, e) +#define ngx_external_event_handler(vm, e) \ + ((ngx_js_event_handler_pt) njs_vm_meta(vm, 4)) #define ngx_js_prop(vm, type, value, start, len) \ @@ -41,6 +59,8 @@ njs_int_t ngx_js_ext_string(njs_vm_t *vm njs_value_t *value, njs_value_t *setval, njs_value_t *retval); njs_int_t ngx_js_ext_constant(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); +njs_int_t ngx_js_ext_boolean(njs_vm_t *vm, 
njs_object_prop_t *prop, + njs_value_t *value, njs_value_t *setval, njs_value_t *retval); ngx_int_t ngx_js_core_init(njs_vm_t *vm, ngx_log_t *log); diff -r dc7d94c05669 -r 81040de6b085 nginx/ngx_js_fetch.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/nginx/ngx_js_fetch.c Thu Jan 21 18:44:58 2021 +0000 @@ -0,0 +1,2212 @@ + +/* + * Copyright (C) Dmitry Volyntsev + * Copyright (C) hongzhidao + * Copyright (C) NGINX, Inc. + */ + + +#include +#include +#include +#include +#include "ngx_js.h" + + +typedef struct ngx_js_http_s ngx_js_http_t; + + +typedef struct { + ngx_uint_t state; + ngx_uint_t code; + u_char *status_text; + u_char *status_text_end; + ngx_uint_t count; + ngx_flag_t chunked; + off_t content_length_n; + + u_char *header_name_start; + u_char *header_name_end; + u_char *header_start; + u_char *header_end; +} ngx_js_http_parse_t; + + +typedef struct { + u_char *pos; + uint64_t chunk_size; + uint8_t state; + uint8_t last; +} ngx_js_http_chunk_parse_t; + + +struct ngx_js_http_s { + ngx_log_t *log; + ngx_pool_t *pool; + + njs_vm_t *vm; + njs_external_ptr_t external; + njs_vm_event_t vm_event; + ngx_js_event_handler_pt event_handler; + + ngx_resolver_ctx_t *ctx; + ngx_addr_t addr; + ngx_addr_t *addrs; + ngx_uint_t naddrs; + ngx_uint_t naddr; + in_port_t port; + + ngx_peer_connection_t peer; + ngx_msec_t timeout; + + ngx_int_t buffer_size; + ngx_int_t max_response_body_size; + + njs_str_t url; + ngx_array_t headers; + + ngx_buf_t *buffer; + ngx_buf_t *chunk; + njs_chb_t chain; + + njs_opaque_value_t reply; + njs_opaque_value_t promise; + njs_opaque_value_t promise_callbacks[2]; + + uint8_t done; + uint8_t body_used; + ngx_js_http_parse_t http_parse; + ngx_js_http_chunk_parse_t http_chunk_parse; + ngx_int_t (*process)(ngx_js_http_t *http); +}; + + +#define ngx_js_http_error(http, err, fmt, ...) 
\ + do { \ + njs_vm_value_error_set((http)->vm, njs_value_arg(&(http)->reply), \ + fmt, ##__VA_ARGS__); \ + ngx_js_http_fetch_done(http, &(http)->reply, NJS_ERROR); \ + } while (0) + + +static ngx_js_http_t *ngx_js_http_alloc(njs_vm_t *vm, ngx_pool_t *pool, + ngx_log_t *log); +static void ngx_js_resolve_handler(ngx_resolver_ctx_t *ctx); +static njs_int_t ngx_js_fetch_result(njs_vm_t *vm, ngx_js_http_t *http, + njs_value_t *result, njs_int_t rc); +static njs_int_t ngx_js_fetch_promissified_result(njs_vm_t *vm, + njs_value_t *result, njs_int_t rc); +static void ngx_js_http_fetch_done(ngx_js_http_t *http, + njs_opaque_value_t *retval, njs_int_t rc); +static njs_int_t ngx_js_http_promise_trampoline(njs_vm_t *vm, + njs_value_t *args, njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_js_http_connect(ngx_js_http_t *http); +static njs_int_t ngx_js_http_next(ngx_js_http_t *http); +static void ngx_js_http_write_handler(ngx_event_t *wev); +static void ngx_js_http_read_handler(ngx_event_t *rev); +static ngx_int_t ngx_js_http_process_status_line(ngx_js_http_t *http); +static ngx_int_t ngx_js_http_process_headers(ngx_js_http_t *http); +static ngx_int_t ngx_js_http_process_body(ngx_js_http_t *http); +static ngx_int_t ngx_js_http_parse_status_line(ngx_js_http_parse_t *hp, + ngx_buf_t *b); +static ngx_int_t ngx_js_http_parse_header_line(ngx_js_http_parse_t *hp, + ngx_buf_t *b); +static ngx_int_t ngx_js_http_parse_chunked(ngx_js_http_chunk_parse_t *hcp, + ngx_buf_t *b, njs_chb_t *chain); +static void ngx_js_http_dummy_handler(ngx_event_t *ev); + +static njs_int_t ngx_response_js_ext_headers_get(njs_vm_t *vm, + njs_value_t *args, njs_uint_t nargs, njs_index_t as_array); +static njs_int_t ngx_response_js_ext_headers_has(njs_vm_t *vm, + njs_value_t *args, njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_response_js_ext_header(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_response_js_ext_keys(njs_vm_t *vm, njs_value_t *value, + njs_value_t *keys); +static njs_int_t ngx_response_js_ext_status(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_response_js_ext_status_text(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_response_js_ext_ok(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_response_js_ext_body_used(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_response_js_ext_type(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_response_js_ext_body(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); + + +static njs_external_t ngx_js_ext_http_response_headers[] = { + + { + .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, + .name.symbol = NJS_SYMBOL_TO_STRING_TAG, + .u.property = { + .value = "Headers", + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("get"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_response_js_ext_headers_get, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("getAll"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_response_js_ext_headers_get, + .magic8 = 1 
+ } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("has"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_response_js_ext_headers_has, + } + }, + +}; + + +static njs_external_t ngx_js_ext_http_response[] = { + + { + .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, + .name.symbol = NJS_SYMBOL_TO_STRING_TAG, + .u.property = { + .value = "Response", + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("arrayBuffer"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_response_js_ext_body, +#define NGX_JS_BODY_ARRAY_BUFFER 0 +#define NGX_JS_BODY_JSON 1 +#define NGX_JS_BODY_TEXT 2 + .magic8 = NGX_JS_BODY_ARRAY_BUFFER + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("bodyUsed"), + .enumerable = 1, + .u.property = { + .handler = ngx_response_js_ext_body_used, + } + }, + + { + .flags = NJS_EXTERN_OBJECT, + .name.string = njs_str("headers"), + .enumerable = 1, + .u.object = { + .enumerable = 1, + .properties = ngx_js_ext_http_response_headers, + .nproperties = njs_nitems(ngx_js_ext_http_response_headers), + .prop_handler = ngx_response_js_ext_header, + .keys = ngx_response_js_ext_keys, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("json"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_response_js_ext_body, + .magic8 = NGX_JS_BODY_JSON + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("ok"), + .enumerable = 1, + .u.property = { + .handler = ngx_response_js_ext_ok, + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("redirected"), + .enumerable = 1, + .u.property = { + .handler = ngx_js_ext_boolean, + .magic32 = 0, + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("status"), + .enumerable = 1, + .u.property = { + .handler = ngx_response_js_ext_status, + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("statusText"), + .enumerable = 1, + .u.property = { + .handler = ngx_response_js_ext_status_text, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("text"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_response_js_ext_body, + .magic8 = NGX_JS_BODY_TEXT + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("type"), + .enumerable = 1, + .u.property = { + .handler = ngx_response_js_ext_type, + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("url"), + .enumerable = 1, + .u.property = { + .handler = ngx_js_ext_string, + .magic32 = offsetof(ngx_js_http_t, url), + } + }, +}; + + +njs_int_t +ngx_js_ext_fetch(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused) +{ + int64_t i, length; + njs_int_t ret; + njs_str_t method, body, name, header; + ngx_url_t u; + njs_bool_t has_host; + ngx_pool_t *pool; + njs_value_t *init, *value, *headers, *keys; + ngx_js_http_t *http; + ngx_connection_t *c; + ngx_resolver_ctx_t *ctx; + njs_external_ptr_t external; + njs_opaque_value_t *start, lvalue, headers_value; + + static const njs_str_t body_key = njs_str("body"); + static const njs_str_t headers_key = njs_str("headers"); + static const njs_str_t buffer_size_key = njs_str("buffer_size"); + static const njs_str_t body_size_key = njs_str("max_response_body_size"); + static const njs_str_t method_key = njs_str("method"); + + external = njs_vm_external(vm, njs_argument(args, 0)); + if (external == NULL) { + 
njs_vm_error(vm, "\"this\" is not an external"); + return NJS_ERROR; + } + + c = ngx_external_connection(vm, external); + pool = ngx_external_pool(vm, external); + + http = ngx_js_http_alloc(vm, pool, c->log); + if (http == NULL) { + return NJS_ERROR; + } + + http->external = external; + http->event_handler = ngx_external_event_handler(vm, external); + http->buffer_size = 4096; + http->max_response_body_size = 32 * 1024; + + ret = ngx_js_string(vm, njs_arg(args, nargs, 1), &http->url); + if (ret != NJS_OK) { + njs_vm_error(vm, "failed to convert url arg"); + goto fail; + } + + ngx_memzero(&u, sizeof(ngx_url_t)); + + u.url.len = http->url.length; + u.url.data = http->url.start; + u.default_port = 80; + u.uri_part = 1; + u.no_resolve = 1; + + if (u.url.len > 7 + && ngx_strncasecmp(u.url.data, (u_char *) "http://", 7) == 0) + { + u.url.len -= 7; + u.url.data += 7; + + } else { + njs_vm_error(vm, "unsupported URL prefix"); + goto fail; + } + + if (ngx_parse_url(pool, &u) != NGX_OK) { + njs_vm_error(vm, "invalid url"); + goto fail; + } + + init = njs_arg(args, nargs, 2); + + method = njs_str_value("GET"); + body = njs_str_value(""); + headers = NULL; + + if (njs_value_is_object(init)) { + value = njs_vm_object_prop(vm, init, &method_key, &lvalue); + if (value != NULL && ngx_js_string(vm, value, &method) != NGX_OK) { + goto fail; + } + + headers = njs_vm_object_prop(vm, init, &headers_key, &headers_value); + if (headers != NULL && !njs_value_is_object(headers)) { + njs_vm_error(vm, "headers is not an object"); + goto fail; + } + + value = njs_vm_object_prop(vm, init, &body_key, &lvalue); + if (value != NULL && ngx_js_string(vm, value, &body) != NGX_OK) { + goto fail; + } + + value = njs_vm_object_prop(vm, init, &buffer_size_key, &lvalue); + if (value != NULL + && ngx_js_integer(vm, value, &http->buffer_size) + != NGX_OK) + { + goto fail; + } + + value = njs_vm_object_prop(vm, init, &body_size_key, &lvalue); + if (value != NULL + && ngx_js_integer(vm, value, &http->max_response_body_size) + != NGX_OK) + { + goto fail; + } + } + + njs_chb_init(&http->chain, njs_vm_memory_pool(vm)); + + njs_chb_append(&http->chain, method.start, method.length); + njs_chb_append_literal(&http->chain, " "); + + if (u.uri.len == 0 || u.uri.data[0] != '/') { + njs_chb_append_literal(&http->chain, "/"); + } + + njs_chb_append(&http->chain, u.uri.data, u.uri.len); + njs_chb_append_literal(&http->chain, " HTTP/1.1" CRLF); + njs_chb_append_literal(&http->chain, "Connection: close" CRLF); + + has_host = 0; + + if (headers != NULL) { + keys = njs_vm_object_keys(vm, headers, njs_value_arg(&lvalue)); + if (keys == NULL) { + goto fail; + } + + start = (njs_opaque_value_t *) njs_vm_array_start(vm, keys); + if (start == NULL) { + goto fail; + } + + (void) njs_vm_array_length(vm, keys, &length); + + for (i = 0; i < length; i++) { + if (ngx_js_string(vm, njs_value_arg(start), &name) != NGX_OK) { + goto fail; + } + + start++; + + value = njs_vm_object_prop(vm, headers, &name, &lvalue); + if (ret != NJS_OK) { + goto fail; + } + + if (njs_value_is_null_or_undefined(value)) { + continue; + } + + if (ngx_js_string(vm, value, &header) != NGX_OK) { + goto fail; + } + + if (name.length == 4 + && ngx_strncasecmp(name.start, (u_char *) "Host", 4) == 0) + { + has_host = 1; + } + + njs_chb_append(&http->chain, name.start, name.length); + njs_chb_append_literal(&http->chain, ": "); + njs_chb_append(&http->chain, header.start, header.length); + njs_chb_append_literal(&http->chain, CRLF); + } + } + + if (!has_host) { + 
njs_chb_append_literal(&http->chain, "Host: "); + njs_chb_append(&http->chain, u.host.data, u.host.len); + njs_chb_append_literal(&http->chain, CRLF); + } + + if (body.length != 0) { + njs_chb_sprintf(&http->chain, 32, "Content-Length: %uz" CRLF CRLF, + body.length); + njs_chb_append(&http->chain, body.start, body.length); + + } else { + njs_chb_append_literal(&http->chain, CRLF); + } + + if (u.addrs == NULL) { + ctx = ngx_resolve_start(ngx_external_resolver(vm, external), NULL); + if (ctx == NULL) { + njs_vm_memory_error(vm); + return NJS_ERROR; + } + + if (ctx == NGX_NO_RESOLVER) { + njs_vm_error(vm, "no resolver defined"); + goto fail; + } + + http->ctx = ctx; + http->port = u.port; + + ctx->name = u.host; + ctx->handler = ngx_js_resolve_handler; + ctx->data = http; + ctx->timeout = ngx_external_resolver_timeout(vm, external); + + ret = ngx_resolve_name(http->ctx); + if (ret != NGX_OK) { + http->ctx = NULL; + njs_vm_memory_error(vm); + return NJS_ERROR; + } + + } else { + http->naddrs = 1; + ngx_memcpy(&http->addr, &u.addrs[0], sizeof(ngx_addr_t)); + http->addrs = &http->addr; + + ret = ngx_js_http_connect(http); + } + + return ngx_js_fetch_result(vm, http, njs_value_arg(&http->reply), ret); + +fail: + + return ngx_js_fetch_result(vm, http, njs_vm_retval(vm), NJS_ERROR); +} + + +static ngx_js_http_t * +ngx_js_http_alloc(njs_vm_t *vm, ngx_pool_t *pool, ngx_log_t *log) +{ + ngx_js_http_t *http; + + http = ngx_pcalloc(pool, sizeof(ngx_js_http_t)); + if (http == NULL) { + goto failed; + } + + http->pool = pool; + http->log = log; + http->vm = vm; + + http->timeout = 10000; + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, log, 0, "js http alloc:%p", http); + + return http; + +failed: + + njs_vm_error(vm, "internal error"); + + return NULL; +} + + +static void +ngx_js_resolve_handler(ngx_resolver_ctx_t *ctx) +{ + u_char *p; + size_t len; + socklen_t socklen; + ngx_uint_t i; + ngx_js_http_t *http; + struct sockaddr *sockaddr; + + http = ctx->data; + + if (ctx->state) { + ngx_js_http_error(http, 0, "\"%V\" could not be resolved (%i: %s)", + &ctx->name, ctx->state, + ngx_resolver_strerror(ctx->state)); + return; + } + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, http->log, 0, + "http fetch resolved: \"%V\"", &ctx->name); + +#if (NGX_DEBUG) + { + u_char text[NGX_SOCKADDR_STRLEN]; + ngx_str_t addr; + ngx_uint_t i; + + addr.data = text; + + for (i = 0; i < ctx->naddrs; i++) { + addr.len = ngx_sock_ntop(ctx->addrs[i].sockaddr, ctx->addrs[i].socklen, + text, NGX_SOCKADDR_STRLEN, 0); + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, http->log, 0, + "name was resolved to \"%V\"", &addr); + } + } +#endif + + http->naddrs = ctx->naddrs; + http->addrs = ngx_pcalloc(http->pool, http->naddrs * sizeof(ngx_addr_t)); + + if (http->addrs == NULL) { + goto failed; + } + + for (i = 0; i < ctx->naddrs; i++) { + socklen = ctx->addrs[i].socklen; + + sockaddr = ngx_palloc(http->pool, socklen); + if (sockaddr == NULL) { + goto failed; + } + + ngx_memcpy(sockaddr, ctx->addrs[i].sockaddr, socklen); + ngx_inet_set_port(sockaddr, http->port); + + http->addrs[i].sockaddr = sockaddr; + http->addrs[i].socklen = socklen; + + p = ngx_pnalloc(http->pool, NGX_SOCKADDR_STRLEN); + if (p == NULL) { + goto failed; + } + + len = ngx_sock_ntop(sockaddr, socklen, p, NGX_SOCKADDR_STRLEN, 1); + http->addrs[i].name.len = len; + http->addrs[i].name.data = p; + } + + ngx_resolve_name_done(ctx); + http->ctx = NULL; + + (void) ngx_js_http_connect(http); + + return; + +failed: + + ngx_js_http_error(http, 0, "memory error"); +} + + +static void 
+njs_js_http_destructor(njs_external_ptr_t external, njs_host_event_t host)
+{
+    ngx_js_http_t  *http;
+
+    http = host;
+
+    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, http->log, 0, "js http destructor:%p",
+                   http);
+
+    if (http->ctx != NULL) {
+        ngx_resolve_name_done(http->ctx);
+        http->ctx = NULL;
+    }
+
+    if (http->peer.connection != NULL) {
+        ngx_close_connection(http->peer.connection);
+        http->peer.connection = NULL;
+    }
+}
+
+
+static njs_int_t
+ngx_js_fetch_result(njs_vm_t *vm, ngx_js_http_t *http, njs_value_t *result,
+    njs_int_t rc)
+{
+    njs_int_t            ret;
+    njs_function_t      *callback;
+    njs_vm_event_t       vm_event;
+    njs_opaque_value_t   arguments[2];
+
+    ret = njs_vm_promise_create(vm, njs_value_arg(&http->promise),
+                                njs_value_arg(&http->promise_callbacks));
+    if (ret != NJS_OK) {
+        goto error;
+    }
+
+    callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline);
+    if (callback == NULL) {
+        goto error;
+    }
+
+    vm_event = njs_vm_add_event(vm, callback, 1, http, njs_js_http_destructor);
+    if (vm_event == NULL) {
+        goto error;
+    }
+
+    http->vm_event = vm_event;
+
+    if (rc == NJS_ERROR) {
+        njs_value_assign(&arguments[0], &http->promise_callbacks[1]);
+        njs_value_assign(&arguments[1], result);
+
+        ret = njs_vm_post_event(vm, vm_event, njs_value_arg(&arguments), 2);
+        if (ret == NJS_ERROR) {
+            goto error;
+        }
+    }
+
+    njs_vm_retval_set(vm, njs_value_arg(&http->promise));
+
+    return NJS_OK;
+
+error:
+
+    njs_vm_error(vm, "internal error");
+
+    return NJS_ERROR;
+}
+
+
+static njs_int_t
+ngx_js_fetch_promissified_result(njs_vm_t *vm, njs_value_t *result,
+    njs_int_t rc)
+{
+    njs_int_t            ret;
+    njs_function_t      *callback;
+    njs_vm_event_t       vm_event;
+    njs_opaque_value_t   retval, arguments[2];
+
+    ret = njs_vm_promise_create(vm, njs_value_arg(&retval),
+                                njs_value_arg(&arguments));
+    if (ret != NJS_OK) {
+        goto error;
+    }
+
+    callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline);

From h312841925 at gmail.com  Sat Jan 23 14:29:30 2021
From: h312841925 at gmail.com (Jim T)
Date: Sat, 23 Jan 2021 22:29:30 +0800
Subject: Http: protect prefix variable when add variable
In-Reply-To: 
References: 
Message-ID: 

Hi Maxim,

Thanks for your reply. What I actually want to discuss is that I think
nginx should help users check the configuration and make sure it matches
their expectations. When users set a config like auth_request_set "$email
$upstream_http_x_auth_request_email;" in many servers, a bad definition in
a single server takes effect on all of them, and I don't think users can
imagine this. Expecting users to avoid mistakes like this by themselves
may be difficult.

So I think we should either reject this kind of variable or raise a
warning. Or should we make these variables work only in a specific
context? Could you share more advice for this case? Thanks.

Best Regards,
Jinhua

On Thu, Jan 21, 2021 at 20:01, ... wrote:

> Date: Wed, 20 Jan 2021 19:50:02 +0300
> From: Maxim Dounin 
> To: nginx-devel at nginx.org
> Subject: Re: Http: protect prefix variable when add variable
> Message-ID: <20210120165002.GW1147 at mdounin.ru>
> Content-Type: text/plain; charset=us-ascii
> 
> Hello!
> 
> On Wed, Jan 20, 2021 at 07:59:50PM +0800, Jim T wrote:
> 
> > Hello!
> > > > There is a incident occur in our team, when we use auth_request_set like > > this in many server, and print $upstream_http_x_auth_request_email in > log: > > > > server { > > listen 8080 reuseport; > > server_name test.io; > > location / { > > auth_request /oauth2/auth; > > auth_request_set $email $upstream_http_x_auth_request_email; > > } > > } > > > > But when we add a bad auth_request_set like below: > > server { > > listen 8080 reuseport; > > server_name test2.io; > > location / { > > auth_request /oauth2/auth; > > auth_request_set $upstream_http_x_auth_request_email $email; > > } > > } > > > > We will lost all $upstream_http_x_auth_request_email even the server > > haven't use, because there is a new variable > > $upstream_http_x_auth_request_email, and the prefix variable can't be > read > > any more. > > > > So I think we can fix it like this, to avoid the wrong configuration: > > Thank you for your suggestion and patch. > See comments below. > > > # HG changeset patch > > # User Jinhua Tan <312841925 at qq.com> > > # Date 1611143620 -28800 > > # Wed Jan 20 19:53:40 2021 +0800 > > # Node ID fd7e9432a59abcfcf380ddedb1e892098a54a845 > > # Parent 61d0df8fcc7c630da35e832ba8e983db0061a3be > > Http: protect prefix variable when add variable > > > > diff -r 61d0df8fcc7c -r fd7e9432a59a src/http/ngx_http_variables.c > > --- a/src/http/ngx_http_variables.c Tue Jan 19 20:35:17 2021 +0300 > > +++ b/src/http/ngx_http_variables.c Wed Jan 20 19:53:40 2021 +0800 > > @@ -393,6 +393,20 @@ > > }; > > > > > > +static ngx_str_t ngx_http_protect_variables_prefix[] = { > > + ngx_string("arg_"), > > + ngx_string("http_"), > > + ngx_string("sent_http_"), > > + ngx_string("sent_trailer_"), > > + ngx_string("cookie_"), > > + ngx_string("arg_"), > > + ngx_string("upstream_http_"), > > + ngx_string("upstream_trailer_"), > > + ngx_string("upstream_cookie_"), > > + ngx_null_string > > +}; > > Using a static list of prefixes is certainly wrong: there can be > arbitrary prefix variables added by various modules, and limiting > checks to a predefied list is not going to work correctly. > > > + > > + > > ngx_http_variable_value_t ngx_http_variable_null_value = > > ngx_http_variable(""); > > ngx_http_variable_value_t ngx_http_variable_true_value = > > @@ -410,6 +424,7 @@ > > ngx_hash_key_t *key; > > ngx_http_variable_t *v; > > ngx_http_core_main_conf_t *cmcf; > > + ngx_str_t *p; > > > > if (name->len == 0) { > > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > @@ -421,6 +436,18 @@ > > return ngx_http_add_prefix_variable(cf, name, flags); > > } > > > > + if (flags & NGX_HTTP_VAR_CHANGEABLE) { > > + for (p = ngx_http_protect_variables_prefix; p->len; p++) { > > + if (name->len >= p.len > > + && ngx_strncasecmp(name->data, p->data, p->len) == 0) > > + { > > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > + "similar to prefix variable \"%V\"", > > *p); > > + return NULL; > > + } > > + } > > + } > > + > > cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module); > > > > key = cmcf->variables_keys->keys.elts; > > Prefix variables are intentionally implemented in a way which > allows one to overwrite them: for example, this is used in nginx > itself to provide custom handler for some $http_* variables, such > as $http_host (which can be effectively retrieved, since nginx has > a pointer to the Host header explicitly stored). That is, it is > quite normal that the ngx_http_add_variable() function you modify > is called with a variable which is more specific than a registered > prefix variable. 
And using the NGX_HTTP_VAR_CHANGEABLE flag to > distinguish when to fail looks wrong, as this is an unrelated > flag. You probably mean to detect user-added variables, such as > introduced by "set" or "auth_request_set". There is no way to > detect such variables except in a particular directive used to > define the variable. > > Further, I tend to think there are valid use cases when one may > want to actually redefine a particular variable. > > Overall, the patch certainly needs more work, and I very much > doubt we want to introduce such checks at all and hence this work > needs to be done. A better solution might be to avoid doing > mistakes like the one you've described above. > > -- > Maxim Dounin > http://mdounin.ru/ > > > ------------------------------ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ping.zhao at intel.com Mon Jan 25 08:24:49 2021 From: ping.zhao at intel.com (Zhao, Ping) Date: Mon, 25 Jan 2021 08:24:49 +0000 Subject: [PATCH] Add io_uring support in AIO(async io) module In-Reply-To: References: <95886c3353dc80a3da21.1610629151@cdn001.sh.intel.com> <7463caa5-76d0-f6f8-e9b6-0c0b3fe1077c@nginx.com> Message-ID: Hello, add a small update to correct the length when part of request already received in previous. This case may happen when using io_uring and throughput increased. # HG changeset patch # User Ping Zhao # Date 1611566408 18000 # Mon Jan 25 04:20:08 2021 -0500 # Node ID f2c91860b7ac4b374fff4353a830cd9427e1d027 # Parent 1372f9ee2e829b5de5d12c05713c307e325e0369 Correct length calculation when part of request received. diff -r 1372f9ee2e82 -r f2c91860b7ac src/core/ngx_output_chain.c --- a/src/core/ngx_output_chain.c Wed Jan 13 11:10:05 2021 -0500 +++ b/src/core/ngx_output_chain.c Mon Jan 25 04:20:08 2021 -0500 @@ -531,6 +531,14 @@ size = ngx_buf_size(src); size = ngx_min(size, dst->end - dst->pos); +#if (NGX_HAVE_FILE_IOURING) + /* + * check if already received part of the request in previous, + * calculate the remain length + */ + if(dst->last > dst->pos && size > (dst->last - dst->pos)) + size = size - (dst->last - dst->pos); +#endif sendfile = ctx->sendfile && !ctx->directio; -----Original Message----- From: nginx-devel On Behalf Of Zhao, Ping Sent: Thursday, January 21, 2021 9:44 AM To: nginx-devel at nginx.org Subject: RE: [PATCH] Add io_uring support in AIO(async io) module Hi Vladimir, No special/extra configuration needed, but need check if 'aio on' and 'sendfile off' is correctly set. This is my Nginx config for reference: user nobody; daemon off; worker_processes 1; error_log error.log ; events { worker_connections 65535; use epoll; } http { include mime.types; default_type application/octet-stream; access_log on; aio on; sendfile off; directio 2k; # Cache Configurations proxy_cache_path /mnt/cache0 levels=2 keys_zone=nginx-cache0:400m max_size=1400g inactive=4d use_temp_path=off; ...... To better measure the disk io performance data, I do the following steps: 1. To exclude other impact, and focus on disk io part.(This patch only impact disk aio read process) Use cgroup to limit Nginx memory usage. Otherwise Nginx may also use memory as cache storage and this may cause test result not so straight.(since most cache hit in memory, disk io bw is low, like my previous mail found which didn't exclude the memory cache impact) echo 2G > memory.limit_in_bytes use ' cgexec -g memory:nginx' to start Nginx. 2. use wrk -t 100 -c 1000, with random 25000 http requests. 
My previous test used -t 200 connections, comparing with -t 1000, libaio performance drop more when connections numbers increased from 200 to 1000, but io_uring doesn't. It's another advantage of io_uring. 3. First clean the cache disk and run the test for 30 minutes to let Nginx store the cache files to nvme disk as much as possible. 4. Rerun the test, this time Nginx will use ngx_file_aio_read to extract the cache files in nvme cache disk. Use iostat to track the io data. The data should be align with NIC bw since all data should be from cache disk.(need exclude memory as cache storage impact) Following is the test result: Nginx worker_processes 1: 4k 100k 1M Io_uring 220MB/s 1GB/s 1.3GB/s Libaio 70MB/s 250MB/s 600MB/s(with -c 200, 1.0GB/s) Nginx worker_processes 4: 4k 100k 1M Io_uring 800MB/s 2.5GB/s 2.6GB/s(my nvme disk io maximum bw) libaio 250MB/s 900MB/s 2.0GB/s So for small request, io_uring has huge improvement than libaio. In previous mail, because I didn't exclude the memory cache storage impact, most cache file is stored in memory, very few are from disk in case of 4k/100k. The data is not correct.(for 1M, because the cache is too big to store in memory, it wat in disk) Also I enabled directio option "directio 2k" this time to avoid this. Regards, Ping -----Original Message----- From: nginx-devel On Behalf Of Vladimir Homutov Sent: Wednesday, January 20, 2021 12:43 AM To: nginx-devel at nginx.org Subject: Re: [PATCH] Add io_uring support in AIO(async io) module On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote: > It depends on if disk io is the performance hot spot or not. If yes, > io_uring shows improvement than libaio. With 4KB/100KB length 1 Nginx > thread it's hard to see performance difference because iostat is only > around ~10MB/100MB per second. Disk io is not the performance bottle > neck, both libaio and io_uring have the same performance. If you > increase request size or Nginx threads number, for example 1MB length > or Nginx thread number 4. In this case, disk io became the performance > bottle neck, you will see io_uring performance improvement. Can you please provide full test results with specific nginx configuration? _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From pluknet at nginx.com Tue Jan 26 09:48:39 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 26 Jan 2021 09:48:39 +0000 Subject: [nginx] Clean up trailers in ngx_http_clean_header() as well. Message-ID: details: https://hg.nginx.org/nginx/rev/ecc0ae881a25 branches: changeset: 7764:ecc0ae881a25 user: Sergey Kandaurov date: Tue Jan 26 12:39:28 2021 +0300 description: Clean up trailers in ngx_http_clean_header() as well. The function has not been updated with introduction of trailers support in 7034:1b068a4e82d8 (1.13.2). 
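For context, trailers live in their own list next to headers_out.headers,
which is why both lists have to be reset. The way a module adds a trailer
is roughly as follows (field name and value are invented; arranging for
trailers to actually be sent, i.e. r->expect_trailers with chunked
transfer encoding, is omitted):

    ngx_table_elt_t  *h;

    h = ngx_list_push(&r->headers_out.trailers);
    if (h == NULL) {
        return NGX_ERROR;
    }

    h->hash = 1;
    ngx_str_set(&h->key, "X-Example-Trailer");
    ngx_str_set(&h->value, "example");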
diffstat:

 src/http/ngx_http_special_response.c |  4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diffs (14 lines):

diff -r 61d0df8fcc7c -r ecc0ae881a25 src/http/ngx_http_special_response.c
--- a/src/http/ngx_http_special_response.c	Tue Jan 19 20:35:17 2021 +0300
+++ b/src/http/ngx_http_special_response.c	Tue Jan 26 12:39:28 2021 +0300
@@ -575,6 +575,10 @@ ngx_http_clean_header(ngx_http_request_t
     r->headers_out.headers.part.next = NULL;
     r->headers_out.headers.last = &r->headers_out.headers.part;

+    r->headers_out.trailers.part.nelts = 0;
+    r->headers_out.trailers.part.next = NULL;
+    r->headers_out.trailers.last = &r->headers_out.trailers.part;
+
     r->headers_out.content_length_n = -1;
     r->headers_out.last_modified_time = -1;
 }

From kyr.zarifis at gmail.com  Tue Jan 26 10:26:15 2021
From: kyr.zarifis at gmail.com (Kyriakos Zarifis)
Date: Tue, 26 Jan 2021 02:26:15 -0800
Subject: nginx-quic: setting transport parameters
Message-ID:

Hi,

I can't seem to set a few of the quic parameters using their respective
directives. Specifically, doing e.g. this in the conf:

    quic_max_udp_payload_size 1472;
    quic_max_ack_delay 10;
    quic_ack_delay_exponent 2;

... results in the default values being sent (as seen in qvis):

    "max_packet_size": 65527
    "max_ack_delay": 25
    "ack_delay_exponent": 3

Other parameters (like quic_initial_*) are being set just fine. Any idea
what I might be doing wrong for these 3 above?

p.s. I think quic_max_packet_size needs to be updated to
quic_max_udp_payload_size in the README to match the latest drafts and
code.

Thanks,
Kyriakos
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xeioex at nginx.com  Tue Jan 26 12:52:57 2021
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 26 Jan 2021 12:52:57 +0000
Subject: [njs] Fixed typo introduced in 81040de6b085.
Message-ID:

details:   https://hg.nginx.org/njs/rev/d64c77837095
branches:
changeset: 1594:d64c77837095
user:      Dmitry Volyntsev
date:      Tue Jan 26 12:52:12 2021 +0000
description:
Fixed typo introduced in 81040de6b085.

Found by Coverity (CID 1472504, CID 1472505).

diffstat:

 nginx/ngx_js_fetch.c |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 81040de6b085 -r d64c77837095 nginx/ngx_js_fetch.c
--- a/nginx/ngx_js_fetch.c	Thu Jan 21 18:44:58 2021 +0000
+++ b/nginx/ngx_js_fetch.c	Tue Jan 26 12:52:12 2021 +0000
@@ -467,7 +467,7 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value
         start++;

         value = njs_vm_object_prop(vm, headers, &name, &lvalue);
-        if (ret != NJS_OK) {
+        if (value == NULL) {
             goto fail;
         }

From xeioex at nginx.com  Tue Jan 26 12:52:59 2021
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 26 Jan 2021 12:52:59 +0000
Subject: [njs] Fixed allocation failure detection in
 njs_backtrace_to_string().
Message-ID:

details:   https://hg.nginx.org/njs/rev/60d363cb92b3
branches:
changeset: 1595:60d363cb92b3
user:      Dmitry Volyntsev
date:      Tue Jan 26 12:52:15 2021 +0000
description:
Fixed allocation failure detection in njs_backtrace_to_string().

Found by Coverity (CID 1472503).
diffstat:

 src/njs_error.c |  5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diffs (22 lines):

diff -r d64c77837095 -r 60d363cb92b3 src/njs_error.c
--- a/src/njs_error.c	Tue Jan 26 12:52:12 2021 +0000
+++ b/src/njs_error.c	Tue Jan 26 12:52:15 2021 +0000
@@ -1226,6 +1226,7 @@ njs_backtrace_to_string(njs_vm_t *vm, nj
 {
     size_t                 count;
     njs_chb_t              chain;
+    njs_int_t              ret;
     njs_uint_t             i;
     njs_backtrace_entry_t  *be, *prev;

@@ -1271,8 +1272,8 @@ njs_backtrace_to_string(njs_vm_t *vm, nj
         be++;
     }

-    njs_chb_join(&chain, dst);
+    ret = njs_chb_join(&chain, dst);
     njs_chb_destroy(&chain);

-    return NJS_OK;
+    return ret;
 }

From xeioex at nginx.com  Tue Jan 26 12:53:01 2021
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 26 Jan 2021 12:53:01 +0000
Subject: [njs] Fixed Headers object keys forgotten in 81040de6b085.
Message-ID:

details:   https://hg.nginx.org/njs/rev/63147f56e418
branches:
changeset: 1596:63147f56e418
user:      Dmitry Volyntsev
date:      Tue Jan 26 12:52:17 2021 +0000
description:
Fixed Headers object keys forgotten in 81040de6b085.

Found by Coverity (CID 1472501).

diffstat:

 nginx/ngx_js_fetch.c |  14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diffs (24 lines):

diff -r 60d363cb92b3 -r 63147f56e418 nginx/ngx_js_fetch.c
--- a/nginx/ngx_js_fetch.c	Tue Jan 26 12:52:15 2021 +0000
+++ b/nginx/ngx_js_fetch.c	Tue Jan 26 12:52:17 2021 +0000
@@ -2035,6 +2035,20 @@ ngx_response_js_ext_keys(njs_vm_t *vm, n
                 break;
             }
         }
+
+        if (k == length) {
+            value = njs_vm_array_push(vm, keys);
+            if (value == NULL) {
+                return NJS_ERROR;
+            }
+
+            rc = njs_vm_value_string_set(vm, value, h->key.data, h->key.len);
+            if (rc != NJS_OK) {
+                return NJS_ERROR;
+            }
+
+            length++;
+        }
     }

     return NJS_OK;

From pluknet at nginx.com  Wed Jan 27 10:16:14 2021
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Wed, 27 Jan 2021 13:16:14 +0300
Subject: nginx-quic: setting transport parameters
In-Reply-To:
References:
Message-ID: <0CEE5C0E-2EFB-4C0C-BF69-282CAAA18D27@nginx.com>

> On 26 Jan 2021, at 13:26, Kyriakos Zarifis wrote:
>
> Hi,
>
> I can't seem to set a few of the quic parameters using their respective
> directives. Specifically, doing e.g. this in the conf:
>
>     quic_max_udp_payload_size 1472;
>     quic_max_ack_delay 10;
>     quic_ack_delay_exponent 2;
>
> ... results in the default values being sent (as seen in qvis):
>
>     "max_packet_size": 65527
>     "max_ack_delay": 25
>     "ack_delay_exponent": 3
>
> Other parameters (like quic_initial_*) are being set just fine. Any idea
> what I might be doing wrong for these 3 above?

These directives do not currently affect the transport parameters that
are sent. This needs to be fixed.

> p.s. I think quic_max_packet_size needs to be updated to
> quic_max_udp_payload_size in the README to match the latest drafts and
> code.

This one has been fixed, thanks:
https://hg.nginx.org/nginx-quic/rev/27bd6dc24426

-- 
Sergey Kandaurov