From wangjiahao at openresty.com Wed Jun 1 04:21:15 2022
From: wangjiahao at openresty.com (Jiahao Wang)
Date: Wed, 1 Jun 2022 12:21:15 +0800
Subject: [ANN] Test::Nginx 0.30 is released
Message-ID: 

Hi there,

I am happy to announce the new 0.30 release of Test::Nginx:

https://openresty.org/en/ann-test-nginx-030.html

This version has many new features and fixes several bugs since 0.29; refer to the above link for details.

This Perl module provides a test scaffold for automated testing and regression testing in Nginx C module or OpenResty-based Lua library development. The class inherits from Test::Base, thus bringing all its declarative power to Nginx C module testing practices. All of our OpenResty projects use this test scaffold for automated regression testing.

Enjoy!

Best regards,
Jiahao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Wed Jun 1 12:22:16 2022
From: nginx-forum at forum.nginx.org (hanzhai)
Date: Wed, 01 Jun 2022 08:22:16 -0400
Subject: Buffer reuse like gzip filter module, with pre-configured number of buffers
In-Reply-To: 
References: 
Message-ID: 

Hi Maxim,

Thanks for your reply. Your guidance helped me thoroughly understand the role of calling ngx_http_next_body_filter(r, NULL) in the gzip module, which helped a lot. The buffers can now be reused, but one issue still confuses me: I get a "curl: (18) transfer closed with outstanding read data remaining" error when I access the path the code modifies. I captured packets with tcpdump, and the last packet containing the response was marked as a Malformed Packet. Here's the code:

    if (ctx->nomem) {
        if (ngx_http_next_body_filter(r, NULL) == NGX_ERROR) {
            goto failed;
        }

        ngx_chain_t *cl = NULL;
        ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &cl,
                                (ngx_buf_tag_t) &ngx_http_my_filter_module);

        ctx->nomem = 0;
        flush = 0;

    } else {
        flush = ctx->busy ?
1 : 0;
    }

    for (;;) {
        /* cycle while we can write to a client */

        for (;;) {
            /* cycle while there is data to insert into the beginning */

            rc = ngx_http_my_get_buf(r, ctx);

            if (rc == NGX_DECLINED) {
                break;
            }

            if (rc == NGX_ERROR) {
                goto failed;
            }

            /* there are buffers to write data */

            // rc = operation to copy 64 kb data to the ctx->out_buf;
            ctx->out_buf->last = ctx->out_buf->pos + 64 * 4096;

            ngx_chain_t *cl = ngx_alloc_chain_link(r->pool);
            if (cl == NULL) {
                goto failed;
            }

            cl->buf = ctx->out_buf;
            cl->next = NULL;
            *ctx->last_out = cl;
            ctx->last_out = &cl->next;

            if (rc == OK) {
                ctx->stage = DONE;
                break;
            }

            /* rc == NGX_AGAIN */
        }

        if (ctx->out == NULL && !flush) {
            return ctx->busy ? NGX_AGAIN : NGX_OK;
        }

        ngx_chain_t *a = ctx->out;
        while (a) {
            if (ngx_buf_size(a->buf)) {
                // add logging to make sure the buf is complete; all bufs were logged
                ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                              "begin: %*s, end: %*s",
                              10, a->buf->pos, 10, a->buf->last - 10);
            }
            a = a->next;
        }

        rc = ngx_http_next_body_filter(r, ctx->out);

        if (rc == NGX_ERROR) {
            goto failed;
        }

        ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &ctx->out,
                                (ngx_buf_tag_t) &ngx_http_my_filter_module);
        ctx->last_out = &ctx->out;

        ctx->nomem = 0;
        flush = 0;

        if (ctx->stage == DONE) {
            return rc;
        }
    }

static ngx_int_t
ngx_http_my_get_buf(ngx_http_request_t *r, ngx_http_my_ctx_t *ctx)
{
    ngx_chain_t *cl;
    ngx_http_my_loc_conf_t *conf =
        ngx_http_get_module_loc_conf(r, ngx_http_my_filter_module);

    if (ctx->free) {
        cl = ctx->free;
        ctx->out_buf = cl->buf;
        ctx->free = cl->next;
        ngx_free_chain(r->pool, cl);

    } else if (ctx->bufs < conf->bufs.num) {
        ctx->out_buf = ngx_create_temp_buf(r->pool, conf->bufs.size);
        if (ctx->out_buf == NULL) {
            return NGX_ERROR;
        }

        ctx->out_buf->tag = (ngx_buf_tag_t) &ngx_http_my_filter_module;
        ctx->out_buf->recycled = 1;
        ctx->bufs++;

    } else {
        ctx->nomem = 1;
        return NGX_DECLINED;
    }

    return NGX_OK;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294374,294385#msg-294385

From nginx-forum at 
forum.nginx.org Wed Jun 1 12:32:35 2022
From: nginx-forum at forum.nginx.org (libresco_27)
Date: Wed, 01 Jun 2022 08:32:35 -0400
Subject: Unknown 428 http status code description via nginx
Message-ID: <2c244c08757c28e9665f1d495890463f.NginxMailingListEnglish@forum.nginx.org>

Hello,

I added 428 as an allowed http status code that can be returned from nginx. But when I access it from Postman/Insomnia, the status code description comes out as "unknown", e.g. "HTTP/1.1 428 Unknown" instead of "HTTP/1.1 428 Precondition Required". From my nginx gateway I am just returning the status code and no description, and this works fine for all the other status codes.

Going through the nginx documentation - https://www.nginx.com/resources/wiki/extending/api/http/ - it seems nginx doesn't support the 428 http status (I might be wrong here). Is there any way I can override something in nginx to display the correct status code description?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294386,294386#msg-294386

From nginx-forum at forum.nginx.org Wed Jun 1 12:45:52 2022
From: nginx-forum at forum.nginx.org (hanzhai)
Date: Wed, 01 Jun 2022 08:45:52 -0400
Subject: Buffer reuse like gzip filter module, with pre-configured number of buffers
In-Reply-To: 
References: 
Message-ID: <65e3c7859895a8dcb94c496085864c0c.NginxMailingListEnglish@forum.nginx.org>

Hi,

I also did the following in my header filter to make sure that the modified response is sent to the client with chunked encoding:
    ngx_http_clear_content_length(r);
    ngx_http_clear_accept_ranges(r);
    ngx_http_clear_etag(r);

    ngx_table_elt_t *header_entry = ngx_list_push(&r->headers_out.headers);
    if (header_entry == NULL) {
        return ngx_http_t1k_bot_rsp_body_next_header_filter(r);
    }

    header_entry->hash = 1;
    header_entry->key.len = t1k_confuse_header_key.len;
    header_entry->key.data = (u_char *) t1k_confuse_header_key.data;
    header_entry->value.len = t1k_confuse_header_value.len;
    header_entry->value.data = (u_char *) t1k_confuse_header_value.data;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294374,294387#msg-294387

From roger at netskrt.io Wed Jun 1 15:07:13 2022
From: roger at netskrt.io (Roger Fischer)
Date: Wed, 1 Jun 2022 08:07:13 -0700
Subject: Preferred method to reopen the log with logrotate
Message-ID: <440168FD-EBAD-4557-A43C-8232E1351488@netskrt.io>

Hello,

there seem to be two methods to tell nginx to re-open the log file after the file was rotated (we use logrotate):

1) nginx -s reopen

2) kill -USR1

Which is the preferred method, and why?

I am asking because we have seen nginx -s reopen fail because of a transient issue with the configuration. According to the man page, reopen should be the same as SIGUSR1, but the error we saw implies that the config was reloaded (i.e. SIGHUP).

Version: nginx/1.19.9

Thanks…
Roger

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Wed Jun 1 16:07:06 2022
From: nginx-forum at forum.nginx.org (hanzhai)
Date: Wed, 01 Jun 2022 12:07:06 -0400
Subject: Buffer reuse like gzip filter module, with pre-configured number of buffers
In-Reply-To: <65e3c7859895a8dcb94c496085864c0c.NginxMailingListEnglish@forum.nginx.org>
References: <65e3c7859895a8dcb94c496085864c0c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi,

Never mind, I figured it out by myself. The subrequest enters the failed label, which returns NGX_ERROR and causes the above-mentioned error.

Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294374,294392#msg-294392

From mdounin at mdounin.ru Wed Jun 1 16:42:08 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 1 Jun 2022 19:42:08 +0300
Subject: Preferred method to reopen the log with logrotate
In-Reply-To: <440168FD-EBAD-4557-A43C-8232E1351488@netskrt.io>
References: <440168FD-EBAD-4557-A43C-8232E1351488@netskrt.io>
Message-ID: 

Hello!

On Wed, Jun 01, 2022 at 08:07:13AM -0700, Roger Fischer wrote:

> Hello,
>
> there seem to be two methods to tell nginx to re-open the log
> file after the file was rotated (we use logrotate).
>
> 1) nginx -s reopen
>
> 2) kill -USR1
>
> Which is the preferred method, and why.
>
> I am asking because we have seen nginx -s reopen failing because
> of a transient issue with the configuration. According to the
> man page reopen should be the same as SIGUSR1, but the error we
> saw implies that the config was reloaded (ie SIGHUP).

The preferred method is to use kill: it is more efficient and has fewer intermediate operations (such as parsing the configuration to find out the path to the pid file) which can fail.

The "nginx -s ..." command was introduced to support Windows, where there is no kill. On Unix systems, it is essentially equivalent to "kill /path/to/nginx.pid", except that it accepts user-friendly signal names and parses the nginx configuration to find out the path to the pid file. It might be easier to use in interactive scenarios, though it can be costly as it parses the nginx configuration, and it might not work at all in some specific cases (such as the incorrect on-disk configuration you've mentioned, or an on-disk configuration not compatible with the on-disk nginx binary).

As such, it is not generally recommended to use the "nginx -s ..." form on Unix, especially in automated tasks such as log rotation.
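For example, a minimal logrotate configuration following this advice might look like the stanza below (the log path, pid file path, and rotation policy are illustrative and depend on your installation):

```
/var/log/nginx/*.log {
    daily
    rotate 14
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        # signal the master process directly; nothing here parses the nginx configuration
        [ -f /run/nginx.pid ] && kill -USR1 `cat /run/nginx.pid`
    endscript
}
```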
-- 
Maxim Dounin
http://mdounin.ru/

From pluknet at nginx.com Thu Jun 2 13:51:23 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 2 Jun 2022 17:51:23 +0400
Subject: Unknown 428 http status code description via nginx
In-Reply-To: <2c244c08757c28e9665f1d495890463f.NginxMailingListEnglish@forum.nginx.org>
References: <2c244c08757c28e9665f1d495890463f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1E11655D-63DC-41E8-A436-D5092740CE1F@nginx.com>

> On 1 Jun 2022, at 16:32, libresco_27 wrote:
>
> Hello,
>
> I added 428 as an allowed http status code that can be returned from nginx.
> But when I access this from postman/insomnia the status code description
> comes out to be "unknown". eg - HTTP/1.1 428 Unknown, instead of HTTP/1.1
> 428 Precondition Required
> From my nginx gateway, I am just returning the status code and no
> description. And this works fine for all the other status codes.
>
> [..]

The reason phrase - that is, the status code description - is an optional part of the status line in HTTP/1.1, and it is not present at all in subsequent HTTP versions. Moreover, citing the relevant parts of the RFC:

- the reason-phrase element exists for the sole purpose of providing a textual description associated with the numeric status code

- the reason-phrase .. might be translated for a given locale, overwritten by intermediaries, or discarded

So it should be fine for it to be absent.
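As a side note, a client that wants to show a phrase anyway can map the numeric code itself, since the registered phrase for 428 ships with common HTTP libraries. A small Python sketch (shown only to illustrate that the mapping can live on the client side, not that nginx has to emit it):

```python
from http import HTTPStatus

# The IANA-registered reason phrase is bundled with the client library,
# so the server's status line only needs to carry the numeric code.
status = HTTPStatus(428)
print(status.value, status.phrase)  # 428 Precondition Required
```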
-- 
Sergey Kandaurov

From nginx-forum at forum.nginx.org Thu Jun 2 21:44:26 2022
From: nginx-forum at forum.nginx.org (acidiclight)
Date: Thu, 02 Jun 2022 17:44:26 -0400
Subject: If statement with $limit_req_status under location block with proxy_pass not working
Message-ID: 

Hello,

I'm trying to customize my response to rate-limited requests by keeping limit_req_dry_run on, and using an if statement depending on the value of $limit_req_status.

This works as expected:

    limit_req_zone $binary_remote_addr zone=one:1m rate=2r/m;

    server {
        listen 80;
        server_name localhost;

        location / {
            # set rate limiting for this location
            limit_req zone=one burst=10 nodelay;
            limit_req_dry_run on;

            add_header X-my-var "$myvar" always;

            if ($limit_req_status = "REJECTED_DRY_RUN") {
                add_header X-custom-header "rejected" always;
                return 400 'rejected';
            }

            root /usr/share/nginx/html;
            index index.html;
        }
    }

But once I replace root and index with a proxy_pass, the whole thing stops working:

    limit_req_zone $binary_remote_addr zone=one:1m rate=2r/m;

    server {
        resolver 8.8.8.8;
        listen 80;
        server_name localhost;

        location / {
            set $myupstream "myurl.com";

            # set rate limiting for this location
            limit_req zone=one burst=10 nodelay;
            limit_req_dry_run on;

            add_header X-limit-req-status "$limit_req_status" always;

            if ($limit_req_status = "REJECTED_DRY_RUN") {
                add_header X-custom-header "rejected" always;
                return 400 'rejected';
            }

            proxy_pass http://$myupstream;
        }
    }

I added $limit_req_status to my log_format and can confirm that the value of $limit_req_status does get set to "REJECTED_DRY_RUN". I also see the header "X-limit-req-status" from the request set to "REJECTED_DRY_RUN".

I'm assuming the issue is the way Nginx evaluates if statements that have unset variables at the beginning of the request? If so, any pointers on how to get this working? Thank you!
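For reference, the log_format change I used to confirm the variable was along these lines (the format name and exact fields here are illustrative, not the literal lines from my config):

```
log_format limitlog '$remote_addr [$time_local] "$request" $status '
                    'limit_req_status=$limit_req_status';
access_log /var/log/nginx/access.log limitlog;
```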
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294407,294407#msg-294407 From mdounin at mdounin.ru Fri Jun 3 21:05:42 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 4 Jun 2022 00:05:42 +0300 Subject: If statement with $limit_req_status under location block with proxy_pass not working In-Reply-To: References: Message-ID: Hello! On Thu, Jun 02, 2022 at 05:44:26PM -0400, acidiclight wrote: > I'm trying to customize my response to rate-limited requests by keeping > limit_req_dry_run on, and using an if statement depending on the value of > $limit_req_status: > > This works as expected: > > limit_req_zone $binary_remote_addr zone=one:1m rate=2r/m; > > server { > > listen 80; > server_name localhost; > > location / { > # set rate limiting for this location > limit_req zone=one burst=10 nodelay; > limit_req_dry_run on; > > add_header X-my-var "$myvar" always; > > if ($limit_req_status = "REJECTED_DRY_RUN") { > add_header X-custom-header "rejected" always; > return 400 'rejected'; > } > > root /usr/share/nginx/html; > index index.html; > } > > } > > But once I replace root and index with a proxy_pass, the whole thing stops > working: > > limit_req_zone $binary_remote_addr zone=one:1m rate=2r/m; > > server { > resolver 8.8.8.8; > listen 80; > server_name localhost; > > location / { > set $myupstream "myurl.com"; > > # set rate limiting for this location > limit_req zone=one burst=10 nodelay; > limit_req_dry_run on; > > add_header X-limit-req-status "$limit_req_status" always; > > if ($limit_req_status = "REJECTED_DRY_RUN") { > add_header X-custom-header "rejected" always; > return 400 'rejected'; > } > > proxy_pass http://$myupstream; > } > > } > > I added $limit_req_status to my log_format and can confirm that the value of > $limit_req_status does get set to "REJECTED_DRY_RUN". I also see the header > "X-limit-req-status" from the request set to "REJECTED_DRY_RUN". 
>
> I'm assuming the issue is the way Nginx evaluates if statements that have
> unset variables at the beginning of the request? If so, any pointers on how
> to get this working? Thank you!

The rewrite module directives - notably "if", "set", and "return" in your configuration - are evaluated while looking for a configuration to process a request[1]. In contrast, limit_req limits are evaluated before processing a request in a given configuration. As such, neither of the above configurations is expected to work.

The first one likely appears to work for you because you are testing it with requests such as "/", which involve an internal redirect to the index file[2], so "if" acts on the limit_req results before the internal redirect. Testing with a direct link to a file will reveal that it doesn't work either.

The proper solution would be to actually switch off limit_req_dry_run. If you need to redefine the response code and/or add custom headers, consider using limit_req_status and/or error_page instead. For example:

    location / {
        limit_req zone=one burst=10 nodelay;

        error_page 503 = /rejected;

        proxy_pass http://...;
    }

    location = /rejected {
        add_header X-custom-header "rejected" always;
        return 400 rejected;
    }

Hope this helps.

[1] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
[2] http://nginx.org/en/docs/http/request_processing.html#simple_php_site_configuration

-- 
Maxim Dounin
http://mdounin.ru/

From roger at netskrt.io Sat Jun 4 00:38:07 2022
From: roger at netskrt.io (Roger Fischer)
Date: Fri, 3 Jun 2022 17:38:07 -0700
Subject: worker_connections are not enough, reusing connections with idle workers
Message-ID: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io>

Hello,

my understanding is that worker_connections applies to each worker (e.g. when set to 1024, 10 worker processes could handle up to 10240 connections). But we are seeing "1024 worker_connections are not enough, reusing connections" from one worker while other workers are idle.
Is there something we can do to balance connections more evenly across workers?

This is from a performance test. The connections are established all at once. Would spreading them out over some time make a difference?

nginx version: nginx/1.19.9

Thanks…
Roger

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From osa at freebsd.org.ru Sat Jun 4 03:40:12 2022
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Sat, 4 Jun 2022 06:40:12 +0300
Subject: worker_connections are not enough, reusing connections with idle workers
In-Reply-To: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io>
References: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io>
Message-ID: 

Hi Roger,

hope you're doing well.

On Fri, Jun 03, 2022 at 05:38:07PM -0700, Roger Fischer wrote:
> Hello,
>
> my understanding is that worker_connections applies to each worker
> (eg. when set to 1024, 10 worker processes could handle up to 10240
> connections).

That's exactly right. Please read the following link [1] to get more details.

> But we are seeing 1024 worker_connections are not enough, reusing
> connections from one worker while other workers are idle.

So it's possible to increase the number of those.

> Is there something we can do to balance connections more evenly
> across workers?

Could you please add a bit more detail on this? Please note that there have been several improvements on that topic, so please follow the recommendations below.

> nginx version: nginx/1.19.9

The most recent stable version is 1.22.0 [2], so I'd recommend updating to that version.

Thank you.

References
1. https://nginx.org/en/docs/ngx_core_module.html#worker_connections
2. http://nginx.org/en/CHANGES-1.22

-- 
Sergey A. Osokin

From community at thoughtmaybe.com Sun Jun 5 11:18:31 2022
From: community at thoughtmaybe.com (Jore)
Date: Sun, 5 Jun 2022 21:18:31 +1000
Subject: Migrating from PHP-FPM to Nginx Unit: worth it?
In-Reply-To: 
References: 
Message-ID: <43eaa432-0fff-5339-7c5e-e3be8eacf239@thoughtmaybe.com>

Hi there,

I'm interested in this question too, if anyone has any pointers.

Thanks,
Jore

On 25/5/22 02:20, petecooper wrote:
> I run a fleet of small- to medium-scale web apps on PHP, and I'm comfortable
> compiling Nginx + PHP to optimise for my needs. Until now, I've used
> PHP-FPM exclusively. I have read about performance improvements with Nginx
> Unit as far as PHP is concerned. This interests me, and I have time
> available to learn.
>
> My question - for anyone who's gone from PHP-FPM to Unit…was it worth it?
> What advice would you give?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294258,294258#msg-294258
>
> _______________________________________________
> nginx mailing list -- nginx at nginx.org
> To unsubscribe send an email to nginx-leave at nginx.org

From nginx-forum at forum.nginx.org Mon Jun 6 03:06:53 2022
From: nginx-forum at forum.nginx.org (hanzhai)
Date: Sun, 05 Jun 2022 23:06:53 -0400
Subject: How is buffer size and buffer number determined in the HTTP response chain?
Message-ID: 

I saw 4k, 16k, and 32k buffer sizes in the response chain. Why not keep all buffers the same size? Are these buffer sizes related to the chunked HTTP transfer encoding?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294421,294421#msg-294421

From peter.volkov at gmail.com Tue Jun 7 09:41:47 2022
From: peter.volkov at gmail.com (Peter Volkov)
Date: Tue, 7 Jun 2022 12:41:47 +0300
Subject: How to disable http v2
Message-ID: 

Hi,

after we enabled HTTP/2 in nginx, some old software started to fail, so we would like to have HTTP/2 enabled in general but disabled for some specific IP:PORT. I've tried two listen directives in a server block:

    listen IP:443 ssl http2;
    listen IP:1443 ssl;

The problem is that on both ports I see "* ALPN: offers h2". Is it possible to disable HTTP/2 for a specific IP:PORT?

Thanks in advance,
-- 
Peter.
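(For reference, the "* ALPN: offers h2" line above is from curl -v output; an equivalent check can be done with openssl s_client. The IP and port below are placeholders for the values in my setup, and the result depends on the live server:)

```
# offer h2 and http/1.1 and see which protocol the server selects;
# an "ALPN protocol: h2" line in the output means HTTP/2 was negotiated
openssl s_client -connect IP:1443 -alpn h2,http/1.1 </dev/null | grep -i "ALPN"
```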
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pluknet at nginx.com Tue Jun 7 11:09:24 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Tue, 7 Jun 2022 15:09:24 +0400
Subject: How to disable http v2
In-Reply-To: 
References: 
Message-ID: 

> On 7 Jun 2022, at 13:41, Peter Volkov wrote:
>
> Hi.
>
> After we enabled HTTP/2 in nginx some old software started to fail. So we
> would like to have HTTP v2 enabled in general but disabled for some
> specific IP:PORT. I've tried two listen directives in server block:
>
> listen IP:443 ssl http2;
> listen IP:1443 ssl;
>
> The problem is that on both ports I see: * ALPN: offers h2. Is it possible
> to disable HTTP v2 for specific IP:PORT?

nginx offers HTTP/2 via ALPN on any IP:PORT configured to accept HTTP/2 connections. Make sure you don't have the "http2" option on that particular IP:1443 elsewhere, as "http2" applies to all virtual servers sharing such an IP:PORT.

-- 
Sergey Kandaurov

From peter.volkov at gmail.com Tue Jun 7 13:45:56 2022
From: peter.volkov at gmail.com (Peter Volkov)
Date: Tue, 7 Jun 2022 16:45:56 +0300
Subject: How to disable http v2
In-Reply-To: 
References: 
Message-ID: 

On Tue, 7 Jun 2022 at 14:15, Sergey Kandaurov wrote:
> > On 7 Jun 2022, at 13:41, Peter Volkov wrote:
> > After we enabled HTTP/2 in nginx some old software started to fail. So
> > we would like to have HTTP v2 enabled in general but disabled for some
> > specific IP:PORT. I've tried two listen directives in server block:
> >
> > listen IP:443 ssl http2;
> > listen IP:1443 ssl;
> >
> > The problem is that on both ports I see: * ALPN: offers h2. Is it
> > possible to disable HTTP v2 for specific IP:PORT?
>
> nginx offers HTTP/2 ALPN on IP:PORT configured to accept HTTP/2
> connections.
> Make sure you have no the "http2" option on a particular IP:1443 elsewhere,
> as "http2" attributes to all virtual servers sharing such IP:PORT.

That was my understanding as well.
But take a look at nginx.conf in attachment - I see nginx announces h2 on both ports 1444 and 1445. # nginx -V nginx version: nginx/1.21.6 built with OpenSSL 1.1.1d 10 Sep 2019 TLS SNI support enabled configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include --with-ld-opt=-L/usr/lib64 --http-log-path=/var/log/nginx/access_log --http-client-body-temp-path=/var/lib/nginx/tmp/client --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --with-compat --with-http_v2_module --with-pcre --without-http_grpc_module --without-http_ssi_module --without-http_upstream_hash_module --without-http_upstream_zone_module --with-http_flv_module --with-http_geoip_module --with-http_mp4_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_realip_module --add-module=external_module/headers-more-nginx-module-0.33 --add-module=external_module/nginx_upstream_check_module-9aecf15ec379fe98f62355c57b60c0bc83296f04 --add-module=external_module/nginx-push-stream-module-0.5.4 --add-module=external_module/ngx_http_geoip2_module-3.3 --with-http_ssl_module --without-stream_access_module --without-stream_geo_module --without-stream_limit_conn_module --without-stream_map_module --without-stream_return_module --without-stream_split_clients_module --without-stream_upstream_hash_module --without-stream_upstream_least_conn_module --without-stream_upstream_zone_module --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --user=nginx --group=nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

-------------- next part --------------

user nginx nginx;
worker_processes auto;
worker_rlimit_nofile 32768;

events {
    worker_connections 16384;
    use epoll;
    multi_accept on;
}

error_log /var/log/nginx/NG_error_log warn;

http {
    server_tokens off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request_uri" $status $bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '"$gzip_ratio" "$request_time"';
    access_log /var/log/nginx/NG_access.log main;

    client_header_timeout 10m;
    client_body_timeout 10m;
    send_timeout 10m;

    connection_pool_size 256;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 16k;
    request_pool_size 4k;

    proxy_buffering on;
    proxy_buffers 256 32k;
    proxy_buffer_size 32k;
    uwsgi_buffering on;
    uwsgi_buffers 256 4k;

    # http://nginx.org/ru/docs/hash.html
    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 128;
    variables_hash_max_size 2048;
    variables_hash_bucket_size 128;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    gzip on;
    gzip_comp_level 5;
    gzip_min_length 1024;
    gzip_buffers 4 8k;
    gzip_types text/plain text/css application/x-javascript application/javascript application/json application/octet-stream;

    output_buffers 1 32k;
    postpone_output 1460;

    keepalive_timeout 75 20;
    keepalive_requests 4096;
    ignore_invalid_headers on;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 
'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; ssl_prefer_server_ciphers on; ssl_dhparam dhparams.pem; ssl_session_cache shared:SSL:30m; ssl_session_timeout 10m; index index.html; ssl_stapling on; ssl_stapling_verify on; resolver 172.16.11.20 172.16.11.91 valid=300s ipv6=off; resolver_timeout 1s; server { listen edge1_clients_vip1:1445 ssl; listen edge1_clients_vip1:1444 ssl http2; server_name *.proxy.lfstrm.tv proxy.lfstrm.tv; ssl_certificate /etc/letsencrypt/live/proxy.lfstrm.tv/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/proxy.lfstrm.tv/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/proxy.lfstrm.tv/chain.pem; location / { return 200; } } } From roger at netskrt.io Tue Jun 7 20:18:36 2022 From: roger at netskrt.io (Roger Fischer) Date: Tue, 7 Jun 2022 13:18:36 -0700 Subject: worker_connections are not enough, reusing connections with idle workers In-Reply-To: References: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io> Message-ID: <6BBB24BD-742F-41ED-8A59-E16F5A0A8EB2@netskrt.io> Thanks, Sergey. We are simulating 1000 clients. Some get cache hits, and some go upstream. So there are more than 1000 connections. 
We have 24 workers running, each configured: events { worker_connections 1024; }

We are seeing the following errors from nginx:

[warn] 21151#21151: 1024 worker_connections are not enough, reusing connections
[crit] 21151#21151: accept4() failed (24: Too many open files)
[alert] 21151#21151: *15716 socket() failed (24: Too many open files) while connecting to upstream,

I am assuming the second and third errors reflect the OS limit, but the first seems to come from a worker process.

My assumption is that the client requests will be distributed over the 24 worker processes, so no individual worker should come anywhere close to 1000 connections. But when I look at the process stats for the workers (ps command), I see an uneven distribution of CPU time used. Note that this is from a different run than the above logs.

UID      PID    PPID  C STIME TTY  TIME     CMD
netskrt  16905  16902 2 12:19 ?    00:07:05 nginx: worker process
netskrt  16906  16902 1 12:19 ?    00:04:29 nginx: worker process
netskrt  16908  16902 1 12:19 ?    00:03:30 nginx: worker process
netskrt  16910  16902 0 12:19 ?    00:02:26 nginx: worker process
netskrt  16911  16902 0 12:19 ?    00:01:32 nginx: worker process
netskrt  16912  16902 0 12:19 ?    00:00:51 nginx: worker process
netskrt  16913  16902 0 12:19 ?    00:00:11 nginx: worker process
netskrt  16914  16902 0 12:19 ?    00:00:04 nginx: worker process
netskrt  16915  16902 0 12:19 ?    00:00:25 nginx: worker process
netskrt  16916  16902 0 12:19 ?    00:00:01 nginx: worker process
netskrt  16917  16902 0 12:19 ?    00:00:00 nginx: worker process
...

Is there anything we can configure to more evenly distribute the connections?

Thanks…
Roger

> On Jun 3, 2022, at 8:40 PM, Sergey A. Osokin wrote:
>
> Hi Roger,
>
> hope you're doing well.
>
> On Fri, Jun 03, 2022 at 05:38:07PM -0700, Roger Fischer wrote:
>> Hello,
>>
>> my understanding is that worker_connections applies to each worker
>> (eg. when set to 1024, 10 worker processes could handle up to 10240
>> connections).
>
> That's exactly right.
Please read the following link [1] to get more > details. > >> But we are seeing 1024 worker_connections are not enough, reusing >> connections from one worker while other workers are idle. > > So, it's possibe to increase the number of those. > >> Is there something we can do to balance connections more evenly >> across workers? > > Could you please add a bit more details on this. Please note, that > there were several improvements on that topic, so please follow the > recommendations below. > >> nginx version: nginx/1.19.9 > > Recent stable version is 1.22.0, [2] so I'd recommend to update to > that version. > > Thank you. > > References > 1. https://nginx.org/en/docs/ngx_core_module.html#worker_connections > 2. http://nginx.org/en/CHANGES-1.22 > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Tue Jun 7 21:29:41 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 8 Jun 2022 00:29:41 +0300 Subject: worker_connections are not enough, reusing connections with idle workers In-Reply-To: <6BBB24BD-742F-41ED-8A59-E16F5A0A8EB2@netskrt.io> References: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io> <6BBB24BD-742F-41ED-8A59-E16F5A0A8EB2@netskrt.io> Message-ID: On Tue, Jun 07, 2022 at 01:18:36PM -0700, Roger Fischer wrote: > > We are simulating 1000 clients. Some get cache hits, and some go upstream. So there are more than 1000 connections. 
>
> We have 24 workers running, each configured: events { worker_connections 1024; }
>
> We are seeing the following errors from nginx:
> [warn] 21151#21151: 1024 worker_connections are not enough, reusing connections
> [crit] 21151#21151: accept4() failed (24: Too many open files)
> [alert] 21151#21151: *15716 socket() failed (24: Too many open files) while connecting to upstream,
>
> I am assuming the second and third error are for the OS limit. But the first seems to be from a worker process.

That looks like OS or user account limits, so could you share the output of the following commands:

% uname -a
% cat /etc/*release
% ulimit -Hn
% ulimit -Sn
% cat /proc/sys/fs/file-max

Also, it's possible to increase nginx's limits with the worker_rlimit_nofile directive:
http://nginx.org/ru/docs/ngx_core_module.html#worker_rlimit_nofile

Thank you.

-- 
Sergey A. Osokin

From roger at netskrt.io Tue Jun 7 21:42:23 2022
From: roger at netskrt.io (Roger Fischer)
Date: Tue, 7 Jun 2022 14:42:23 -0700
Subject: worker_connections are not enough, reusing connections with idle workers
In-Reply-To: 
References: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io> <6BBB24BD-742F-41ED-8A59-E16F5A0A8EB2@netskrt.io>
Message-ID: <1B7BB5AB-9E47-449D-96BA-B7E0ACDF3381@netskrt.io>

Here are the additional details:

$ uname -a
Linux a002 4.15.0-177-generic #186-Ubuntu SMP Thu Apr 14 20:23:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

$ ulimit -Hn
1048576

$ ulimit -Sn
1024

$ cat 
/proc/sys/fs/file-max 262144 worker_rlimit_nofile 65535; The ulimits are for the user that nginx runs as (only the master process runs as root). Roger > On Jun 7, 2022, at 2:29 PM, Sergey A. Osokin wrote: > > On Tue, Jun 07, 2022 at 01:18:36PM -0700, Roger Fischer wrote: >> [...] > > That looks like OS or user account limits, so could you share the > output of the following commands: > > % uname -a > % cat /etc/*release > % ulimit -Hn > % ulimit -Sn > % cat /proc/sys/fs/file-max > > Also, it's possible to increase nginx limits with the worker_rlimit_nofile > directive, http://nginx.org/ru/docs/ngx_core_module.html#worker_rlimit_nofile > > Thank you. > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jun 7 23:37:16 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Jun 2022 02:37:16 +0300 Subject: worker_connections are not enough, reusing connections with idle workers In-Reply-To: <6BBB24BD-742F-41ED-8A59-E16F5A0A8EB2@netskrt.io> References: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io> <6BBB24BD-742F-41ED-8A59-E16F5A0A8EB2@netskrt.io> Message-ID: Hello!
On Tue, Jun 07, 2022 at 01:18:36PM -0700, Roger Fischer wrote: > My assumption is that the client requests will be distributed > over the 24 worker processes. So no individual worker should > come anywhere close to 1000 connections. > > But when I look at the process stats for the workers (ps > command), I see an uneven distribution of CPU time used. Note > that this is from a different run than the above logs. > UID PID PPID C STIME TTY TIME CMD > netskrt 16905 16902 2 12:19 ? 00:07:05 nginx: worker process > netskrt 16906 16902 1 12:19 ? 00:04:29 nginx: worker process > netskrt 16908 16902 1 12:19 ? 00:03:30 nginx: worker process > netskrt 16910 16902 0 12:19 ? 00:02:26 nginx: worker process > netskrt 16911 16902 0 12:19 ? 00:01:32 nginx: worker process > netskrt 16912 16902 0 12:19 ? 00:00:51 nginx: worker process > netskrt 16913 16902 0 12:19 ? 00:00:11 nginx: worker process > netskrt 16914 16902 0 12:19 ? 00:00:04 nginx: worker process > netskrt 16915 16902 0 12:19 ? 00:00:25 nginx: worker process > netskrt 16916 16902 0 12:19 ? 00:00:01 nginx: worker process > netskrt 16917 16902 0 12:19 ? 00:00:00 nginx: worker process > ... > > Is there anything we can configure to more evenly distribute the > connections? As already recommended, consider upgrading to nginx 1.21.6 or 1.22.0. There is a known issue in older nginx versions with distribution of connections among worker processes on modern Linux kernels, fixed in nginx 1.21.6 (http://nginx.org/en/CHANGES): *) Bugfix: when using EPOLLEXCLUSIVE on Linux client connections were unevenly distributed among worker processes. In older versions, configuring "accept_mutex on;" or "listen ... reuseport;" can be used to improve the distribution of connections between worker processes, see https://trac.nginx.org/nginx/ticket/2285 for details. -- Maxim Dounin http://mdounin.ru/ From osa at freebsd.org.ru Wed Jun 8 01:24:47 2022 From: osa at freebsd.org.ru (Sergey A.
Osokin) Date: Wed, 8 Jun 2022 04:24:47 +0300 Subject: worker_connections are not enough, reusing connections with idle workers In-Reply-To: <1B7BB5AB-9E47-449D-96BA-B7E0ACDF3381@netskrt.io> References: <595FB046-5C2B-4CF4-B42F-67FDB294EECA@netskrt.io> <6BBB24BD-742F-41ED-8A59-E16F5A0A8EB2@netskrt.io> <1B7BB5AB-9E47-449D-96BA-B7E0ACDF3381@netskrt.io> Message-ID: Hi Roger, I've forgotten to ask about the nginx version, so as Maxim Dounin recommended, please upgrade to the recent stable version 1.22.0, https://nginx.org/en/linux_packages.html#Ubuntu On Tue, Jun 07, 2022 at 02:42:23PM -0700, Roger Fischer wrote: > Here are the additional details: > > [...] > > $ ulimit -Hn > 1048576 > > $ ulimit -Sn > 1024 I'd recommend increasing this limit; it's definitely not suitable for production usage. > $ cat /proc/sys/fs/file-max > 262144 > > worker_rlimit_nofile 65535; > > The ulimits are for the user that nginx runs as (only the master process runs as root). -- Sergey A. Osokin From tamil at helptap.com Fri Jun 10 16:39:42 2022 From: tamil at helptap.com (Tamil Vendhan Kanagarasu) Date: Fri, 10 Jun 2022 22:09:42 +0530 Subject: Nginx response times got increased for unknown reasons Message-ID: Hello everyone, We have been using Nginx for a few months now. It was great until this week.
For unknown reasons, response times got higher. Like 2 minutes, 3 minutes higher from what was < 300ms before. No change on the nginx configuration side. Mostly, I use the configuration unchanged from apt install. Only the following settings are added ``` # Max size of request client_max_body_size 100M; # Max request headers size client_header_buffer_size 5120k; # large_client_header_buffers 16 5120k; # Server name size server_names_hash_bucket_size 128; server_tokens off; # removed pound sign more_set_headers 'Server: helptap.com'; ``` The setup is as follows: 1. Nginx is configured to deliver some static files. 2. Nginx is configured to work as a reverse proxy. Upstream communications are done over websocket. 3. SSL is used for all communications. SSL is done using letsencrypt. I timed the upstream & was able to confirm that they respond in < 50ms in all cases. In the browser, I receive them many seconds and in many cases minutes later. This issue is observed with static file serving, http requests & websocket messages. So I am sure that it is not an upstream issue, as static files also take > 2 minutes to receive in one case. Any help in understanding and resolving the problem would be greatly appreciated. Good day! Best, Tamil Vendhan Kanagarasu -------------- next part -------------- An HTML attachment was scrubbed... URL: From community at thoughtmaybe.com Fri Jun 10 22:04:11 2022 From: community at thoughtmaybe.com (Jore) Date: Sat, 11 Jun 2022 08:04:11 +1000 Subject: Nginx response times got increased for unknown reasons In-Reply-To: References: Message-ID: Is an IPv4 or IPv6 change a factor? On 11/6/22 2:39 am, Tamil Vendhan Kanagarasu wrote: > Hello everyone, > > We have been using Nginx for a few months now. > It was great until this week. For unknown reasons, response times got > higher. > Like 2 minutes, 3 minutes higher from what was < 300ms before. > > No change on nginx configuration side.
> Mostly, I use the configuration unchanged from apt install. > Only the following settings are added > [...] > > Good day! > > Best, > Tamil Vendhan Kanagarasu > > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Jun 12 22:15:05 2022 From: nginx-forum at forum.nginx.org (minderdl) Date: Sun, 12 Jun 2022 18:15:05 -0400 Subject: nginx unresponsive after a while Message-ID: <4d432ee212bc965e48c9d0ee66886c10.NginxMailingListEnglish@forum.nginx.org> Hi! I've upgraded from Debian 9 to 11 (via 10) just recently, i.e. from nginx "1.10.3-1+deb9u7" to "1.18.0-6.1". I'm also running ispconfig on this machine, which modifies the configuration; therefore, I'll try to post the complete configuration at this point in time.
Shortly after the upgrade nginx became unresponsive. After restarting the service, it works again; then it takes some days until it becomes unresponsive again. In the error.log I only see these lines, but many of them: 2022/06/08 23:45:01 [alert] 592#592: 768 worker_connections are not enough Now, when running: lsof | egrep '^nginx .* sock' I get a long list (well, 760+x or so) of these: nginx 592 www-data 3u sock 0,8 0t0 69062 protocol: TCP Thus, it seems that nginx still has a lot of open connections which prevent new requests. Note that this is NOT a high traffic site. It's the very opposite in fact. I enabled the debug log and tried to figure out where a connection was left open, and it seems to be this: 2022/06/10 00:05:26 [debug] 1548997#1548997: accept on 0.0.0.0:8080, ready: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: posix_memalign: 000055C757E54E10:512 @16 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 accept: :57006 fd:18 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 event timer add: 18: 60000:740999452 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 reusable connection: 1 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 epoll add event: fd:18 op:1 ev:80002001 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http check ssl handshake 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http recv(): 1 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 plain http 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http wait request handler 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 malloc: 000055C757F3A150:1024 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 recv: eof:1, avail:-1 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 recv: fd:18 311 of 1024 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 reusable connection: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 posix_memalign: 000055C757F4BF60:4096 @16 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http process request line 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544
http request line: "POST /cgi-bin/ViewLog.asp HTTP/1.1" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http uri: "/cgi-bin/ViewLog.asp" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http args: "" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http exten: "asp" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 posix_memalign: 000055C758001210:4096 @16 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http process request header line 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header: "Host: 192.168.0.14:80" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header: "Connection: keep-alive" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header: "Accept-Encoding: gzip, deflate" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header: "Accept: */*" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header: "User-Agent: python-requests/2.20.0" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header: "Content-Length: 227" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header: "Content-Type: application/x-www-form-urlencoded" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http header done 2022/06/10 00:05:26 [info] 1548997#1548997: *2544 client sent plain HTTP request to HTTPS port while reading client request headers, client: , server: _, request: "POST /cgi-bin/ViewLog.asp HTTP/1.1", host: "192.168.0.14:80" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http finalize request: 497, "/cgi-bin/ViewLog.asp?" a:1, c:1 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 event timer del: 18: 740999452 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http special response: 497, "/cgi-bin/ViewLog.asp?" 
2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script copy: "https://" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script var: "192.168.0.14" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script copy: ":8080" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script var: "/cgi-bin/ViewLog.asp" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http set discard body 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http read discarded body 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 recv: eof:1, avail:0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 recv: fd:18 0 of 152 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http set discard body 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http read discarded body 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 xslt filter header 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 HTTP/1.1 302 Moved Temporarily Server: nginx/1.18.0 Date: Thu, 09 Jun 2022 22:05:26 GMT Content-Type: text/html Content-Length: 145 Connection: close Location: https://192.168.0.14:8080/cgi-bin/ViewLog.asp 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 write new buf t:1 f:0 000055C7580015F0, pos 000055C7580015F0, size: 215 file: 0, size: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http write filter: l:0 f:0 s:215 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http output filter "/cgi-bin/ViewLog.asp?" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http copy filter: "/cgi-bin/ViewLog.asp?" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 image filter 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 xslt filter body 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http postpone filter "/cgi-bin/ViewLog.asp?" 
000055C757F4CF28 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 write old buf t:1 f:0 000055C7580015F0, pos 000055C7580015F0, size: 215 file: 0, size: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 write new buf t:0 f:0 0000000000000000, pos 000055C756470AC0, size: 92 file: 0, size: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 write new buf t:0 f:0 0000000000000000, pos 000055C756470E20, size: 53 file: 0, size: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http write filter: l:1 f:0 s:360 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http write filter limit 0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 writev: 360 of 360 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http write filter 0000000000000000 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http copy filter: 0 "/cgi-bin/ViewLog.asp?" 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http finalize request: 0, "/cgi-bin/ViewLog.asp?" a:1, c:2 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http request count:2 blk:0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http run request: "/cgi-bin/ViewLog.asp?" 
2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http reading blocked 2022/06/10 00:05:26 [debug] 1548997#1548997: accept on 0.0.0.0:8080, ready: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: posix_memalign: 000055C757F4B510:512 @16 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 accept: :57013 fd:20 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 event timer add: 20: 60000:740999700 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 reusable connection: 1 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 epoll add event: fd:20 op:1 ev:80002001 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 http check ssl handshake 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 http recv(): 0 2022/06/10 00:05:26 [info] 1548997#1548997: *2545 client closed connection while SSL handshaking, client: , server: 0.0.0.0:8080 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 close http connection: 20 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 event timer del: 20: 740999700 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 reusable connection: 0 2022/06/10 00:05:26 [debug] 1548997#1548997: *2545 free: 000055C757F4B510, unused: 232 It seems to be a web scanner sending a POST for /cgi-bin/ViewLog.asp - which does not exist. It finally ends with "http reading blocked" - why? There is a 2nd connection attempt from the same IP in the same second, but I guess the problem comes from the first request. Is it necessary that nginx sends back a 302 if it receives a request for another server (see the "Host: 192.168.0.14:80" line in the request)? Is this a default configuration? I did not spot it... Does this sound familiar to anyone? I did not find anything in the forum... Thanks!
Daniel Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294468,294468#msg-294468 From mdounin at mdounin.ru Mon Jun 13 02:47:21 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jun 2022 05:47:21 +0300 Subject: nginx unresponsive after a while In-Reply-To: <4d432ee212bc965e48c9d0ee66886c10.NginxMailingListEnglish@forum.nginx.org> References: <4d432ee212bc965e48c9d0ee66886c10.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Sun, Jun 12, 2022 at 06:15:05PM -0400, minderdl wrote: > I've upgraded from Debian 9 to 11 (via 10) just recently, i.e. from nginx > "1.10.3-1+deb9u7" to "1.18.0-6.1". I'm also running ispconfig on this > machine, which modifies configuration. But therefore, I try to post complete > configurations at this point in time. > > Shortly after the upgrade nginx became unresponsive. After restarting the > service, it works again, then it takes some days until it's unresponsive > again. > > In the error.log I only see these lines, but many of them: > 2022/06/08 23:45:01 [alert] 592#592: 768 worker_connections are not enough > > Now, then running: lsof | egrep '^nginx .* sock' > I get a long list (well, 760+x or so) of these: > nginx 592 www-data 3u sock 0,8 > 0t0 69062 protocol: TCP > > Thus, it seems that nginx still has a lot of open connections which prevent > new requests. Note that this is NOT a high traffic site. It's the very > opposite in fact. > > I enabled debug log and tried to figure out when a connection was left, and > it seems to be this: [...] > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http special response: 497, "/cgi-bin/ViewLog.asp?" 
> 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script copy: "https://" > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script var: "192.168.0.14" > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script copy: ":8080" > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http script var: "/cgi-bin/ViewLog.asp" > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http set discard body > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http read discarded body > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 recv: eof:1, avail:0 > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 recv: fd:18 0 of 152 > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http set discard body > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http read discarded body > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 xslt filter header > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 HTTP/1.1 302 Moved Temporarily [...] > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 writev: 360 of 360 > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http write filter 0000000000000000 > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http copy filter: 0 "/cgi-bin/ViewLog.asp?" > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http finalize request: 0, "/cgi-bin/ViewLog.asp?" a:1, c:2 > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http request count:2 blk:0 > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http run request: "/cgi-bin/ViewLog.asp?" > 2022/06/10 00:05:26 [debug] 1548997#1548997: *2544 http reading blocked Interesting, thanks for reporting this. It looks like an issue specific to a particular Debian package. 
Debian seems to ship the nginx 1.18.0 package with a leftover patch that adds an ngx_http_discard_request_body() call to error_page handling with redirects: https://sources.debian.org/patches/nginx/1.18.0-6.1/CVE-2019-20372.patch/ And it looks like your configuration uses error_page with a redirect, so this affects your configuration. Due to this extra/leftover patch the ngx_http_discard_request_body() function is always called twice, since the same call is already present in nginx 1.18.0. Further, the first call detects a connection close by the client and returns, and the second one results in a socket leak (due to a shortcut call to ngx_http_close_request() in case of c->read->eof in ngx_http_finalize_request()). An obvious fix would be to remove the leftover patch in question from the package. Also it might be a good idea to switch to a package from nginx.org: nginx 1.18.0 became obsolete more than a year ago; the current stable version is nginx 1.22.0. See http://nginx.org/en/linux_packages.html for details. For the record, the relevant code should already be sufficiently resilient to such duplicate calls after 4a9d28f8f39e (http://hg.nginx.org/nginx/rev/4a9d28f8f39e), nginx 1.19.9, though it might make sense to re-visit ngx_http_discard_request_body() anyway. -- Maxim Dounin http://mdounin.ru/ From tamil at helptap.com Mon Jun 13 09:08:04 2022 From: tamil at helptap.com (Tamil Vendhan Kanagarasu) Date: Mon, 13 Jun 2022 14:38:04 +0530 Subject: nginx Digest, Vol 152, Issue 10 In-Reply-To: <165490566554.56534.3753592555353863385@ec2-18-197-214-38.eu-central-1.compute.amazonaws.com> References: <165490566554.56534.3753592555353863385@ec2-18-197-214-38.eu-central-1.compute.amazonaws.com> Message-ID: Hi Jore, Thanks for taking the time to look into this. I am not able to understand what you mean. If you are asking whether the IPv4/IPv6 addresses of the machines changed recently, then the answer is "no". The servers are deployed on AWS and have an elastic IP address assigned to them.
Another thing is, the issue disappeared for unknown reasons. While troubleshooting, I made many changes to the system to see if anything helped. Nothing had any effect; I could observe the same issue after those changes. I do not know what went wrong & what went right later. I only know that I had to spend hours attempting to solve the problem :-) Anyway, thanks again! On Sat, Jun 11, 2022 at 5:31 AM wrote: > Send nginx mailing list submissions to > nginx at nginx.org > > To subscribe or unsubscribe via email, send a message with subject or > body 'help' to > nginx-request at nginx.org > > You can reach the person managing the list at > nginx-owner at nginx.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of nginx digest..." Today's Topics: > > 1. Nginx response times got increased for unknown reasons > (Tamil Vendhan Kanagarasu) > 2. Re: Nginx response times got increased for unknown reasons (Jore) > > [...] > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jun 13 21:16:05 2022 From: nginx-forum at forum.nginx.org (minderdl) Date: Mon, 13 Jun 2022 17:16:05 -0400 Subject: nginx unresponsive after a while In-Reply-To: References: Message-ID: <0d3008b455e27b1b05fb9ad7a317eb33.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, thanks for your help - I really appreciate it! I have this line in my cfg: error_page 497 https://$host:8080$request_uri; As I said, I'm using ispconfig, and this is part of its configuration for the ispconfig web admin interface. See https://git.ispconfig.org/ispconfig/ispconfig3/-/blob/develop/install/tpl/nginx_ispconfig.vhost.master So, you're saying the 497 redirect is the problem, i.e. removing the error_page directive or returning a static error page would solve it? I will try that as a temporary workaround. Apart from this quick fix, I probably need to file a bug report for Debian. I don't want to manually update a single package, but rather rely on the packages in Debian's stable branch.
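For reference, a minimal sketch of what the static-error-page variant could look like (hypothetical; the listen parameters, response code, and paths are illustrative and would have to match the actual ispconfig vhost):

```nginx
# Hypothetical sketch: serve a static page for 497 (plain HTTP sent to an
# HTTPS port) instead of issuing the redirect affected by the Debian patch.
server {
    listen 8080 ssl;
    server_name _;

    # Problematic variant with the patched Debian 1.18.0 package:
    # error_page 497 https://$host:8080$request_uri;

    # Static variant: return 400 with a local page, no redirect involved.
    error_page 497 =400 /497.html;

    location = /497.html {
        root /var/www/error-pages;  # illustrative path
        internal;
    }
}
```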
Thanks, Daniel Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294468,294475#msg-294475 From nginx-forum at forum.nginx.org Mon Jun 13 23:57:26 2022 From: nginx-forum at forum.nginx.org (liwuliu) Date: Mon, 13 Jun 2022 19:57:26 -0400 Subject: Nginx KTLS hardware offloading not working Message-ID: <2049a21c8f95ebe2d84d0f8e065361b8.NginxMailingListEnglish@forum.nginx.org> Hi Team, I used Nginx to do 443:443 reverse proxying with Mellanox ConnectX-6 DX networking cards. I can make KTLS work for Nginx, but cannot see KTLS offloading (inline TLS @ MLX6) working. Please help me figure out what I missed. Many thanks, Liwu ----------------- To utilize OpenSSL 3.0 and Nginx 1.21.1, I followed these instructions: https://www.nginx.com/blog/improving-nginx-performance-with-kernel-tls/ To enable MLX6 inline TLS I followed these instructions: https://docs.nvidia.com/networking/display/OFEDv521040/Kernel+Transport+Layer+Security+(kTLS)+Offloads Here is further system information: root at r57-8814:/boot# nginx -V nginx version: nginx/1.21.4 built by gcc 11.2.0 (Ubuntu 11.2.0-19ubuntu1) built with OpenSSL 3.0.0 7 sep 2021 TLS SNI support enabled configure arguments: --with-debug --prefix=/usr/local --conf-path=/usr/local/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module
--with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-openssl=../openssl-3.0.0 --with-openssl-opt=enable-ktls --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' root at r57-8814:~# uname -a Linux r57-8814 5.15.0-37-generic #39-Ubuntu SMP Wed Jun 1 19:16:45 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux root at r57-8814:~# ethtool -k enp202s0f0np0 |grep tls tls-hw-tx-offload: on tls-hw-rx-offload: on tls-hw-record: off [fixed] root at r57-8814:~# ethtool -k enp202s0f1np1 |grep tls tls-hw-tx-offload: on tls-hw-rx-offload: on tls-hw-record: off [fixed] root at r57-8814:~# lsmod |grep tls tls 106496 77 mlx5_core root at r57-8814:/boot# grep TLS config-5.15.0-37-generic CONFIG_TLS=m CONFIG_TLS_DEVICE=y # CONFIG_TLS_TOE is not set CONFIG_CHELSIO_TLS_DEVICE=m CONFIG_MLX5_FPGA_TLS=y CONFIG_MLX5_TLS=y CONFIG_MLX5_EN_TLS=y CONFIG_FB_TFT_TLS8204=m root at r57-8814:/usr/local/etc/nginx# cat nginx.conf #user nobody; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; upstream backend { server 1.1.2.2:443; server 1.1.2.3:443; server 1.1.2.4:443; server 1.1.2.5:443; server 1.1.2.6:443; server 1.1.2.7:443; server 1.1.2.8:443; server 1.1.2.9:443; server 1.1.2.10:443; } server { listen 443 ssl; ssl_certificate 
/usr/local/etc/nginx/cert.crt; ssl_certificate_key /usr/local/etc/nginx/cert.key; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_conf_command Options KTLS; ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_prefer_server_ciphers on; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Fix the “It appears that your reverse proxy set up is broken" error. proxy_pass https://backend; proxy_ssl_certificate /usr/local/etc/nginx/cert.crt; proxy_ssl_certificate_key /usr/local/etc/nginx/cert.key; proxy_ssl_trusted_certificate /usr/local/etc/nginx/cert.crt; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; } } Though the following stats suggest the inline-TLS is not triggered. 
root at r57-8814:/boot# ethtool -S enp202s0f1np1 |grep tls tx_tls_encrypted_packets: 0 tx_tls_encrypted_bytes: 0 tx_tls_ooo: 0 tx_tls_dump_packets: 0 tx_tls_dump_bytes: 0 tx_tls_resync_bytes: 0 tx_tls_skip_no_sync_data: 0 tx_tls_drop_no_sync_data: 0 tx_tls_drop_bypass_req: 0 rx_tls_decrypted_packets: 0 rx_tls_decrypted_bytes: 0 rx_tls_resync_req_pkt: 0 rx_tls_resync_req_start: 0 rx_tls_resync_req_end: 0 rx_tls_resync_req_skip: 0 rx_tls_resync_res_ok: 0 rx_tls_resync_res_retry: 0 rx_tls_resync_res_skip: 0 rx_tls_err: 0 tx_tls_ctx: 0 tx_tls_del: 0 rx_tls_ctx: 0 rx_tls_del: 0 root at r57-8814:/boot# ethtool -S enp202s0f0np0 |grep tls tx_tls_encrypted_packets: 0 tx_tls_encrypted_bytes: 0 tx_tls_ooo: 0 tx_tls_dump_packets: 0 tx_tls_dump_bytes: 0 tx_tls_resync_bytes: 0 tx_tls_skip_no_sync_data: 0 tx_tls_drop_no_sync_data: 0 tx_tls_drop_bypass_req: 0 rx_tls_decrypted_packets: 0 rx_tls_decrypted_bytes: 0 rx_tls_resync_req_pkt: 0 rx_tls_resync_req_start: 0 rx_tls_resync_req_end: 0 rx_tls_resync_req_skip: 0 rx_tls_resync_res_ok: 0 rx_tls_resync_res_retry: 0 rx_tls_resync_res_skip: 0 rx_tls_err: 0 tx_tls_ctx: 0 tx_tls_del: 0 rx_tls_ctx: 0 rx_tls_del: 0 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294477,294477#msg-294477 From osa at freebsd.org.ru Tue Jun 14 01:29:15 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 14 Jun 2022 04:29:15 +0300 Subject: Nginx KTLS hardware offloading not working In-Reply-To: <2049a21c8f95ebe2d84d0f8e065361b8.NginxMailingListEnglish@forum.nginx.org> References: <2049a21c8f95ebe2d84d0f8e065361b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi there, hope you're doing well. On Mon, Jun 13, 2022 at 07:57:26PM -0400, liwuliu wrote: > Hi Team, [...] > Here are further system information: > > root at r57-8814:/boot# nginx -V > nginx version: nginx/1.21.4 This is a bit unclear: nginx version here is 1.21.4, but earlier you've reported about 1.21.1. Could you confirm what version is in use. 
I'd recommend to use the recent stable version 1.22.0, so please upgrade. > built by gcc 11.2.0 (Ubuntu 11.2.0-19ubuntu1) > built with OpenSSL 3.0.0 7 sep 2021 > TLS SNI support enabled > configure arguments: --with-debug --prefix=/usr/local > --conf-path=/usr/local/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-compat --with-file-aio --with-threads --with-http_addition_module > --with-http_auth_request_module --with-http_dav_module > --with-http_flv_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_mp4_module > --with-http_random_index_module --with-http_realip_module > --with-http_secure_link_module --with-http_slice_module > --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module > --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream > --with-stream_realip_module --with-stream_ssl_module > --with-stream_ssl_preread_module --with-openssl=../openssl-3.0.0 > --with-openssl-opt=enable-ktls --with-cc-opt='-g -O2 > -fstack-protector-strong -Wformat -Werror=format-security > -Wp,-D_FORTIFY_SOURCE=2 -fPIC' > > > root at r57-8814:/usr/local/etc/nginx# cat nginx.conf [...] 
> server { > listen 443 ssl; > ssl_certificate /usr/local/etc/nginx/cert.crt; > ssl_certificate_key /usr/local/etc/nginx/cert.key; > ssl_session_cache builtin:1000 shared:SSL:10m; > ssl_conf_command Options KTLS; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; > ssl_ciphers > HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; Could you provide the output of the following command: % openssl-3.0.0/.openssl/bin/openssl ciphers to verify which TLS ciphers are supported by OpenSSL. > ssl_prefer_server_ciphers on; > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > # Fix the "It appears that your reverse proxy set up is > broken" error. > proxy_pass https://backend; In the blog post [1], the root location in the NGINX configuration looks like the following: location / { root /data; } So, that works for static content. Could you try that and confirm it works for you? Thank you. References: 1. https://www.nginx.com/blog/improving-nginx-performance-with-kernel-tls/ -- Sergey A. Osokin From osa at freebsd.org.ru Tue Jun 14 02:04:19 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 14 Jun 2022 05:04:19 +0300 Subject: Migrating from PHP-FPM to Nginx Unit: worth it? In-Reply-To: References: Message-ID: Hi, hope you're doing well. On Tue, May 24, 2022 at 12:20:12PM -0400, petecooper wrote: > I run a fleet of small- to medium-scale web apps on PHP, and I'm comfortable > compiling Nginx + PHP to optimise for my needs. Until now, I've used > PHP-FPM exclusively. I have read about performance improvements with Nginx > Unit as far as PHP is concerned. This interests me, and I have time > available to learn. > > My question - for anyone who's gone from PHP-FPM to Unit…was it worth it? > What advice would you give?
You can definitely try using NGINX Unit, which can help in many cases. One of the main benefits is that you can run several different versions of PHP (in case that's supported by your favorite UNIX/Linux distribution), so you can easily switch from a previous version to a new one. For NGINX Unit-specific questions I'd recommend subscribing to the unit at nginx dot org mailing list. Thank you. -- Sergey A. Osokin From osa at freebsd.org.ru Tue Jun 14 02:10:45 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 14 Jun 2022 05:10:45 +0300 Subject: How is buffer size and buffer number determined in the HTTP response chain? In-Reply-To: References: Message-ID: Hi, hope you're doing well. On Sun, Jun 05, 2022 at 11:06:53PM -0400, hanzhai wrote: > I saw 4k, 16k, and 32k buffer sizes in the response chain, why not keep all > buffers in the same size? It's because of memory usage optimization [on a specific architecture] and the assumptions the developers made when they wrote the code. Please let me know if you have any questions. Thank you. -- Sergey A. Osokin From osa at freebsd.org.ru Tue Jun 14 02:20:53 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 14 Jun 2022 05:20:53 +0300 Subject: compiling nchan from source In-Reply-To: References: <2b5ba63f-5c61-03bf-7f95-f306506db112@gmail.com> Message-ID: Hi Ian, hope you're doing well. On Sat, May 21, 2022 at 11:13:07AM +0700, Ian Hobson wrote: > On 21/05/2022 10:37, Sergey A. Osokin wrote: > > On Sat, May 21, 2022 at 09:53:05AM +0700, Ian Hobson wrote: > >> > >> I compile nginx from source. When I use nchan-1.2.12 everything compiles > >> clean. > >> However I tried to upgrade to nchan-1.2.15 and I get a compilation error. > >> Google told me the same error was reported back in February. > > > > nchan-1.2.15 builds well as a part of the FreeBSD www/nginx-devel port, > > and that's what I'd recommend using.
> > > Changing the production O/S would be a lot of work, so it's possible but > not attractive. > > I wonder if the problem is some conditional compilation that has been > corrected for FreeBSD and not for Debian/Ubuntu? No additional patches. > > [...] > > > >> Could it be the version of gcc OR is it conflicting with openssl3.0.3, > >> pcre-8.45, or zlib-1.2.12? On FreeBSD 13.1: clang13, openssl 1.1.1o, pcre 8.45 and zlib from the base system. > > The issue is probably related to OpenSSL version 3, so if possible > > I'd recommend avoiding that version at the moment. > Tried compiling with openssl-1.1.1n and got the same errors. OpenSSL 1.1.1o-freebsd from 3 May 2022 is here and everything works just fine. The vendor recently released version 1.3.0 [1], so you can try that one and report how it goes. References: 1. https://github.com/slact/nchan/releases/tag/v1.3.0 -- Sergey A. Osokin From nginx-forum at forum.nginx.org Tue Jun 14 15:51:39 2022 From: nginx-forum at forum.nginx.org (liwuliu) Date: Tue, 14 Jun 2022 11:51:39 -0400 Subject: Nginx KTLS hardware offloading not working In-Reply-To: References: Message-ID: <736e98716c57d58f02d870a704eba30b.NginxMailingListEnglish@forum.nginx.org> Hi Dear Sergey, Many thanks for your kind reply. I attached further testing; it seems I still cannot get inline TLS on the NIC when doing plain HTTPS access as you suggested (previously I was testing 443:443 reverse proxy). I will try the latest Nginx and OpenSSL. In the meantime, if you have any hints/advice please help.
BR, Liwu ---- qa at r57-8814:~/ktls$ openssl-3.0.0/.openssl/bin/openssl ciphers TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-25 6-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA qa at r57-8814:~/ktls$ cat /usr/local/etc/nginx/nginx.conf #user nobody; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" 
"$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # HTTPS server server { listen 443 ssl; server_name localhost; ssl_certificate /usr/local/etc/nginx/cert.crt; ssl_certificate_key /usr/local/etc/nginx/cert.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; ssl_conf_command Options KTLS; ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_prefer_server_ciphers on; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; location / { root html; index index.html index.htm; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294477,294484#msg-294484 From nginx-forum at forum.nginx.org Tue Jun 14 17:28:51 2022 From: nginx-forum at forum.nginx.org (liwuliu) Date: Tue, 14 Jun 2022 13:28:51 -0400 Subject: Nginx KTLS hardware offloading not working In-Reply-To: <2049a21c8f95ebe2d84d0f8e065361b8.NginxMailingListEnglish@forum.nginx.org> References: <2049a21c8f95ebe2d84d0f8e065361b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8ae30048cbba562996b22b5dec4f6119.NginxMailingListEnglish@forum.nginx.org> Oh...works when use Nginx 1.22 and OpenSSL 3.1.0. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294477,294485#msg-294485 From colin.r.daniels at gmail.com Wed Jun 15 04:11:32 2022 From: colin.r.daniels at gmail.com (Colin) Date: Tue, 14 Jun 2022 21:11:32 -0700 Subject: Behavior of nginx on Allocation Failure Message-ID: Hi All, I'm interested in how nginx performs when close to/at memory limits, but haven't been able to find answers to a couple of things. Mainly: Does nginx usually treat allocation failures as recoverable, or fatal? I know ngx_alloc fires an emergency-level error on failure, but I'm curious about how nginx usually behaves afterwards. More generally, does nginx make any guarantees about behavior when allocations fail? 
I've looked at the memory management API but if I've missed any other documentation that answers these questions, feel free to just link me to it. Thanks, Colin -------------- next part -------------- An HTML attachment was scrubbed... URL: From duluxoz at gmail.com Wed Jun 15 07:37:23 2022 From: duluxoz at gmail.com (duluxoz) Date: Wed, 15 Jun 2022 17:37:23 +1000 Subject: NginX Local Mirror Repo Message-ID: <9a7f6516-503d-b2ef-51fa-aea4ac6b4c8e@gmail.com> Hi All, Sorry if this has already been answered somewhere - I had a look, but couldn't seem to find any (relevant) information anywhere. If this has been answered somewhere could someone please point me in the right direction - thanks. My question: I'm trying to set up an rsync'd local mirror of the NginX repo. Is there a public mirror (in Australia) I should be using, and also what is the correct rsync command string? I'm assuming it will be something like: rsync -avzumH --delete --no-motd rsync://mirror.some-server.org/pup/nginx /my-repos/nginx/ Thanks in advance Dulux-Oz From osa at freebsd.org.ru Wed Jun 15 16:03:28 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 15 Jun 2022 19:03:28 +0300 Subject: Nginx KTLS hardware offloading not working In-Reply-To: <8ae30048cbba562996b22b5dec4f6119.NginxMailingListEnglish@forum.nginx.org> References: <2049a21c8f95ebe2d84d0f8e065361b8.NginxMailingListEnglish@forum.nginx.org> <8ae30048cbba562996b22b5dec4f6119.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Jun 14, 2022 at 01:28:51PM -0400, liwuliu wrote: > Oh...works when use Nginx 1.22 and OpenSSL 3.1.0. Thanks for the update. I'm very glad to hear it works as expected. -- Sergey A. Osokin From mdounin at mdounin.ru Thu Jun 16 00:29:06 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Jun 2022 03:29:06 +0300 Subject: Behavior of nginx on Allocation Failure In-Reply-To: References: Message-ID: Hello!
On Tue, Jun 14, 2022 at 09:11:32PM -0700, Colin wrote: > I'm interested in how nginx performs when close to/at memory limits, but > haven't been able to find answers to a couple of things. Mainly: Does nginx > usually treat allocation failures as recoverable, or fatal? I know > ngx_alloc fires an emergency-level error on failure, but I'm curious about > how nginx usually behaves afterwards. > > More generally, does nginx make any guarantees about behavior when > allocations fail? I've looked at the memory management API but if I've > missed any other documentation that answers these questions, feel free to > just link me to it. In nginx, allocation errors are properly handled according to the particular place where the error happens. For example, if an error happens during nginx startup - it is not possible to do anything, so nginx will simply exit with an error. If an error happens while handling a request, nginx will respond with error 500 (Internal Server Error) if possible and close the connection. That is, as long as nginx is started, it is expected to work even if some memory allocations fail (though will drop some requests / connections). Note though that proper handling might not apply to 3rd party libraries being used. For example, Perl calls abort() on allocation failures, so one certainly shouldn't rely on nginx with embedded Perl being used in memory-constrained environments. -- Maxim Dounin http://mdounin.ru/ From jay at gooby.org Thu Jun 16 19:09:25 2022 From: jay at gooby.org (Jay Caines-Gooby) Date: Thu, 16 Jun 2022 20:09:25 +0100 Subject: Updated my build-nginx script to support PCRE2 Message-ID: You can choose PCRE or PCRE2 and build-nginx will do the right thing https://jay.gooby.org/2022/06/16/support-pcre2-or-pcre-in-build-nginx https://github.com/jaygooby/build-nginx -- Jay Caines-Gooby http://jay.gooby.org jay at gooby.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From venefax at gmail.com Sat Jun 18 10:53:43 2022 From: venefax at gmail.com (Saint Michael) Date: Sat, 18 Jun 2022 12:53:43 +0200 Subject: Proxy problem is killing me Message-ID: I am writing code to proxy a news website and it works but the top line shows the original site, not my own site. the code is attached: The idea is that the person who needs to go to https://novosti.dn.ua goes instead to https://novosti.oneye.us What am I doing wrong? many thanks Philip -------------- next part -------------- server { default_type application/octet-stream; set $template_root /usr/local/openresty/nginx/html/templates; listen 8.19.245.6:443 ssl; error_log logs/error.log warn; access_log logs/access.log; server_name novosti.oneye.us; ssl_certificate /etc/letsencrypt/live/novosti.oneye.us/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/novosti.oneye.us/privkey.pem; location / { proxy_cookie_domain https://novosti.dn.ua/ https://novosti.oneye.us; resolver 8.8.8.8 ipv6=off; proxy_set_header Accept-Encoding ""; proxy_buffering on; proxy_set_header User-Agent $http_user_agent; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_ssl_server_name on; proxy_http_version 1.1; proxy_set_header Accept-Encoding ""; proxy_set_header User-Agent $http_user_agent; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host novosti.dn.ua; proxy_pass https://novosti.dn.ua; proxy_redirect https://novosti.dn.ua https://novosti.oneye.us; subs_filter_types text/css text/javascript application/javascript; subs_filter "https://novosti.dn.ua" "https://novosti.oneye.us" gi; subs_filter "https://novosti.dn.ua" "https://novosti.oneye.us" gi; subs_filter "novosti.dn.ua" "novosti.oneye.us" gi; } } From mwachasu at cisco.com Mon Jun 20 13:37:48 2022 From: mwachasu at cisco.com (Mithilesh Wachasunder (mwachasu)) Date: Mon, 20 Jun 2022 13:37:48 +0000 Subject: Support for 
nginx-1.14.2 Message-ID: Hello team Had a question, is nginx-1.14.2 declared as EOL (End of Life)? Thank you Mithilesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From teward at thomas-ward.net Mon Jun 20 14:07:36 2022 From: teward at thomas-ward.net (Thomas Ward) Date: Mon, 20 Jun 2022 14:07:36 +0000 Subject: Support for nginx-1.14.2 In-Reply-To: References: Message-ID: It is my understanding that a stable release is considered unsupported when a new stable release is cut from the mainline branch, which happens yearly. I would also surmise that 1.14, which is *years* old at this point, is beyond its lifespan. Sent from my Galaxy -------- Original message -------- From: "Mithilesh Wachasunder (mwachasu) via nginx" Date: 6/20/22 09:38 (GMT-05:00) To: nginx at nginx.org Cc: "Mithilesh Wachasunder (mwachasu)" Subject: Support for nginx-1.14.2 Hello team Had a question, is nginx-1.14.2 declared as EOL(End of Life)? Thank you Mithilesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Mon Jun 20 17:46:46 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Mon, 20 Jun 2022 20:46:46 +0300 Subject: Support for nginx-1.14.2 In-Reply-To: References: Message-ID: Hi, hope you're doing well. On Mon, Jun 20, 2022 at 01:37:48PM +0000, Mithilesh Wachasunder (mwachasu) via nginx wrote: > > Had a question, is nginx-1.14.2 declared as EOL(End of Life)? Well, whether it is still supported is debatable. Is there any specific use case that requires such a legacy version? Thank you. -- Sergey A.
Osokin From nginx-forum at forum.nginx.org Mon Jun 20 18:57:11 2022 From: nginx-forum at forum.nginx.org (alireza) Date: Mon, 20 Jun 2022 14:57:11 -0400 Subject: access to "sent_bytes_length" and send it to API when the user cancel downloading Message-ID: <5089e1b5fc15e535e846afd0ed3a660a.NginxMailingListEnglish@forum.nginx.org> Hello my friends I want to access "sent_bytes_length" of file which has been downloaded incompletely by user and send it to http://locahost:8080 My API is written by nodejs I tried ngx.fetch and r.subrequest to handle it inside "custom access log function" by NJS but it does not allow me to use Promise function Can you help me, please? Thank you nginx.config --------------------------------------------------------------------------- js_import conf.d/download_log.js; js_set $json_download_log download_log.downloadLog; log_format download escape=none $json_download_log; map $request_uri $loggable { ~/download/(.*) 1; default 0; } server { . . . location /downloadlog { proxy_pass http://localhost:10000; } location / { access_log /var/log/nginx/download.log download if=$loggable; proxy_pass http://localhost:10000; } } download_log.js -------------------------------------------------------------------- export default { downloadLog }; function downloadLog(r) { var connection = { "serial": Number(r.variables.connection), "request_count": Number(r.variables.connection_requests), "elapsed_time": Number(r.variables.request_time) } if (r.variables.pipe == "p") { connection.pipelined = true; } else { connection.pipelined = false; } if ( r.variables.ssl_protocol !== undefined ) { connection.ssl = sslInfo(r); } var request = { "client": r.variables.remote_addr, "port": Number(r.variables.server_port), "host": r.variables.host, "method": r.method, "uri": r.uri, "http_version": Number(r.httpVersion), "bytes_received": Number(r.variables.request_length) }; request.headers = {}; for (var h in r.headersIn) { request.headers[h] = r.headersIn[h]; } var 
upstreams = []; if ( r.variables.upstream_status !== undefined ) { upstreams = upstreamArray(r); } var response = { "status": Number(r.variables.status), "bytes_sent": Number(r.variables.bytes_sent), } response.headers = {}; for (var h in r.headersOut) { response.headers[h] = r.headersOut[h]; } let LOG = JSON.stringify({ "timestamp": r.variables.time_iso8601, "connection": connection, "request": request, "upstreams": upstreams, "response": response }); /******************************************************/ I want to send this LOG to http://localhost:8080 /******************************************************/ return LOG; } --------------------------------------------------------------------------------------------- Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294526,294526#msg-294526 From nginx-forum at forum.nginx.org Mon Jun 20 21:23:23 2022 From: nginx-forum at forum.nginx.org (_lukman_) Date: Mon, 20 Jun 2022 17:23:23 -0400 Subject: nginx redirects all requests to root Message-ID: Hello Everyone. My first post as I am new to this forum, as well as nginx. I have an EC2 instance where I host my files. I have a landing page placed in the root and an admin area placed in an /app folder. However, all requests somehow redirect to the root. My script is below. I don't know what I am doing wrong.
server { listen 443 default_server ssl; listen [::]:443 ssl http2; server_name dummysite.io www.dummysite.io; ssl_certificate /etc/letsencrypt/live/dummysite.io/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/dummysite.io/privkey.pem; # managed by Certbot location / { root /home/ubuntu/dummysite-ui/_work/dummysiteWebV1/dummysiteWebV1/build/web/; index index.html index.php; # try_files $uri /index.html index.php; } error_log /var/log/nginx/dummysite.log; access_log /var/log/nginx/dummysite.log; } server { listen 80; server_name dummysite.io www.dummysite.io; return 301 https://dummysite.io$request_uri; charset utf-8; root /home/ubuntu/dummysite-ui/_work/dummysiteWebV1/dummysiteWebV1/build/web/; index index.html index.htm index.php; # Always serve index.html for any request location / { root /home/ubuntu/dummysite-ui/_work/dummysiteWebV1/dummysiteWebV1/build/web/; # try_files $uri = index.html index.php; } return 301 https://dummysite.io$request_uri; # managed by Certbot error_log /var/log/nginx/vue-app-error.log; access_log /var/log/nginx/vue-app-access.log; } thank you in advance Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294531,294531#msg-294531 From lists at lazygranch.com Tue Jun 21 00:56:45 2022 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 20 Jun 2022 17:56:45 -0700 Subject: nginx redirects all requests to root In-Reply-To: References: Message-ID: <20220620175645.3c56a710.lists@lazygranch.com> On Mon, 20 Jun 2022 17:23:23 -0400 "_lukman_" wrote: > server > { > listen 443 default_server ssl; > listen [::]:443 ssl http2; > server_name dummysite.io www.dummysite.io; > ssl_certificate /etc/letsencrypt/live/dummysite.io/fullchain.pem; # > managed by Certbot > ssl_certificate_key > /etc/letsencrypt/live/dummysite.io/privkey.pem; # managed by Certbot > location / > { > root > /home/ubuntu/dummysite-ui/_work/dummysiteWebV1/dummysiteWebV1/build/web/; > index index.html index.php; > # try_files $uri 
/index.html index.php; > } The mail wrapping makes this kind of confusing. From my own conf file the root line is not within location. I believe this is what you want: server { listen 443 default_server ssl; listen [::]:443 ssl http2; server_name dummysite.io www.dummysite.io; ssl_certificate /etc/letsencrypt/live/dummysite.io/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/dummysite.io/privkey.pem; # managed by Certbot root /home/ubuntu/dummysite-ui/_work/dummysiteWebV1/dummysiteWebV1/build/web/; location / { index index.html index.php; # try_files $uri /index.html index.php; } I put my webroot in /usr/share/nginx/html/website1 I have this line after the server_name: ssl_dhparam /etc/ssl/certs/dhparam.pem; Hopefully this works. If not wait for the gurus. From nginx-forum at forum.nginx.org Tue Jun 21 06:13:45 2022 From: nginx-forum at forum.nginx.org (alireza) Date: Tue, 21 Jun 2022 02:13:45 -0400 Subject: how to pass body_bytes_sent to reversed proxy when downloading has been done? Message-ID: <3c9c7d194099b69ec67f51bf8a194098.NginxMailingListEnglish@forum.nginx.org> Hello how to pass body_bytes_sent to reversed proxy when downloading has been done? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294537,294537#msg-294537 From nginx-forum at forum.nginx.org Tue Jun 21 07:59:25 2022 From: nginx-forum at forum.nginx.org (_lukman_) Date: Tue, 21 Jun 2022 03:59:25 -0400 Subject: nginx redirects all requests to root In-Reply-To: References: Message-ID: Wow. You are the guru. Thank you very much. I moved the root out of the location block and it now works perfect. Blessings. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294531,294538#msg-294538 From mdounin at mdounin.ru Tue Jun 21 17:03:05 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Jun 2022 20:03:05 +0300 Subject: nginx-1.23.0 Message-ID: Changes with nginx 1.23.0 21 Jun 2022 *) Change in internal API: now header lines are represented as linked lists. *) Change: now nginx combines arbitrary header lines with identical names when sending to FastCGI, SCGI, and uwsgi backends, in the $r->header_in() method of the ngx_http_perl_module, and during lookup of the "$http_...", "$sent_http_...", "$sent_trailer_...", "$upstream_http_...", and "$upstream_trailer_..." variables. *) Bugfix: if there were multiple "Vary" header lines in the backend response, nginx only used the last of them when caching. *) Bugfix: if there were multiple "WWW-Authenticate" header lines in the backend response and errors with code 401 were intercepted or the "auth_request" directive was used, nginx only sent the first of the header lines to the client. *) Change: the logging level of the "application data after close notify" SSL errors has been lowered from "crit" to "info". *) Bugfix: connections might hang if nginx was built on Linux 2.6.17 or newer, but was used on systems without EPOLLRDHUP support, notably with epoll emulation layers; the bug had appeared in 1.17.5. Thanks to Marcus Ball. *) Bugfix: nginx did not cache the response if the "Expires" response header line disabled caching, but following "Cache-Control" header line enabled caching. -- Maxim Dounin http://nginx.org/ From xeioex at nginx.com Tue Jun 21 19:00:49 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 21 Jun 2022 12:00:49 -0700 Subject: njs-0.7.5 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release focuses on stabilization of recently released features and fixing bugs found by various fuzzers. 
Learn more about njs: - Overview and introduction: https://nginx.org/en/docs/njs/ - NGINX JavaScript in Your Web Server Configuration: https://youtu.be/Jc_L6UffFOs - Extending NGINX with Custom Code: https://youtu.be/0CVhq4AUU7M - Using node modules with njs: https://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: https://nginx.org/en/docs/njs/typescript.html We are hiring: If you are a C programmer, passionate about Open Source and you love what we do, consider the following career opportunity: https://ffive.wd5.myworkdayjobs.com/NGINX/job/Ireland-Homebase/Software-Engineer-III---NGNIX-NJS_RP1022237 Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: https://mailman.nginx.org/mailman/listinfo/nginx-devel Additional examples and howtos can be found here: - Github: https://github.com/nginx/njs-examples Changes with njs 0.7.5 21 Jun 2022 nginx modules: *) Change: adapting to changes in nginx header structures. *) Bugfix: fixed r.headersOut special getters when value is absent. *) Change: returning undefined value instead of an empty string for Content-Type when the header is absent. Core: *) Bugfix: fixed catching of the exception thrown from an awaited function. *) Bugfix: fixed function value initialization. *) Bugfix: fixed interpreter when await fails. *) Bugfix: fixed typed-array constructor when source array is changed while iterating. *) Bugfix: fixed String.prototype.replace() with byte strings. *) Bugfix: fixed template literal from producing byte-strings. *) Bugfix: fixed array iterator with sparse arrays. *) Bugfix: fixed memory free while converting a flat array to a slow array. *) Bugfix: properly handling NJS_DECLINE in promise native functions. *) Bugfix: fixed working with an array-like object in Promise.all() and friends. 
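[Editor's note] Several of the core fixes in 0.7.5 concern async/await handling. As a plain-JavaScript illustration (runnable in any modern engine, not njs-specific; the function names are invented for the example), the "catching of the exception thrown from an awaited function" fix covers this standard pattern:

```javascript
// An async function that throws; the caller awaits it inside try/catch.
// njs 0.7.5 fixed cases where the catch block below could be skipped.
async function mayFail() {
    throw new Error("backend unavailable");
}

async function handler() {
    try {
        await mayFail();          // the rejection must surface here...
        return "ok";
    } catch (e) {
        return "caught: " + e.message;  // ...and be caught here
    }
}

handler().then(r => console.log(r));  // prints "caught: backend unavailable"
```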
From ismail783 at gmail.com Wed Jun 22 14:34:20 2022 From: ismail783 at gmail.com (Ahmad Ismail) Date: Wed, 22 Jun 2022 20:34:20 +0600 Subject: Can I serve CLI Applications using Nginx Message-ID: I want to create a CLI app (in this case named CLI_APP), that will output json and can be accessed via web. In Linux terms, it will look like: Request | Web_Server | CLI_APP | ADD_UI | Web_Server > Response Now, I will run the app like `CLI_APP --output json`. Here, I am saying that the CLI_APP will output json (for REST API). Here, `ADD_UI --output web` will add HTML, CSS, JS etc. to the JSON output. Can Nginx help me send the requests to CLI_APP via STDIN and serve the final output of ADD_UI --output web? Thanks and Best Regards, Ahmad Ismail -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyndon at orthanc.ca Wed Jun 22 17:24:40 2022 From: lyndon at orthanc.ca (Lyndon Nerenberg (VE7TFX/VE6BBM)) Date: Wed, 22 Jun 2022 10:24:40 -0700 Subject: Can I serve CLI Applications using Nginx In-Reply-To: References: Message-ID: Ahmad Ismail writes: > Can Nginx help me send the requests to CLI_APP via STDIN and serve the > final output of ADD_UI --output web? I think what you're looking for here is inetd. --lyndon From venefax at gmail.com Wed Jun 22 18:00:20 2022 From: venefax at gmail.com (Saint Michael) Date: Wed, 22 Jun 2022 21:00:20 +0300 Subject: Error downloading regex-tester Message-ID: git clone https://github.com/nginxinc/NGINX-Demos/tree/master/nginx-regex-tester Cloning into 'nginx-regex-tester'... fatal: repository 'https://github.com/nginxinc/NGINX-Demos/tree/master/nginx-regex-tester/' not found Any idea how can I install this software? 
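For the git question above: a GitHub `/tree/...` URL names a subdirectory, not a repository, so it cannot be cloned. The fix is to clone the repository itself (https://github.com/nginxinc/NGINX-Demos.git) and use the `nginx-regex-tester` subdirectory, optionally via a sparse checkout. A self-contained sketch, using a throwaway local repository in place of GitHub so it runs offline:

```shell
#!/bin/sh
set -e
# Build a throwaway "remote" repo with a subdirectory, standing in for
# https://github.com/nginxinc/NGINX-Demos.git and its nginx-regex-tester dir.
tmp=$(mktemp -d)
git init -q "$tmp/NGINX-Demos"
mkdir -p "$tmp/NGINX-Demos/nginx-regex-tester"
echo 'demo' > "$tmp/NGINX-Demos/nginx-regex-tester/README.md"
git -C "$tmp/NGINX-Demos" add -A
git -C "$tmp/NGINX-Demos" -c user.name=t -c user.email=t@example.com commit -qm init

# Clone the whole repository (cloning a /tree/... URL would fail here),
# then restrict the working tree to the one subdirectory we care about.
git clone -q --no-checkout "$tmp/NGINX-Demos" "$tmp/clone"
git -C "$tmp/clone" sparse-checkout set nginx-regex-tester
git -C "$tmp/clone" read-tree -mu HEAD
ls "$tmp/clone/nginx-regex-tester"
```

Against the real repository, the simplest form is just `git clone https://github.com/nginxinc/NGINX-Demos.git` followed by `cd NGINX-Demos/nginx-regex-tester`.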
From community at thoughtmaybe.com Thu Jun 23 00:45:21 2022 From: community at thoughtmaybe.com (Jore) Date: Thu, 23 Jun 2022 10:45:21 +1000 Subject: Error downloading regex-tester In-Reply-To: References: Message-ID: <85010b67-3104-c00a-ac19-1105ee312207@thoughtmaybe.com> Hi there, I think you're trying to do something like: git clone https://github.com/nginxinc/NGINX-Demos.git as `nginx-regex-tester` is a part of the `NGINX-Demos` repository. Maybe look at the Readme.md here? https://github.com/nginxinc/NGINX-Demos Beyond that, I'm not sure. Good luck Jore On 23/6/22 4:00 am, Saint Michael wrote: > git clonehttps://github.com/nginxinc/NGINX-Demos/tree/master/nginx-regex-tester > Cloning into 'nginx-regex-tester'... > fatal: repository > 'https://github.com/nginxinc/NGINX-Demos/tree/master/nginx-regex-tester/' > not found > > Any idea how can I install this software? > _______________________________________________ > nginx mailing list --nginx at nginx.org > To unsubscribe send an email tonginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From ismail783 at gmail.com Thu Jun 23 05:19:03 2022 From: ismail783 at gmail.com (Ahmad Ismail) Date: Thu, 23 Jun 2022 11:19:03 +0600 Subject: Can I serve CLI Applications using Nginx In-Reply-To: References: Message-ID: I have bumped into CGI (after asking the question here). However, I have some issues with CGI. For example, I have to add HEADERS maintaining CRLF etc in the output. However, I want the CLI app to be totally independent. I mean, I want to output regular text or json without any header. So, what I really want is: CLI_APP | ADD_UI | ADD_CGI_HEADER Where CLI_APP gives me pure json. ADD_UI adds HTML, CSS, JS on the json output. And ADD_CGI_HEADER adds the extra stuff that is needed to make the final response sendable via the server. Please note that when the user will send a request, it will have to go through the total pipeline. 
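In shell terms, the pipeline above can be sketched with trivial stand-in functions (the real CLI_APP, ADD_UI and ADD_CGI_HEADER would be separate binaries; the bodies here are only an illustration of the decoupling):

```shell
#!/bin/sh
# Stand-ins for the three independent stages; the later stages read
# stdin and write stdout, so no stage knows about the others.
CLI_APP()        { printf '{"msg":"hello"}'; }                         # emits pure JSON
ADD_UI()         { printf '<html><body>%s</body></html>' "$(cat)"; }   # wraps the JSON in HTML
ADD_CGI_HEADER() { printf 'Content-Type: text/html\r\n\r\n'; cat; }    # prepends the CGI header

# The final response is just the composed pipeline:
CLI_APP | ADD_UI | ADD_CGI_HEADER
```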
Also please note that, I can always call ADD_UI at the end of CLI_APP and call ADD_CGI_HEADER at the end of ADD_UI. But that way, I am not decoupling. And the later binaries will be dependent on the previous ones. This is not something I want. I want to *pipe the outputs to get the final response*. How can I do that? Do I need to extend nginx in any way (like creating any module or something like that)? Or is there already a solution i do not know about? *Thanks and Best Regards,Ahmad Ismail* On Wed, Jun 22, 2022 at 11:24 PM Lyndon Nerenberg (VE7TFX/VE6BBM) < lyndon at orthanc.ca> wrote: > Ahmad Ismail writes: > > > Can Nginx help me send the requests to CLI_APP via STDIN and serve the > > final output of ADD_UI --output web? > > I think what you're looking for here is inetd. > > --lyndon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From venefax at gmail.com Thu Jun 23 05:19:06 2022 From: venefax at gmail.com (Saint Michael) Date: Thu, 23 Jun 2022 08:19:06 +0300 Subject: Reverse proxy question Message-ID: I am proxying a website where there is a link like this: https://arvamus.postimees.ee/7550782/lauri-vahtre-tead-kull-kes-utles-n-tahega-sona-ja-siis-juhtus-ei-voi-oelda-mis#_ga=2.218428057.1478980589.1655961279-1635368780.1655961270 my new site is https://postimees.oneye.us I have these substitution rules: subs_filter "https://www.postimees.ee" "https://postimees.oneye.us" gir break; subs_filter "www.postimees.ee" "postimees.oneye.us" gir break; subs_filter "http://(.*).postimees.ee/(.*)" "http://postimees.oneye.us/$1/$2" gir break; but they don't process the link above. How do you exactly write the rules, or rewrite rules, in cases like this? From ismail783 at gmail.com Thu Jun 23 13:13:34 2022 From: ismail783 at gmail.com (Ahmad Ismail) Date: Thu, 23 Jun 2022 19:13:34 +0600 Subject: Can I serve CLI Applications using Nginx In-Reply-To: References: Message-ID: will it be a bad idea to extend nginx (ex. 
create a module) to serve my purpose instead of using inetd? Is it possible to make a module that will hand the HTTP request to `Command1 | Command2 | CommandN`, then build a response from the output (by adding the HTTP response headers to it) and then send the response back to the client? *Thanks and Best Regards, Ahmad Ismail* On Wed, Jun 22, 2022 at 11:24 PM Lyndon Nerenberg (VE7TFX/VE6BBM) < lyndon at orthanc.ca> wrote: > Ahmad Ismail writes: > > > Can Nginx help me send the requests to CLI_APP via STDIN and serve the > > final output of ADD_UI --output web? > > I think what you're looking for here is inetd. > > --lyndon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jprouty at jcius.com Thu Jun 23 17:01:35 2022 From: jprouty at jcius.com (Jason Prouty) Date: Thu, 23 Jun 2022 17:01:35 +0000 Subject: NGINX Proxy pass replace PNG file Message-ID: Is it possible to replace a PNG file with a local image? I want to reverse proxy to a site and then change the image returned. I have tried the sub_filter setting but nothing changes.

server {
    server_name test1.example1.com;

    location / {
        proxy_ssl_server_name on;
        proxy_pass https://test1.example2.com;
        sub_filter ' src="https://test1.example2.com/assets/images/image1.png' ' src="https://test1.example1.com/image1.png';
        sub_filter_once on;
    }

I just want to replace the image of the proxied https://test1.example2.com/assets/images/image1.png with a new image hosted locally on my nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From hobson42 at gmail.com Fri Jun 24 04:18:32 2022 From: hobson42 at gmail.com (Ian Hobson) Date: Fri, 24 Jun 2022 11:18:32 +0700 Subject: NGINX Proxy pass replace PNG file In-Reply-To: References: Message-ID: <658a9eeb-31d9-0d03-e2fa-d5bdd74d93a8@gmail.com> Hi, I would try this with a location block...
location /URL/of/Picture/tobereplaced.png { root /var/www/test1; try_files where/stored/image1.png; } location / { continue to proxy pass everything else Hope this helps Ian On 24/06/2022 00:01, Jason Prouty wrote: > Is it possible to replace an PNG file with a local image > I want to reverse proxy to site and then change the image returned > > I have tried the sub_filter setting but nothing changes. > > > server { >     server_name test1.example1.com; > > location /{ >     proxy_ssl_server_name on; >     proxy_pass   https://test1.example2.com; >     sub_filter ' src="https://test1.example2.com/assets/images/image1.png' ' src="https://test1.example1.com/image1.png'; >     sub_filter_once on; >     } > > I just want to replace the image of  the proxied > https://test1.example2.com > ; > https://test1.example2.com/assets/images/image1.png > > with > a new image hosted locally on my nginx > > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -- Ian Hobson Tel (+66) 626 544 695 From hobson42 at gmail.com Fri Jun 24 09:20:26 2022 From: hobson42 at gmail.com (Ian Hobson) Date: Fri, 24 Jun 2022 16:20:26 +0700 Subject: Letsencrypt certbot leads to ssl protocol error Message-ID: <9ea152d2-a63f-b096-82ef-4c515ef655de@gmail.com> Hi All, Two of my sites have suffered problems since I updated them to https, from http. In fact since the latest scheduled update by certbot. The home page of coachmaster.co.uk should be a log in screen. Brave shows me This site can’t provide a secure connection coachmaster.co.uk sent an invalid response. ERR_SSL_PROTOCOL_ERROR I think the protocol message it doesn't like is Upgrade-Insecure-Requests: 1 Edge is really informative: The connection for this site is not secure coachmaster.co.uk sent an invalid response. Try running Windows Network Diagnostics. 
ERR_SSL_PROTOCOL_ERROR

Browser: Brave Version 1.40.105 Chromium: 103.0.5060.53 (Official Build) (64-bit). All others I've tried also fail. nginx version 1.21.6, OpenSSL version 1.1.1n, special compile. certbot applies a configuration of:

ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";

This last is all one line. The server block(s) for the site are (with a lot of comments removed to save space):

------------- file begins ----------------
# redirect from http at bottom of file
server {
    server_name coachmaster.co.uk www.coachmaster.co.uk;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Xss-Protection "1; mode=block" always;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/coachmaster.co.uk-0001/fullchain.pem;
    limit_req zone=ip burst=12 delay=8;
    location ^~ /Avatars {
        limit_req zone=fp burst=70 nodelay;
    }
    root /var/www/coachmaster.co.uk/htsecure;
    access_log /var/log/nginx/coachmaster.co.uk.access.log;
    # error_log /var/log/nginx/error.log; set in nginx.conf
    index index.php;
    location = /Coachmaster.html {
        rewrite ^(.*) http://thecoachmasternetwork.com/software/;
    }
    location = / {
        rewrite ^ /index.php last;
    }
    location /easyrtc {
        proxy_pass http://localhost:5006;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location /socket.io {
        proxy_pass http://localhost:5006;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    # serve php files via fastcgi if the file exists
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param CENTRAL_ROOT $document_root;
        fastcgi_param RESELLER_ROOT $document_root;
        fastcgi_param ENVIRONMENT production;
        fastcgi_param HTTPS ON;
        include /etc/nginx/fastcgi.conf;
        fastcgi_pass 127.0.0.1:9000;
    }
    # serve static files
    try_files $uri $uri/ /index.php;
    expires 30m;
    location /publish {
        nchan_publisher;
        nchan_channel_id $arg_id;
        nchan_channel_id $arg_id;
        nchan_message_buffer_length 10;
        nchan_message_timeout 90s;
    }
    location /activity {
        nchan_subscriber;
        nchan_channel_id $arg_id;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/coachmaster.co.uk-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/coachmaster.co.uk-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.coachmaster.co.uk) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = coachmaster.co.uk) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name coachmaster.co.uk www.coachmaster.co.uk;
    listen 80;
    return 404; # managed by Certbot
}
------------ end of file -----------

According to UptimeRobot the site is up. The service at https://www.ssllabs.com/ssltest/analyze.html?d=coachmaster.co.uk gives no obvious errors, except that it shows the TLS 1.2 protocol NOT enabled. I'm way out of my depth now. Can anyone suggest something that does not weaken the security?
Regards Ian -- Ian Hobson Tel (+66) 626 544 695 From drodriguez at unau.edu.ar Fri Jun 24 19:23:54 2022 From: drodriguez at unau.edu.ar (Daniel Armando Rodriguez) Date: Fri, 24 Jun 2022 16:23:54 -0300 Subject: Reverse proxy to traefik In-Reply-To: References: Message-ID: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> Hi there I need to forward an HTTP/HTTPS stream to a Traefik instance inside a Docker container. Additionally, this Traefik is also the SSL termination, and that is just the point where I am stuck, as the SSL management against Let's Encrypt needs both HTTP and HTTPS traffic. I would appreciate any further guidance in this regard. By the way, it's not a choice we made, just kind of a black box we need to deal with. I made this representation to illustrate the situation: https://i.postimg.cc/Zq1Ndyws/scheme.png Thanks in advance. ________________________________________________ Daniel A. Rodriguez _Informática, Conectividad y Sistemas_ Universidad Nacional del Alto Uruguay San Vicente - Misiones - Argentina informatica.unau.edu.ar From nginx-forum at forum.nginx.org Mon Jun 27 04:47:04 2022 From: nginx-forum at forum.nginx.org (mikecon) Date: Mon, 27 Jun 2022 00:47:04 -0400 Subject: Is it possible to configure socket connection(not web) in Nginx reverse proxy server Message-ID: <6c8a9f44fa62b15aa5784ee807d16da2.NginxMailingListEnglish@forum.nginx.org> Hi all, I have a CLI client and server written in Go. Currently, they are communicating via a socket connection, and it's a server-streaming connection. Now I want to have an Nginx proxy between these two. Is it possible to configure a normal socket connection in Nginx?
How do I do that, and what code and configuration changes do I need to make? There's not much on the internet about plain socket connections in Nginx, so I was wondering if it's possible or not.

// my client code:
func getStreammessages() {
    connection, err := net.Dial("tcp", "2.221.29.137:9988")
    _, err = connection.Write([]byte(sendIDtoServertoGetStream))
    for {
        mLen, err := connection.Read(buffer)
        // some logic to print the message stream
    }
}

// my server code:
func StartStreamServer() {
    server, err := net.Listen("tcp", "2.221.29.137:9988")
    defer server.Close()
    for {
        connection, err := server.Accept()
        go registerClient(connection)
    }
}

func registerClient(connection net.Conn) {
    buffer := make([]byte, 1024)
    mLen, err := connection.Read(buffer)
    var sendIDtoServertoGetStream message
    err = json.Unmarshal(buffer[:mLen], &sendIDtoServertoGetStream)
}

// stream to client from message queue
func StreamMessageToCliCLient(connection net.Conn) {
    _, err = connection.Write(messageString)
}

Has anyone done this before? Currently, I am doing this in my Nginx (nginx.conf file), which is running in the same VM as my server:

stream {
    server {
        auth_basic off;
        proxy_ssl off;
        listen 80;
        # TCP traffic will be forwarded to the proxy_pass
        # proxy_pass 3.111.69.167:9988;
        proxy_pass 127.0.0.1:8899;
    }
}

I want to open port 80 and internally proxy pass to my server. Currently I am getting a 400 status code when I do this, and it's not passing my request to my server. Can you please help. Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294594,294594#msg-294594 From osa at freebsd.org.ru Mon Jun 27 14:11:58 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Mon, 27 Jun 2022 17:11:58 +0300 Subject: Is it possible to configure socket connection(not web) in Nginx reverse proxy server In-Reply-To: <6c8a9f44fa62b15aa5784ee807d16da2.NginxMailingListEnglish@forum.nginx.org> References: <6c8a9f44fa62b15aa5784ee807d16da2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi there, hope you're doing well.
On Mon, Jun 27, 2022 at 12:47:04AM -0400, mikecon wrote:
>
> stream {
>     server {
>         auth_basic off;

The auth_basic directive is a part of the http_auth_basic module [1], so it's not related to the stream modules family.

>         proxy_ssl off;

The proxy_ssl directive is off by default [2] and can be safely removed.

>         listen 80;
>         # TCP traffic will be forwarded to the proxy_pass
>         # proxy_pass 3.111.69.167:9988;
>         proxy_pass 127.0.0.1:8899;
>     }
> }
>
> I want to open Port 80 and Internally proxy pass to my server,
> currently getting 400 status code, when I do this, and its not passing my
> request to my server

The other directives should work well. Here's a list of questions: 1. How did you test it? 2. Have you checked the nginx log files? 3. In the case of the 400 error, it seems a web server (not nginx) replied with that error; any chance to check the log files on that side? References: 1. https://nginx.org/en/docs/http/ngx_http_auth_basic_module.html#auth_basic 2. http://nginx.org/ru/docs/stream/ngx_stream_proxy_module.html#proxy_ssl -- Sergey A.
Osokin From mikydevel at yahoo.fr Thu Jun 30 12:56:35 2022 From: mikydevel at yahoo.fr (Mik J) Date: Thu, 30 Jun 2022 12:56:35 +0000 (UTC) Subject: Real client IP in the error logs when a server is behind a reverse proxy References: <918936775.577737.1656593795038.ref@mail.yahoo.com> Message-ID: <918936775.577737.1656593795038@mail.yahoo.com> Hello, I have a real server placed behind my reverse proxy:

www server 192.168.1.10 <---> 192.168.1.20 reverse proxy <---> NAT Firewall <---> Internet <---> Client on Internet

My configuration on my reverse proxy (192.168.1.20) looks like that:

    location ^~ / {
        proxy_pass              http://192.168.1.10:80;
        proxy_redirect          off;
        proxy_set_header        Host                    $http_host;
        proxy_set_header        X-Real-IP               $remote_addr;
        proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
        proxy_set_header        Referer                 "http://app.mydomain.org";
    }

My configuration on my www server (192.168.1.10) on the vhost looks like that:

server {
...
    access_log /var/log/nginx/mylogs.mydomain.org.access.log xforwardedLog;
    error_log /var/log/nginx/mylogs.mydomain.org.error.log;

and in nginx.conf:

http {
...
log_format xforwardedLog '$remote_addr forwarded for $http_x_real_ip - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"';

On my www server 192.168.1.10 I can see the access logs: 192.168.1.20 forwarded for 54.38.10x.x - - [30/Jun/2022:13:44:38 +0200] "GET / HTTP/1.0" 200 7112 "http://app.mydomain.org" "Mozilla/1.22 (compatible; MSIE 5.01; PalmOS 3.0) EudoraWeb 2.1" And it works correctly for me because I can see the IP of the user on the Internet.

But in the error.log I don't see the IP of the user on the Internet: 2022/06/28 16:12:27 [error] 45747#0: *11 access forbidden by rule, client: 192.168.1.20, server: app.mydomain.org, request: "GET /.git/config HTTP/1.0", host: " ", referrer: "http://app.mydomain.org" So here, as you can see in the logs, my client 192.168.1.20 is the reverse proxy and not the client on the Internet.

So in access logs http://nginx.org/en/docs/http/ngx_http_log_module.html I can get the IP of the Internet user. How can I get the IP of the Internet user when it generates an error log? Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From duluxoz at gmail.com Thu Jun 30 12:59:28 2022 From: duluxoz at gmail.com (Matthew J Black) Date: Thu, 30 Jun 2022 22:59:28 +1000 Subject: Real client IP in the error logs when a server is behind a reverse proxy In-Reply-To: <918936775.577737.1656593795038@mail.yahoo.com> References: <918936775.577737.1656593795038.ref@mail.yahoo.com> <918936775.577737.1656593795038@mail.yahoo.com> Message-ID: What linux distro is NginX running on? *Matthew J BLACK*   M.Inf.Tech.(Data Comms)   MBA   B.Sc.   MACS (Snr), CP, IP3P When you want it done /right/ ‒ the first time! Phone: +61 4 0411 0089 Email: matthew at peregrineit.net Web: www.peregrineit.net This Email is intended only for the addressee.
Its use is limited to that intended by the author at the time and it is not to be distributed without the author’s consent.  You must not use or disclose the contents of this Email, or add the sender’s Email address to any database, list or mailing list unless you are expressly authorised to do so.  Unless otherwise stated, PEREGRINE I.T. Pty Ltd accepts no liability for the contents of this Email except where subsequently confirmed in writing.  The opinions expressed in this Email are those of the author and do not necessarily represent the views of PEREGRINE I.T. Pty Ltd.  This Email is confidential and may be subject to a claim of legal privilege. If you have received this Email in error, please notify the author and delete this message immediately. On 30/06/2022 22:56, Mik J via nginx wrote: > Hello, > > I have a real server placed behing my reverse proxy > www server 192.168.1.10 <---> 192.168.1.20 reverse proxy <---> NAT > Firewall <---> Interrnet <---> Client on Internet > > My configuration on my reverse proxy (192.168.1.20) looks like that >      location ^~ / { >         proxy_pass http://192.168.1.10:80; >         proxy_redirect          off; >         proxy_set_header        Host $http_host; >         proxy_set_header        X-Real-IP $remote_addr; >         proxy_set_header        X-Forwarded-For > $proxy_add_x_forwarded_for; >         proxy_set_header        Referer "http://app.mydomain.org"; >      } > > > My configuration on my www server (192.168.1.10) on the vhost looks > like that > server { > ... >         access_log /var/log/nginx/mylogs.mydomain.org.access.log > xforwardedLog; >         error_log /var/log/nginx/ mylogs.mydomain.org.error.log; > > and in nginx.conf > http { > ... 
> log_format xforwardedLog   '$remote_addr forwarded for $http_x_real_ip > - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent"'; > > On my www server 192.168.1.10 I can see the access logs > 192.168.1.20 forwarded for 54.38.10x.x - - [30/Jun/2022:13:44:38 > +0200] "GET / HTTP/1.0" 200 7112 "http://app.mydomain.org" > "Mozilla/1.22 (compatible; MSIE 5.01; PalmOS 3.0) EudoraWeb 2.1" > And it works correctly for me because I can see the IP of the user on > the Internet > > But on the error.log I don't see the IP of the user on the Internet > 2022/06/28 16:12:27 [error] 45747#0: *11 access forbidden by rule, > client: 192.168.1.20, server: app.mydomain.org, request: "GET > /.git/config HTTP/1.0", host: " ", referrer: > "http://app.mydomain.org" > So here as you can see in the logs my client 192.168.1.20 is the > reverse proxy and not the client on the Internet > > So in access logs > http://nginx.org/en/docs/http/ngx_http_log_module.html > I can get the IP of the Internet use > > How can I get the IP of the Internet user when it generates an error log ? > > Thank you > > > > > _______________________________________________ > nginx mailing list --nginx at nginx.org > To unsubscribe send an email tonginx-leave at nginx.org -- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From me at nanaya.pro Thu Jun 30 15:15:41 2022 From: me at nanaya.pro (nanaya) Date: Fri, 01 Jul 2022 00:15:41 +0900 Subject: Real client IP in the error logs when a server is behind a reverse proxy In-Reply-To: <918936775.577737.1656593795038@mail.yahoo.com> References: <918936775.577737.1656593795038.ref@mail.yahoo.com> <918936775.577737.1656593795038@mail.yahoo.com> Message-ID: Hello, You need to set the reverse proxy ip in the www server: https://nginx.org/r/set_real_ip_from Also note this will replace $remote_addr with the value from X-Real-IP header (the original value is in $realip_remote_addr). On Thu, Jun 30, 2022, at 21:56, Mik J via nginx wrote: > Hello, > > My configuration on my www server (192.168.1.10) on the vhost looks like that > server { > ... > access_log /var/log/nginx/mylogs.mydomain.org.access.log xforwardedLog; > error_log /var/log/nginx/ mylogs.mydomain.org.error.log; > > and in nginx.conf > http { > ... > log_format xforwardedLog '$remote_addr forwarded for $http_x_real_ip > - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent"'; > > On my www server 192.168.1.10 I can see the access logs > 192.168.1.20 forwarded for 54.38.10x.x - - [30/Jun/2022:13:44:38 +0200] > "GET / HTTP/1.0" 200 7112 "http://app.mydomain.org" "Mozilla/1.22 > (compatible; MSIE 5.01; PalmOS 3.0) EudoraWeb 2.1" > And it works correctly for me because I can see the IP of the user on > the Internet > > But on the error.log I don't see the IP of the user on the Internet > 2022/06/28 16:12:27 [error] 45747#0: *11 access forbidden by rule, > client: 192.168.1.20, server: app.mydomain.org, request: "GET > /.git/config HTTP/1.0", host: " ", referrer: > "http://app.mydomain.org" > So here as you can see in the logs my client 192.168.1.20 is the > reverse proxy and not the client on the Internet > > So in access logs > http://nginx.org/en/docs/http/ngx_http_log_module.html > I can get the IP of the Internet use > > How 
can I get the IP of the Internet user when it generates an error log ? > From mikydevel at yahoo.fr Thu Jun 30 22:40:15 2022 From: mikydevel at yahoo.fr (Mik J) Date: Thu, 30 Jun 2022 22:40:15 +0000 (UTC) Subject: Real client IP in the error logs when a server is behind a reverse proxy In-Reply-To: References: <918936775.577737.1656593795038.ref@mail.yahoo.com> <918936775.577737.1656593795038@mail.yahoo.com> Message-ID: <1071071259.956822.1656628815061@mail.yahoo.com> Thank you for your answers, Matthew, I use Openbsd Nanaya, I tried your solution and it worked. I had to readapt a bit my configuration (removed xforwardedLog) so that my access_log is formated without duplicate IPs. Regards Le jeudi 30 juin 2022 à 17:17:01 UTC+2, nanaya a écrit : Hello, You need to set the reverse proxy ip in the www server: https://nginx.org/r/set_real_ip_from Also note this will replace $remote_addr with the value from X-Real-IP header (the original value is in $realip_remote_addr). On Thu, Jun 30, 2022, at 21:56, Mik J via nginx wrote: > Hello, > > My configuration on my www server (192.168.1.10) on the vhost looks like that > server { > ... >        access_log /var/log/nginx/mylogs.mydomain.org.access.log xforwardedLog; >        error_log /var/log/nginx/ mylogs.mydomain.org.error.log; > > and in nginx.conf > http { > ... 
> log_format  xforwardedLog  '$remote_addr forwarded for $http_x_real_ip > - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent"'; > > On my www server 192.168.1.10 I can see the access logs > 192.168.1.20 forwarded for 54.38.10x.x - - [30/Jun/2022:13:44:38 +0200] > "GET / HTTP/1.0" 200 7112 "http://app.mydomain.org" "Mozilla/1.22 > (compatible; MSIE 5.01; PalmOS 3.0) EudoraWeb 2.1" > And it works correctly for me because I can see the IP of the user on > the Internet > > But on the error.log I don't see the IP of the user on the Internet > 2022/06/28 16:12:27 [error] 45747#0: *11 access forbidden by rule, > client: 192.168.1.20, server: app.mydomain.org, request: "GET > /.git/config HTTP/1.0", host: " ", referrer: > "http://app.mydomain.org" > So here as you can see in the logs my client 192.168.1.20 is the > reverse proxy and not the client on the Internet > > So in access logs > http://nginx.org/en/docs/http/ngx_http_log_module.html > I can get the IP of the Internet use > > How can I get the IP of the Internet user when it generates an error log ? > _______________________________________________ nginx mailing list -- nginx at nginx.org To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL:
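For readers skimming this thread: nanaya's `set_real_ip_from` suggestion, applied to the setup Mik J described, boils down to something like the following on the www server (192.168.1.10). This is a sketch of standard ngx_http_realip_module usage, not the poster's tested config:

```nginx
# On the backend (www) server: trust the X-Real-IP header set by the
# reverse proxy at 192.168.1.20, so $remote_addr (used in both access
# and error logs) becomes the real client address.
server {
    listen 80;
    server_name app.mydomain.org;

    set_real_ip_from 192.168.1.20;  # only trust the reverse proxy
    real_ip_header   X-Real-IP;     # header populated by the proxy

    # the proxy's own address remains available as $realip_remote_addr
}
```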