NGINX reverse proxy terminates TCP connections after 5 minutes of inactivity

Kin Seng ckinseng at gmail.com
Mon Feb 19 08:24:48 UTC 2024


Please refer to the attachments for reference.

On Mon, Feb 19, 2024 at 4:24 PM Kin Seng <ckinseng at gmail.com> wrote:

> My current nginx setup always kills the TCP connection after 5 minutes of
> inactivity, i.e. no traffic in either direction.
> [In Wireshark, nginx sends an RST to the upstream server and then a FIN,ACK
> to the downstream client.]
>
> I have a setup that requires a TLS 1.2 connection from my internal network
> [client application] to a server on the public network. It uses plain TCP
> ports (not HTTP/HTTPS). The client application does not support TLS 1.2,
> hence the introduction of an nginx reverse proxy for TLS wrapping. You may
> refer below:
>
>              Internal Network              |  INTERNET/Public
> [Client Application] <---> [NGINX Reverse Proxy] <--- | ---> [Public Server]
>         <Non-TLS TCP Traffic>              |      <TLS 1.2>
>
>
> - using the stream module
> - no errors shown in the nginx error log
> - the access log shows TCP status 200, but the session lasts exactly 300s
> every time [recorded in the access_log]
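The 300 s cutoff can be measured from the client side. Below is a minimal probe sketch (not part of the original post; the host and port are placeholders for the nginx listener) that opens a TCP connection, stays idle, and reports how long the peer keeps it open:

```python
import socket
import time

def measure_idle_timeout(host: str, port: int, limit: float = 600.0):
    """Connect, stay idle, and time how long until the peer closes
    the connection or sends data.

    Returns elapsed seconds, or None if the connection was still
    silent and open after `limit` seconds.
    """
    with socket.create_connection((host, port)) as s:
        s.settimeout(limit)
        start = time.monotonic()
        try:
            # recv() blocks until the peer closes (returns b'') or sends data.
            s.recv(1)
        except socket.timeout:
            return None
        return time.monotonic() - start

# Usage (placeholder address): measure_idle_timeout("192.0.2.10", 35012)
```

If this consistently returns ~300 regardless of the `proxy_timeout` value, the cutoff is coming from somewhere other than that directive (e.g. an intermediate device).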
>
> Below is my nginx configuration
>
> # more nginx.conf
>
> user nginx;
> worker_processes auto;
> error_log /var/log/nginx/error.log;
> pid /run/nginx.pid;
>
> # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
> include /usr/share/nginx/modules/*.conf;
>
> events {
>     worker_connections 2048;
> }
>
> stream {
>     resolver 127.0.0.1;
>     include /etc/nginx/conf.d/*.conf;
>
>     log_format basic '$remote_addr [$time_local] '
>                      '$protocol $status $bytes_sent $bytes_received '
>                      '$session_time $upstream_addr '
>                      '"$upstream_bytes_sent" "$upstream_bytes_received" '
>                      '"$upstream_connect_time"';
>
>     access_log /var/log/nginx/stream.access.log basic;
>     error_log /var/log/nginx/error_log;
>
>     server {
>         listen 35012;
>         proxy_pass X.X.X.X:35012;
>         proxy_timeout 86400s;
>         proxy_connect_timeout 1200s;
>         proxy_socket_keepalive on;
>         ssl_session_cache shared:SSL:5m;
>         ssl_session_timeout 30m;
>
>         # For securing TCP traffic with the upstream server.
>         proxy_ssl on;
>         proxy_ssl_certificate /etc/ssl/certs/backend.crt;
>         proxy_ssl_certificate_key /etc/ssl/certs/backend.key;
>         proxy_ssl_protocols TLSv1.2;
>         proxy_ssl_ciphers HIGH:!aNULL:!MD5;
>
>         # proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
>         # proxy_ssl_verify on;
>         proxy_ssl_verify_depth 2;
>
>         # Have nginx reuse previously negotiated connection parameters
>         # (a so-called abbreviated handshake - faster).
>         proxy_ssl_session_reuse on;
>     }
> }
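One detail worth noting about the config above: `proxy_socket_keepalive on;` only sets SO_KEEPALIVE on the upstream socket, so the probe timing comes from the OS defaults (on Linux, `net.ipv4.tcp_keepalive_time` defaults to 7200 s, far longer than the 300 s cutoff observed). On the client-facing side, the stream `listen` directive accepts a `so_keepalive` parameter that sets per-socket timings directly. A sketch of what that could look like, not the original config:

```nginx
server {
    # so_keepalive=idle:interval:count - start probing after 60s of silence,
    # probe every 10s, give up after 3 failed probes.
    listen 35012 so_keepalive=60s:10s:3;
    proxy_pass X.X.X.X:35012;
    proxy_timeout 86400s;

    # Upstream side: keepalive enabled, but timing still follows OS sysctls.
    proxy_socket_keepalive on;
}
```

Keepalive probes keep an idle connection from looking dead to middleboxes (firewalls, NAT gateways) between the proxy and the public server, which is a common source of fixed idle timeouts.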
>
>
> After capturing the TCP packets and checking them in Wireshark, I found
> that nginx sends an RST to the public server and then a FIN/ACK
> (refer to the attached pcap picture) to the client application.
>
> I have tried enabling the keepalive-related parameters as per the nginx
> config above, and also checked the OS's TCP tunables, but I could not find
> any setting that would make nginx kill the TCP connection.
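For reference, the Linux keepalive tunables alluded to above can be read from /proc. A small sketch (Linux-only; the /proc paths are the standard sysctl locations, returns an empty dict elsewhere):

```python
from pathlib import Path

def read_tcp_keepalive_tunables():
    """Read the Linux TCP keepalive sysctls, if present.

    Equivalent to `sysctl net.ipv4.tcp_keepalive_time` etc.
    Returns a name -> seconds/count mapping; {} on non-Linux systems.
    """
    base = Path("/proc/sys/net/ipv4")
    names = ("tcp_keepalive_time", "tcp_keepalive_intvl", "tcp_keepalive_probes")
    return {n: int((base / n).read_text()) for n in names if (base / n).is_file()}
```

Comparing these values against the observed 300 s cutoff shows whether OS keepalive settings could plausibly be involved at all.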
>
> Has anyone encountered the same issue?
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: accesslog1.jpg
Type: image/jpeg
Size: 30817 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20240219/5e6aedae/attachment-0001.jpg>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: wiresharkpcap1.png
Type: image/png
Size: 40488 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20240219/5e6aedae/attachment-0001.png>

