NGINX Reverse Proxy terminates TCP connection after 5 minutes of inactivity

Kin Seng ckinseng at gmail.com
Mon Feb 19 08:24:04 UTC 2024


My current nginx setup always kills the TCP connection after 5 minutes of
inactivity, i.e. no traffic.
[From Wireshark, nginx sends an RST to the upstream server and then a
FIN,ACK to the downstream client]

I have a setup which requires a TLS 1.2 connection from my internal
network [client application] to a public network [server]. It uses plain
TCP ports (not HTTP/HTTPS) and connects to a server located on the public
network. The client application does not support TLS 1.2, hence the
introduction of an nginx proxy/reverse proxy for TLS wrapping. You may
refer to the diagram below:

                 Internal Network          |  Internet/Public
[Client Application] <-----> [NGINX Reverse Proxy] <--- | ---> [Public Server]
        <Non-TLS TCP Traffic>                     <TLS 1.2>


- using the stream module
- no error shown in the nginx error log
- access log shows TCP 200 status, but the session only lasts 300s
every time [recorded in the access_log]

Below is my nginx configuration:

# more nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 2048;
}

stream {
    resolver 127.0.0.1;
    include /etc/nginx/conf.d/*.conf;

    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" '
                     '"$upstream_connect_time"';

    access_log /var/log/nginx/stream.access.log basic;
    error_log /var/log/nginx/error_log;

    server {
        listen 35012;
        proxy_pass X.X.X.X:35012;
        proxy_timeout 86400s;
        proxy_connect_timeout 1200s;
        proxy_socket_keepalive on;
        ssl_session_cache shared:SSL:5m;
        ssl_session_timeout 30m;

        # For securing TCP traffic with the upstream server.
        proxy_ssl on;
        proxy_ssl_certificate /etc/ssl/certs/backend.crt;
        proxy_ssl_certificate_key /etc/ssl/certs/backend.key;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_ciphers HIGH:!aNULL:!MD5;

        # proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
        # proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;

        # Have NGINX reuse previously negotiated connection parameters
        # via a so-called abbreviated handshake - fast.
        proxy_ssl_session_reuse on;
    }
}
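
A sketch of what I am considering next (not part of my current config): since
proxy_timeout is already 86400s, a fixed 300s cut-off looks more like a
stateful firewall/NAT idle timeout somewhere on the path than an nginx timer,
and proxy_socket_keepalive on by itself only inherits the OS keepalive
defaults (typically 7200s idle, far longer than 300s). The so_keepalive
values below (60s idle, 30s interval, 5 probes) are illustrative, not
recommendations:

    server {
        # Client-facing socket: send keepalive probes well before any
        # middlebox's 300s idle timer could expire.
        listen 35012 so_keepalive=60s:30s:5;
        proxy_pass X.X.X.X:35012;

        # Upstream socket: enables SO_KEEPALIVE, but the probe timing
        # still comes from the OS tunables unless those are changed.
        proxy_socket_keepalive on;
        proxy_timeout 86400s;
    }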


After capturing the TCP packets and checking them in Wireshark, I found
that nginx sends an RST to the public server and then a FIN/ACK to the
client application (refer to the attached pcap screenshot).

I have tried enabling the keepalive-related parameters as per the nginx
config above and also checked the OS's TCP tunables, but I could not find
any related setting which would make nginx kill the TCP connection.
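
For reference, these are the OS-level keepalive timers that
proxy_socket_keepalive inherits; on a Linux host they can be read from
/proc (paths differ on other systems):

```shell
# Read the kernel TCP keepalive tunables (Linux defaults in comments).
# tcp_keepalive_time   - idle seconds before the first probe (default 7200)
# tcp_keepalive_intvl  - seconds between probes              (default 75)
# tcp_keepalive_probes - unanswered probes before giving up  (default 9)
for f in tcp_keepalive_time tcp_keepalive_intvl tcp_keepalive_probes; do
    printf '%s = %s\n' "$f" "$(cat /proc/sys/net/ipv4/"$f")"
done
```

With the default 7200s idle time, no probe is ever sent before a 300s
cut-off, so enabling proxy_socket_keepalive alone would change nothing
unless these tunables are lowered as well.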

Anyone encountering the same issues?

