NGINX Reverse Proxy terminates TCP connection after 5 minutes of inactivity
Kin Seng
ckinseng at gmail.com
Tue Feb 20 03:57:27 UTC 2024
Hi J Carter,
Thank you for your reply.
I am capturing the packets at the firewall, and the filtering for the
previously attached pcap is as below:
Source : client app -- Dest : nginx proxy , any port to any port
Source : public server -- Dest : nginx proxy , any port to any port
Source : nginx proxy -- Dest : client app , any port to any port
Source : nginx proxy -- Dest : public server , any port to any port.
Perhaps I will try to do tcpdump from the client app as well.
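For the client-side capture, something like the sketch below should work (the interface name and proxy address are assumptions, not from the thread; substitute your own):

```shell
# Hypothetical capture on the client app host; 192.0.2.10 stands in for
# the nginx proxy address. -s 0 keeps full segments, -w writes a pcap
# that Wireshark can open to check which side sends the first FIN.
tcpdump -i any -s 0 -w client_side.pcap "host 192.0.2.10 and tcp"
```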
One more thing I noticed on the client app host: the netstat command shows
CLOSE_WAIT for the terminated session. It seems CLOSE_WAIT is a sign that
the close was initiated by the remote end (in this case the client app is
connected to the nginx proxy). Is this right?
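That reading of CLOSE_WAIT can be checked with a minimal Python sketch (not from the thread; loopback sockets stand in for the client app and the proxy). The side that receives the FIN reads EOF and sits in CLOSE_WAIT until it calls close() itself:

```python
import socket

# "Proxy" side: a listening socket on an ephemeral loopback port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# "Client app" side connects to it.
cli = socket.socket()
cli.connect(("127.0.0.1", srv.getsockname()[1]))
conn, _ = srv.accept()

conn.close()           # remote side closes first -> sends FIN
data = cli.recv(1024)  # client sees EOF; its socket is now in CLOSE_WAIT
print(data == b"")     # True: the peer initiated the close

cli.close()            # only now does the client leave CLOSE_WAIT
srv.close()
```

While the script is paused between recv() and cli.close(), `netstat -an` (or `ss -tan`) on the same host would show the client's side of the pair in CLOSE_WAIT, matching what you observed.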
On Tue, Feb 20, 2024, 10:06 AM J Carter <jordanc.carter at outlook.com> wrote:
> Hello,
>
> On Tue, 20 Feb 2024 09:40:13 +0800
> Kin Seng <ckinseng at gmail.com> wrote:
>
> > Hi J Carter,
> >
> > This is the only result from the whole 5 minute session (intentionally
> > without any transactions, to create inactivity). Are there any symptoms
> > which can prove that the other party is the one who initiated the close?
> >
>
> Packet capture is the easiest, however it looks like you have
> missing data in PCAP for some reason (like tcpdump filters).
>
> I suppose you could also perform packet capture on the client app host
> instead of on the nginx host to corroborate the data - that would show
> who sent FIN first.
>
> Also, as Roman says in adjacent thread, debug level logs will also show
> what happened.
>
> > On Tue, Feb 20, 2024, 9:33 AM J Carter <jordanc.carter at outlook.com> wrote:
> >
> > > Hello,
> > >
> > > On Mon, 19 Feb 2024 16:24:48 +0800
> > > Kin Seng <ckinseng at gmail.com> wrote:
> > >
> > > [...]
> > > > Please refer to the attachments for reference.
> > > >
> > > > On Mon, Feb 19, 2024 at 4:24 PM Kin Seng <ckinseng at gmail.com> wrote:
> > > > > After capturing the tcp packets and checking via wireshark, I found
> > > > > out that nginx is sending an RST to the public server and then
> > > > > sends a FIN/ACK (refer to the attached pcap picture) to the client
> > > > > application.
> > > > >
> > > > > I have tried to enable keepalive related parameters as per the
> > > > > nginx config above, and also checked the OS's TCP tunables, and I
> > > > > could not find any related setting which makes NGINX kill the TCP
> > > > > connection.
> > > > >
> > > > > Anyone encountering the same issues?
> > > > >
> > >
> > > The screenshot shows only 1 segment with the FIN flag set, which is
> > > odd - there should be one from each party in the close sequence. Also
> > > the client only returns an ACK, rather than FIN+ACK, which it should
> > > if nginx were the initiator of closing the connection...
> > > _______________________________________________
> > > nginx mailing list
> > > nginx at nginx.org
> > > https://mailman.nginx.org/mailman/listinfo/nginx
> > >