[PATCH 3 of 4] Silenced complaints about socket leaks on forced termination
Maxim Dounin
mdounin at mdounin.ru
Mon Jan 29 07:30:54 UTC 2024
Hello!
On Fri, Jan 26, 2024 at 04:26:21PM +0400, Sergey Kandaurov wrote:
>
> > On 27 Nov 2023, at 06:50, Maxim Dounin <mdounin at mdounin.ru> wrote:
> >
> > # HG changeset patch
> > # User Maxim Dounin <mdounin at mdounin.ru>
> > # Date 1701049787 -10800
> > # Mon Nov 27 04:49:47 2023 +0300
> > # Node ID 61d08e4cf97cc073200ec32fc6ada9a2d48ffe51
> > # Parent faf0b9defc76b8683af466f8a950c2c241382970
> > Silenced complaints about socket leaks on forced termination.
> >
> > When graceful shutdown was requested, and then nginx was forced to
> > do fast shutdown, it used to (incorrectly) complain about open sockets
> > left in connections which weren't yet closed when fast shutdown
> > was requested.
> >
> > Fix is to avoid complaining about open sockets when fast shutdown was
> > requested after graceful one. Abnormal termination, if requested with
> > the WINCH signal, can still happen though.
>
> I've been wondering about such IMHO odd behaviour and support the fix.
> There might be an opinion that once you requested graceful shutdown,
> you have to wait until it's done, but I think that requesting fast
> shutdown afterwards should be legitimate.
I tend to think that the existing behaviour might be usable in some
situations, like when one wants to look into remaining connections
after waiting for some time for graceful shutdown to complete.
Still, it is very confusing for people unaware of it, and I've seen
lots of reports about socket leaks which in fact aren't leaks. And,
more importantly, with the existing behaviour, when looking at a
socket leak report you never know whether it's real or not. So it
is certainly worth fixing.
>
> >
> > diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c
> > --- a/src/os/unix/ngx_process_cycle.c
> > +++ b/src/os/unix/ngx_process_cycle.c
> > @@ -948,7 +948,7 @@ ngx_worker_process_exit(ngx_cycle_t *cyc
> > }
> > }
> >
> > - if (ngx_exiting) {
> > + if (ngx_exiting && !ngx_terminate) {
> > c = cycle->connections;
> > for (i = 0; i < cycle->connection_n; i++) {
> > if (c[i].fd != -1
> > @@ -963,11 +963,11 @@ ngx_worker_process_exit(ngx_cycle_t *cyc
> > ngx_debug_quit = 1;
> > }
> > }
> > + }
> >
> > - if (ngx_debug_quit) {
> > - ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, "aborting");
> > - ngx_debug_point();
> > - }
> > + if (ngx_debug_quit) {
> > + ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, "aborting");
> > + ngx_debug_point();
> > }
> >
> > /*
> > diff --git a/src/os/win32/ngx_process_cycle.c b/src/os/win32/ngx_process_cycle.c
> > --- a/src/os/win32/ngx_process_cycle.c
> > +++ b/src/os/win32/ngx_process_cycle.c
> > @@ -834,7 +834,7 @@ ngx_worker_process_exit(ngx_cycle_t *cyc
> > }
> > }
> >
> > - if (ngx_exiting) {
> > + if (ngx_exiting && !ngx_terminate) {
> > c = cycle->connections;
> > for (i = 0; i < cycle->connection_n; i++) {
> > if (c[i].fd != (ngx_socket_t) -1
>
> I think it's fine.
Thanks for looking, pushed to http://mdounin.ru/hg/nginx/.
--
Maxim Dounin
http://mdounin.ru/