killed child process

B.R. reallfqq-nginx at yahoo.fr
Sat May 20 11:11:38 UTC 2017


... and in the long run you would end up with connections serving
different content (as per different configurations), potentially leading
to an increased number of problems.
How would you shut them down, if not gracefully?

If you want to keep long-lived connections open, do not make changes
server-side, such as asking nginx to reload its configuration (and thus
to apply the new one to every worker).
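
For reference, both common ways of triggering a reload send SIGHUP to the
master process, which starts new workers on the new configuration and asks
the old ones to shut down gracefully once their current work is done. A
minimal sketch (the pid file path is a common default and may differ on
your system):

    # ask the master process to reload its configuration
    nginx -s reload

    # equivalent: signal the master process directly
    kill -HUP $(cat /var/run/nginx.pid)
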
---
*B. R.*

On Fri, May 19, 2017 at 11:40 PM, Alex Samad <alex at samad.com.au> wrote:

> Yes, this exactly. I ended up being schooled by support :)
>
> Seems like my understanding of graceful shutdown / reload was off.
>
> For the list and the archives:
>
> No keep-alive for HTTP/1.0; it has to be HTTP/1.1.
>
>
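> On the keep-alive point: nginx only reuses upstream connections when the
> proxied requests use HTTP/1.1 and the Connection header is cleared. A
> minimal sketch (the upstream name and address are made up for
> illustration):
>
>     upstream app_backend {
>         server 10.0.0.10:8080;
>         keepalive 32;           # pool of idle keep-alive connections
>     }
>
>     server {
>         location / {
>             proxy_pass http://app_backend;
>             proxy_http_version 1.1;          # keep-alive needs HTTP/1.1
>             proxy_set_header Connection "";  # drop "Connection: close"
>         }
>     }
>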
> client -> nginx keep-alive sessions - these are shut down once the
> current request is completed
> nginx -> backend server keep-alive sessions - these are shut down once
> the current request is completed
>
> client -> nginx - websockets ... these are kept alive until the client
> closes them
> nginx -> backend - websockets ... these are kept alive until the client
> closes them
>
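> To proxy websockets as described above, the Upgrade and Connection
> headers have to be passed through explicitly. A minimal sketch (the
> location and backend name are illustrative):
>
>     location /ws/ {
>         proxy_pass http://app_backend;
>         proxy_http_version 1.1;
>         proxy_set_header Upgrade $http_upgrade;
>         proxy_set_header Connection "upgrade";
>         proxy_read_timeout 1h;  # keep idle websockets open longer
>     }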
>
> client -> nginx -> backend ... SSL passthrough? I presume these are kept
> alive until either the client or the backend closes the connection.
>
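> SSL passthrough is typically done with the stream module, which relays
> raw TCP without terminating TLS, so nginx never sees individual
> requests on those connections. A minimal sketch (addresses are
> illustrative):
>
>     stream {
>         upstream tls_backend {
>             server 10.0.0.10:443;
>             server 10.0.0.11:443;
>         }
>         server {
>             listen 443;
>             proxy_pass tls_backend;
>         }
>     }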
>
> It would be nice to allow a reload/refresh but keep keep-alive sessions
> open until the client closes them.
>
> Alex
>
> On 19 May 2017 at 23:10, Maxim Dounin <mdounin at mdounin.ru> wrote:
>
>> Hello!
>>
>> On Fri, May 19, 2017 at 11:28:05AM +1000, Alex Samad wrote:
>>
>> > Hi
>> >
>> > So I have lots of clients on long-lived TCP connections, getting
>> > reverse proxied into 2 backend app servers.
>> >
>> > I had a line in my error log saying one of the upstreams failed
>> > because it timed out -
>> >
>> >
>> > then I got this
>> >
>> > 2017/05/18 13:30:42 [notice] 2662#2662: exiting
>> > 2017/05/18 13:30:42 [notice] 2662#2662: exit
>> > 2017/05/18 13:30:42 [notice] 2661#2661: signal 17 (SIGCHLD) received
>> > 2017/05/18 13:30:42 [notice] 2661#2661: worker process 2662 exited with
>> > code 0
>> > 2017/05/18 13:30:42 [notice] 2661#2661: signal 29 (SIGIO) received
>> >
>> >
>> > I am not sure what initiated this? There are no cron jobs running at
>> > that time.
>> >
>> > And it looks like a lot of my long-lived TCP sessions were TCP-FIN'ed.
>> >
>> >
>> > I also checked my audit logs to see what command / process ran at
>> > that time ... nothing that signaled or initiated an nginx reload ...
>>
>> Try searching the error log for prior messages from process 2662 (note
>> that each error message has the process ID in it, "... 2662#..."); it
>> should help to identify why it exited.
>>
>> Most likely it was an old worker previously instructed to shut down
>> gracefully due to a configuration reload, and it finished serving
>> all the existing clients and exited.  If that's the case, you should
>> see something like "gracefully shutting down" from the worker
>> somewhere earlier in the logs, and "reconfiguring" from the master
>> just before it.
>>
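>> For example, to pull every message from that worker (the log path
>> assumes a common default):
>>
>>     grep ' 2662#' /var/log/nginx/error.log
>>
>> A graceful shutdown after a reload typically shows up as a
>> "reconfiguring" notice from the master followed by "gracefully
>> shutting down" from each old worker.
>>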
>> --
>> Maxim Dounin
>> http://nginx.org/