keep-alive to backend + non-idempotent requests = race condition?

B.R. reallfqq-nginx at
Fri Aug 26 15:07:06 UTC 2016

What about marking the upstream servers you want to update as 'down' in
their pool, reloading the configuration (HUP signal, gracefully shutting
down old workers), and waiting for the connections to those servers to be
clear of any activity?
Then upgrade and reintegrate the updated servers into the pool (while
disabling the others/old version, if needed).

This kind of manual rotation is trivial.
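A rough sketch of that rotation (backend names are hypothetical, adjust to your pool):

```nginx
# Step 1: mark the server to be upgraded as 'down', then reload
# (nginx -s reload / SIGHUP). Old workers finish in-flight requests
# and exit; no new requests reach the drained server.
upstream backend {
    server app1.example.com:8080;
    server app2.example.com:8080 down;   # being drained for upgrade
    keepalive 16;
}
# Step 2: once app2 is idle, upgrade it, remove 'down', reload again,
# and repeat the procedure for the remaining servers.
```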
*B. R.*

On Thu, Aug 25, 2016 at 3:44 PM, Emiel Mols <emiel.mols at> wrote:

> Hey,
> I've been haunted by this for quite some time, seen it in different
> deployments, and think it might make for some good ol' mailing list discussion.
> When
> - using keep-alive connections to a backend service (e.g. php, rails, python)
> - this backend needs to be updatable (it is not okay to have lingering
> workers for hours or days)
> - requests are often not idempotent (they can't safely be repeated)
> current deployments need to close the kept-alive connection from the
> backend side, always opening up a race condition where nginx has just sent
> a request and the connection gets closed. This leaves nginx in limbo, not
> knowing whether the request has been executed and whether it can be repeated.
> When using keep-alive connections, the only reliable way of closing them is
> from the client side (in this case: nginx). I would therefore expect either
> - a feature to signal nginx to close all connections to the backend after
> having deployed new backend code.
> - an upstream keepAliveIdleTimeout config value that guarantees that
> kept-alive connections are not left lingering indefinitely long. If nginx
> guarantees it closes idle connections after 5 seconds, we can be sure that
> 5s+max_request_time after a new backend is deployed all old workers are
> gone.
> - (a variant on the previous) support for an HTTP header from the backend to
> indicate such a timeout value. It's funny that this header kind of already
> exists in the spec <
> timeout-01.html#keep-alive>, but in practice it is implemented by no one.
> The 2nd and/or 3rd options seem most elegant to me. I wouldn't mind
> implementing myself if someone versed in the architecture would give some
> pointers.
> Best regards,
> - Emiel
> BTW: a similar issue should exist between browsers and web servers. Since
> latency is a lot higher on these links, I can only assume it to happen a
> lot.
> _______________________________________________
> nginx mailing list
> nginx at
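(For reference: newer nginx releases added an upstream-side idle timeout much like option 2 — `keepalive_timeout` inside an upstream block, available since nginx 1.15.3. A sketch, assuming a hypothetical backend on port 9000:)

```nginx
upstream backend {
    server 127.0.0.1:9000;
    keepalive 16;            # pool of idle keep-alive connections to the backend
    keepalive_timeout 5s;    # nginx closes idle upstream connections after 5s
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keep-alive to upstreams requires HTTP/1.1
        proxy_set_header Connection "";  # strip the default "Connection: close"
    }
}
```

With this, 5s + max request time after a deploy, no old backend worker can still hold a connection opened by nginx — the guarantee described above.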
