[PATCH 0 of 1] Upstream: add propagate_connection_close directive
Maxim Dounin
mdounin at mdounin.ru
Tue Jan 13 15:11:35 UTC 2015
Hello!
On Mon, Jan 05, 2015 at 05:47:28PM -0800, Shawn J. Goff wrote:
[...]
> >>I can submit a documentation patch if this patch is accepted.
> >The approach taken in the patch looks wrong, for multiple reasons,
> >in particular:
> >
> >- The upstream module isn't expected to contain its own
> >  directives, except ones used to define upstream{} blocks.
> > Instead, there should be directives in modules implementing
> > protocols, like "proxy_foo_bar...".
>
> I had considered putting it in upstream, but thought that having it in
> location{} would give more flexibility. I'd be fine putting it in upstream{}
> instead.
>
> As far as putting it in a proxy_foo_bar module, I took a look through the
> modules here: http://nginx.org/en/docs/http/ngx_http_proxy_module.html . The
> only one I see that might be appropriate is proxy_pass; are there any others
> you were referring to?
>
> I chose to put this in the upstream module because that is what strips out
> the Connection header and sets the connection_close field in the headers_in
> struct that is specific to the upstream module.
There are multiple modules in nginx that implement various
protocols on top of the upstream module: proxy, fastcgi, scgi,
uwsgi, memcached. Depending on the protocol, an option may or may
not make sense. For example, there are "proxy_ignore_headers" and
"fastcgi_ignore_headers" directives, but no
"memcached_ignore_headers".
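For illustration, the existing per-protocol directives follow a common naming pattern; a new option would be expected to do the same. A minimal sketch (the addresses and locations are made-up):

```nginx
# Existing per-protocol directives follow this pattern; a new option
# would be expected to appear as proxy_.../fastcgi_.../etc. variants.
location /proxied/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_ignore_headers X-Accel-Limit-Rate;
}

location /fcgi/ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_ignore_headers X-Accel-Limit-Rate;
}
```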
> >- The "Connection: close" header is a hop-by-hop http header, and
> >  "propagating" it looks like a bad idea. It mixes control of the
> >  nginx-to-backend connection with control of the client-to-nginx
> >  connection. Instead, there should be a way to control these
> >  connections separately. It may be an option to add an X-Accel-...
> >  header instead, similar to X-Accel-Limit-Rate. Though this
> >  approach has its own problems too, see below.
>
> It is hop-by-hop, but we don't really want Nginx as a separate hop; that
> is just a byproduct. Nginx is on the same host as the upstream server; it's
> just there to take care of TLS for us.
Sure, in your particular case. But the behaviour you suggest
doesn't solve the problem for those who do want nginx as a
separate hop - and want, e.g., to maintain persistent connections
between nginx and a backend while being able to selectively close
connections with clients. Or, vice versa, who want to avoid
persistent connections between nginx and a backend while keeping
connections with clients open and at the same time being able to
close them selectively.
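The two sides can already be tuned independently with existing directives; a minimal sketch (the upstream name and addresses are made-up) of keeping nginx-to-backend connections persistent while controlling client keepalive separately:

```nginx
# Sketch only: the two connections are configured independently.
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;              # persistent nginx-to-backend connections
}

server {
    listen 443 ssl;
    keepalive_timeout 75s;     # client-to-nginx keepalive, set separately

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # needed for upstream keepalive
    }
}
```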
> >- It is not possible to control connections that aren't proxied
> > to backends but are handled locally - e.g., when using embedded
> > perl or even just serving static files.
> >
> >If there is a need to allow dynamic control of keepalive, I think
> >that proper way would be to extend the "keepalive_disable"
> >directive with variables support.
> >
>
> How would this work? Should I set a variable depending on whether some
> X-Accel- header is present, then set keepalive_disable per request depending
> on that variable?
All headers returned by the upstream server are available as
$upstream_http_* variables. So it should be possible to do
something like this:
    keepalive_disable $upstream_http_x_connection_close;
That is, disable keepalive if the "X-Connection-Close" header is
present in the response. Or it should be possible to test the
"Connection" header returned by the upstream server, like this:
    map $upstream_http_connection $close {
        default 0;
        ~close  1;
    }

    keepalive_disable $close;
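Putting the pieces together - and assuming "keepalive_disable" gains the proposed variables support, which it does not have in current nginx - the configuration might look like this (the addresses are made-up):

```nginx
http {
    # map{} must appear at http{} level
    map $upstream_http_connection $close {
        default 0;
        ~close  1;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;
            # hypothetical: variables support is a proposed extension
            keepalive_disable $close;
        }
    }
}
```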
Hope this explains the idea.
--
Maxim Dounin
http://nginx.org/