[ANNOUNCE] ngx_http_upstream_keepalive

Grzegorz Nosek grzegorz.nosek at gmail.com
Fri Oct 24 21:28:10 MSD 2008

On Fri, Oct 24, 2008 at 08:56:28 +0400, Maxim Dounin wrote:
> Request to upstream is created in ngx_http_proxy_module, so no 
> filters there.

Right. But it was worth asking ;)

> You may try to use proxy_set_header though, and then use 
> proxy_hide_header to filter out the Keep-Alive header from the 
> response.  It may even work - if the backend handles HTTP/1.0 
> keepalive connections and won't try to send chunked encoding to 
> nginx.

That was my idea too but I thought about encapsulating it somehow so
that the keepalive support would be transparent.
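For reference, Maxim's suggestion would look roughly like the following (a sketch only; as he notes, it only has a chance of working if the backend does HTTP/1.0 keepalive and doesn't send chunked encoding):

```nginx
location / {
    proxy_pass http://backend;
    # ask the backend to keep the connection open (HTTP/1.0 style)
    proxy_set_header Connection "keep-alive";
    # don't leak the backend's Keep-Alive header to clients
    proxy_hide_header Keep-Alive;
}
```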

> But in fact nginx should be modified to support HTTP/1.1 to 
> backends.

True. It can be a pain, especially when your backend is a stupid embedded
device that only talks pidgin HTTP/1.1.

> For FastCGI it should be as simple as not setting the appropriate 
> close bit in the request created by ngx_http_fastcgi_module (and 
> using my patches for connection closing), but I've not checked it yet.

I think keepalive support would fit best in Nginx core (not as an
external module) with some infrastructure to support it. Consumers of
the upstream functionality (memcached, fastcgi, proxy) could provide a
"want keepalive" flag which would mean that they are aware of keepalive
and handle it at the protocol level.

As Nginx is as much a proxy as it is a web server, maybe it makes sense
to make the upstream layer stackable, like:

 - session affinity support (or something; influences peer choice, can
   skip the real load balancer)
 - the real load balancer (ip_hash, rr, fair; influences peer choice)
 - keepalive support (does not influence peer choice--usually)

We could also, e.g., pass the request data to the load balancer while
we're at it so it can be a bit smarter (e.g. POSTs to a single "master"
backend, GETs balanced over a pool of "slaves", à la databases). The
common cases could then be simpler and faster than handcrafting a config.
Best regards,
 Grzegorz Nosek

More information about the nginx mailing list