Connection Pooling to Backend HTTP Servers

Mansoor Peerbhoy mansoor at zimbra.com
Fri Aug 17 15:19:12 MSD 2007


Hello Igor and NGINXers,

I just wanted to bounce a few ideas regarding connection pooling for backend HTTP servers.
As of today, the `upstream' data structure identifies a collection of HTTP servers, and this upstream can be referenced
by the proxy_pass directive.

Further, we have modules (upstream round-robin and IP hash) whose job it is to elect an upstream server for a particular
HTTP request. Round-robin does this (as the name suggests) using an internal RR counter, whereas the IP-hash module does
this by hashing a subset of the client's IP address onto an index into the list of available upstream servers.
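
For concreteness, here is a minimal sketch of IP-hash-style election. It is written in the spirit of the IP-hash module
(which, as I understand it, considers only the first three octets of the client's IPv4 address, so that clients behind the
same /24 map to the same backend), but the function name and constants below are illustrative, not nginx's actual code:

    #include <stdint.h>
    #include <netinet/in.h>

    /* Illustrative only: elect an upstream index from the client's
     * IPv4 address. Hashing just the first three octets keeps all
     * clients on the same /24 pinned to the same backend. */
    static unsigned
    elect_by_ip_hash(struct in_addr client, unsigned nservers)
    {
        const uint8_t *octet = (const uint8_t *) &client.s_addr;
        unsigned hash = 89;
        int i;

        for (i = 0; i < 3; i++) {
            hash = hash * 113 + octet[i];
        }

        return hash % nservers;    /* index of the elected server */
    }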

Now, I would like to understand the fundamental design issues that would come up if NGINX were to keep its connections
to the backend servers alive. That is, it would not establish a fresh TCP connection to a backend HTTP server upon each
HTTP request. Rather, each NGINX worker process would establish a TCP connection to each HTTP server within the upstream
group (just *once*), and subsequently reuse that same connection whenever it proxies a request via proxy_pass.

Of course, the internal election algorithms would change slightly: rather than electing an HTTP server by hash index,
we would elect an HTTP connection by hash index.
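
To make that concrete, here is a hypothetical per-worker pool that keeps one persistent socket per backend. All names
here (MAX_BACKENDS, pool, get_backend_conn) are my own, and detecting a peer that has since closed the pooled connection
(and reconnecting) is deliberately omitted:

    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define MAX_BACKENDS 16

    static int pool[MAX_BACKENDS];   /* one pooled fd per backend */

    /* Call once at worker startup: mark every slot as empty. */
    static void
    pool_init(void)
    {
        int i;

        for (i = 0; i < MAX_BACKENDS; i++) {
            pool[i] = -1;
        }
    }

    /* Return the pooled connection for the elected backend,
     * establishing it on first use only. */
    static int
    get_backend_conn(unsigned idx, const struct sockaddr_in *addr)
    {
        int fd;

        if (pool[idx] >= 0) {
            return pool[idx];        /* reuse: no new TCP handshake */
        }

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }

        if (connect(fd, (const struct sockaddr *) addr, sizeof(*addr)) < 0) {
            close(fd);
            return -1;
        }

        pool[idx] = fd;              /* established just *once* */
        return fd;
    }

Here, idx would simply be the result of the election step (e.g. the hash index computed earlier), so the same election
code path hands back a connection rather than just a server address.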

Connection pooling would definitely have the advantage of avoiding the overhead of TCP connection establishment and
teardown for each HTTP request to the backend servers.

In fact, I would assume that in many deployments, NGINX resides on the same LAN segment as the backend servers it is
proxying for. In that case, keeping a connection alive for very long periods of time is completely feasible.

Of course, to do this, we must also understand the effect that multiple keep-alive connections will have on the backend
servers themselves.

By and large, servers that follow an asynchronous model for processing HTTP requests handle keep-alive connections without
any problem, since an idle connection does not tie up a thread or process.

However, we must also think about the process model or the threading model of the HTTP server or app server, because this
determines the affinity of a particular connection with an execution thread. For instance, many threaded app servers have a
shared work queue, and each thread waits on a semaphore that indicates there are requests to be read off the work queue
(see the sketch below).
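
A minimal sketch of that shared-work-queue model, with illustrative names, queue-overflow handling omitted, and
sem_init(&pending, 0, 0) assumed to have run at startup:

    #include <pthread.h>
    #include <semaphore.h>

    #define QUEUE_SIZE 128

    static int queue[QUEUE_SIZE];    /* fds with pending requests */
    static int head, tail;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static sem_t pending;            /* counts queued requests */

    /* Acceptor side: push a readable connection onto the shared
     * queue and wake exactly one waiting worker thread. */
    static void
    enqueue_request(int fd)
    {
        pthread_mutex_lock(&qlock);
        queue[tail] = fd;
        tail = (tail + 1) % QUEUE_SIZE;
        pthread_mutex_unlock(&qlock);

        sem_post(&pending);
    }

    /* Worker side: block until work is queued. Any thread may pick
     * up any connection, so a long-lived pooled connection has no
     * affinity with a particular thread. */
    static int
    dequeue_request(void)
    {
        int fd;

        sem_wait(&pending);

        pthread_mutex_lock(&qlock);
        fd = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        pthread_mutex_unlock(&qlock);

        return fd;
    }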

However, some process-based app servers rely on the accept() and/or select() system call on the listening socket to be
notified about a new connection. Once a connection is established, that socket is typically tied down to the worker process
that accepted it, so a long-lived pooled connection would funnel all of its requests to that one process.
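
For contrast, here is a sketch of that prefork model; handle_requests() is a hypothetical handler that serves the
connection until the peer closes it:

    #include <unistd.h>
    #include <sys/socket.h>

    #define NWORKERS 4

    void handle_requests(int fd);    /* hypothetical request handler */

    /* Each child blocks in accept() on the shared listening socket.
     * Whichever child wins the accept owns that connection from then
     * on, so a long-lived pooled connection pins every request it
     * carries to a single worker process. */
    static void
    worker_loop(int listen_fd)
    {
        for (;;) {
            int conn = accept(listen_fd, NULL, NULL);

            if (conn < 0) {
                continue;
            }

            handle_requests(conn);   /* this process owns conn */
            close(conn);
        }
    }

    static void
    prefork_workers(int listen_fd)
    {
        int i;

        for (i = 0; i < NWORKERS; i++) {
            if (fork() == 0) {
                worker_loop(listen_fd);   /* child never returns */
            }
        }
    }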

So I was just wondering: if we elect to use connection pooling for the NGINX upstream data structure, what are the
overall things we will need to take into consideration?


Regards,
Mansoor
