maxconn 1 in nginx

Grzegorz Nosek grzegorz.nosek at gmail.com
Thu Oct 9 21:19:13 MSD 2008


On Thu, Oct 09, 2008 at 09:23:08AM -0700, Ezra Zygmuntowicz wrote:
> 	Yeah, the haproxy behavior is what we really want for proxying to
> Rails apps. I'd like nginx to feed a request only to mongrels that are
> not currently processing a request; any other requests would queue
> in nginx's efficient event loop, and nginx would dole out the requests
> only once a backend becomes available or a timeout is hit.

Thinking about it, it's actually not a trivial problem. The load
balancer must answer *now*, with either a backend address or an error.
AFAIK there's no mechanism to defer the decision (and thus the
dispatching of the request).
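
For the curious, the contract looks roughly like this (a hand-wavy
sketch, not real module code; the example_* names are invented, only
the callback signature, the pc fields and the return codes come from
ngx_event_connect.h):

static ngx_int_t
ngx_http_upstream_get_example_peer(ngx_peer_connection_t *pc, void *data)
{
    example_peer_data_t  *ep = data;    /* invented per-request state */
    example_peer_t       *peer;

    /* the decision has to be made synchronously, right here */
    peer = example_pick_idle_peer(ep);  /* invented selection helper */

    if (peer == NULL) {
        /* all backends busy; we can only fail, not park the request */
        return NGX_BUSY;
    }

    pc->sockaddr = peer->sockaddr;
    pc->socklen = peer->socklen;
    pc->name = &peer->name;

    return NGX_OK;
}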

Even if there were a way to do that (e.g. the balancer returning
NGX_AGAIN), it still wouldn't solve the problem, because the Nginx core
wouldn't know when to retry and so would probably have to busy-loop.
The API might include passing back a retry interval, but that would
always be a rough estimate. Tying the retry to further load-balancer
interaction (e.g. when a request ends with a call to ->peer.free, retry
all pending requests) would mostly work for this case, but it wouldn't
be 100% bulletproof, as the multiple workers don't communicate (in the
usual case; upstream_fair only uses shared memory). Attaching retries
to FD actions presents a similar problem and changes the behaviour
slightly (more frequent polling, more accurate timing, more CPU usage).
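
To make the ->peer.free idea concrete, a sketch (everything except the
callback signature is invented; note that the parked-request queue is
per-worker, which is exactly the hole mentioned above):

static void
ngx_http_upstream_free_example_peer(ngx_peer_connection_t *pc, void *data,
    ngx_uint_t state)
{
    example_peer_data_t  *ep = data;

    ep->peer->busy = 0;                        /* the slot just opened up */

    /*
     * Retry everything parked in *this* worker. Another worker's
     * parked requests never hear about it, hence "not bulletproof".
     */
    example_retry_parked_requests(ep->uscf);   /* invented helper */
}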

Regarding multiple workers, I can't see a clean solution currently,
but restricting ourselves to the one-worker case, the upstream core
could keep a queue (or rbtree or...) of pending requests per upstream
and export a function like

ngx_http_upstream_retry(ngx_http_upstream_srv_conf_t *us);

which would walk the queue, calling ->peer.get again for each entry.
The balancer could then respond "this guy isn't ready yet, try another
one" (NGX_AGAIN) separately from "everything is still busy, no point in
asking again" (NGX_DONE?). A nice touch would be a timer interface for
setting up a hard timeout (I'm sure it can be done now, but I don't
know the whole API yet ;)
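
A very rough sketch of what that could look like (the pending struct,
the queue field and the resume helper are invented; only the ngx_queue_*
macros and the return-code convention above are meant to be real):

typedef struct {
    ngx_queue_t             queue;  /* link in the per-upstream pending list */
    ngx_peer_connection_t  *pc;     /* the parked request's peer connection */
} example_pending_t;

void
ngx_http_upstream_retry(ngx_http_upstream_srv_conf_t *us)
{
    ngx_queue_t        *q, *next;
    example_pending_t  *pending;
    ngx_int_t           rc;

    for (q = ngx_queue_head(&us->example_pending);      /* invented field */
         q != ngx_queue_sentinel(&us->example_pending);
         q = next)
    {
        next = ngx_queue_next(q);
        pending = ngx_queue_data(q, example_pending_t, queue);

        rc = pending->pc->get(pending->pc, pending->pc->data);

        if (rc == NGX_OK) {
            ngx_queue_remove(q);
            example_resume_request(pending);            /* invented */
            continue;
        }

        if (rc == NGX_DONE) {
            /* everything is still busy, no point in asking again */
            break;
        }

        /* rc == NGX_AGAIN: this request stays parked; try the next one */
    }
}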

The queue could also be maintained in the load balancer itself, but
Nginx would still need to be modified to accept NGX_AGAIN as a balancer
verdict.

The details would need some ironing out, but I think the overall idea
is workable.

@Igor, do you think it could work?

> 	This is the ideal load-balancer situation for Rails apps. I've
> been putting haproxy between nginx and the mongrels to do this, but I
> would much rather see the fair balancer support this use case, so that
> I have fewer items in the chain.

BTW, that's just insane (IMVHO). If a Rails app can process requests
faster sequentially than concurrently, then why TF does it accept more
requests than it can handle? If the app had a short listen queue and
ran as a simple iterative server, over-limit requests would simply
error out with ECONNREFUSED, which can be handled rather gracefully
inside Nginx.
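
For illustration, the "simple iterative server" shape would be
something like this toy sketch (error handling omitted; whether
overflow connections actually get ECONNREFUSED rather than being
silently dropped is OS-dependent, e.g. on Linux it takes
tcp_abort_on_overflow):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

static void
handle_request(int fd)
{
    /* the app's actual work would go here */
    (void) fd;
}

int
main(void)
{
    int                 lfd, cfd;
    struct sockaddr_in  sin;

    lfd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(8000);
    sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    bind(lfd, (struct sockaddr *) &sin, sizeof(sin));
    listen(lfd, 1);                     /* short listen queue */

    for ( ;; ) {
        cfd = accept(lfd, NULL, NULL);  /* one connection at a time */
        handle_request(cfd);
        close(cfd);
    }
}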

Best regards,
 Grzegorz Nosek
