how to limit concurrent upstream requests

guiguite tripoux at gmail.com
Wed Jul 29 14:36:48 MSD 2009


Hello,

I'm trying to build the following setup:

N backend servers   <--------> Nginx <---------------------> N clients

I would like Nginx to allow each backend server to process only 10
requests at a time, and to return 503 when all backend servers are
"busy" (the clients understand 503 and will try again later).

I tried the "upstream_fair" module
(http://nginx.localdomain.pl/wiki/UpstreamFair) on nginx 0.7.59 &
0.6.34 (Debian) with the following configuration:

upstream test {
  server 10.1.0.54:80 weight=5;
  server 10.1.0.160:80 weight=5;
  server 10.1.0.161:80 weight=5;
  server 10.1.0.162:80 weight=5;
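  # in peak mode, 'weight' should be the maximum number of
  # concurrent requests per backend, if I read the wiki right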
  fair weight_mode=peak;
}
server {
       listen 127.0.0.1:8888;
       server_name _;
       location / {
           proxy_pass http://test;
       }
}

On the backend side I have a dummy PHP script simulating load (it
sleeps for 5-15 seconds, then exits).
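
Roughly like this (a trivial sketch; the 5-15 second range is just
my test value):

<?php
// dummy backend: sleep 5-15 seconds so the request keeps one
// apache/php slot busy, then return
sleep(rand(5, 15));
echo "done\n";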

What I understood from reading the documentation (which can often be
far from reality!) is that nginx will forward 20 requests to the 4
backends (5 max each), then reply 502 to every request over this
global limit until a backend slot frees up.
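
Since my clients expect 503 rather than 502, I figured I could remap
the status on the nginx side; something like this (an untested
sketch):

server {
       listen 127.0.0.1:8888;
       server_name _;
       location / {
           proxy_pass http://test;
           # turn the 502 nginx generates when no backend slot is
           # free into the 503 the clients already handle
           error_page 502 =503 /busy;
       }
       location = /busy {
           return 503;
       }
}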

So if I test my setup with siege:
$> siege -d1 -r3000 -c100 http://127.0.0.1:8888/1.php
(100 clients, 3000 requests/client, 1 sec delay)

I see only 4 requests being processed in parallel by Apache/PHP.

I tried increasing worker_processes to 32 in nginx.conf, but it
didn't change the behaviour.
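
If I understand the module right, fair keeps its per-backend
counters in shared memory (there is an http-level directive to size
it), so all workers should share the same state and worker_processes
shouldn't matter:

  # sizes the shared segment upstream_fair is supposed to use for
  # its per-backend state (my assumption, not verified)
  upstream_fair_shm_size 64k;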

The only "solution" I have found is to duplicate each backend entry
5 times, like this:

  server 10.1.0.54:80;
  server 10.1.0.54:80;
  server 10.1.0.54:80;
  server 10.1.0.54:80;
  server 10.1.0.54:80;
  server 10.1.0.160:80;
  # ... 4 more identical entries ...
  server 10.1.0.161:80;
  # ... 4 more identical entries ...
  server 10.1.0.162:80;
  # ... 4 more identical entries ...
  fair weight_mode=peak;
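
With this in place I can check what an overflow client actually
receives while siege is running; from another shell:

$> curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8888/1.php

which prints 502 once all 20 slots are busy.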

And then I get 20 concurrent requests running on Apache/PHP, with
the overflow requests rejected by nginx with 502, but I doubt this
is the "good/proper" way of using upstream_fair ...

Has anyone tried to implement a similar configuration?

Cheers,
/guillaume

PS: sorry for my terrible English.




