Re: limiting upstream backend connections

卫越 weiyue at taobao.com
Fri Apr 27 13:18:15 UTC 2012


I think this module will meet your needs:
https://github.com/cfsego/nginx-limit-upstream
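
A minimal sketch of how it might be wired into your setup. The limit_upstream_zone / limit_upstream_conn directives and their parameters below are written from memory of the module's README, so please check the exact syntax on the project page:

# shared memory zone the module uses to track active upstream connections
# (directive names and parameters are assumptions; verify against the README)
limit_upstream_zone backend_conn 10m;

upstream myservice {
    server 127.0.0.1:80;
    server 123.123.123.123:80;

    # cap concurrent connections to this upstream at 10; requests over the
    # limit wait in the backlog instead of being proxied immediately
    limit_upstream_conn limit=10 zone=backend_conn backlog=100 timeout=60s;
}

Requests answered from the proxy cache never reach the upstream, so they should not count against this limit.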

Sunny Chen

-----Original Message-----
From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Jeroen Ooms
Sent: April 27, 2012 8:33
To: nginx at nginx.org
Subject: limiting upstream backend connections

I am hosting a CPU intensive service and use nginx as a reverse proxy
cache / load balancer to my backend(s). Basic config looks like:

upstream myservice {
    server 127.0.0.1:80;
    server 123.123.123.123:80;
}

limit_req_zone $binary_remote_addr  zone=softlimit:10m   rate=1r/s;

location /R {
    proxy_pass  http://myservice/R;
    proxy_cache mycache;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    limit_req zone=softlimit  burst=10;
}

So I use limit_req to make sure that no single user can do more than 1
req per sec to my service.

However, what is actually more important is that the upstream backend
servers are protected against overload. Some of the requests might take
30 or 60 seconds to return. I want to cap the total number of active
concurrent connections to the upstream back-end at, say, 10. Requests
that hit the cache and never make it to the backend should not be
counted towards this limit; only active backend connections should.

I had a look at limit_conn, but it looks like it limits the client
connections rather than the back-end connections. Is that correct?
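
For reference, a minimal limit_conn setup (with current nginx syntax) looks like the sketch below; the key is a client-side variable such as $binary_remote_addr, so it caps simultaneous connections per client, not connections to the upstream:

# limit_conn counts connections per key on the client side;
# it never looks at how many connections are open to the upstream
limit_conn_zone $binary_remote_addr zone=peraddr:10m;

location /R {
    # at most 10 simultaneous requests per client IP, whether they are
    # served from the cache or passed to the backend
    limit_conn peraddr 10;
    proxy_pass http://myservice/R;
}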

Thanks for advice,

Jeroen

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

