Limiting number of connections to upstream servers
nginx-forum at nginx.us
Tue Apr 27 20:06:57 MSD 2010
Currently we use nginx 0.7 in combination with multiple fastcgi backends that process requests dynamically, without caching. We'd like to prevent a DoS attack by limiting the number of connections handled simultaneously for each backend.
I'm wondering what the best way to do this is. I'd love to be able to specify the maximum number of open connections for each upstream server individually; that seems to be the most straightforward solution. I couldn't find anything in the docs that would allow one to do this, though. There are worker_processes and worker_connections, but they're global to the entire nginx server. Since the server also handles static requests (many more than dynamic fcgi requests), there's little I can do with those.
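What I have in mind would look something like this (hypothetical syntax -- I haven't found any such parameter in the 0.7 docs):

```nginx
upstream fcgi_backends {
    # imagined per-server connection cap; not an actual 0.7 directive
    server 127.0.0.1:9000 max_conns=50;
    server 127.0.0.1:9001 max_conns=50;
}
```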
The other solution I can think of is to have the fcgi backend processes monitor the number of connections they're handling themselves. That has the drawback that each type of backend process must implement this. Also, I imagine one backend could end up refusing a connection while another backend still had open slots. Nginx itself could handle that better.
How should this best be resolved?
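The closest approximation I've found so far is the limit_zone/limit_conn module. If I read the docs right, keying the zone on $server_name instead of the client address would cap the total number of concurrent requests into the fcgi location, rather than per client -- though it returns a 503 instead of queueing, and it can't distinguish between the individual upstream servers:

```nginx
http {
    # One shared bucket per server name, so the limit applies to the
    # location as a whole rather than to each client address.
    limit_zone fcgi_total $server_name 1m;

    server {
        location ~ \.php$ {
            limit_conn fcgi_total 100;  # at most 100 simultaneous requests
            fastcgi_pass 127.0.0.1:9000;
        }
    }
}
```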
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,79962,79962#msg-79962