<div dir="ltr"><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">First, you need to define 'user', which is not a trivial problem.<br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">Unless you use the commercial subscription, it is hard to tie a connection to a session. You can use components in front of nginx to identify users (usually with cookies).<br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">Thus 'user' in nginx FOSS usually means 'IP address'.<br><br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">Now, you have the <a href="http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html">limit_conn</a> module, as you noticed, which allows you to limit (as its name suggests) connections, not request rate (which is the job of... the <a href="http://nginx.org/en/docs/http/ngx_http_limit_req_module.html">limit_req</a> module).<br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">You need to set the key of the zone to the value described above in order to identify a 'user'. The limit_conn directive then lets you set the connection limit.<br><br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">To make the limit vary, you will need to automatically (periodically) re-generate some included configuration file and reload the nginx configuration, as it is evaluated only once, before requests are processed. You might also use some Lua scripting to change it on the fly, I suppose.<br><br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">However, you won't be able to do anything other than reject extra connections with a 503 (or another customisable HTTP code). 
The client will then need to reconnect, following whatever protocol you share with it, once certain conditions are met (a delay, perhaps?).<br><br>===<br><br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">Another idea (it may look dirty, but that is for you to decide):<br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">If you know how long a specific request takes to complete, you can try to play with limit_req, using the burst parameter of that directive to queue (rather than reject) excess connections.<br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">Say you want to limit connections to 5 but accept up to 10 of them over 30s: you could use a limit of 5 combined with a rate of 2/m and a burst of 5.<br><br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">All that requires testing, provided I correctly understood your specifications.<br></div><div class="gmail_default" style="font-size:small;color:rgb(51,51,153)">My 2 cents,</div><div class="gmail_extra"><div><div class="gmail_signature"><font size="1"><span style="color:rgb(102,102,102)">---<br></span><b><span style="color:rgb(102,102,102)">B. R.</span></b><span style="color:rgb(102,102,102)"></span></font></div></div>
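For illustration, a minimal sketch of the limit_conn setup described above, keyed on the client IP; the zone name, zone size, and the `backend` upstream are placeholders, not anything from your configuration:

```nginx
http {
    # 10 MB shared zone counting concurrent connections per client IP
    # ($binary_remote_addr is the 'user' key discussed above).
    limit_conn_zone $binary_remote_addr zone=peruser:10m;

    server {
        listen 80;

        location / {
            # At most 5 simultaneous connections per IP; extra ones are
            # rejected with 503 unless limit_conn_status says otherwise.
            limit_conn peruser 5;
            limit_conn_status 503;

            proxy_pass http://backend;  # placeholder upstream
        }
    }
}
```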
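The periodic-regeneration idea could look like the following; the file path is hypothetical, and an external job would rewrite it before triggering `nginx -s reload` (or SIGHUP):

```nginx
# Inside the http {} block: the numeric limit lives in a small file
# that an external job regenerates periodically, then reloads nginx.
limit_conn_zone $binary_remote_addr zone=peruser:10m;

include /etc/nginx/generated/limits.conf;  # e.g. contains: limit_conn peruser 5;
```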
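And a sketch of the limit_req variant with burst, using the 2/m rate and burst of 5 from the example above (again, zone name, sizes, and upstream are placeholders):

```nginx
http {
    # Request-rate accounting per client IP: 2 requests per minute.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=2r/m;

    server {
        location / {
            # Up to 5 excess requests are queued (burst) instead of
            # rejected, and drained at the 2/m rate configured above.
            limit_req zone=perip burst=5;

            proxy_pass http://backend;  # placeholder upstream
        }
    }
}
```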
<br><div class="gmail_quote">On Sun, Feb 8, 2015 at 11:41 PM, ChrisAha <span dir="ltr"><<a href="mailto:nginx-forum@nginx.us" target="_blank">nginx-forum@nginx.us</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I am using nginx as a reverse proxy to a Ruby on Rails application using the<br>
unicorn server on multiple load-balanced application servers. This<br>
configuration allows many HTTP requests to be serviced in parallel. I'll<br>
call the total number of parallel requests that can be serviced 'P', which<br>
is the same as the number of unicorn processes running on the application<br>
servers.<br>
<br>
I have many users accessing the nginx server and I want to ensure that no<br>
single user can consume too much (or all) of the resources. There are<br>
existing plugins for this type of thing: limit_conn and limit_req. The<br>
problem is that it looks like these plugins are based upon the request rate<br>
(i.e. requests per second). This is a less than ideal way to limit resources<br>
because the rate at which requests are made does not equate to the amount of<br>
load the user is putting on the system. For example, if the requests being<br>
made are simple (and quick to service) then it might be OK for a user to<br>
make 20 per second. However, if the requests are complex and take a longer<br>
time to service then we may not want a user to be able to make more than 1<br>
of these expensive requests per second. So it is impossible to choose a rate<br>
that allows many quick requests, but few slow ones.<br>
<br>
Instead of limiting by rate, it would be better to limit the number of<br>
*parallel* requests a user can make. So if the total system can service P<br>
parallel requests we would limit any one user to say P/10 requests. So from<br>
the perspective of any one user our system appears to have 1/10th of the<br>
capacity that it really does. We don't need to limit the capacity to<br>
P/number_of_users because in practice most users are inactive at any point<br>
in time. We just need to ensure that no matter how many requests, fast or<br>
slow, that one user floods the system with, they can't consume all of the<br>
resources and so impact other users.<br>
<br>
Note that I don't want to return a 503 error message to a user who tries to<br>
make more than P/10 requests at once. I just want to queue the next request<br>
so that it will eventually execute, just more slowly.<br>
<br>
I can't find any existing plugin for Nginx that does this. Am I missing<br>
something?<br>
<br>
I am planning to write a plugin that will allow me to implement resource<br>
limits in this way. But I am curious if anyone can see a hole in this logic,<br>
or an alternative way to achieve the same thing.<br>
<br>
Thanks,<br>
<br>
Chris.<br>
<br>
Posted at Nginx Forum: <a href="http://forum.nginx.org/read.php?2,256517,256517#msg-256517" target="_blank">http://forum.nginx.org/read.php?2,256517,256517#msg-256517</a><br>
<br>
_______________________________________________<br>
nginx mailing list<br>
<a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
</blockquote></div><br></div></div>