Continuing issues with limit_conn

Maxim Dounin mdounin at
Wed Apr 30 15:16:25 MSD 2008


On Tue, Apr 29, 2008 at 11:40:40AM -0400, Calomel wrote:

>Thank you for the response. I just want to make sure I understand this
>directive correctly as the Wiki seems a bit unclear. 
>The ngx_http_limit_zone_module only limits the number of connections from a
>single ip address which are currently being processed by the Nginx daemon.

If you use limit_zone on $binary_remote_addr - yes.
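
For reference, per-IP limiting is typically set up like this (a minimal sketch based on the thread; the zone name "gulag" and the 1m size are just the examples used here, and in later nginx versions limit_zone was renamed limit_conn_zone):

```
http {
    # One shared 1 MB zone, keyed by the packed client address.
    limit_zone gulag $binary_remote_addr 1m;

    server {
        # Allow at most one simultaneously *processed* connection
        # per client IP in this context; excess requests get 503.
        limit_conn gulag 1;
    }
}
```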

>For example, if we declared "limit_conn gulag 1" and one worker_processes
>is enabled then Nginx should only process one connection at a time
>(serial). If requests are coming in faster than Nginx can process them with
>the single worker, then the client will get an error 503.

Not really.  Error 503 will be returned if request processing 
blocks at the socket level and a new request from the same ip starts.  
This won't help if your workload is CPU or disk bound.

>But, with more workers enabled a remote ip would need to bog down 
>all of them in order to get an error 503. 

No.  With several workers a 503 will be returned if two workers 
process requests from the same ip at the same time.  But this is 
unlikely to happen with small requests.

>The ngx_http_limit_zone_module does _NOT_ limit the total number of
>connections any single remote ip can open to the server.


>Does this sound correct?

Not really. 

Maxim Dounin

>  Nginx "how to"
>  Calomel @
>  Open Source Research and Reference
>On Tue, Apr 29, 2008 at 11:57:44AM +0400, Maxim Dounin wrote:
>>On Mon, Apr 28, 2008 at 11:29:08PM -0400, Calomel wrote:
>>>Building Nginx 0.6.29, we are also _unable_ to get limit_zone/limit_conn to
>>>work as expected.
>>>As a test we set up the relevant lines in the http and server sections. 
>>>The server should only accept ONE concurrent connection for any single ip address.  
>>>  limit_zone gulag $binary_remote_addr 1m;
>>>  server {
>>>    limit_conn gulag 1;
>>>  }
>>>When I run "ab -c 50 -n 10000 http://testbox/" the server answers all
>>>requests with response code 200. As you mentioned, this is _not_ the
>>>expected behavior.
>>>Perhaps we are missing something? The code could be at fault or perhaps
>>>something has been omitted from the Wiki and the documentation. If there is
>>>a proper solution I will make sure to document it.
>>It looks like there is some misunderstanding regarding what 
>>limit_conn actually limits.  It limits concurrent connections 
>>*processed* by nginx (not keep-alive ones), and only after the header 
>>has been received (and thus the configuration for the request has 
>>been applied).
>>Since nginx is event-based, with one worker process you shouldn't 
>>expect requests to hit limit_conn unless they block at some stage 
>>(i.e. responses bigger than socket buffers if sendfile is off, 
>>replies bigger than sendfile_max_chunk if sendfile is on, proxy_pass 
>>...).  With many workers limit_conn may be hit without blocking, 
>>but this generally requires _very_ high concurrency for small 
>>requests.
>>
>>Maxim Dounin
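
Putting those points together, limit_conn is more likely to actually trigger where responses block, e.g. large downloads or proxied requests. A hedged sketch (directive values are illustrative only, and it assumes a "limit_zone gulag ..." declaration in the http block as in the quoted config):

```
server {
    limit_conn gulag 1;

    location /big/ {
        # With sendfile on, each sendfile() call is capped, so large
        # downloads stay "in processing" long enough for requests
        # from the same ip to overlap and hit the limit.
        sendfile on;
        sendfile_max_chunk 128k;
    }

    location /app/ {
        # Proxied requests block while waiting on the upstream, so
        # concurrent requests from one ip can overlap here too.
        proxy_pass http://127.0.0.1:8080;
    }
}
```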
