How to cap server load?
kevin at my.walr.us
Sat Jan 5 18:32:17 UTC 2013
Measuring actual response time is a good idea, but I'm just looking for a cruder limit: set a maximum number of concurrent requests for the localhost server, and have requests "bounced" to another backend once that limit is reached.
But do you know how to do the "response time" limit within NGINX? Or would I need to run that test with scripts outside NGINX, have every load balancer that sends requests to this backend do its own health checks (again outside NGINX), and then edit the configuration file and reload NGINX?
If so, I might as well put HAProxy in front of NGINX, which can do what I want.
I was looking for a simple way within NGINX to see how many concurrent requests there are for the localhost backend.
Just exploring my options...
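For what it's worth, one approach I've seen is to abuse limit_conn: with a constant key it acts as a cap on in-flight connections to the local backend, and the 503 it returns when the cap is hit can be redirected to a named location that proxies elsewhere. A rough sketch (the upstream names, addresses, and the limit of 100 are all placeholders, not a tested config):

```nginx
http {
    # One shared counter for the whole server block; $server_name is
    # constant here, so this caps total concurrent connections.
    limit_conn_zone $server_name zone=php_cap:1m;

    upstream php_local { server 127.0.0.1:8080; }          # local backend
    upstream farm      { server 10.0.0.2; server 10.0.0.3; }  # other nginx servers

    server {
        listen 80;

        location ~ \.php$ {
            limit_conn php_cap 100;       # cap concurrent requests locally
            error_page 503 = @overflow;   # bounce the excess instead of failing
            proxy_pass http://php_local;
        }

        location @overflow {
            proxy_pass http://farm;       # forward overflow to the rest of the farm
        }
    }
}
```

This counts connections rather than requests, which is close enough when each proxied request holds a connection, but it won't distinguish a slow backend from a busy one the way a real response-time check would.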
On Jan 5, 2013, at 1:20 PM, Stefan Caunter <stef at scaleengine.com> wrote:
> You need to test the response time of a sample php script. Mark the
> back end as down if it fails the response time threshold a certain
> number of times. After you back off, it should recover health if your
> algorithm is working. Remember, the database is likely to be the
> ultimate performance issue with php performance.
> Stefan Caunter
> On Sat, Jan 5, 2013 at 11:15 AM, KT Walrus <kevin at my.walr.us> wrote:
>> I really want to ensure that my web servers are not overloaded.
>> Can I do this with nginx?
>> That is, is there a variable I could test to decide whether nginx should send the request to the local PHP backend or to forward the request to other nginx servers in the server farm, based on the load of the PHP backend? Maybe a variable that contains how many concurrent requests to nginx are waiting for a response?
>> nginx mailing list
>> nginx at nginx.org