Quick performance deterioration when No. of clients increases
Steve Holdoway
steve at greengecko.co.nz
Sat Oct 19 18:10:29 UTC 2013
This is a slight oversimplification, as processes waiting on I/O (iowait) also add to the load average. Programs like top, iotop, mytop etc. will give you a clearer idea of what is going on and where your bottleneck lies.
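
For example (iotop and mytop may need to be installed separately, and mytop only applies if MySQL is part of your stack):

    vmstat 2          # overall CPU use vs. I/O wait (the wa column), sampled every 2 seconds
    top               # per-process CPU; also watch %wa in the header line
    sudo iotop -o     # per-process disk I/O, showing only processes actually doing I/O
    mytop             # live MySQL query activity, if the database is a suspect
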
Steve
Jan-Philip Gehrcke <jgehrcke at googlemail.com> wrote:
>Hi Nikolaos,
>
>just a small follow-up on this. In your initial mail you stated
>
> > The new VM (using Nginx) is currently in testing mode and only has a
> > 1-core CPU
>
>as well as
>
> > When this performance deterioration occurs, we don't see very high CPU
> > load (the Unix load peaks at 2.5)
>
>These numbers already tell you that your initial tests were CPU-bound. A
>simple way to describe the situation is that you loaded your system with
>2.5 times as much work as it could handle "simultaneously": on average,
>one process was running while about 1.5 more sat in the scheduler's run
>queue, just "waiting" for a slice of CPU time.
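>
>A quick way to check this relationship on any box is to compare the core
>count with the load averages, e.g.:
>
>    nproc      # number of CPU cores
>    uptime     # 1-, 5- and 15-minute load averages; sustained values
>               # above the core count mean work is queueing up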
>
>In this configuration, you observed
>
> > You can see in the load graph that, as the load approaches 250 clients,
> > the response time increases sharply and is already unacceptable
>
>Later on, you wrote
>
> > In the meantime, we have increased CPU power to 4 cores and the behavior
> > of the server is much better.
>
>and
>
> > Now my problem is that there seems to be a performance limit at
> > around 1200 req/sec
>
>Do you see that the throughput limit increased by roughly the same
>factor as the number of cores, about 4? That is no coincidence; I think
>these numbers make clear where the major bottleneck in your initial
>setup was.
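>
>(Back of the envelope, assuming roughly linear scaling for a CPU-bound
>workload: if four cores top out at about 1200 req/sec, a single core
>would top out at roughly 1200 / 4 = 300 req/sec.)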
>
>Also, there was this part of the discussion:
>
> > On 16/10/2013 7:10 μμ, Scott Ribe wrote:
> >
> >> Have you considered not having vastly more worker processes than you
> >> have cores? (IIRC, you have configured things that way...)
> >
> > I have (4 CPU cores and):
> >
> > worker_processes 4;
>
>
>Obviously, here you also need to count the PHP-FPM workers and any
>other processes involved in your web stack.
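>
>Just as an illustration (the pool file path and the numbers are made up
>here; size them against your own cores and memory), the PHP side is
>capped in the FPM pool configuration, e.g. /etc/php-fpm.d/www.conf:
>
>    pm = dynamic
>    pm.max_children = 8        ; hard upper limit on PHP worker processes
>    pm.start_servers = 4
>    pm.min_spare_servers = 2
>    pm.max_spare_servers = 4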
>
>Ultimately, what you want at all times is a load average below the
>actual number of cores in your machine (N), because you want your
>machine to stay responsive, at least to internal events.
>
>If you run more than N processes that can each create substantial CPU
>load, the load average is easily pushed beyond this limit, and with a
>large enough request rate your users can then drive your machine to its
>knees. Not spawning more than N worker processes in the first place
>already helps a lot in preventing such a user-driven lockup.
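>
>As a rough sanity check (process titles vary between distributions and
>PHP versions, so treat these patterns as examples only):
>
>    nproc                         # N, the number of cores
>    pgrep -c -f 'nginx: worker'   # nginx worker processes
>    pgrep -c -f 'php-fpm: pool'   # PHP-FPM children serving requests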
>
>Cheers,
>
>Jan-Philip
>
>On 16.10.2013 18:16, Nikolaos Milas wrote:
>> On 16/10/2013 7:10 μμ, Scott Ribe wrote:
>>
>>> Have you considered not having vastly more worker processes than you
>>> have cores? (IIRC, you have configured things that way...)
>>
>> I have (4 CPU cores and):
>>
>> worker_processes 4;
>> worker_rlimit_nofile 400000;
>>
>> events {
>>     worker_connections 8192;
>>     multi_accept on;
>>     use epoll;
>> }
>>
>> Any ideas will be appreciated!
>>
>> Nick
>>