worker cpu balance

Igor Sysoev is at rambler-co.ru
Sun Apr 13 20:40:27 MSD 2008


On Sun, Apr 13, 2008 at 06:13:19PM +0200, Aleksandar Lazic wrote:

> On Son 13.04.2008 12:36, Igor Sysoev wrote:
> >On Sun, Apr 13, 2008 at 03:04:20AM +0200, Aleksandar Lazic wrote:
> >
> >>It works fast. Since it uses sendfile, it is as fast as Tux on large
> >>files (>= 1MB), and saturates 10 Gbps with 10% of CPU with 1MB files.
> >>
> >>However, it does not scale on multiple CPUs, whatever the number of
> >>worker_processes. I've tried 1, 2, 8, ... The processes are quite
> >>there, but something's preventing them from sharing a resource since
> >>the machine never goes beyond 50% CPU used (it's a dual
> >>core). Sometimes, "top" looks like this :
> [snipp]
> >Try
> >
> >events {
> >   accept_mutex   off;
> >   ...
> 
> Thanks, but not much change ;-(. Any further tuning options?
> 
> With mutex off 4925
> Cpu0  :  1.0%us,  2.0%sy,  0.0%ni, 74.0%id,  0.0%wa,  0.0%hi, 23.0%si,  
> 0.0%st
> Cpu1  :  3.0%us,  2.0%sy,  0.0%ni, 70.0%id,  0.0%wa,  1.0%hi, 24.0%si,  
> 0.0%st
> Mem:   2075780k total,  2053376k used,    22404k free,     4420k buffers
> Swap:  4096532k total,      192k used,  4096340k free,  1860656k cached
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND        
>  1969 al        15   0  3364 1564  540 S   12  0.1   0:20.82 nginx          
>  1968 al        15   0  3364 1604  544 R    0  0.1   0:31.64 nginx
> 
> ###
> Without mutex settings 4976
> Cpu0  :  0.4%us,  3.2%sy,  0.0%ni, 93.2%id,  0.3%wa,  0.2%hi,  2.8%si,  
> 0.0%st
> Cpu1  :  0.5%us,  4.6%sy,  0.0%ni, 91.6%id,  0.2%wa,  0.2%hi,  2.9%si,  
> 0.0%st
> Mem:   2075780k total,  2046476k used,    29304k free,     4464k buffers
> Swap:  4096532k total,      192k used,  4096340k free,  1860656k cached
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND        
>  2016 al        15   0  3308 1532  540 S   23  0.1   0:10.15 nginx          
>  2015 al        15   0  3308 1520  544 S    6  0.1   0:01.08 nginx
> ###
> 
> Could the scheduler be a point for optimization, or am I on the wrong
> track?!

If accept_mutex is off, then the OS scheduler chooses which process
accepts a new connection.
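For reference, the setting under discussion lives in the events block. A minimal sketch (the worker_processes and worker_connections values here are illustrative, not recommendations):

```nginx
# Illustrative values only.
worker_processes  2;

events {
    # When off, the kernel wakes the workers itself and its scheduler
    # decides which one accepts each new connection; when on, workers
    # take turns holding the accept mutex.
    accept_mutex       off;
    worker_connections 1024;
}
```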

> ### lighty 5144
> Cpu0  :  7.0%us,  5.0%sy,  0.0%ni, 66.0%id,  0.0%wa,  3.0%hi, 19.0%si,  
> 0.0%st
> Cpu1  : 12.0%us,  5.0%sy,  0.0%ni, 62.0%id,  0.0%wa,  3.0%hi, 18.0%si,  
> 0.0%st
> Mem:   2075780k total,  2005220k used,    70560k free,     4484k buffers
> Swap:  4096532k total,      192k used,  4096340k free,  1866996k cached
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND        
>  2032 root      15   0 36508 1768  900 S   24  0.1   0:02.36 lighttpd       
>  2031 root      15   0 36512 1716  900 S   19  0.1   0:01.40 lighttpd
> ###
> 
> I'm ready to help improve nginx, but for now it looks to me that
> lighty 1.5 is a little bit faster.
> 
> However, I'll stay with nginx, I like it more ;-))

In the last top shot lighttpd eats more CPU than nginx.
This may be why the lighty processes are balanced better.
It's strange that lighty eats so much CPU even though it has
server.network-backend = "linux-sendfile"

I usually see an equal load on a 2-CPU FreeBSD 7 box (low weekend load):

  PID USERNAME     THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
11224 nobody         1   4  -10   104M   101M kqread 0 336:12 10.79% nginx
11225 nobody         1   4  -10   108M   103M kqread 0 338:12 10.60% nginx


-- 
Igor Sysoev
http://sysoev.ru/en/




