Access_log off impact on Requests/sec

Maxim Dounin mdounin at
Thu Jul 28 15:21:35 UTC 2016


On Thu, Jul 28, 2016 at 10:32:13AM -0400, stevewin wrote:

> I am beginning to look at NGINX performance on a development system with
> traffic driven by Wrk.  Initially I am just looking at static HTTP serving.
> I am using NGINX v1.10.1 running on the host system with Ubuntu 16.04.  Wrk
> v4.0.4 is running from a separate client platform over a private 40GB
> connection.  The CPU on the host system has 24 cores (no hyperthreading).
> I had started to look into various NGINX and kernel parameters for
> performance optimization.  One thing that I am seeing that appears odd to me
> is that when I change access_log to off (from the default of specifying a
> log location) it seems to decrease the requests/sec that I am seeing when
> connections increase (using defaults with everything else being equal). 
> Does this make sense?
> The results with defaults including ‘access_log /var/log/nginx/access.log;’
> show the Requests/sec ramping up to ~24.5K and staying there.
> The results with defaults and access_log changed to ‘access_log off;’ show
> the Requests/sec initially ramping up to ~28.5K but then decreasing down to
> ~20K and staying there.
> The NGINX config is at the bottom.
> Can someone explain possible reasons for this behavior?

Benchmarking with a small number of connections and multiple worker 
processes is known to be seriously affected by non-uniform 
distribution of connections between worker processes.  Various 
minor changes, like switching off logs, may cause unexpected results 
similar to what you observe, because they change the distribution 
of connections between worker processes, and this in turn changes 
things dramatically.

Some things to try if you want to get more accurate results:

- switch off the accept mutex ("accept_mutex off;").  It is 
  off by default since nginx 1.11.3, but you are using an older 
  version.

- try using "listen ... reuseport".  It 
  has various unwanted side effects and I wouldn't recommend using it 
  without a good reason, but it will ensure uniform distribution of 
  connections between workers and will give you a good idea of how 
  many requests your system can really handle in a particular 
  configuration.
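
For reference, a minimal sketch combining both suggestions (the directives are real nginx directives; the listen address and root path are just examples):

```nginx
events {
    # Not needed on nginx 1.11.3+, where off is already the default.
    accept_mutex off;
}

http {
    server {
        # reuseport creates a separate listening socket per worker
        # process, so the kernel distributes incoming connections
        # uniformly between workers.
        listen 80 reuseport;
        root /var/www/html;
    }
}
```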

Note well that there are various system and configuration limits 
that need tuning as well, including the number of worker connections 
in nginx, the listen backlog, and the number of local TCP ports 
available on the client side.  The timeouts seen in your wrk 
results indicate that you are likely hitting at least some of them.
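
A hedged sketch of where those limits live on the nginx side (the values are illustrative, not recommendations):

```nginx
events {
    # Per-worker connection limit; effectively also bounded by the
    # worker_rlimit_nofile file-descriptor limit in the main context.
    worker_connections 16384;
}

http {
    server {
        # backlog is capped by the kernel's net.core.somaxconn sysctl,
        # so raise both together.
        listen 80 backlog=8192;
    }
}
```

On the client side, the local TCP port range (net.ipv4.ip_local_port_range on Linux) bounds how many concurrent connections wrk can open to a single server address.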

Maxim Dounin
