Nginx Performance as a Reverse Proxy

Niall Gallagher - Yieldbroker Niall.Gallagher at yieldbroker.com
Fri Oct 26 04:36:41 UTC 2012


Hi,

We have been doing some testing with Nginx as a reverse proxy. We have compared it to a number of solutions which it easily beats, such as IIS and Apache with mod_proxy. However, as an experiment we have also been comparing it to an adapted NIO server written in Java, which seems to be outperforming Nginx in the reverse proxy role by a factor of 3. We are convinced our configuration is wrong. Both run on the same box (at different times) with the same sysctl settings (see below). We also saw latency spikes, up to 3 seconds per request at times and some at 10 seconds, over a 1 million request test with 1000 concurrent clients.
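
For reference, the load was generated with ApacheBench roughly along these lines (the host and path below are placeholders, not the exact URL we tested):

[root at dc1dmzngx02 apachebench]# ab -n 1000000 -c 1000 http://<proxy-host>/<test-path>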

We are using a fairly straightforward configuration for Nginx. Since we have two processors on the box, we tried worker_processes 4 with worker_connections 6000, then worker_processes 40 with worker_connections 5000; neither made any difference. We also need to be able to support responsive Ajax requests with strategies like HTTP streaming and long polling in our setup.
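
To give a rough idea, the proxy configuration is essentially a minimal proxy_pass setup along the lines of the sketch below (the upstream address, port and server name are placeholders rather than our real values, and the commented-out proxy_buffering line only indicates what streaming/long polling would call for):

worker_processes  4;

events {
    worker_connections  6000;
}

http {
    upstream backend {
        # placeholder backend address
        server 192.168.1.10:8080;
    }

    server {
        listen       80;
        server_name  dc1dmzngx02;

        location / {
            proxy_pass http://backend;
            # proxy_buffering off;   # would be needed for HTTP streaming / long polling
        }
    }
}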

Any ideas what we can do to boost our throughput and reduce our latency?

[root at dc1dmzngx02 apachebench]# uname -a
Linux dc1dmzngx02 2.6.32-220.13.1.el6.x86_64 #1 SMP Tue Apr 17 23:56:34 BST 2012 x86_64 x86_64 x86_64 GNU/Linux

[root at dc1dmzngx02 apachebench]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.


# High perf config
net.core.somaxconn = 12048
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 20000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_orphans = 131072

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

Thanks,
Niall