Nginx using variables / split_clients is very slow

larsg nginx-forum at forum.nginx.org
Tue Feb 28 11:59:00 UTC 2017


Hi,

I want to use nginx as a reverse proxy for an A/B testing scenario.
nginx should route to two versions of one backend service. The two
versions provide this service via different URL paths.
Example:
   * x% of all requests to https://edgeservice/myservice should be routed
to https://1.2.3.4/myservice,
   * the other 100-x% should be routed to
https://4.3.2.1/myservice/withAnotherPath.
For that I wanted to use the split_clients directive, which matches our
requirements perfectly.
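
A minimal sketch of the split we have in mind (the 20% is only a
placeholder for x; the full configuration we actually tested is further
below):

# x% of clients go to version A, the remaining 100-x% to version B
split_clients $remote_addr $backend {
  20%    https://1.2.3.4/myservice;
  *      https://4.3.2.1/myservice/withAnotherPath;
}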

We have a general nginx configuration that reaches a high throughput (at
least 2,000 requests/sec) - unless we specify the routing target via
nginx variables.

When we specify the routing target "hard coded" in the proxy_pass
directive (variant 0) or via the upstream directive (variant 1), nginx
routes very quickly (at least 2,000 req/sec).
As soon as we use the split_clients directive to specify the routing
target (variant 2), or even just set the target via a static variable
(variant 3), nginx is very slow and reaches only 20-50 requests/sec. All
other config parameters are the same for all variants.

We did some research (nginx config reference, Google, this forum, ...)
to find a solution for this problem.
Since we have not found an approach so far, I wanted to ask the mailing
list whether you have any ideas.
Is there a way to increase performance when using split_clients so that
we can reach at least 1,000 requests/sec?
Or have we already reached the maximum performance for this scenario?

It would be great if we could use split_clients, since it makes us very
flexible in defining routing rules and lets us route to backend services
with different URL paths.

Kind Regards
Lars


nginx 1.10.3 running on Ubuntu trusty
nginx.conf:
...
http {
...

# variant 1
upstream backend1 {
  ip_hash;
  server 1.2.3.4;
}

# variant 2
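# split_clients hashes the key (here $remote_addr) and assigns each
# client to one of the buckets below, so a given client IP always gets
# the same branch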
split_clients $remote_addr $backend2 {
  50%    https://1.2.3.4/myservice/;
  50%    https://4.3.2.1/myservice/withAnotherPath;
}

server {
  listen             443 ssl backlog=163840;

  # variant 3
  set $backend3 https://1.2.3.4/myservice;

  location /myservice {
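     # Only one proxy_pass may be active per location, so exactly one of
     # the variants below was enabled per test run; V0 is active in this
     # listing and V1-V3 are shown commented out.
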
     # V0) this is fast
     proxy_pass https://1.2.3.4/myservice;

     # V1) this is fast
     # proxy_pass https://backend1;

     # V2) this is slow
     # proxy_pass $backend2;

     # V3) this is slow
     # proxy_pass $backend3;
  }
}
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272657,272657#msg-272657


