Nginx using variables / split_clients is very slow

Maxim Dounin mdounin at mdounin.ru
Tue Feb 28 12:46:53 UTC 2017


Hello!

On Tue, Feb 28, 2017 at 06:59:00AM -0500, larsg wrote:

> I want to use nginx as reverse proxy for an A/B testing scenario.
> nginx should route to two versions of one backend service. The two
> versions provide this service via different URL paths.
> Example:
>    * x% of all requests https://edgeservice/myservice should be routed to
> https://1.2.3.4/myservice,
>    * the other 100-x% should be routed to
> https://4.3.2.1/myservice/withAnotherPath.
> For that I wanted to use the split_client directive that perfectly matches
> our requirements.
> 
> We have a general nginx configuration that reaches a high throughput (at
> least 2,000 requests/sec) - unless we specify the routing target via nginx
> variables.
> 
> So, when specifying the routing target "hard coded" in the proxy_pass
> directive (variant 0) or when using the upstream directive (variant 1),
> nginx routes very fast (at least 2,000 req/sec).
> Once we use the split_clients directive to specify the routing target
> (variant 2) or we set a variable statically (variant 3), nginx is very slow
> and reaches only 20-50 requests/sec. All other config parameters are the
> same for all variants.

[...]

> # variant 1
> upstream backend1 {
>   ip_hash;
>   server 1.2.3.4;
> }
> 
> # variant 2
> split_clients $remote_addr $backend2 {
>   50%    https://1.2.3.4/myservice/;
>   50%    https://4.3.2.1/myservice/withAnotherPath;
> }
> 
> server {
>   listen             443 ssl backlog=163840;
> 
>   # variant 3
>   set $backend3 https://1.2.3.4/myservice;
> 
>   location /myservice {
>      # V0) this is fast
>      proxy_pass https://1.2.3.4/myservice;
> 
> 	 # V1) this is fast
>      proxy_pass https://backend1;
> 
>      # V2) this is slow
>      proxy_pass $backend2;
> 
>      # V3) this is slow
>      proxy_pass $backend3;

The problem is that your configuration with variables and an IP 
address implies run-time parsing of the address to create an 
implicit upstream server group that nginx will work with.  This 
group is only used for a single request.  But you use SSL to 
connect to upstream servers, and here comes the problem: SSL 
sessions are cached within the server group data.  As such, your 
V2 and V3 configurations do not use SSL session caching, and are 
hence slow due to a full SSL handshake on each request.

The solution is to use predefined upstream blocks within your 
split_clients paths, e.g.:

    split_clients $remote_addr $backend {
        50% https://backend1;
        50% https://backend2;
    }

    upstream backend1 {
        server 10.0.0.1;
    }

    upstream backend2 {
        server 10.0.0.2;
    }

    proxy_pass $backend;

This way nginx will be able to choose an upstream server at 
run-time according to split_clients, and will still be able to 
cache SSL sessions (or even connections, if configured, see 
http://nginx.org/r/keepalive).
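
For reference, a minimal sketch of how connection caching could be 
added to one of the upstream blocks above (addresses and the 
keepalive count are placeholders; keepalive connections to the 
backend also require HTTP/1.1 and a cleared Connection header, as 
documented at the link above):

    upstream backend1 {
        server 10.0.0.1;
        keepalive 16;
    }

    location /myservice {
        proxy_pass $backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }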

Note well that specifying a URI in proxy_pass using variables may 
not do what you expect.  When using variables in proxy_pass, if a 
URI part is specified, it means the full URI to be used in a 
request to a backend, not a replacement for the matched location 
prefix.  See http://nginx.org/r/proxy_pass for details.
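
For example (the paths here are hypothetical), with

    location /myservice/ {
        proxy_pass $backend;
    }

and $backend evaluating to "https://backend1/other/", every 
request will be sent to the backend with the fixed URI "/other/", 
regardless of the original request URI.  With a static 
"proxy_pass https://backend1/other/;", nginx would instead replace 
only the "/myservice/" location prefix and keep the rest of the 
request URI.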

-- 
Maxim Dounin
http://nginx.org/
