Uneven load distribution inside container

BharathShankar nginx-forum at forum.nginx.org
Wed Mar 27 06:16:04 UTC 2019


I am trying out the sample gRPC streaming application at
https://github.com/grpc/grpc/tree/v1.19.0/examples/python/route_guide. 

I have modified the server to start 4 grpc server processes and
correspondingly modified the client to spawn 4 different processes to hit
the server. 
Client and server are running on different VMs on google cloud. 
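My actual server modification is not shown here; a minimal sketch of the idea (hypothetical helper names, assuming Linux, with `serve()` standing in for the real route_guide gRPC serve loop) looks like:

```python
# Sketch, not the real server code: fork four worker processes, pin each one
# to a CPU, and have each listen on its own port (9000-9003).
import multiprocessing
import os

BASE_PORT = 9000
NUM_WORKERS = 4

def serve(port: int) -> None:
    # Placeholder for the route_guide gRPC serve loop from the example repo.
    print(f"Server starting in port {port} with cpu "
          f"{sorted(os.sched_getaffinity(0))[0]}")

def worker(index: int) -> None:
    pid = os.getpid()
    print(f"pid {pid}'s current affinity list: {sorted(os.sched_getaffinity(0))}")
    # Pin this process to one CPU (wrap around if fewer CPUs are available).
    cpus = sorted(os.sched_getaffinity(0))
    os.sched_setaffinity(0, {cpus[index % len(cpus)]})
    print(f"pid {pid}'s new affinity list: {sorted(os.sched_getaffinity(0))}")
    serve(BASE_PORT + index)

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker, args=(i,))
             for i in range(NUM_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

The client was modified the same way: four processes, each with its own channel to the nginx listen port.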

Nginx.conf
----------------
user www-data;
worker_processes 1;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent"';

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    upstream backend {
        server localhost:9000 weight=1;
        server localhost:9001 weight=1;
        server localhost:9002 weight=1;
        server localhost:9003 weight=1;
    }

    server {
        listen 6666 http2;  # <<<<<<<<< on the container this is set to 9999

        access_log /tmp/access.log main;
        error_log /tmp/error.log error;
 
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $http_host;

        location / {
            grpc_pass grpc://backend;
        }

    }
}	

Problem Statement 
-------------------------- 
Nginx round-robin load balancing is uneven when nginx runs inside a
container, whereas it is uniform when nginx runs directly on a VM. 

Scenario 1 : Server running in VM 
---------------------------------------------- 
$ python route_guide_server.py 
pid 14287's current affinity list: 0-5 
pid 14287's new affinity list: 2 
pid 14288's current affinity list: 0-5 
pid 14288's new affinity list: 3 
pid 14286's current affinity list: 0-5 
pid 14286's new affinity list: 1 
pid 14285's current affinity list: 0-5 
pid 14285's new affinity list: 0 
Server starting in port 9003 with cpu 3 
Server starting in port 9002 with cpu 2 
Server starting in port 9001 with cpu 1 
Server starting in port 9000 with cpu 0 

Now I run the client on a different VM. 
$ python3 route_guide_client.py 
........... 
....... 

On the server we see that the requests are uniformly distributed across all
4 server processes running on different ports. For example, the server
output for the above client invocation is: 

Serving route chat request using 14285 << these are the PIDs of the processes bound to the different server ports
Serving route chat request using 14286 
Serving route chat request using 14287 
Serving route chat request using 14288 
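As a quick sanity check, the per-process distribution can be tallied from this output (a small helper I use for counting, not part of the example code):

```python
# Count how many requests each server PID handled, from lines like
# "Serving route chat request using 14285".
from collections import Counter

def tally(log_lines):
    counts = Counter()
    for line in log_lines:
        if line.startswith("Serving route chat request using"):
            counts[line.rsplit(None, 1)[-1]] += 1
    return counts

sample = [
    "Serving route chat request using 14285",
    "Serving route chat request using 14286",
    "Serving route chat request using 14287",
    "Serving route chat request using 14288",
]
print(tally(sample))  # every PID served exactly one request -> even spread
```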


Scenario 2 : Server running in a container 
------------------------------------------------------ 
I now spin up a container on the server VM, install and configure nginx the
same way inside the container, and use the same nginx config file except for
the nginx server listen port. 

$ sudo docker run -p 9999:9999 --cpus=4 grpcnginx:latest 
............... 

root at b81bb72fcab2:/# ls 
README.md docker_entry.sh media route_guide_client.py
route_guide_pb2_grpc.py route_guide_server.py.bak sys 
__pycache__ etc mnt route_guide_client.py.bak route_guide_pb2_grpc.pyc run
tmp 
bin home opt route_guide_db.json route_guide_resources.py run_codegen.py usr

boot lib proc route_guide_pb2.py route_guide_resources.pyc sbin var 
dev lib64 root route_guide_pb2.pyc route_guide_server.py srv 

root at b81bb72fcab2:/# python3 route_guide_server.py 
pid 71's current affinity list: 0-5 
pid 71's new affinity list: 0 
Server starting in port 9000 with cpu 0 
pid 74's current affinity list: 0-5 
pid 74's new affinity list: 3 
Server starting in port 9003 with cpu 3 
pid 72's current affinity list: 0-5 
pid 72's new affinity list: 1 
pid 73's current affinity list: 0-5 
pid 73's new affinity list: 2 
Server starting in port 9001 with cpu 1 
Server starting in port 9002 with cpu 2 
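One detail I notice in the output above: even with --cpus=4, the processes still report an affinity list of 0-5. As far as I understand, `docker run --cpus` limits CPU bandwidth through the CFS quota rather than restricting which CPUs are visible (that would be `--cpuset-cpus`), which can be confirmed from inside the container with something like:

```python
# What the container actually sees: --cpus=4 sets a CFS bandwidth quota, so
# the full host CPU set (0-5 here) stays schedulable; --cpuset-cpus would be
# needed to actually shrink the affinity mask.
import os

print("cpu_count:", os.cpu_count())                   # CPUs the kernel exposes
print("affinity :", sorted(os.sched_getaffinity(0)))  # CPUs this process may use
```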

On the client VM 

$ python3 route_guide_client.py 
............ 
.............. 

Now on the server we see that the requests are served by only 2 of the 4
ports/processes. 

Serving route chat request using 71 
Serving route chat request using 72 
Serving route chat request using 71 
Serving route chat request using 72 


Requesting help to resolve the uneven load distribution inside the container.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283542,283542#msg-283542


