Possible bug with Nginx

SriSagar Kadambi (srkadamb) srkadamb at cisco.com
Tue Mar 19 12:49:05 UTC 2019


So I'm guessing I might have run into a bug in nginx while I was perf testing one of my applications. I hope I'm wrong and you guys have a solution for me! 😊

Setup:

1 nginx instance running as a TCP load balancer with streams enabled (I have attached the config output to this mail).

4 backend instances running the application.

1 locust instance to simulate load (TCP requests).
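For context, the relevant part of the stream configuration looks roughly like the sketch below. The upstream name, addresses, and ports here are placeholders, not the actual attached config:

```nginx
# Sketch of the TCP load-balancing setup (hypothetical names/addresses).
stream {
    upstream app_backends {
        # four backend application instances
        server 10.0.0.1:9000;
        server 10.0.0.2:9000;
        server 10.0.0.3:9000;
        server 10.0.0.4:9000;
    }

    server {
        listen 9000;
        proxy_pass app_backends;
    }
}
```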

Behavior:

When load is pumped into the nginx load balancer, the requests are evenly distributed across the backend app instances. When the nginx configuration is modified to remove one instance and nginx is reloaded, the new config takes effect: a new nginx worker process is spawned and the old one goes into "nginx: worker process is shutting down" mode. However, we see that the node removed from the list of backend instances in the nginx config still keeps receiving requests for quite a while (~10 s to 2 min, depending on the load). Note that the application processes every request within 10-20 ms and then terminates the TCP connection.

In essence, the worker process that is scheduled to die still keeps receiving new requests, when ideally all new requests should go to the new worker process.

Please let me know whether this is expected behavior. If it is, how do I ensure that the worker process scheduled to die receives no new requests?
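For reproducibility, the change that triggers this is simply commenting out (or deleting) one backend and reloading. Again, addresses and names below are placeholders; the real config is attached:

```nginx
# Upstream after removing the fourth backend (hypothetical addresses).
# After editing, the config was reloaded with: nginx -s reload
stream {
    upstream app_backends {
        server 10.0.0.1:9000;
        server 10.0.0.2:9000;
        server 10.0.0.3:9000;
        # server 10.0.0.4:9000;  # removed before the reload
    }

    server {
        listen 9000;
        proxy_pass app_backends;
    }
}
```

It is after this reload that the removed backend (10.0.0.4 in this sketch) keeps receiving traffic from the old worker for up to a couple of minutes.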


-------------- next part --------------
A non-text attachment was scrubbed...
Name: runnning_config
Type: application/octet-stream
Size: 7049 bytes
Desc: runnning_config
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20190319/6b5705bd/attachment.obj>
