Nginx throttling issue?

John Melom John.Melom at
Mon Mar 26 20:21:27 UTC 2018


I am load testing our system using JMeter as a load generator. We execute a script consisting of an https request executing in a loop. The loop does not contain a think time, since at this point I am not trying to emulate a “real user”; I want a quick look at our system’s capacity. Load is increased by increasing the number of JMeter threads executing the script, and each thread references different data.

Our system is in AWS with an ELB fronting Nginx, which serves as a reverse proxy for our Docker Swarm application cluster.

At moderate loads, a subset of our https requests starts experiencing a 1-second delay in addition to the normal response time. The delay is not due to resource contention; system utilizations remain low. The response times cluster around four values: 0 milliseconds, 50 milliseconds, 1 second, and 1.050 seconds. Right now I am most interested in understanding and eliminating the 1-second delay that produces the clusters at 1 second and 1.050 seconds.

The attachment shows a response time scatterplot from one of our runs. The x-axis is the number of seconds into the run, the y-axis is the response time in milliseconds; each point is the response time of a request plotted at the time it occurred in the run.
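The clustering can be made concrete by bucketing each elapsed time to the nearest cluster centre. A minimal sketch (the sample values below are hypothetical; in practice they would come from the elapsed column of the JMeter results file):

```python
from collections import Counter

# Observed cluster centres, in milliseconds.
CLUSTERS = [0, 50, 1000, 1050]

def bucket(elapsed_ms):
    """Map a response time to the nearest cluster centre."""
    return min(CLUSTERS, key=lambda c: abs(c - elapsed_ms))

# Hypothetical elapsed times (ms); real data would be read from the JMeter JTL.
samples = [3, 48, 52, 995, 1004, 1047, 1052, 6]

counts = Counter(bucket(s) for s in samples)
for centre in CLUSTERS:
    print(f"{centre} ms cluster: {counts[centre]} requests")
```

Counting how many requests land in the 1000 ms and 1050 ms buckets over time shows whether the delayed fraction grows with thread count.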

If I run the test bypassing the ELB and Nginx, this delay does not occur.
If I bypass the ELB, but include Nginx in the request path, the delay returns.

This leads me to believe the 1 second delay is coming from Nginx.

One possible candidate is Nginx’s request-rate limiting (DDoS protection). Since all requests come from the same JMeter host, I expect they share the same originating IP address. I attempted to rule out rate-limit throttling by setting limit_req as shown in the nginx.conf fragment below:

http {
    limit_req_zone $binary_remote_addr zone=perf:20m rate=10000r/s;
    server {
        location /myReq {
            limit_req zone=perf burst=600;
        }
    }
}
The thinking behind these values is that my aggregate demand would never exceed 10000 requests per second, so throttling should not occur; if there were short bursts more intense than that, the burst value would buffer them.
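Two related knobs might help rule limit_req in or out. Without nodelay, requests admitted under burst are paced out at the zone rate rather than served immediately, and limit_req_log_level makes any throttling decision visible in the error log. A sketch of that variant (same zone, server, and location as above):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=perf:20m rate=10000r/s;
    # Raise the log level so any delayed or rejected request shows up
    # in the error log as a "limiting requests" message.
    limit_req_log_level notice;
    server {
        location /myReq {
            # nodelay: serve requests within the burst allowance
            # immediately instead of pacing them out at the zone rate.
            limit_req zone=perf burst=600 nodelay;
        }
    }
}
```

If the error log stays silent during a run with this configuration, limit_req can be eliminated as the source of the delay.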

This tuning did not change my results.  I still get the 1 second delay.

Am I implementing this correctly?
Is there something else I should be trying?

The responses are not large, so I don’t believe limit_rate is the answer.
I have a small number of intense users, so limit_conn does not seem likely to be the answer either.


John Melom
Performance Test Engineer
Spōk, Inc.
+1 (952) 230 5311 Office
John.Melom at


[Attachment scrubbed by the list archive: rawRespScatterplot.png (response time scatterplot)]

More information about the nginx mailing list