<div dir="ltr">
<span style="color:rgb(0,0,0);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">Even though it shouldn't be reaching your limits, limit_req<span> does delay requests in one-second increments, which sounds like it could be responsible for this. If this happens, you should see error log entries (severity warning). Have you tried without the limit_req option? You can also use the nodelay option to avoid the delaying behavior.</span></span><br><div><span style="color:rgb(0,0,0);font-family:arial,sans-serif;font-size:12.8px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"><span><br></span></span></div><div><span style="text-align:start;text-indent:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"><font color="#000000"><span style="font-size:12.8px"><a href="http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req">http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req</a></span></font><br></span></div><div><span style="text-align:start;text-indent:0px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"><font color="#000000"><span style="font-size:12.8px"><br></span></font></span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 5, 2018 at 6:45 AM, Peter Booth <span dir="ltr"><<a 
href="mailto:peter_booth@me.com" target="_blank">peter_booth@me.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">John,<br>
<br>
I think that you need to understand what is happening on your host throughout the duration of the test, specifically with the TCP connections. If you run netstat, grep for tcp, and do this in a loop every, say, five seconds, then you’ll see how many connections get created at peak.<br>
If the thing you are testing exists in production then you are lucky. You can do the same in production and see what it is that you need to replicate.<br>
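The loop described above can be sketched like this (a minimal sketch; the five-second interval and the `awk` per-state summary are illustrative, not from the thread):

```shell
# Summarize TCP connections by state from `netstat -ant`-style output.
summarize_tcp() {
  awk '/^tcp/ {count[$6]++} END {for (s in count) print s, count[s]}'
}

# Take a timestamped snapshot every five seconds. The loop only starts
# when the script is invoked with "run", so the function can also be
# sourced and used on its own.
if [ "$1" = "run" ]; then
  while true; do
    date
    netstat -ant | summarize_tcp
    sleep 5
  done
fi
```

Watching the per-state counts (SYN-RECV, ESTABLISHED, TIME-WAIT) over the run shows both the connection peak and whether connections are being recycled or piling up.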
<br>
You didn’t mention whether you had persistent connections (HTTP keep-alive) configured. This is key to maximizing scalability. You did say that you were using SSL. If it were me, I’d use a load generator that more closely resembles the behavior of real users on a website; wrk2, Tsung, httperf, and Gatling are examples that do. Using JMeter with zero think time is a very common anti-pattern that doesn’t behave anything like real users. I think of it as the lazy performance tester pattern.<br>
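Keep-alive has to be enabled on both sides of the proxy; a minimal nginx sketch (the upstream name and address are made up for illustration):

```nginx
upstream app {
    server 10.0.0.10:8080;
    keepalive 32;                       # pool of idle upstream connections
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;         # upstream keep-alive needs HTTP/1.1
        proxy_set_header Connection ""; # drop the default "close" header
    }
}
```

Without the last two directives nginx opens a fresh upstream connection per request, which inflates connection churn under load.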
<br>
Imagine a real web server under heavy load from human beings. You will see thousands of concurrent connections but far fewer concurrent requests in flight. With the JMeter zero-think-time model you are either creating new connections or reusing them, so either you have an enormous number of connections and your nginx process starts running out of file handles, or you are jamming requests down a single connection. Neither resembles reality.<br>
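To make that gap concrete, a back-of-the-envelope check with Little's law (all numbers here are assumed for illustration, not from the thread):

```shell
# in-flight requests = arrival rate x response time (Little's law)
awk 'BEGIN {
  users = 2000        # concurrent human users (assumed)
  think = 10.0        # think time between clicks, seconds (assumed)
  resp  = 0.05        # server response time, seconds (assumed)
  rate  = users / (think + resp)   # ~199 requests per second
  inflight = rate * resp           # ~10 requests in flight
  printf "rate=%.1f req/s, in-flight=%.1f\n", rate, inflight
}'
```

So 2000 open connections can coexist with only about ten requests actually in flight; setting think time to zero collapses that gap and makes every connection a continuously busy one.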
<br>
If you are committed to using JMeter for some reason, then use more instances with real think times. Each instance’s connection will have a different source port.<br>
<br>
Sent from my iPhone<br>
<div class="HOEnZb"><div class="h5"><br>
> On Apr 4, 2018, at 5:20 PM, John Melom <<a href="mailto:John.Melom@spok.com">John.Melom@spok.com</a>> wrote:<br>
><br>
> Hi Maxim,<br>
><br>
> I've looked at the nstat data and found the following values for counters:<br>
><br>
>> nstat -az | grep -I listen<br>
> TcpExtListenOverflows 0 0.0<br>
> TcpExtListenDrops 0 0.0<br>
> TcpExtTCPFastOpenListenOverflow 0 0.0<br>
><br>
><br>
> nstat -az | grep -i retra<br>
> TcpRetransSegs 12157 0.0<br>
> TcpExtTCPLostRetransmit 0 0.0<br>
> TcpExtTCPFastRetrans 270 0.0<br>
> TcpExtTCPForwardRetrans 11 0.0<br>
> TcpExtTCPSlowStartRetrans 0 0.0<br>
> TcpExtTCPRetransFail 0 0.0<br>
> TcpExtTCPSynRetrans 25 0.0<br>
><br>
> Assuming the above "Listen" counters provide data about the overflow issue you mention, then there are no overflows on my system. While retransmissions are happening, it doesn't seem they are related to listen queue overflows.<br>
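For re-checking these after each run, the two relevant counters can be filtered in one step (a sketch; the field names are the Linux counters shown above):

```shell
# Extract the accept-queue overflow/drop counters from `nstat -az` output.
listen_counters() {
  awk '$1 == "TcpExtListenOverflows" || $1 == "TcpExtListenDrops" {print $1, $2}'
}

# Usage: nstat -az | listen_counters
```

These counters are cumulative since boot, so take a reading before and after the test; on Linux, `ss -ltn` additionally shows the queue live (for listening sockets, Recv-Q is the current accept-queue length and Send-Q is the configured backlog).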
><br>
><br>
> Am I looking at the correct data items? Is my interpretation of the data correct? If so, do you have any other ideas I could investigate?<br>
><br>
> Thanks,<br>
><br>
> John<br>
><br>
> -----Original Message-----<br>
> From: nginx [mailto:<a href="mailto:nginx-bounces@nginx.org">nginx-bounces@nginx.org</a>] On Behalf Of John Melom<br>
> Sent: Tuesday, March 27, 2018 8:52 AM<br>
> To: <a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
> Subject: RE: Nginx throttling issue?<br>
><br>
> Maxim,<br>
><br>
> Thank you for your reply. I will look to see if "netstat -s" detects any listen queue overflows.<br>
><br>
> John<br>
><br>
><br>
> -----Original Message-----<br>
> From: nginx [mailto:<a href="mailto:nginx-bounces@nginx.org">nginx-bounces@nginx.org</a>] On Behalf Of Maxim Dounin<br>
> Sent: Tuesday, March 27, 2018 6:55 AM<br>
> To: <a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
> Subject: Re: Nginx throttling issue?<br>
><br>
> Hello!<br>
><br>
>> On Mon, Mar 26, 2018 at 08:21:27PM +0000, John Melom wrote:<br>
>><br>
>> I am load testing our system using Jmeter as a load generator.<br>
>> We execute a script consisting of an https request executing in a<br>
>> loop. The loop does not contain a think time, since at this point I<br>
>> am not trying to emulate a “real user”. I want to get a quick look at<br>
>> our system capacity. Load on our system is increased by increasing<br>
>> the number of Jmeter threads executing our script. Each Jmeter thread<br>
>> references different data.<br>
>><br>
>> Our system is in AWS with an ELB fronting Nginx, which serves as a<br>
>> reverse proxy for our Docker Swarm application cluster.<br>
>><br>
>> At moderate loads, a subset of our https requests start experiencing<br>
>> a 1 second delay in addition to their normal response time. The<br>
>> delay is not due to resource contention.<br>
>> System utilizations remain low. The response times cluster around 4<br>
>> values: 0 milliseconds, 50 milliseconds, 1 second, and 1.050<br>
>> seconds. Right now, I am most interested in understanding and<br>
>> eliminating the 1 second delay that gives the clusters at 1 second and<br>
>> 1.050 seconds.<br>
>><br>
>> The attachment shows a response time scatterplot from one of our runs.<br>
>> The x-axis is the number of seconds into the run, the y-axis is the<br>
>> response time in milliseconds. The plotted data shows the response<br>
>> time of requests at the time they occurred in the run.<br>
>><br>
>> If I run the test bypassing the ELB and Nginx, this delay does not<br>
>> occur.<br>
>> If I bypass the ELB, but include Nginx in the request path, the delay<br>
>> returns.<br>
>><br>
>> This leads me to believe the 1 second delay is coming from Nginx.<br>
><br>
> There are no magic 1 second delays in nginx - unless you've configured something explicitly.<br>
><br>
> Most likely, the 1 second delay is coming from TCP retransmission timeout during connection establishment due to listen queue overflows. Check "netstat -s" to see if there are any listen queue overflows on your hosts.<br>
><br>
> [...]<br>
><br>
> --<br>
> Maxim Dounin<br>
> <a href="http://mdounin.ru/" rel="noreferrer" target="_blank">http://mdounin.ru/</a><br>
> _______________________________________________<br>
> nginx mailing list<br>
> <a href="mailto:nginx@nginx.org">nginx@nginx.org</a><br>
> <a href="http://mailman.nginx.org/mailman/listinfo/nginx" rel="noreferrer" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
><br>
> ________________________________<br>
> NOTE: This email message and any attachments are for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you have received this e-mail in error, please contact the sender by replying to this email, and destroy all copies of the original message and any material included with this email.<br>
><br>
</div></div></blockquote></div><br></div>