<div dir="ltr">Hi Valentin,<div><br></div><div>Do you mean that because NGINX is fast enough and the network environment is good, so NGINX can process only one request at every moment?</div><div><br></div><div>My purpose is to figure out the priority mechanism. It seems only when there is limited capacity for sending, priority will be used to select streams for transmitting frames. I misunderstood this part before.</div><div><br></div><div>Your explanation is quite clear. I will try to do a real benchmark. Thank you.</div><div><br></div><div>Best regards,</div><div><br></div><div>Shengtuo Hu</div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 27, 2015 at 9:55 PM, Valentin V. Bartenev <span dir="ltr"><<a href="mailto:vbart@nginx.com" target="_blank">vbart@nginx.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">On Tuesday 27 October 2015 17:33:22 Shengtuo Hu wrote:<br>
> > Hi,
> >
> > I ran into some problems while testing the priority feature in nginx 1.9.5.
> >
> > Here are the details of my test:
> > https://gist.github.com/h1994st/f21cdb7125a65c8f7182
> >
> > Why is the response sequence from these two servers different?
> >
> > The response sequence from the nghttp server is the result I expected.
> >
> > Could you point me to the relevant part of the nginx source code? I want
> > to figure this out; it has bothered me for a long time.
> >
> The response sequence exactly matches the request sequence, which isn't
> surprising if you consider the sizes of the files.
>
> When NGINX receives the request for "/" (stream ID 13), there are no other
> active streams at that moment, so nginx writes the whole response down to
> the socket. Then nginx receives the request for "/main.css", and since
> there are no other active streams, it starts processing it. Because that
> file is also very small, the whole response is again written to the socket
> in one go. The same happens with the other requests. In your synthetic
> test case there is simply no concurrency at all: at every moment, from the
> server's point of view, there is either one active stream or none.
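For illustration only (a rough, untested sketch; the port, file names, and
nghttp options here are assumptions): requesting a couple of large resources
in one run, with different weights, should keep several streams active at
the same time, so that prioritization actually affects which frames are
sent first.

    # Hypothetical check: two large files on one connection with different
    # weights (nghttp's -p/--weight sets a per-URI weight, if available in
    # your nghttp version). Small files never compete, because each response
    # fits into a single write; large bodies force real concurrency.
    nghttp -nv -p 32 -p 256 \
        https://127.0.0.1:8443/big-low-priority.bin \
        https://127.0.0.1:8443/big-high-priority.bin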
>
> I'd suggest doing a real benchmark with a real website over a real
> network (or at least emulating one with netem). I did, and found that
> nginx is currently the fastest HTTP/2 web server in terms of first
> paint and page load times.
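For reference, a minimal netem setup could look roughly like this (the
interface name and the numbers are only examples, and option support
depends on the kernel and tc version):

    # Add ~80 ms of delay and a modest bandwidth cap on the outgoing interface.
    tc qdisc add dev eth0 root netem delay 80ms rate 5mbit
    # ... run the benchmark against the server reachable via eth0 ...
    # Remove the emulation again afterwards.
    tc qdisc del dev eth0 root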
>
> wbr, Valentin V. Bartenev