Hi guys, I found something interesting.

Here is our web server architecture. NGX1, NGX2 and NGX3 have the same nginx.conf file. In our production environment there are more nginx and Resin servers than shown here.

            LVS
          /  |  \
         /   |   \
     NGX1  NGX2  NGX3
         \   |   /
          \  |  /
    Resin1 Resin2 Resin3

    (each NGX server proxies to all three Resin servers)

When traffic hits the LVS (which dispatches requests round-robin), the first few hundred requests all go to Resin1. Resin1 gets overloaded and crashes. The requests then go to Resin2, and Resin2 dies, and then Resin3...
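To give an idea of the setup, the relevant part of the shared nginx.conf looks roughly like the sketch below (the hostnames and ports are placeholders, not our real ones):

    # Same upstream list on every NGX server; Resin1 is listed first,
    # which matters for the behaviour described below.
    upstream resin_backend {
        server resin1.example.com:8080;
        server resin2.example.com:8080;
        server resin3.example.com:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://resin_backend;
        }
    }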
<br>There are 9 NGX servers(of the same weights) and 8 nginx worker processes on each NGX server. So all the first 8*9 requests all go to Resin1. Currently we just patched the code to randomlize nginx round robin peers to<span id="result_box" class="short_text"><span style="background-color: rgb(255, 255, 255);" title="临时解决这个问题"> solve this problem</span></span> <span id="result_box" class="short_text"><span style="background-color: rgb(255, 255, 255);" title="临时解决这个问题">temporarily。<br>
<br>Any one has a good idea? Thx.<br></span></span>