<div id="reply-content">
Yep, I'm certain this is not an nginx problem, as others have also pointed out.</div><div id="reply-content"><br></div>
<div id="reply-content">There are two ways of solving an interface limitation problem:</div>
<div id="reply-content">1. Change your load-balancing algorithm to per-packet load balancing. This will split the traffic much more evenly across multiple interfaces; however, I would not recommend it, because you run the risk of dropped/out-of-state packets and errors. There are only a few conditions under which this works with a high level of success.</div>
<div id="reply-content"><br></div>
<div id="reply-content">2. Move to 10-gig interfaces. You can pick up a Cisco 3750X on Amazon for a few hundred dollars these days and add a 10-gig uplink card to it.</div>
<div id="reply-content"><br></div>
<div id="reply-content">I'd also check your sysctl and ulimit settings (see the PS below), though it's clear the issue only shows up when you push over 1 Gbit/sec.</div>
<div id="reply-content"><br></div>
<div id="reply-content">Simple test: use two different sources and run the download from both at the same time. If you can dictate the source IP addresses, use an even and an odd last octet to aid in a better-balanced return traffic path. Monitor both switch port interfaces and you should see the total traffic exceed 1 Gbit/sec without problems (see the second PS below). A more controlled test would be to set up each interface with its own IP and force reply traffic to stay on its own NIC.</div>
<div id="reply-content"><br></div>
<div id="reply-content">Feel free to drop me an email off-list if you need any more help.</div>
<div id="reply-content"><br></div>
<div id="1DD77C5FDA324906B4B5DD9114686A1F">-- <br>Payam Chychi<br>Solution Architect<br><div><br></div></div>
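<div id="reply-content">PS: a rough starting point for the sysctl/ulimit check mentioned above. These are standard FreeBSD tunables and a standard nginx directive, but the values shown are only illustrative assumptions you would tune for your own box:</div>
<pre>
# /etc/sysctl.conf -- example values only, tune for your hardware
kern.ipc.maxsockbuf=16777216          # allow larger socket buffers
net.inet.tcp.sendbuf_max=16777216     # cap for the auto-tuned TCP send buffer
net.inet.tcp.recvbuf_max=16777216     # cap for the auto-tuned TCP receive buffer
kern.maxfiles=204800                  # system-wide open-file limit
kern.maxfilesperproc=102400           # per-process open-file limit (the "ulimit" side)

# and in nginx.conf, raise the worker file-descriptor limit to match:
#   worker_rlimit_nofile 102400;
</pre>
<div id="reply-content"><br></div>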
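<div id="reply-content">PS2: the two-source test spelled out as shell commands. The hostname, file name and interface names (igb0/igb1) are placeholders, not your real ones:</div>
<pre>
# On client A (even last octet, e.g. 10.0.0.2) and client B (odd, e.g. 10.0.0.3),
# start the same download at the same time:
curl -o /dev/null http://your-server.example/bigfile.bin

# Meanwhile, on the server, watch per-interface throughput once per second:
netstat -w 1 -I igb0
netstat -w 1 -I igb1
# If the link hashing spreads the two flows, the counters together should exceed 1 Gbit/sec.
</pre>
<div id="reply-content"><br></div>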
<p style="color: #A0A0A8;">On Sunday, January 31, 2016 at 12:56 PM, Reinis Rozitis wrote:</p>
<blockquote type="cite" style="border-left-style:solid;border-width:1px;margin-left:0px;padding-left:10px;">
<div id="quoted-message-content"><div><blockquote type="cite"><div><div>Yes, we recently tried downloading file over FTP and encountered the same </div><div>slow transfer rate.</div></div></blockquote><div><br></div><div>Then it's not really a nginx issue, it seems you just hit the servers </div><div>(current) network limit.</div><div><br></div><div><br></div><blockquote type="cite"><div><div>zpool iostat is quite stable yet. We're using HBA LSI-9211 , so its not </div><div>hardware controller as FreeBSD recommends to use HBA in order to directly </div><div>access all drives for scrubbing and data-integrity purposes.</div><div>Do you recommend Hardware-Raid ?</div></div></blockquote><div><br></div><div>Well no, exactly opposite - ZFS works best(better) with bare disks than </div><div>hardware raid between.</div><div>Just usually theese HBAs come with IR (internal raid) firmware - also you </div><div>mentioned Raid 10 in your setup while in ZFS-world such term isn't often </div><div>used (rather than mirrored pool or raidz(1,2,3..)).</div><div><br></div><div><br></div><div>As to how to solve the bandwidth issue - from personal experience I have </div><div>more success (less trouble/expensive etc) with interface bonding on the </div><div>server itself (for Linux with balance-alb which doesn't require any specific </div><div>switch features or port configuration) rather than relying on the switches. </div><div>The drawback is that a single stream/download can't go beyond one physical </div><div>interface speed limit (for 1Gbe = ~100MB/s) but the total bandwidth cap is </div><div>roughly multiplied by the device count bonded together.</div><div><br></div><div>Not a BSD guy but I guess the lagg in loadbalance mode would do the same ( </div><div>https://www.freebsd.org/cgi/man.cgi?lagg ).</div><div><br></div><div>rr </div><div><br></div><div>_______________________________________________</div><div>nginx mailing list</div><div>nginx@nginx.org</div><div>http://mailman.nginx.org/mailman/listinfo/nginx</div></div></div>
</blockquote>
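<div>(For reference, the lagg loadbalance setup Reinis mentions above takes only a few lines of /etc/rc.conf on FreeBSD. A minimal sketch, where igb0/igb1 and the address are placeholder assumptions:)</div>
<pre>
# /etc/rc.conf -- minimal lagg loadbalance sketch, placeholder NICs and address
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto loadbalance laggport igb0 laggport igb1 192.0.2.10/24"
</pre>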
<div>
<br>
</div>