Nginx Slow download over 1Gbps load !!

Payam Chychi pchychi at
Sun Jan 31 22:35:58 UTC 2016

Yep, I'm certain this is not an nginx problem, as others have also pointed out.

There are two ways of solving an interface limitation problem.
1. Change your load-balancing algorithm to per-packet load balancing. This will split the traffic much more evenly across multiple interfaces; however, I would not recommend it, as you run the risk of dropped/out-of-state packets and errors... There are only a few conditions under which this works with a high level of success.
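For reference, on a Cisco IOS router per-packet CEF load sharing is toggled per interface; a minimal config sketch (the interface names are placeholders for your own uplinks):

```
! Hypothetical uplink interfaces. Per-packet load sharing spreads
! traffic across equal-cost paths packet by packet, which balances
! better than per-destination but can reorder packets within a flow.
interface GigabitEthernet0/1
 ip load-sharing per-packet
!
interface GigabitEthernet0/2
 ip load-sharing per-packet
```

This is exactly where the out-of-state/reordering risk mentioned above comes from: a single TCP stream now arrives interleaved across both links.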

2. 10GbE interfaces. You can pick up a Cisco 3750-X on Amazon for a few hundred dollars these days and add a 10GbE card to it.

I'd say check your sysctl and ulimit settings; however, it's clear that the issue only shows up when pushing over 1 Gbit/sec.
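Since the box in question is FreeBSD, these are the usual suspects to inspect (a sketch for checking current values only, not tuning advice; the sysctl names are FreeBSD-specific):

```shell
# Inspect the socket-buffer and file-descriptor limits that most
# often cap high-bandwidth transfers on FreeBSD.
sysctl kern.ipc.maxsockbuf          # largest socket buffer allowed
sysctl net.inet.tcp.sendbuf_max     # ceiling for TCP send-buffer autoscaling
sysctl net.inet.tcp.recvbuf_max     # ceiling for TCP receive-buffer autoscaling
ulimit -n                           # per-process open-file limit
```

If these are at sane values and the slowdown still only appears past ~1 Gbit/sec total, the bottleneck is the interface, not the kernel.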

Simple test: use two different sources and test your download at the same time. If you can dictate the source ip addresses, use an even and odd last octet so to aid in a better balanced return traffic path. Monitor both switch port interfaces and you should see the total traffic > 1gig/sec without problems... More controlled test would be to setup each interface with its own ip and force reply traffic to staybon each nic.
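A rough sketch of that two-source test with curl (the source addresses and URL here are placeholders; substitute your own, and note curl's --interface option also accepts an interface name):

```shell
# Run two downloads in parallel, each bound to a different source IP
# (one even, one odd last octet) so the return paths can hash onto
# different links. Compare the combined rate against a single transfer.
curl --interface 192.0.2.10 -o /dev/null http://server.example/bigfile &
curl --interface 192.0.2.11 -o /dev/null http://server.example/bigfile &
wait
```

While this runs, watch both switch port counters: if each download individually tops out near 1 Gbit/sec but the two together exceed it, the per-flow interface limit is confirmed.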

Feel free to drop me an email off-list if you need any more help.

Payam Chychi
Solution Architect

On Sunday, January 31, 2016 at 12:56 PM, Reinis Rozitis wrote:

> > Yes, we recently tried downloading a file over FTP and encountered the same 
> > slow transfer rate.
> > 
> Then it's not really an nginx issue; it seems you just hit the server's 
> (current) network limit.
> > zpool iostat is quite stable. We're using an HBA (LSI-9211), so it's not a 
> > hardware controller, as FreeBSD recommends using an HBA in order to directly 
> > access all drives for scrubbing and data-integrity purposes.
> > Do you recommend hardware RAID?
> > 
> Well no, exactly the opposite: ZFS works better with bare disks than with 
> hardware RAID in between.
> It's just that these HBAs usually come with IR (internal RAID) firmware. Also, 
> you mentioned RAID 10 in your setup, while in the ZFS world that term isn't 
> often used (rather, a mirrored pool or raidz(1,2,3...)).
> As to how to solve the bandwidth issue: from personal experience I have had 
> more success (less trouble, less expense, etc.) with interface bonding on the 
> server itself (for Linux with balance-alb, which doesn't require any specific 
> switch features or port configuration) rather than relying on the switches. 
> The drawback is that a single stream/download can't go beyond one physical 
> interface's speed limit (for 1GbE, ~100 MB/s), but the total bandwidth cap is 
> roughly multiplied by the number of devices bonded together.
> Not a BSD guy, but I guess lagg in loadbalance mode would do the same ( 
> ).
> rr 
> _______________________________________________
> nginx mailing list
> nginx at

