Nginx Slow download over 1Gbps load !!

shahzaib shahzaib shahzaib.cb at gmail.com
Sun Jan 31 18:48:56 UTC 2016


The server is using ports 18 and 19, and those ports are configured with
speed 1000:



LH26876_SW2#sh run int g 0/18
!
interface GigabitEthernet 0/18
description LH28765_3
no ip address
speed 1000
!
port-channel-protocol LACP
  port-channel 3 mode active
no shutdown

LH26876_SW2#sh run int g 0/19
!
interface GigabitEthernet 0/19
description LH28765_3
no ip address
speed 1000
!
port-channel-protocol LACP
  port-channel 3 mode active
no shutdown

LH26876_SW2#


------------------------------

Is it alright?
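
To double-check what the ports actually negotiated (rather than what is
configured), I plan to run something like the following. This assumes
Force10/FTOS-style syntax to match the config above, so the exact command
names may differ on other platforms:

LH26876_SW2#show interfaces status
LH26876_SW2#show interfaces gigabitethernet 0/18
LH26876_SW2#show lacp 3

The first should report 1000/full on both member ports, the second shows
the CRC/input error counters, and the last should confirm that
port-channel 3 has both members bundled.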

Regards.

Shahzaib

On Sun, Jan 31, 2016 at 11:18 PM, shahzaib shahzaib <shahzaib.cb at gmail.com>
wrote:

> Hi,
>
> Thanks a lot for the response. Now I suspect the issue is at the network
> layer, as I can see a lot of retransmitted packets in the netstat -s
> output. Here is the server's status:
>
> http://prntscr.com/9xa6z2
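>
> To put a number on the retransmissions, here is roughly what I run on
> FreeBSD (stock netstat, counters since boot):
>
>     netstat -s -p tcp | egrep -i 'sent|retrans'
>
> Comparing retransmitted against total sent data packets gives a rate;
> anything much above about one percent usually points below the
> application layer.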
>
> Here is a thread describing the same issue:
>
>
> http://serverfault.com/questions/218101/freebsd-8-1-unstable-network-connection
>
> This is what he said in that thread:
>
> "I ran into a problem with Cisco Switchs forcing Negotiation of network
> speeds. This caused intermittent errors and retransmissions. The result was
> file transfers being really slow. May not be the cases, but you can turn of
> speed negotiation with miitools (if I recall correctly, been a long time).
> "
>
> >> Can you replicate using ftp, scp?
> Yes, we recently tried downloading a file over FTP and encountered the
> same slow transfer rate.
>
> >> What's the output of zpool iostat (and the overall zpool/zfs
> >> configuration)? Also, do you have ZFS on top of hardware RAID? In
> >> general, just 12 SATA disks won't have a lot of IOPS (especially
> >> random reads) unless it all hits the ZFS ARC (which can/should be
> >> monitored), even more so if there is hardware RAID underneath (in
> >> your place I would flash the HBA with IT firmware so you get plain
> >> JBODs managed by ZFS).
>
> zpool iostat looks quite stable so far. We're using an LSI-9211 HBA, so
> it's not a hardware RAID controller; FreeBSD recommends using an HBA so
> ZFS can directly access all drives for scrubbing and data-integrity
> purposes. Do you recommend hardware RAID? Here is a screenshot of the
> ARC status:
>
> http://prntscr.com/9xaf9p
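>
> To see the pool under load rather than a one-off snapshot, I can run
> this during a slow download (the pool name below is a placeholder):
>
>     zpool iostat -v tank 5
>
> If the per-disk numbers stay low while the transfer crawls, the disks
> are probably not the bottleneck.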
>
> >> How is your switch configured? How are the links negotiated? Make
> >> sure both sides of both links are full-duplex 1 Gbps. Look for CRC
> >> or input errors on the interface side.
> On my side, I can see that both interfaces are running full duplex.
> Regarding CRC/input errors, is there a command I can use to check that
> on FreeBSD?
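>
> So far the closest I have found myself is plain netstat and ifconfig
> (igb0 below is just a placeholder for our NIC name):
>
>     netstat -i                   # Ierrs/Oerrs columns per interface
>     ifconfig igb0 | grep media   # negotiated speed/duplex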
>
> Regards.
> Shahzaib
>
>
> On Sun, Jan 31, 2016 at 11:04 PM, Payam Chychi <pchychi at gmail.com> wrote:
>
>> Hi,
>>
>> Forget about the application layer being the problem until you have
>> successfully replicated the issue in several different setups.
>>
>> Are you monitoring the utilization levels of both links? This really
>> sounds like a network-layer problem or something with your IP stack.
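>>
>> For a live per-interface view on FreeBSD, systat should show whether
>> both links are actually carrying traffic:
>>
>>     systat -ifstat 1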
>>
>> Can you replicate using ftp, scp?
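>>
>> For example (host and path are made up), pulling a large file with scp
>> and discarding it locally takes nginx out of the picture entirely:
>>
>>     scp user@server:/storage/big.file /dev/null
>>
>> If that is just as slow, the problem is below the application layer.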
>>
>> How is your switch configured? How are the links negotiated? Make sure
>> both sides of both links are full-duplex 1 Gbps. Look for CRC or input
>> errors on the interface side.
>>
>> How many packets are you pushing? Make sure the switch isn't activating
>> unicast limiting.
>>
>> Lots of things to check... It would help if you could tell us what
>> tests you've done to determine that it's nginx.
>>
>> Thanks
>>
>> --
>> Payam Chychi
>> Network Engineer / Security Specialist
>>
>> On Sunday, January 31, 2016 at 9:33 AM, Reinis Rozitis wrote:
>>
>> This is a bit out of scope for nginx, but ..
>>
>> > could be network issue or LACP issue but doesn't looks like it is
>>
>>
>> How did you determine this?
>> Can you generate more than 1 Gbps (without nginx)?
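>>
>> A synthetic test, assuming iperf3 is available on both ends (pkg
>> install iperf3 on FreeBSD):
>>
>>     iperf3 -s                          # on the server
>>     iperf3 -c <server-ip> -P 4 -t 30   # on a client, four parallel streams
>>
>> Keep in mind LACP hashes each flow onto a single member link, so one
>> flow tops out at 1 Gbps regardless; multiple flows (ideally from
>> multiple clients) are needed to push the bundle past that.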
>>
>>
>> > 12 x 3TB SATA Raid-10 (HBA LSI-9211)
>> > ZFS FileSystem with 18TB usable space
>>
>>
>> > Please i need guidance to handle with this problem, i am sure that
>> > some value needs to tweak.
>>
>>
>> What's the output of zpool iostat (and the overall zpool/zfs
>> configuration)?
>>
>> Also, do you have ZFS on top of hardware RAID?
>>
>> In general, just 12 SATA disks won't deliver a lot of IOPS (especially
>> random reads) unless it all hits the ZFS ARC (which can and should be
>> monitored), even more so if there is hardware RAID underneath (in your
>> place I would flash the HBA with IT firmware so you get plain JBODs
>> managed by ZFS).
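>>
>> On FreeBSD the ARC counters are exposed via sysctl; a rough hit rate
>> is hits / (hits + misses):
>>
>>     sysctl kstat.zfs.misc.arcstats.hits
>>     sysctl kstat.zfs.misc.arcstats.misses
>>     sysctl kstat.zfs.misc.arcstats.size
>>
>> The sysutils/zfs-stats port prints the same counters as a readable
>> summary.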
>>
>> rr
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>