PINE64
ROCKPRO64 with 10GbE NICs - Printable Version

Thread: ROCKPRO64 with 10GbE NICs (https://forum.pine64.org/showthread.php?tid=6964)



RE: ROCKPRO64 with 10GbE NICs - ddimension - 03-03-2019

(02-24-2019, 03:35 AM)ddimension Wrote:
(12-18-2018, 04:24 PM)ddimension Wrote:
(12-17-2018, 06:44 AM)H.HSEL Wrote: I bought a RockPro64 and am trying to build a budget 10GbE storage server.
After changing its MTU from the default 1500 to 9000, I ran iperf and got around 2.70Gbps, though the result does not seem stable; sometimes the value drops to 2.30Gbps or lower.
I don't know why, but manually assigning the iperf processes to a specific core (taskset -c 5 iperf -s, for example) seems to give better and more stable results, around 3.00Gbps.
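
For reference, a minimal sketch of that pinning setup (the interface name and the peer's address are just examples; on the RK3399, cores 4-5 are usually the big A72 cores, which is why pinning there tends to help):
Code:
# jumbo frames on both ends first (eth0 is an example interface name)
sudo ip link set dev eth0 mtu 9000
# pin the iperf server to one of the big A72 cores (cores 4-5 on the RK3399)
taskset -c 5 iperf -s
# on the other machine, pin the client too (address is an example)
taskset -c 5 iperf -c 192.168.1.50 -t 60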

Hi!

I also tested Aquantia cards and concluded that the driver has poor performance, i.e. missing offloading capabilities. I also have a Tehuti TN4xxx network card. That one gave me about 9GBit/s single-threaded iperf3 performance, a lot better. Even VLAN offloading works.

If you want to give it a try, you have to define RX_REUSE_PAGES in tn40.h.
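
Roughly like this, assuming you are building the out-of-tree tn40xx driver from its source tree (the make targets below are the usual ones, but your copy may differ):
Code:
# add the define near the top of tn40.h if it is not already there
grep -q 'define RX_REUSE_PAGES' tn40.h || sed -i '1i #define RX_REUSE_PAGES' tn40.h
# rebuild and reload the driver
make
sudo make install
sudo rmmod tn40xx 2>/dev/null; sudo modprobe tn40xx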

After long testing I experienced OOM under higher loads. The problem is the ARM memory management and its DMA capabilities.
You should not set a coherent_pool in the kernel args and should instead provide a big area of contiguous memory (which will be used for DMA), like:
cma=512M

Perhaps a smaller area is also possible.
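
For example (on ayufan's images the command line lives in /boot/extlinux/extlinux.conf; on Armbian you can use the extraargs line of /boot/armbianEnv.txt; adjust to whatever your image uses):
Code:
# Armbian: append the CMA reservation to the kernel command line
echo 'extraargs=cma=512M' | sudo tee -a /boot/armbianEnv.txt
# ayufan images: add cma=512M to the "append" line in /boot/extlinux/extlinux.conf instead
# leave coherent_pool= off the command line entirely, reboot, then verify the reservation:
dmesg | grep -i 'cma:'
grep -i cma /proc/meminfo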

BTW, I use 4 lanes, as shown in the verbose lspci output below:
Code:
01:00.0 Ethernet controller: Tehuti Networks Ltd. TN9510 10GBase-T/NBASE-T Ethernet Adapter
    Subsystem: Tehuti Networks Ltd. Ethernet Adapter
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 233
    Region 0: Memory at fa000000 (64-bit, prefetchable) [size=64K]
    Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Address: 00000000fee30040  Data: 0000
    Capabilities: [78] Power Management version 3
        Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [80] Express (v2) Endpoint, MSI 00
        DevCap:    MaxPayload 512 bytes, PhantFunc 0, Latency L0s <64ns, L1 <2us
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
        DevCtl:    Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
            RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
            MaxPayload 256 bytes, MaxReadReq 512 bytes
        DevSta:    CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
        LnkCap:    Port #1, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <512ns, L1 <2us
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
        LnkCtl:    ASPM Disabled; RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta:    Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range A, TimeoutDis+, LTR-, OBFF Not Supported
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
        LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
             EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
    Capabilities: [100 v1] Virtual Channel
        Caps:    LPEVC=0 RefClk=100ns PATEntryBits=1
        Arb:    Fixed- WRR32- WRR64- WRR128-
        Ctrl:    ArbSelect=Fixed
        Status:    InProgress-
        VC0:    Caps:    PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
            Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
            Ctrl:    Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
            Status:    NegoPending- InProgress-
    Kernel driver in use: tn40xx
    Kernel modules: tn40xx

After days of testing, the errors came up again. The best working NIC I found is the Intel DA-520. That one is rock solid, but a bit slower.

I would like to use RDMA, so has anybody tried Mellanox cards?


RE: ROCKPRO64 with 10GbE NICs - igorp - 08-14-2019

10GbE network testing on the RockPro64 with an Aquantia-based card. Porting the latest upstream driver to 4.4.y was not very successful (too many troubles) ... only with 5.3.y, default MTU ->

[Image: EB6IgMDXoAA6py9?format=png&name=small]

On the other side, also an Aquantia adapter, connected via Thunderbolt/USB-C Gen 2 to stock Debian Buster (Dell XPS 13):

https://twitter.com/armbian/status/1161515847124488198


RE: ROCKPRO64 with 10GbE NICs - stuartiannaylor - 08-14-2019

(08-14-2019, 12:09 AM)igorp Wrote: 10GbE network testing on the RockPro64 with an Aquantia-based card. Porting the latest upstream driver to 4.4.y was not very successful (too many troubles) ... only with 5.3.y, default MTU ->

[Image: EB6IgMDXoAA6py9?format=png&name=small]

On the other side, also an Aquantia adapter, connected via Thunderbolt/USB-C Gen 2 to stock Debian Buster (Dell XPS 13):

https://twitter.com/armbian/status/1161515847124488198

That is what I have been meaning to check, as I have been wondering what the cap is for the RockPro64's I/O.
That sort of bench is still of limited use on its own, as I think it is essentially memcpy network speed, a network transfer to nowhere.
Do you have an NVMe, SATA RAID or maybe a USB drive that you could test over NFS or Samba, or even with wget?
It's great that it can do 6.56Gb/s, but what does that leave for other system activity?

I am not dissing your amazing bench, but I am really interested in what the max would be in terms of I/O for some real-world applications.
From file server to cluster, it would be really great to see some benches, as I have been extremely curious about the limits.
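
Something along these lines is what I mean (the hostname, export and file paths below are just placeholders):
Code:
# mount an NFS export backed by the RockPro64's NVMe/SATA storage and read a big file
sudo mkdir -p /mnt/bench
sudo mount -t nfs rockpro64:/srv/export /mnt/bench
dd if=/mnt/bench/bigfile.bin of=/dev/null bs=1M status=progress
# or let wget report the sustained HTTP rate from the same box
wget -O /dev/null http://rockpro64/bigfile.bin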

Dunno if you used the new Ayufan images, but I tried them yesterday and they are looking absolutely amazing.
I have some 2.5GbE USB adapters that also seem to work far better on the new kernels.
https://github.com/ayufan-rock64/linux-mainline-kernel/releases/tag/5.3.0-rc4-1118-ayufan


RE: ROCKPRO64 with 10GbE NICs - igorp - 08-14-2019

> I am not dissing your amazing bench, but I am really interested in what the max would be in terms of I/O for some real-world applications.

ASAP :P

> Dunno if you used the new Ayufan images, but I tried them yesterday and they are looking absolutely amazing.

No, it's Armbian with relatively small changes.