ROCKPRO64 with 10GbE NICs
#11
(02-24-2019, 03:35 AM)ddimension Wrote:
(12-18-2018, 04:24 PM)ddimension Wrote:
(12-17-2018, 06:44 AM)H.HSEL Wrote: I bought a RockPro64 and am trying to build a budget 10GbE storage.
After changing its MTU from 1500 (default) to 9000, I ran iperf and got a result of around 2.70Gbps, though the result does not seem stable, sometimes dropping to 2.30Gbps or lower.
I don't know why, but manually pinning iperf processes to a specific core (taskset -c 5 iperf -s, for example) seems to give better and more stable results, around 3.00Gbps.
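
The commands for that look roughly like this (eth0 and the core number are just examples):
Code:
# raise the MTU to jumbo frames (eth0 is a placeholder for the 10GbE interface)
sudo ip link set dev eth0 mtu 9000
# pin the iperf server to core 5 (one of the RK3399's Cortex-A72 cores)
taskset -c 5 iperf -s
# on the client, pin likewise and run a longer test
taskset -c 5 iperf -c <server-ip> -t 30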

Hi!

I also tested Aquantia cards and concluded that the driver has poor performance, i.e. missing offloading capabilities. I also have a Tehuti TN4xxx network card. That one gave me about 9 Gbit/s single-threaded iperf3 performance, a lot better. Even VLAN offloading works.
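
You can see what each driver actually offloads with ethtool (eth0 is a placeholder for the NIC):
Code:
# list offload features; entries marked [fixed] cannot be toggled
ethtool -k eth0
# check the VLAN and segmentation offloads specifically
ethtool -k eth0 | grep -E 'vlan|segmentation'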

If you want to give it a try, you have to define RX_REUSE_PAGES in tn40.h
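
Roughly like this in the driver source tree (file layout per the tn40xx out-of-tree driver):
Code:
# add this line near the top of tn40.h:
#     #define RX_REUSE_PAGES
# then rebuild and reload the out-of-tree module
make
sudo rmmod tn40xx
sudo insmod tn40xx.ko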

After long testing I experienced OOMs under higher loads. The problem is ARM memory management and DMA capabilities.
You should not set coherent_pool in the kernel args, and instead provide a big area of contiguous memory (which will be used for DMA), like:
cma=512M

Perhaps a smaller area would also work.
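
On Armbian that goes into the boot environment, roughly like so (the file path is the Armbian convention; other images differ):
Code:
# /boot/armbianEnv.txt
extraargs=cma=512M

# verify the reservation after reboot
dmesg | grep -i cma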

BTW, I use 4 lanes:
Code:
01:00.0 Ethernet controller: Tehuti Networks Ltd. TN9510 10GBase-T/NBASE-T Ethernet Adapter
    Subsystem: Tehuti Networks Ltd. Ethernet Adapter
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 233
    Region 0: Memory at fa000000 (64-bit, prefetchable) [size=64K]
    Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Address: 00000000fee30040  Data: 0000
    Capabilities: [78] Power Management version 3
        Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [80] Express (v2) Endpoint, MSI 00
        DevCap:    MaxPayload 512 bytes, PhantFunc 0, Latency L0s <64ns, L1 <2us
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
        DevCtl:    Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
            RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
            MaxPayload 256 bytes, MaxReadReq 512 bytes
        DevSta:    CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
        LnkCap:    Port #1, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <512ns, L1 <2us
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
        LnkCtl:    ASPM Disabled; RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
        LnkSta:    Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range A, TimeoutDis+, LTR-, OBFF Not Supported
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
        LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance De-emphasis: -6dB
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
             EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
    Capabilities: [100 v1] Virtual Channel
        Caps:    LPEVC=0 RefClk=100ns PATEntryBits=1
        Arb:    Fixed- WRR32- WRR64- WRR128-
        Ctrl:    ArbSelect=Fixed
        Status:    InProgress-
        VC0:    Caps:    PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
            Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
            Ctrl:    Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
            Status:    NegoPending- InProgress-
    Kernel driver in use: tn40xx
    Kernel modules: tn40xx

After days of testing, the errors came up again. The best working NIC I found is the Intel DA-520. It is rock solid, but a bit slower.

I would like to use RDMA, so did anybody try Mellanox cards?
#12
10GbE network testing on the RockPro64 with an Aquantia-based card. Porting the latest upstream driver to 4.4.y was not very successful (too many troubles) ... only with 5.3.y, default MTU ->

[Image: EB6IgMDXoAA6py9?format=png&name=small]

On the other side, also an Aquantia, connected via Thunderbolt/USB-C gen2 to stock Debian Buster (Dell XPS13)

https://twitter.com/armbian/status/1161515847124488198
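
The numbers come from a plain iperf3 run between the two hosts, something like (addresses are placeholders):
Code:
# on the RockPro64
iperf3 -s
# on the peer
iperf3 -c <rockpro64-ip> -t 30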
#13
(08-14-2019, 12:09 AM)igorp Wrote: 10GbE network testing on the RockPro64 with an Aquantia-based card. Porting the latest upstream driver to 4.4.y was not very successful (too many troubles) ... only with 5.3.y, default MTU ->

[Image: EB6IgMDXoAA6py9?format=png&name=small]

On the other side, also an Aquantia, connected via Thunderbolt/USB-C gen2 to stock Debian Buster (Dell XPS13)

https://twitter.com/armbian/status/1161515847124488198

That is what I have been meaning to check, as I have been wondering what the cap for the RockPro64's I/O is.
The iperf bench is still sort of useless for that, as I think that is memcpy network speed, and it's a network transfer to nowhere.
Do you have an NVMe, SATA RAID, or maybe USB drive that you could NFS or Samba test, or even wget?
It's great that it can do 6.56Gb/s, but what does that leave for other system activity?

I am not dissing your amazing bench, but I am really interested in what the max would be in terms of I/O for some real-world applications.
From file server to cluster, it would be really great to see some benches, as I have been extremely curious about the limits.
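
A simple real-world test along those lines would be something like this (export path and sizes are placeholders; NFS server setup assumed):
Code:
# on a client, mount an NVMe-backed NFS export from the RockPro64
sudo mount -t nfs <rockpro64-ip>:/srv/nvme /mnt/test
# sequential write and read through the network stack and the disk
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192 oflag=direct
dd if=/mnt/test/bigfile of=/dev/null bs=1M iflag=direct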

Dunno if you used the new Ayufan images, but I tried them yesterday and they are looking absolutely amazing.
I have some 2.5GbE USB adapters that also seem to work far better on the new kernels.
https://github.com/ayufan-rock64/linux-m...118-ayufan
#14
> I am not dissing your amazing bench, but I am really interested in what the max would be in terms of I/O for some real-world applications.

ASAP :P

> Dunno if you used the new Ayufan images, but I tried them yesterday and they are looking absolutely amazing.

No, it's Armbian with relatively small changes.

