Multiport Gigabit Card?
#11
(04-22-2020, 09:14 AM)pgwipeout Wrote:
(04-22-2020, 07:20 AM)kuleszdl Wrote: Hi, I tried the following card:

   HP NC360T PCI Express Dual Port Gigabit Server Adapter

It uses the Intel 82571EB chipset. I am using the Debian unstable kernel and, unfortunately, the kernel panics during boot when the card is plugged in. However, I found a very interesting project that provides free firmware for quad-port cards with the BCM5719 chip (it replaces the blobs on the card itself):

https://github.com/meklort/bcm5719-fw

Also, these cards look MUCH more promising to me, as they have just one chip instead of the four found on most Intel quad-port NICs, and they are rated at 4 W maximum (instead of the roughly 10 W that the Intel quad cards seem to use). I am planning to get one and see how well it works.

Edit - could this be related to the following?

https://forum.pine64.org/showthread.php?tid=8374

Good Morning,

It may be a hardware issue, but do note that there is an issue with the rk3399 PCIe controller that is currently unmitigated.
See the LKML thread here: https://lore.kernel.org/linux-pci/CAMdYz...gmail.com/
Also see this for additional information: https://lkml.org/lkml/2020/4/6/320

TL;DR: We found the rk3399 throws either a synchronous error or an SError when a PCIe device sends an unknown message.
The error type is determined by which CPU cluster handles the message.
We hijacked the arm64 error handling and processed it ourselves, which corrects the issue, but it's not a good fix.
In the end, it was determined that significant changes to how arm64 handles PCIe errors in the Linux kernel need to happen.

(05-01-2020, 01:45 PM)kuleszdl Wrote: Thanks a lot @pgwipeout - I had overlooked this.

I applied the hack and also enabled PCIe gen2 mode via the already discussed link-speed change in the dts and ... the kernel now boots with the PCIe card inserted!
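For reference, the link-speed change boils down to a one-property device-tree tweak. This is only a sketch: the node label and the gen1 default are what mainline rk3399.dtsi used at the time, so your tree may differ.

Code:
&pcie0 {
    /* rk3399.dtsi ships with max-link-speed = <1> (gen1); raise it to gen2 (5 GT/s) */
    max-link-speed = <2>;
};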

Even without the taskset call, all NICs are now recognized (I removed the MAC addresses from the output):

Code:
root@devuan:~# lspci
00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 0100
01:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.1 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.2 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.3 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)

root@devuan:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
   link/ether ***************** brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether ***************** brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether ***************** brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether ***************** brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether ***************** brd ff:ff:ff:ff:ff:ff

Power consumption is 5.3 W with one of the four links active. This sounds quite promising; I will now try the other dual-port card I bought previously.

Update: I tried the Intel dual-port NIC (the HP NC360T); however, the ports don't get recognized automatically.

I wonder if it was the change to PCIe gen 2 that made it work correctly?
Do you notice if it is running at native speed or downgraded under lspci -vvv?
#12
Hard to tell without recompiling. This is what I'm getting in lspci -vvv regarding the speed:

Code:
root@devuan:~# lspci -vvv|grep -i speed
                LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
#13
(05-01-2020, 05:35 PM)kuleszdl Wrote: Hard to tell without recompiling. This is what I'm getting in lspci -vvv regarding the speed:

Code:
root@devuan:~# lspci -vvv|grep -i speed
                LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

Nope, it's operating at the maximum speed it can.
If it wasn't, you would see a (downgraded) next to the Speed 2.5GT/s.
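For anyone checking their own card: with a reasonably recent pciutils, a link that trained below the device's own capability is flagged right in the output, roughly like this (illustrative line, not taken from this board):

Code:
LnkSta: Speed 2.5GT/s (downgraded), Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-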
#14
I have now applied the two patches discussed in this thread (the hack from the LKML and the PCIe gen2 enablement), added the tg3 driver, and built a custom OpenWrt image. Very good news: it boots up fine!
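A rough sketch of that build step, in case someone wants to reproduce it (kmod-tg3 is the OpenWrt package name for the Broadcom tg3 driver; where exactly the two kernel patches go depends on the target tree you build from):

Code:
# enable the BCM5719 driver in the image, then rebuild
echo "CONFIG_PACKAGE_kmod-tg3=y" >> .config
make defconfig
make -j"$(nproc)"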

Then I set up a small testbed with four Lenovo ThinkPad X200(s) machines (each with an Intel NIC) and ran a couple of benchmarks and power measurements (using a simple wall meter):

Code:
               | 0 links | 1 link | 2 links | 3 links | 4 links
----------------------------------------------------------------
Idle (W)       | 5.3     | 5.8    | 6.5     | 7.3     | 8.1
Load (W)       | -       | 6.4    | 7.6     | 8.6     | 9.4
Speed (Mbit/s) | -       | 940    | 930     | 890     | 630

The limiting factor in terms of speed seems to be the NIC, or maybe thermals (mine lacks a cooler). The CPU load with four iperf3 tests running was still around 30%, so there are plenty of CPU cycles left on the rockpro64 side. Interesting note: when the speed dropped, it dropped to the same level on all four machines.
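For reference, the test runs boil down to one iperf3 server per port on the rockpro64 and one client per laptop; the addresses and ports below are placeholders:

Code:
# on the rockpro64: one listener per NIC port, daemonized
for p in 5201 5202 5203 5204; do iperf3 -s -p $p -D; done
# on each X200: stream for 60 s to the address of "its" port
iperf3 -c 192.168.10.1 -p 5201 -t 60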

During the tests, the rockpro64 itself was very stable (no crashes or errors in dmesg).

Here are two pictures illustrating the setup:


[attachment: overview-web.jpg]
[attachment: detail-web.jpg]

@tllim A firewall case would be welcome! Apart from a, let's say, smaller version of the NAS case, I was also thinking about a tall case where the NIC is mounted using a PCIe riser card.

As a next step I plan to put the rockpro64 in the NAS case and replace my existing router with it, to see how stable the whole thing runs in a more realistic 24/7 setting. Another goal is to replace the proprietary firmware on the NICs with the free alternative I mentioned earlier in this thread.
#15
(05-01-2020, 09:01 PM)kuleszdl Wrote: I have now applied the two patches discussed in this thread (the hack from the LKML and the PCIe gen2 enablement), added the tg3 driver, and built a custom OpenWrt image. Very good news: it boots up fine!

Then I set up a small testbed with four Lenovo ThinkPad X200(s) machines (each with an Intel NIC) and ran a couple of benchmarks and power measurements (using a simple wall meter):

Code:
               | 0 links | 1 link | 2 links | 3 links | 4 links
----------------------------------------------------------------
Idle (W)       | 5.3     | 5.8    | 6.5     | 7.3     | 8.1
Load (W)       | -       | 6.4    | 7.6     | 8.6     | 9.4
Speed (Mbit/s) | -       | 940    | 930     | 890     | 630

The limiting factor in terms of speed seems to be the NIC, or maybe thermals (mine lacks a cooler). The CPU load with four iperf3 tests running was still around 30%, so there are plenty of CPU cycles left on the rockpro64 side. Interesting note: when the speed dropped, it dropped to the same level on all four machines.

During the tests, the rockpro64 itself was very stable (no crashes or errors in dmesg).

Here are two pictures illustrating the setup:





@tllim A firewall case would be welcome! Apart from a, let's say, smaller version of the NAS case, I was also thinking about a tall case where the NIC is mounted using a PCIe riser card.

As a next step I plan to put the rockpro64 in the NAS case and replace my existing router with it, to see how stable the whole thing runs in a more realistic 24/7 setting. Another goal is to replace the proprietary firmware on the NICs with the free alternative I mentioned earlier in this thread.

Hmmm, both the 3-link and 4-link aggregates are pegged at roughly 2500 Mbps.

This matches up with how much I've been able to get out of my 10gig card, but I don't have the hardware to saturate it.
I wonder if ~2.5 Gbps is the maximum throughput of the PCIe controller, which would be pretty disappointing.

Edit:
Never mind, I've plugged everything I have into the 10gig switch directly, and I'm able to get over 3 Gbps in client mode.
It would seem the rk3399 doesn't have the power to generate more than that with iperf3 in server mode.
Also, people have seen 800 MB/s on SSDs, which is more than double this.

I'll need to test again though once I get a full 10gig trunk going.
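If someone wants to repeat the direction comparison, both cases can be driven from the rockpro64 itself; the address is a placeholder, -P adds parallel streams and -R reverses the direction so the rockpro64 receives instead of sends:

Code:
# rockpro64 as iperf3 client, sending:
iperf3 -c 192.168.10.2 -P 4 -t 30
# same link, rockpro64 receiving (reverse mode):
iperf3 -c 192.168.10.2 -P 4 -t 30 -R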

BTW, here's my setup:

Cisco 2960s
x520 Dual Port Intel



[attachment: 20200502_145455.jpg]
[attachment: 20200502_145524.jpg]
[attachment: Annotation 2020-05-02 145603.png]
#16
Cool, looks like you are already running yours in production - or are you still testing the setup?

Regarding the 3 GBit/s maximum - well, this might be due to the fact that your NIC has two ports. Assuming that the observed 800 MB/s when running a single NVMe drive in x4 mode is the maximum you can get from this SoC at the moment, this would support that theory, since

2 * 3 GBit/s = 6 GBit/s =~ 750 MB/s
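Quick sanity check on the conversion (8 bits per byte):

Code:
echo $(( 2 * 3000 / 8 ))   # two ports x 3000 Mbit/s = 6000 Mbit/s -> 750 MB/s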


Personally, I rather need a setup with 3-4 1 GBit/s links, and the rockpro64 seems to be a good match for that. But apart from the speed, the power consumption is not that satisfactory for me compared to x86_64 alternatives like the APU series, which are more or less on the same level.
#17
So after much swearing and crawling around under my house, I have a ten gig trunk running between my two switches.
As such, I've gotten the rockpro64 over 4.21Gbps and it still had room to spare.

(05-02-2020, 03:10 PM)kuleszdl Wrote: Cool, looks like you are already running yours in production - or are you still testing the setup?

Regarding the 3 GBit/s maximum - well, this might be due to the fact that your NIC has two ports. Assuming that the observed 800 MB/s when running a single NVMe drive in x4 mode is the maximum you can get from this SoC at the moment, this would support that theory, since

2 * 3 GBit/s = 6 GBit/s =~ 750 MB/s


Personally, I rather need a setup with 3-4 1 GBit/s links, and the rockpro64 seems to be a good match for that. But apart from the speed, the power consumption is not that satisfactory for me compared to x86_64 alternatives like the APU series, which are more or less on the same level.

I test in production, sometimes to the ire of my partner.
As for the power/speed/cost compared to an x64 alternative, all I can say is this:
The rockpro64 4gb board is $70.
$20 in accessories to get a fully functional computer.
A two-port Intel 10gig NIC goes for $60 on eBay.
So that's $150 for a router capable of serving multi-gig internet and a 10gig NAS.

The cheapest brand-name x64 board is a NUC.
$500 for a Thunderbolt 3 capable i3 NUC.
~$200 for a Thunderbolt PCIe dock.
$60 for a dual-port 10gig NIC.
So $760 minimum to do the same with x64.
I've priced out mini AMD builds with an ITX motherboard, no case, and a minimal PSU, and I still come out at about $500.

And to top it off, the rockpro64 has a completely open-source firmware stack.
For an internet-facing device, that is gold for cybersecurity.
#18
I haven't purchased any hardware yet. I'm looking for an SBC with two ethernet ports and wifi.

Reading through the links provided by pgwipeout, I hesitate to buy a rockpro64 + a (random) PCIe ethernet card.

The cleanest solution would be one that doesn't require any PCIe cards. Are there Linux-friendly SoCs supporting two ethernet ports? Which SBCs are built around them?
#19
I guess all SoCs that support more than one NIC provide the second one either via USB or internally via PCIe. There are some boards built like this; however, I am not sure the Pine64 forums are the right place to discuss other boards.
#20
(05-30-2020, 10:31 AM)kuleszdl Wrote: I guess all SoCs that support more than one NIC provide the second one either via USB or internally via PCIe.

Which ones?


(05-30-2020, 10:31 AM)kuleszdl Wrote: There are some boards built like this

Which ones?


(05-30-2020, 10:31 AM)kuleszdl Wrote: however, I am not sure the Pine64 forums are the right place to discuss other boards.

Why not?

