Multiport Gigabit Card?
#1
I'm just playing around here, looking to get more bandwidth...
Does anyone know if adding a 2- or 4-port gigabit PCIe card with an Intel Pro/1000 chipset and bonding the interfaces together would work?
I.e. would the RockPro64 allow the bandwidth through, or would there be a bottleneck that makes it not worth it?
Thanks
Neeko
#2
(07-28-2018, 11:26 PM)jerry110 Wrote: I'm just playing around here, looking to get more bandwidth...
Does anyone know if adding a 2- or 4-port gigabit PCIe card with an Intel Pro/1000 chipset and bonding the interfaces together would work?
I.e. would the RockPro64 allow the bandwidth through, or would there be a bottleneck that makes it not worth it?
Thanks
Neeko

I have an I350-T4 and a Brocade switch to test LACP bonding with this week; I'll post results when I do.

Another option would be 10GbE, but I haven't managed to get the RockPro64 to detect any of my Mellanox cards, so I haven't been able to test 10GbE bandwidth.
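
For reference, once the extra ports show up, the bond itself can be set up with iproute2 along these lines. This is just a minimal sketch: the interface names and the address are placeholders, and the switch side needs a matching LACP/LAG configuration.

Code:
# Create an 802.3ad (LACP) bond; eth1/eth2 and the address are placeholders
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

Keep in mind that 802.3ad hashes per flow, so a single TCP stream still tops out at one link's bandwidth; the aggregate only helps with multiple parallel flows.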
#3
I just verified that the X520 dual-port card works with the RockPro64 on mainline.
#4
Hi, I tried the following card:

   HP NC360T PCI Express Dual Port Gigabit Server Adapter

It uses the 82571EB chipset. I am using the Debian unstable kernel and, unfortunately, the kernel panics during boot when the card is plugged in. However, I found a very interesting project that provides free firmware for quad-port cards based on the BCM5719 chip (it replaces the blobs on the card itself):

https://github.com/meklort/bcm5719-fw

Also, these cards look MUCH more promising to me, as they have just one chip instead of four like most Intel NICs and are rated at 4 W maximum (instead of the 10 W that the Intel quad-port cards seem to use). I am planning to get one and see how well it works.

Edit: could this be related to the following?

https://forum.pine64.org/showthread.php?tid=8374
#5
(04-22-2020, 07:20 AM)kuleszdl Wrote: Hi, I tried the following card:

   HP NC360T PCI Express Dual Port Gigabit Server Adapter

It uses the 82571EB chipset. I am using the Debian unstable kernel and, unfortunately, the kernel panics during boot when the card is plugged in. However, I found a very interesting project that provides free firmware for quad-port cards based on the BCM5719 chip (it replaces the blobs on the card itself):

https://github.com/meklort/bcm5719-fw

Also, these cards look MUCH more promising to me, as they have just one chip instead of four like most Intel NICs and are rated at 4 W maximum (instead of the 10 W that the Intel quad-port cards seem to use). I am planning to get one and see how well it works.

Edit: could this be related to the following?

https://forum.pine64.org/showthread.php?tid=8374

Good Morning,

It may be a hardware issue, but do note there is an issue with the rk3399 PCIe controller that is currently unmitigated.
See the LKML thread here: https://lore.kernel.org/linux-pci/CAMdYz...gmail.com/
Also see this for additional information: https://lkml.org/lkml/2020/4/6/320

TL;DR: We found the rk3399 throws either a synchronous error or an SError when a PCIe device sends an unknown message.
The error type is determined by which CPU cluster handles the message.
We hijacked the arm64 error handling and processed it ourselves, and that corrects the issue, but it's not a good fix.
In the end, it was determined that significant changes to how arm64 handles PCIe errors in the Linux kernel need to happen.
#6
I received the BCM5719-based quad-port NIC today. I tried it with the vendor firmware and without the alternative dtb first. Unfortunately, this resulted in a kernel panic on boot. I plan to investigate this further.
#7
(04-25-2020, 09:21 AM)kuleszdl Wrote: I received the BCM5719-based quad-port NIC today. I tried it with the vendor firmware and without the alternative dtb first. Unfortunately, this resulted in a kernel panic on boot. I plan to investigate this further.

Looking forward to your investigation results.
#8
Please find attached the kernel log with the crash when the PCIe card is inserted:


Attachment: crash.txt (6.47 KB)

I am running the current 5.6 mainline kernel from Debian unstable.

I found a report about similar issues on the Manjaro forums:

https://forum.manjaro.org/t/freezes-on-r...4/97978/85

I tried limiting the number of CPU cores as suggested there by appending

Code:
maxcpus=1

to the kernel command line. However, this did not work either. I am getting basically the following error now:

Code:
Internal error: synchronous external abort: 96000210 [#1] SMP


Any ideas?
#9
(04-30-2020, 06:49 PM)kuleszdl Wrote: Please find attached the kernel log with the crash when the PCIe card is inserted:



I am running the current 5.6 mainline kernel from Debian unstable.

I found a report about similar issues on the Manjaro forums:

https://forum.manjaro.org/t/freezes-on-r...4/97978/85

I tried limiting the number of CPU cores as suggested there by appending

Code:
maxcpus=1

to the kernel command line. However, this did not work either. I am getting basically the following error now:

Code:
Internal error: synchronous external abort: 96000210 [#1] SMP


Any ideas?


This is exactly the error described above:
https://forum.pine64.org/showthread.php?...2#pid64622
Quote:It may be a hardware issue, but do note there is an issue with the rk3399 PCIe controller that is currently unmitigated.
See the LKML thread here: https://lore.kernel.org/linux-pci/CAMdYz...gmail.com/
Also see this for additional information: https://lkml.org/lkml/2020/4/6/320

TL;DR: We found the rk3399 throws either a synchronous error or an SError when a PCIe device sends an unknown message.
The error type is determined by which CPU cluster handles the message.
We hijacked the arm64 error handling and processed it ourselves, and that corrects the issue, but it's not a good fix.
In the end, it was determined that significant changes to how arm64 handles PCIe errors in the Linux kernel need to happen.

There's a hack on the mailing list to disable SError handling (https://lkml.org/lkml/diff/2020/4/27/1041/1); then you can load the PCIe module manually with:
Code:
taskset -c 4 modprobe pcie_rockchip_host


But this is nothing more than a hack; in the end, the PCIe controller doesn't handle certain error sequences correctly, which is a hardware bug.
#10
Thanks a lot @pgwipeout - I had overlooked this.

I applied the hack and also enabled PCIe Gen2 mode via the already discussed link-speed change in the dts, and... the kernel now boots with the PCIe card inserted!
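
For anyone wanting to try the same thing, here is a rough sketch of one way to make that dts change by decompiling and recompiling the board dtb. The dtb path is just an example for a typical install, and the relevant property in the PCIe controller node is max-link-speed; adjust both for your image.

Code:
# Decompile the board dtb (path is an example; adjust to your install)
dtc -I dtb -O dts -o rk3399-rockpro64.dts /boot/dtbs/rockchip/rk3399-rockpro64.dtb
# Edit the pcie controller node so it contains:  max-link-speed = <2>;
# Then recompile and replace the original dtb
dtc -I dts -O dtb -o /boot/dtbs/rockchip/rk3399-rockpro64.dtb rk3399-rockpro64.dts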

Even without the taskset call, all NICs are now recognized (I removed the MAC addresses from the output):

Code:
root@devuan:~# lspci
00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 0100
01:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.1 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.2 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.3 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)

root@devuan:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ***************** brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ***************** brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ***************** brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ***************** brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ***************** brd ff:ff:ff:ff:ff:ff

Power consumption is at 5.3 W with one of the four links active. This looks quite promising; I will now try the other dual-port card I bought previously.

Update: I tried the Intel dual-port NIC (the HP NC360T); however, the ports are not recognized automatically.