I'm just playing here looking to get more bandwidth.....
Anyone know if adding a 2/4-port gigabit PCIe card using an Intel PRO/1000 chipset and bonding the interfaces together would work?
I.e., would the RockPro64 allow the bandwidth through, or would there be a bottleneck that wouldn't make it worth it?
Thanks
Neeko
(07-28-2018, 11:26 PM)jerry110 Wrote: I'm just playing here looking to get more bandwidth.....
Anyone know if adding a 2/4 port gigabit pcie card using a Intel pro 1000 chipset and binding the interfaces together would work?
IE would the rockpro64 allow the bandwidth through or would there be a bottleneck that wouldn't make it worth it?
Thanks
Neeko
I have an I350-T4 and a Brocade switch to test LACP bonding this week, I'll post results when I do.
Another option would be 10GbE, but I haven't managed to get the RockPro64 to detect any of my Mellanox cards so I haven't been able to test 10GbE bandwidth.
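For anyone who wants to try bonding once the links are detected, here is a minimal sketch of an 802.3ad (LACP) bond on Debian/Devuan using the ifenslave package. The interface names eth1/eth2 are placeholders for whatever the PCIe NIC's ports enumerate as, and the switch ports must be configured for LACP as well:

```
# /etc/network/interfaces fragment -- sketch only.
# Assumes the ifenslave package is installed and the switch side
# (e.g. the Brocade mentioned above) has a matching LACP trunk.
auto bond0
iface bond0 inet dhcp
    bond-slaves eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4
```

Note that 802.3ad hashes each flow onto one link, so a single TCP stream still tops out at 1 Gbit/s; the gain shows up with multiple parallel streams.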
I just verified that the X520 dual-port works with the RockPro64 on mainline.
04-22-2020, 07:20 AM
(This post was last modified: 04-22-2020, 07:39 AM by kuleszdl.)
Hi, I tried the following card:
HP NC360T PCI Express Dual Port Gigabit Server Adapter
It uses the 82571EB chipset. I am using the Debian unstable kernel and, unfortunately, the kernel panics during boot when the card is plugged in. However, I found a very interesting project that provides free firmware for quad-port cards with the BCM5719 chip (it replaces the blobs on the card itself):
https://github.com/meklort/bcm5719-fw
Also, these cards look MUCH more promising to me, as they have just one chip instead of four like most Intel NICs and are rated at 4 W maximum (instead of the 10 W that the Intel quad-port cards seem to use). I am planning to get one and see how well this works.
Edit - could this be related to the following?:
https://forum.pine64.org/showthread.php?tid=8374
04-22-2020, 09:14 AM
(This post was last modified: 04-22-2020, 09:19 AM by pgwipeout. Edit Reason: Fix the error type and add additional information link)
(04-22-2020, 07:20 AM)kuleszdl Wrote: Hi, I tried the following card:
HP NC360T PCI Express Dual Port Gigabit Server Adapter
It uses the 82571EB chipset. I am using the Debian unstable Kernel and, unfortunately, the kernel panics during boot when the card is plugged in. However, I found a very interesting project that provides a free firmware for quad-cards with the BCM5719 chip (it replaces the blobs in the card itself):
https://github.com/meklort/bcm5719-fw
Also, these cards look MUCH more promising to me as they have just one chip instead of four like most Intel NICs and are rated at 4W maximum (instead of 10W that the Intel quad cards seem to use). I am planning to get one and see how well this works.
Edit - could this be related to the following?:
https://forum.pine64.org/showthread.php?tid=8374
Good Morning,
It may be a hardware issue, but do note there is a known issue with the rk3399 PCIe controller that is currently unmitigated.
See the LKML thread here : https://lore.kernel.org/linux-pci/CAMdYz...gmail.com/
Also see this for additional information : https://lkml.org/lkml/2020/4/6/320
TL;DR: We found the rk3399 throws either a synchronous external abort or an SError when a PCIe device sends an unknown message.
The error type is determined by which CPU cluster handles the message.
We hijacked the arm64 error handling and processed it ourselves, which corrects the issue, but it's not a good fix.
In the end, it was determined that significant changes to how arm64 handles PCIe errors in the Linux kernel are needed.
I received the BCM5719-based quadport NIC today. I tried it with the vendor firmware and without the alternative dtb first. Unfortunately, this resulted in a kernel panic on boot. I plan to investigate this further.
(04-25-2020, 09:21 AM)kuleszdl Wrote: I received the BCM5719-based quadport NIC today. I tried it with the vendor firmware and without the alternative dtb first. Unfortunately, this resulted in a kernel panic on boot. I plan to investigate this further.
Looking forward to your investigation results.
Please find attached the kernel log with the crash when the PCIe card is inserted:
crash.txt (Size: 6.47 KB / Downloads: 433)
I am running the current 5.6 Mainline kernel from Debian unstable.
I found a report about similar issues on the Manjaro forums:
https://forum.manjaro.org/t/freezes-on-r...4/97978/85
I tried limiting the number of CPU cores as suggested there by appending
to the kernel command line. However, this did not work either. I am getting basically the following error now:
Code: Internal error: synchronous external abort: 96000210 [#1] SMP
Any ideas?
(04-30-2020, 06:49 PM)kuleszdl Wrote: Please find attached the kernel log with the crash when the PCIe card is inserted:
I am running the current 5.6 Mainline kernel from Debian unstable.
I found a report about similar issues on the Manjaro forums:
https://forum.manjaro.org/t/freezes-on-r...4/97978/85
I tried limiting the number of CPU cores as suggested there by appending
to the kernel command line. However, this did not work either. I am getting basically the following error now:
Code: Internal error: synchronous external abort: 96000210 [#1] SMP
Any ideas?
This is exactly the error described above:
https://forum.pine64.org/showthread.php?...2#pid64622
Quote:It may be the hardware issue, but do note there is an issue with the rk3399 pcie controller that is currently unmitigated.
See the LKML thread here : https://lore.kernel.org/linux-pci/CAMdYz...gmail.com/
Also see this for additional information : https://lkml.org/lkml/2020/4/6/320
TLDR: We found the rk3399 throws either a synchronous error or a SError when a pcie device sends an unknown message.
The error type is determined by which cpu cluster handles the message.
We hijacked the arm64 error handling and processed it ourselves, and that corrects the issue, but it's not a good fix.
In the end, it was determined that significant changes to how arm64 handles pcie errors in the linux kernel need to happen.
There's a hack on the mailing list to disable SError handling (https://lkml.org/lkml/diff/2020/4/27/1041/1); then you can load the PCIe module manually with:
Code: taskset -c 4 modprobe pcie_rockchip_host
But this is nothing more than a hack; in the end, the PCIe controller doesn't handle certain error sequences correctly, which is a hardware bug.
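Putting the workaround together, the steps look roughly like this. This is a sketch only: it assumes the SError-handling patch from the mailing list is applied, that pcie_rockchip_host is built as a module rather than built in, and the blacklist file path is an assumption to verify against your distribution:

```
# Sketch of the workaround -- paths and module availability are
# assumptions; check your own kernel config before relying on this.

# 1. Prevent the PCIe host driver from loading automatically at boot
#    (only works if it is built as a module, not built in):
echo "blacklist pcie_rockchip_host" > /etc/modprobe.d/rockchip-pcie.conf

# 2. After rebooting with the patched kernel, load the driver pinned
#    to CPU 4 (a big A72 core on the rk3399), as suggested above:
taskset -c 4 modprobe pcie_rockchip_host
```

Pinning the modprobe to a specific core matters because, as noted above, the error type depends on which CPU cluster handles the message.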
05-01-2020, 01:45 PM
(This post was last modified: 05-01-2020, 01:51 PM by kuleszdl.)
Thanks a lot @pgwipeout - I had overlooked this.
I applied the hack and also enabled PCIe gen2 mode via the already-discussed link-speed change in the dts, and... the kernel now boots with the PCIe card inserted!
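For reference, the link-speed change mentioned here is a one-property edit in the rk3399 PCIe node of the device tree. The node label below matches mainline rk3399.dtsi, but this is a sketch; verify the label and the default value against your own kernel's device tree before applying:

```
/* Sketch: force the PCIe link to gen2 in a board-level dts overlay.
 * "pcie0" is the rk3399 PCIe controller label in mainline; some
 * trees default max-link-speed to <1> (gen1). */
&pcie0 {
    max-link-speed = <2>;
    status = "okay";
};
```

After rebuilding the dtb, the negotiated speed can be checked with `lspci -vv` under "LnkSta".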
Even without the taskset call, all NICs are now recognized (I removed the MAC addresses from the output):
Code: root@devuan:~# lspci
00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 0100
01:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.1 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.2 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
01:00.3 Ethernet controller: Broadcom Limited NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
root@devuan:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether ***************** brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ***************** brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ***************** brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ***************** brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ***************** brd ff:ff:ff:ff:ff:ff
Power consumption is at 5.3 watts with one of the four links active. This sounds quite promising; I will now try the other dual-port card I bought previously.
Update: I tried the Intel dual-port NIC (the HP NC360T); however, the ports don't get recognized automatically.