PCIe: Armbian vs. Ayufan kernels for NVMe SSD
#1
Hi all,

  I ran a test of NVMe SSD performance.

- 2 RockPro64 boards
    - one with Armbian (tried 5.9.14 and 5.10-rc7)
    - one with Ayufan's kernel (5.9)
- 2 PCIe-to-NVMe adapter cards
    - a black one from Pine64 (gen2)
    - a red one (perhaps gen3)
- 2 Samsung EVO Plus SSDs

The result:


Code:
+-------+-----------+---------+
|       |  Armbian  |  Ayufan |
+-------+-----------+---------+
| black |  OK       |  OK     |
| red   |  KO       |  OK     |
+-------+-----------+---------+


The only problem seems to come from the red card, which does not work with Armbian (module failed). Moreover, performance seems degraded with Armbian.
If we look at dmesg, we notice that the reported link capacities differ:

Ayufan:

Code:
[    4.078013] pci 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4

Armbian:

Code:
[    2.700909] pci 0000:01:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x4 link

Note that:
- for Armbian I activated pcie-gen2 in the armbian-config tool;
- the result for Ayufan + black is similar to Ayufan + red.

I just wonder what differs between the two kernels... Ayufan's secret ingredient? Or am I doing something wrong?
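For what it's worth, the negotiated link speed can also be read back from sysfs instead of grepping dmesg. A minimal sketch (the sysfs paths exist on mainline kernels; the gen mapping helper `pcie_gen` is my own naming, and on the RockPro64 the NVMe card typically shows up as 0000:01:00.0):

```shell
#!/bin/sh
# Map a sysfs current_link_speed string ("5.0 GT/s PCIe", ...) to a PCIe generation
pcie_gen() {
  case "$1" in
    2.5*) echo "gen1" ;;
    5.0*) echo "gen2" ;;
    8.0*) echo "gen3" ;;
    *)    echo "unknown" ;;
  esac
}

# Print the negotiated link speed of every PCIe device that reports one
for d in /sys/bus/pci/devices/*; do
  [ -r "$d/current_link_speed" ] || continue
  speed=$(cat "$d/current_link_speed")
  echo "$(basename "$d"): $speed ($(pcie_gen "$speed"))"
done
```

On the Armbian board this should print gen1 (2.5 GT/s) for the bridge and the SSD, matching the dmesg line above, and gen2 on the Ayufan board.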



Log details



with Ayufan + red

dmesg:
Code:
rock64@chen:/mnt$ dmesg | grep -i pci
[    0.006736] PCI/MSI: /interrupt-controller@fee00000/interrupt-controller@fee20000 domain created
[    1.611973] vcc3v3_pcie: supplied by vcc12v_dcin
[    1.633826] PCI: CLS 0 bytes, default 64
[    2.919080] rockchip-pcie f8000000.pcie: host bridge /pcie@f8000000 ranges:
[    2.919131] rockchip-pcie f8000000.pcie:      MEM 0x00fa000000..0x00fbdfffff -> 0x00fa000000
[    2.919160] rockchip-pcie f8000000.pcie:       IO 0x00fbe00000..0x00fbefffff -> 0x00fbe00000
[    2.919705] rockchip-pcie f8000000.pcie: bus-scan-delay-ms in device tree is 1000 ms
[    2.919883] rockchip-pcie f8000000.pcie: no vpcie12v regulator found
[    2.920029] rockchip-pcie f8000000.pcie: supply vpcie1v8 not found, using dummy regulator
[    2.920153] rockchip-pcie f8000000.pcie: supply vpcie0v9 not found, using dummy regulator
[    3.004174] rockchip-pcie f8000000.pcie: wait 1000 ms (from device tree) before bus scan
[    4.071716] rockchip-pcie f8000000.pcie: PCI host bridge to bus 0000:00
[    4.071738] pci_bus 0000:00: root bus resource [bus 00-1f]
[    4.071755] pci_bus 0000:00: root bus resource [mem 0xfa000000-0xfbdfffff]
[    4.071774] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff] (bus address [0xfbe00000-0xfbefffff])
[    4.071834] pci 0000:00:00.0: [1d87:0100] type 01 class 0x060400
[    4.071962] pci 0000:00:00.0: supports D1
[    4.071977] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[    4.076939] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    4.077169] pci 0000:01:00.0: [144d:a808] type 00 class 0x010802
[    4.077262] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[    4.077428] pci 0000:01:00.0: Max Payload Size set to 256 (was 128, max 256)
[    4.078013] pci 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:00.0 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
[    4.106343] pci_bus 0000:01: busn_res: [bus 01-1f] end is updated to 01
[    4.106380] pci 0000:00:00.0: BAR 14: assigned [mem 0xfa000000-0xfa0fffff]
[    4.106403] pci 0000:01:00.0: BAR 0: assigned [mem 0xfa000000-0xfa003fff 64bit]
[    4.106457] pci 0000:00:00.0: PCI bridge to [bus 01]
[    4.106475] pci 0000:00:00.0:   bridge window [mem 0xfa000000-0xfa0fffff]
[    4.106672] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
[    4.106945] pcieport 0000:00:00.0: PME: Signaling with IRQ 231
[    4.298192] ehci-pci: EHCI PCI platform driver
[    4.377616] ohci-pci: OHCI PCI platform driver
[    5.163253] nvme nvme0: pci function 0000:01:00.0
and iozone:

Code:
rock64@chen:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    ...
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    96710   151834   159335   160318    64290   114505                                                         
          102400      16   222798   330544   332777   336250   191985   326480                                                         
          102400     512  1060460  1144479  1045592  1077316  1015649  1138965                                                         
          102400    1024  1175145  1189215  1069909  1102438  1070189  1186475                                                         
          102400   16384  1434425  1443247  1304941  1359970  1357994  1433817                                                         

with Armbian + black

Code:
rock64@ondine:/mnt/tmp$ dmesg | grep -i pci
[    0.007076] PCI/MSI: /interrupt-controller@fee00000/interrupt-controller@fee20000 domain created
[    1.246712] vcc3v3_pcie: supplied by vcc12v_dcin
[    1.399192] PCI: CLS 0 bytes, default 64
[    2.627404] rockchip-pcie f8000000.pcie: host bridge /pcie@f8000000 ranges:
[    2.627433] OF: /pcie@f8000000: Missing device_type
[    2.627466] rockchip-pcie f8000000.pcie:      MEM 0x00fa000000..0x00fbdfffff -> 0x00fa000000
[    2.627488] rockchip-pcie f8000000.pcie:       IO 0x00fbe00000..0x00fbefffff -> 0x00fbe00000
[    2.628625] rockchip-pcie f8000000.pcie: supply vpcie1v8 not found, using dummy regulator
[    2.628778] rockchip-pcie f8000000.pcie: supply vpcie0v9 not found, using dummy regulator
[    2.694122] rockchip-pcie f8000000.pcie: PCI host bridge to bus 0000:00
[    2.694136] pci_bus 0000:00: root bus resource [bus 00-1f]
[    2.694147] pci_bus 0000:00: root bus resource [mem 0xfa000000-0xfbdfffff]
[    2.694159] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff] (bus address [0xfbe00000-0xfbefffff])
[    2.694216] pci 0000:00:00.0: [1d87:0100] type 01 class 0x060400
[    2.694369] pci 0000:00:00.0: supports D1
[    2.694378] pci 0000:00:00.0: PME# supported from D0 D1 D3hot
[    2.699724] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    2.699984] pci 0000:01:00.0: [144d:a808] type 00 class 0x010802
[    2.700080] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[    2.700259] pci 0000:01:00.0: Max Payload Size set to 256 (was 128, max 256)
[    2.700909] pci 0000:01:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x4 link at 0000:00:00.0 (capable of 31.504 Gb/s with 8.0 GT/s PCIe x4 link)
[    2.714404] pci_bus 0000:01: busn_res: [bus 01-1f] end is updated to 01
[    2.714435] pci 0000:00:00.0: BAR 14: assigned [mem 0xfa000000-0xfa0fffff]
[    2.714453] pci 0000:01:00.0: BAR 0: assigned [mem 0xfa000000-0xfa003fff 64bit]
[    2.714502] pci 0000:00:00.0: PCI bridge to [bus 01]
[    2.714516] pci 0000:00:00.0:   bridge window [mem 0xfa000000-0xfa0fffff]
[    2.714721] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
[    2.715032] pcieport 0000:00:00.0: PME: Signaling with IRQ 78
[    2.715414] pcieport 0000:00:00.0: AER: enabled with IRQ 78
[    2.754347] nvme nvme0: pci function 0000:01:00.0
[    2.809211] ehci-pci: EHCI PCI platform driver
[    2.843136] ohci-pci: OHCI PCI platform driver

Code:
rock64@ondine:/mnt/tmp$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    90237   127839   138090   138675    63069   126696
          102400      16   259529   325606   332023   334611   194739   321260
          102400     512   502510   659573   596504   607196   589030   675306
          102400    1024   716099   715984   659933   672370   659097   727990
          102400   16384   814970   821047   774083   794068   791828   818859

iozone test complete.
#2
The problem may come from the SPI bootloader (I had kept Ayufan's on both boards). I then copied Armbian's bootloader to the SPI flash (with armbian-config), and PCIe now links at gen2; performance is similar now.
But the red card is still not recognized.
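A non-destructive way to check which bootloader actually sits in the SPI flash is to dump the first few megabytes and look for the U-Boot version string. A rough sketch (the mtd device name is an assumption, check /proc/mtd on your board; `uboot_version` is just my helper name):

```shell
#!/bin/sh
# Pull the first U-Boot version string out of a (possibly binary) stream;
# -a treats binary data as text, -o prints only the match
uboot_version() { grep -ao 'U-Boot [0-9][0-9.]*' | head -n 1; }

# Usage against the SPI flash (device name is an assumption):
#   sudo dd if=/dev/mtdblock0 bs=1M count=4 2>/dev/null | uboot_version
```

Comparing this string before and after the armbian-config step would confirm the bootloader swap is what changed the link training.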