


Any benchmarks for non-Pine64 PCIe to NVMe adapters?
#1
I have two things I would like to find benchmarks/information for. I apologize in advance if they are covered in another thread; I couldn't find them via Google or this forum's search function.

1. Does using a higher-quality PCIe-to-NVMe adapter improve performance? Ideally, the four PCIe lanes should each be able to contribute about 400 MB/s, for a total of roughly 1.6 GB/s. The only benchmarks I've seen cap out at 540 MB/s write and 1.3 GB/s read, even though the SSD being tested is rated for much higher performance in both areas. I don't know whether the difference is due to internal limitations of the processor or to the PCIe adapter itself, hence my question.

2. I would also like to know whether a USB Type-C to Ethernet adapter improves network performance compared to the built-in Ethernet port (see the iperf3 sketch below for the kind of comparison I have in mind).
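
To be concrete, by "network performance" I mean raw TCP throughput measured once per interface with something like iperf3. The address below is only a placeholder for a second machine on the LAN acting as the server.

Code:
# on a second machine on the LAN (placeholder address 192.168.1.10):
iperf3 -s

# on the RockPro64, repeated for the built-in port and then the USB-C adapter:
iperf3 -c 192.168.1.10 -t 30        # board -> server throughput
iperf3 -c 192.168.1.10 -t 30 -R     # reverse direction (server -> board)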
#2
Samsung 970 Pro NVMe M.2 500GB with the Pine64 PCIe NVMe adapter.


Code:
[email protected]_0:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
        Compiled for 64 bit mode.
        Build: linux

    Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                 Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                 Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                 Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                 Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                 Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                 Vangel Bojaxhi, Ben England, Vikentsi Lapa.

    Run began: Sat Jul 28 12:08:50 2018

    Include fsync in write timing
    O_DIRECT feature enabled
    Auto Mode
    File size set to 102400 kB
    Record Size 4 kB
    Record Size 16 kB
    Record Size 512 kB
    Record Size 1024 kB
    Record Size 16384 kB
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    83920   146526   171217   172733    56965   145921                                                          
          102400      16   271229   414900   454454   460018   193626   413496                                                          
          102400     512  1021580  1033256  1007794  1057973   990788  1075201                                                          
          102400    1024  1066333  1107758  1038792  1079089  1048932  1116344                                                          
          102400   16384   918513  1418530  1433672  1529740  1523500  1389826                                                          

iozone test complete.


with

Code:
[email protected]_0:/mnt$ uname -a
Linux rockpro64v2_0 4.18.0-rc5-1050-ayufan-ge70bd2ab8802 #1 SMP PREEMPT Thu Jul 26 08:33:14 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
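
If you want to repeat this on another adapter, the run above is nothing special (assuming a Debian/Ubuntu-based image where the package is called iozone3 and the NVMe filesystem is mounted at /mnt):

Code:
sudo apt install iozone3
cd /mnt
sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2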
Sorry for any mistakes; English is not my native language.

1. RP64 v2.0 / PCIe NVMe as root / SD card as boot / 2.5" 1TB HDD (USB3), used as a web server (Armbian 5.67.181217 nightly)
2. RP64 v2.1 / PCIe SATA / SD card / 2 x 2.5" 2TB HDD (RAID1), used as a NAS / kernel 4.19.0-rc4-1071-ayufan
3. RP64 v2.1 / Corsair GTX USB3 as root / SD card as boot (Armbian 5.67.181217 nightly)

https://forum.frank-mankel.org/category/14/rockpro64


#3
(09-09-2018, 04:19 PM)crhawle Wrote: 1. Does using a higher-quality PCIe-to-NVMe adapter improve performance? Ideally, the four PCIe lanes should each be able to contribute about 400 MB/s, for a total of roughly 1.6 GB/s. The only benchmarks I've seen cap out at 540 MB/s write and 1.3 GB/s read, even though the SSD being tested is rated for much higher performance in both areas. I don't know whether the difference is due to internal limitations of the processor or to the PCIe adapter itself, hence my question.
I have two different PCIe/NVMe adapters. Neither noticeably has any chips on it, and I'm not sure what else would materially affect quality. Both work fine; I've never felt either of them was faster or slower than the other.
That said, iozone NVMe results on the ROCKPro64 (such as Bullet64's above) are hugely variable, mostly down to the kernel in use: 4.18 results are significantly better than anything I have seen on 4.4 (two or three times better), which can make adapter comparisons meaningless, let alone comparisons between the NVMe devices themselves.
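
One generic check that can at least rule the adapter or slot out (not specific to any particular board or image, assuming pciutils is installed) is to compare the link the NVMe device actually negotiated against its link capability:

Code:
# find the NVMe controller's bus address, then show link capability vs. negotiated status
sudo lspci -vv -s "$(lspci | grep -i 'non-volatile' | cut -d' ' -f1)" | grep -E 'LnkCap|LnkSta'
# On the ROCKPro64's PCIe 2.1 x4 slot you would expect Speed 5GT/s, Width x4 in both
# lines; a downgraded LnkSta points at the adapter or seating rather than the SoC.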
ROCKPro64 v2.1 2GB, SM961 128GB NVMe for rootfs, HDMI video & sound, Bluetooth keyboard & mouse
Started Bionic minimal - now Cosmic, Openbox desktop for general purpose daily PC.