Any benchmarks for non-pine64 pci to NVMe adapters? - crhawle - 09-09-2018

I have two things I would like to find benchmarks/information for. I apologize ahead of time if they are covered in another thread; I couldn't find them via Google or the search function on this forum.

1. Does using a higher-quality PCIe-to-NVMe adapter improve performance? Ideally, each of the four PCIe lanes should be able to contribute roughly 400 MB/s, for a total of about 1.6 GB/s. The only benchmarks I've seen cap out at 540 MB/s write and 1.3 GB/s read. The SSD being tested in those is rated for much higher performance in both areas. I don't know whether the difference is due to internal limitations of the processor or to the PCIe adapter itself, hence my question. (A rough way to check what link the adapter actually negotiates is sketched just after point 2.)

2. I would also like to know whether using a USB Type-C to Ethernet adapter improves network performance compared to the built-in Ethernet port. (A simple way to compare the two is also sketched below.)
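For point 1, here is a rough, untested sketch of how one could check what link the SoC actually negotiates with a given adapter (the sysfs path is an assumption and will differ per device):

Code:
# Sketch for point 1 (hedged, untested here): check the negotiated PCIe link speed and width.
# Assumes pciutils (lspci) is installed; the exact device address will differ per board.
sudo lspci -vv | grep -E "LnkCap|LnkSta"
# Or read it straight from sysfs for the NVMe controller (path is an assumption, adjust to your device):
cat /sys/class/nvme/nvme0/device/current_link_speed
cat /sys/class/nvme/nvme0/device/current_link_width

For point 2, the usual way to compare a USB-C Ethernet adapter against the onboard port would be an iperf3 run against another machine on the LAN (sketch only; assumes iperf3 is installed on both ends and 192.168.1.10 is a placeholder server address):

Code:
# Sketch for point 2: compare throughput through each interface with iperf3.
# 192.168.1.10 is a placeholder for another machine on the LAN running "iperf3 -s".
iperf3 -c 192.168.1.10        # with the cable in the built-in port
iperf3 -c 192.168.1.10 -R     # reverse direction (server sends)
# Repeat the same two commands with the cable moved to the USB-C adapter and compare.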


RE: Any benchmarks for non-pine64 pci to NVMe adapters? - Bullet64 - 09-10-2018

Samsung 970 Pro NVMe M.2 500 GB with the Pine64 PCIe-to-NVMe adapter.


Code:
rock64@rockpro64v2_0:/mnt$ sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
        Compiled for 64 bit mode.
        Build: linux

    Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                 Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                 Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                 Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                 Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                 Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                 Vangel Bojaxhi, Ben England, Vikentsi Lapa.

    Run began: Sat Jul 28 12:08:50 2018

    Include fsync in write timing
    O_DIRECT feature enabled
    Auto Mode
    File size set to 102400 kB
    Record Size 4 kB
    Record Size 16 kB
    Record Size 512 kB
    Record Size 1024 kB
    Record Size 16384 kB
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    83920   146526   171217   172733    56965   145921                                                          
          102400      16   271229   414900   454454   460018   193626   413496                                                          
          102400     512  1021580  1033256  1007794  1057973   990788  1075201                                                          
          102400    1024  1066333  1107758  1038792  1079089  1048932  1116344                                                          
          102400   16384   918513  1418530  1433672  1529740  1523500  1389826                                                          

iozone test complete.


with the following kernel:

Code:
rock64@rockpro64v2_0:/mnt$ uname -a
Linux rockpro64v2_0 4.18.0-rc5-1050-ayufan-ge70bd2ab8802 #1 SMP PREEMPT Thu Jul 26 08:33:14 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
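
If anyone wants to cross-check those sequential numbers with a different tool, something like this fio run should be roughly comparable (just a sketch, not tested here; assumes fio is installed and /mnt sits on the NVMe drive):

Code:
# Hedged sketch: rough fio cross-check of the sequential numbers above.
# Assumes fio is installed and /mnt is mounted on the NVMe drive.
sudo fio --name=seqwrite --directory=/mnt --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio
sudo fio --name=seqread  --directory=/mnt --rw=read  --bs=1M --size=1G --direct=1 --ioengine=libaio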



RE: Any benchmarks for non-pine64 pci to NVMe adapters? - dukla2000 - 09-10-2018

(09-09-2018, 04:19 PM)crhawle Wrote: 1. Does using a higher quality pci to NVMe adapter improve the performance? Ideally, the PCI lanes should each be able to contribute 400 MB/s for a total of 1.6 GB/s. The only benchmarks i've seen cap out at 540 MB/s write and 1.3 GB/s read. The SSD being tested in those is rated to be able to achieve much higher performance in both areas. I don't know if the difference  is due to internal limitations of the processor or the PCI adapter itself. Hence my question.
I have 2 different PCIe/NVMe adapters. Neither noticeably has any chips on it, and I am not sure what else would materially affect quality. Both work fine; I have never felt that either was faster or slower than anything else.
But NVMe iozone results on the ROCKPro64 (as per Bullet64 above) are for sure hugely variable, and in particular they come down to the kernel in use. Results on a 4.18 kernel are significantly better than anything I have seen on 4.4 (as in 2 or 3 times better), which can make comparisons meaningless, let alone differences between the NVMe devices themselves.
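
So if anyone does compare adapters, it is probably only meaningful when the kernel is recorded alongside the exact same command, e.g. (same iozone invocation as Bullet64's above, sketch only):

Code:
# Sketch: note the kernel with each run so adapter comparisons stay meaningful.
uname -r
sudo iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2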