12-20-2019, 06:49 AM
(12-20-2019, 05:49 AM)Arwen Wrote: Hmm, if the eMMC modules use noticeably less energy, perhaps we need to re-think the NVMe adapter. Go for a PCIe controller chip that lets us add another eMMC slot. This adapter would be smaller, and almost certainly use less power than the average NVMe drive in M.2 form factor. Here is one company's datasheet:
http://www.iwavejapan.co.jp/product/iW%2...t-R1.0.pdf
My preference would be a chip that supports 2 x eMMC or UFS (the newer standard that replaces eMMC). In the end, the fact that the NVMe adapter board is both optional and reasonably easy to install makes having 2 types of storage cards possible. The end user chooses which they want.
In my case, I want a mirror drive for my OS, so I would be using both the built-in and PCIe devices at the same time. (The OS partition would be limited to perhaps 30GB on both devices.)
I'm not sure if the RK3399 PCIe controller even supports PCIe bridge devices, and it probably wouldn't be power-efficient to use one, but it would be awesome if we could break out the 4 lanes into multiple M.2 slots. I'd kill for both eMMC/UFS/SATA storage AND a slot for a fully-featured 2x2 AC Wi-Fi adapter on a breakout board.
Also, something to think about WRT mirrored flash storage: if you're using identical mirrored drives (eMMC/SSD/whatever), there's a pretty high likelihood of your workload killing both right around the same time, either because some interaction between the workload and the storage controller exposes a flaw in the controller design, or simply because the write endurance of both drives wears out at the same rate. Ideally you want two storage devices with similar performance/endurance characteristics that aren't actually identical, to reduce your risk.
That said, I'd love it if the next iteration of the PBP had two eMMC sockets; that would let you either run redundant drives using LVM/ZFS/btrfs or boot multiple operating systems. That would be awesome.
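
For what it's worth, here's a rough sketch of what that mirror could look like with btrfs. The device names (/dev/mmcblk1 and /dev/mmcblk2) are just assumptions for illustration; they'd depend on how the kernel enumerates the two sockets on actual hardware:

  # Assumed device names: /dev/mmcblk1 and /dev/mmcblk2 for the two eMMC modules.
  # Mirror both data (-d) and metadata (-m) across the two devices.
  mkfs.btrfs -d raid1 -m raid1 /dev/mmcblk1 /dev/mmcblk2
  # Mounting either device brings up the whole mirror; btrfs finds the other member itself.
  mount /dev/mmcblk1 /mnt
  # Show how data and metadata are spread across the two devices.
  btrfs filesystem usage /mnt

One nice property of doing the mirroring at this layer rather than in hardware is that btrfs checksums every block, so if one eMMC module starts returning garbage, reads can be served (and the bad copy repaired) from the other device.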