SSD for PBP
I know some people may have missed it, so it should be a sticky thread.

BTW, I ran some speed tests. Before I used the NVMe as the system and boot drive, hdparm reported about 600 MB/s; now that I boot from it, I get 300-400 MB/s.

The boot process and everyday work are not much faster anyway.

I use a 240 GB Toshiba OCZ RC100.
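For anyone who wants to reproduce this kind of test, a minimal sketch (assuming the NVMe drive shows up as /dev/nvme0n1 and the eMMC as /dev/mmcblk2; device names vary by image):

    # buffered sequential read test (reads from the drive, bypassing the cache)
    sudo hdparm -t /dev/nvme0n1
    # cached read test, for comparison
    sudo hdparm -T /dev/nvme0n1
    # the same pair of tests against the eMMC
    sudo hdparm -tT /dev/mmcblk2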
(12-16-2019, 12:50 AM)Wizzard Wrote: I know some people may have missed it, so it should be a sticky thread.

BTW, I ran some speed tests. Before I used the NVMe as the system and boot drive, hdparm reported about 600 MB/s; now that I boot from it, I get 300-400 MB/s.

The boot process and everyday work are not much faster anyway.

I use a 240 GB Toshiba OCZ RC100.

Disk I/O isn't really the performance bottleneck on the PBP most of the time, so you're spot on: using NVMe over eMMC won't be much faster. Realistically you'll only see an improvement during boot or during extremely disk-heavy workloads like compilation.
Cost is an issue for those who want more storage. The 128 GB eMMC module in the Pine Store is $55. A Kingston 250 GB A2000 is $40 on Amazon. Add in $7 for the adapter, and the SSD gives twice the capacity for less money.
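Worked out per gigabyte with those prices: $55 / 128 GB ≈ $0.43/GB for the eMMC, versus ($40 + $7) / 250 GB ≈ $0.19/GB for the SSD plus adapter, so less than half the cost per gigabyte.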
(12-16-2019, 12:50 AM)Wizzard Wrote: I know some people may have missed it, so it should be a sticky thread.

BTW, I ran some speed tests. Before I used the NVMe as the system and boot drive, hdparm reported about 600 MB/s; now that I boot from it, I get 300-400 MB/s.

The boot process and everyday work are not much faster anyway.

I use a 240 GB Toshiba OCZ RC100.

Thread stuck.
(12-18-2019, 05:15 PM)zaius Wrote: Cost is an issue for those who want more storage. The 128 GB eMMC module in the Pine Store is $55. A Kingston 250 GB A2000 is $40 on Amazon. Add in $7 for the adapter, and the SSD gives twice the capacity for less money.

My main reason for waiting to install a PCIe SSD in my Pinebook Pro has been the lack of a mainstream solution for booting from the drive.

I do think I could make it physically fit as DanielT did in another thread, by cutting the adapter card and using an alternate method to secure the drive to it (when using the M.2 2280 cards).

But recently I have seen others having problems with power to this SSD, and I am wondering whether a PCIe module draws much more power than the eMMC does.

The specifications on the maximum power draw of PCIe modules are spotty at best, and some list no power requirements at all.

Has anyone looked into the power requirements of the original 64 GB (or optional 128 GB) eMMC modules?

A big plus of the Pinebook Pro is its relatively long battery life. It would be a shame to drastically reduce the run time just to get a small improvement in boot time.

I purchased an ADATA XPG SX6000 Lite PCIe module in anticipation of improving my PBP's performance. It appears to have lower power requirements than some of the available PCIe modules, but if it still needs much more power than the eMMC, maybe I should just find a different purpose for it?
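For what it's worth, NVMe drives do advertise their supported power states in the identify-controller data, which can be read with nvme-cli; a quick sketch, assuming the drive appears as /dev/nvme0:

    # dump the controller identify data; the "ps 0" .. "ps N" lines near the
    # end list each power state with its maximum power draw (mp)
    sudo nvme id-ctrl /dev/nvme0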

Thanks for any advice and/or suggestions!
Hmm, if the eMMC modules use noticeably less energy, perhaps we need to re-think the NVMe adapter. Go for a PCIe controller chip that lets us add another eMMC slot. This adapter would be smaller, and almost certainly use less power than the average NVMe drive in M.2 form factor. Here is one company's datasheet:

http://www.iwavejapan.co.jp/product/iW%2...t-R1.0.pdf

My preference would be a chip that did 2 x eMMC or UFS (the newer standard which replaces eMMC). In the end, the fact that the NVMe adapter board is both optional and reasonably easy to install makes having two types of storage cards possible. The end user chooses which they want.

In my case, I want a mirror drive for my OS. So I would be using both the built-in and PCIe devices at the same time. (The OS partition would be limited to perhaps 30 GB on both devices.)
(12-20-2019, 05:49 AM)Arwen Wrote: Hmm, if the eMMC modules use noticeably less energy, perhaps we need to re-think the NVMe adapter. Go for a PCIe controller chip that lets us add another eMMC slot. This adapter would be smaller, and almost certainly use less power than the average NVMe drive in M.2 form factor. Here is one company's datasheet:

http://www.iwavejapan.co.jp/product/iW%2...t-R1.0.pdf

My preference would be a chip that did 2 x eMMC or UFS (the newer standard which replaces eMMC). In the end, the fact that the NVMe adapter board is both optional and reasonably easy to install makes having two types of storage cards possible. The end user chooses which they want.

In my case, I want a mirror drive for my OS. So I would be using both the built-in and PCIe devices at the same time. (The OS partition would be limited to perhaps 30 GB on both devices.)

I'm not sure the RK3399 PCIe controller even supports PCIe bridge devices, and it probably wouldn't be power-efficient to use one, but it would be awesome if we could break out the four lanes into multiple M.2 slots. I'd kill for eMMC/UFS/SATA storage AND a slot for a fully featured 2x2 AC Wi-Fi adapter on a breakout board.
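A rough way to see what the controller currently exposes, assuming a booted Linux with pciutils installed:

    # print the PCIe device tree; a bridge or switch would show up
    # as an extra level between the root port and the endpoints
    lspci -tv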

Also, something to think about WRT mirrored flash storage: if you're using identical mirrored drives (eMMC/SSD/whatever), there's a pretty high likelihood of your workload killing both right around the same time, either because some interaction between the workload and the storage controller exposes a flaw in the controller design, or simply because the endurance of both drives wears out at the same time. Ideally you want two storage devices with similar performance/endurance characteristics that aren't actually identical, to reduce your risk.

That said, I'd love it if the next iteration of the PBP had two eMMC sockets; that would let you either run redundant drives using LVM/ZFS/btrfs or boot multiple operating systems. That would be awesome.
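In the meantime, something close is already possible across the eMMC and an NVMe partition; a minimal btrfs RAID-1 sketch, with hypothetical device names (note that mkfs wipes both partitions):

    # mirror both data and metadata across the two devices
    sudo mkfs.btrfs -f -d raid1 -m raid1 /dev/mmcblk2p2 /dev/nvme0n1p1
    # mounting either device brings up the whole mirrored filesystem
    sudo mount /dev/mmcblk2p2 /mnt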
Yes, I tend to use different storage devices for each half, just to spread the failure risk. For example, my FreeNAS box has 2 x 4 TB WD Reds and 2 x 4 TB WD Red Pros in a RAID-Z2 (roughly equivalent to RAID-6), so it's much less likely that more than two disks fail at the same time. Plus, to satisfy the extra paranoia in me, I bought them from different vendors and at different times, so hopefully they're from different batches. But then again, I've experienced multiple lemon hard drives, mostly at work, but a few at home.
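For reference, that layout is a single four-disk RAID-Z2 vdev, which in ZFS terms is created like this (hypothetical FreeBSD device names):

    # any two of the four disks can fail without losing the pool
    zpool create tank raidz2 ada0 ada1 ada2 ada3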

As for the RK3399 PCIe controller supporting PCIe bridge devices, I would think that would be more of a software issue. (But I could be wrong.)

However, it's highly unlikely we can get an adapter board that has both a PCIe splitter with an M.2 slot for Wi-Fi and storage expansion too (the storage would require its own controller as well, unless it was NVMe/M.2).

My preference for file systems is now ZFS. (Yes, even on Linux. Even on my existing laptop.) Just too many good features, like the built-in mirroring.
(12-20-2019, 05:49 AM)Arwen Wrote: Hmm, if the eMMC modules use noticeably less energy, perhaps we need to re-think the NVMe adapter. Go for a PCIe controller chip that lets us add another eMMC slot. This adapter would be smaller, and almost certainly use less power than the average NVMe drive in M.2 form factor. Here is one company's datasheet:

http://www.iwavejapan.co.jp/product/iW%2...t-R1.0.pdf

My preference would be a chip that did 2 x eMMC or UFS (the newer standard which replaces eMMC). In the end, the fact that the NVMe adapter board is both optional and reasonably easy to install makes having two types of storage cards possible. The end user chooses which they want.

In my case, I want a mirror drive for my OS. So I would be using both the built-in and PCIe devices at the same time. (The OS partition would be limited to perhaps 30 GB on both devices.)

Just wondering out loud:
If it were possible to add a second eMMC through the PCIe board connection, do you think it could be used in a RAID configuration?
(12-21-2019, 07:43 PM)bcnaz Wrote:
(12-20-2019, 05:49 AM)Arwen Wrote: Hmm, if the eMMC modules use noticeably less energy, perhaps we need to re-think the NVMe adapter. Go for a PCIe controller chip that lets us add another eMMC slot. This adapter would be smaller, and almost certainly use less power than the average NVMe drive in M.2 form factor. Here is one company's datasheet:

http://www.iwavejapan.co.jp/product/iW%2...t-R1.0.pdf

My preference would be a chip that did 2 x eMMC or UFS (the newer standard which replaces eMMC). In the end, the fact that the NVMe adapter board is both optional and reasonably easy to install makes having two types of storage cards possible. The end user chooses which they want.

In my case, I want a mirror drive for my OS. So I would be using both the built-in and PCIe devices at the same time. (The OS partition would be limited to perhaps 30 GB on both devices.)

Just wondering out loud:
If it were possible to add a second eMMC through the PCIe board connection, do you think it could be used in a RAID configuration?
Yes... that's what I would be doing: RAID-1 (aka mirroring).

It would have to be software-based, but that is my preference anyway, since I can use ZFS or btrfs.
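A minimal sketch of the ZFS version, assuming roughly 30 GB partitions have been carved out on both the eMMC and the PCIe drive (hypothetical device names):

    # mirror the pool across the built-in eMMC and the NVMe partition;
    # if either side fails, the pool keeps running in a degraded state
    sudo zpool create -o ashift=12 rpool mirror /dev/mmcblk2p2 /dev/nvme0n1p2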

