Hardkernel 256GB eMMC with lvm2 on PBP
#1
I have this installed & running now. That sure took longer than I expected it to.

The Hardkernel 256GB eMMC has some sort of orange bump on the bottom, opposite the socket, that the Pine64 64GB eMMC it replaced did not.

There was a sticky pad on the PBP mainboard underneath the eMMC to help keep it in place while the machine gets bumped around; I had to scrape it off with a spudger to get the 256 installed. The new eMMC still doesn't sit quite level; it angles up slightly because of the bump, and I hope it isn't going to come loose.

Using the Pine-branded eMMC-to-USB adapter I got from the Pine store several years ago, the 256 GB eMMC consistently gave read errors whenever it was accessed raw, which greatly slowed down operations like mounting & partitioning. It doesn't do that in the eMMC slot, and the 64 doesn't do it on the adapter. For example, this sort of command would trigger a read error if the data wasn't already in cache, while reading through a filesystem would not:
$ sudo dd if=/dev/sda count=20480 | md5sum
The errors were recoverable and only showed up in dmesg & syslog, but they made everything slow.

I wanted to partition the new storage myself and rsync my files over instead of reinstalling. Things I learned in the process (a rough sketch of the whole layout follows this list):
- GPT's partition table and its backup copy occupy the start and end of the medium's main address space (the primary header plus entries take the first 34 sectors, the backup the last 33); the Rockchip convention puts the first-stage bootloader at sector 64, safely past them, which is why the seek=64 when writing bootloaders.
- The eMMC's hardware boot partitions (mmcblk2boot0 and mmcblk2boot1) are NOT used. The convention is to leave roughly the first 30 MiB of the main address space unallocated as bootloader space and start the first partition at sector 62500; with alignment to 2048 sectors as parted recommends, that rounds up to 32*2048 = 65536 sectors (32 MiB).
- Stock u-boot can only boot from FAT filesystems; using ext4 for your /boot partition makes the medium unbootable. FAT16 is the only variant I tested, and it works.
- If you want the root partition on LVM, as I did, you need to rebuild the initramfs with the lvm2 hook included (instructions) and specify the root fs as e.g. root=/dev/VG1/LV1. u-boot doesn't understand LVM by itself; the tools in the initramfs are needed to mount it.
- Boot order is supposed to be USB > SD > eMMC, but with the particular bootloader versions I was using, USB wouldn't boot at all unless the internal eMMC was disabled. Failure to boot off the USB adapter doesn't necessarily mean the medium won't boot when slotted.
- The last revision of tow-boot before the maintainer bailed doesn't work on my PBP, so for me it's u-boot only.
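Putting those pieces together, the layout work looked roughly like this. Treat it as a sketch, not a recipe: the partition sizes, the LV size, the mkinitcpio hook list, and the u-boot image names (idbloader.img, u-boot.itb) are assumptions from my setup and will vary with your distro and u-boot build.

# GPT label; FAT16 /boot starting at the aligned 32 MiB mark; rest for LVM
$ sudo parted /dev/mmcblk2 mklabel gpt
$ sudo parted /dev/mmcblk2 mkpart boot fat16 65536s 500MiB
$ sudo parted /dev/mmcblk2 mkpart lvm 500MiB 100%
$ sudo mkfs.fat -F 16 /dev/mmcblk2p1
# root LV inside a VG, matching root=/dev/VG1/LV1 below
$ sudo pvcreate /dev/mmcblk2p2
$ sudo vgcreate VG1 /dev/mmcblk2p2
$ sudo lvcreate -L 200G -n LV1 VG1
$ sudo mkfs.ext4 /dev/VG1/LV1
# Rockchip first-stage loader at sector 64, u-boot proper at sector 16384
# (the usual RK3399 offsets; dd's default 512-byte blocks = one sector)
$ sudo dd if=idbloader.img of=/dev/mmcblk2 seek=64 conv=notrunc
$ sudo dd if=u-boot.itb of=/dev/mmcblk2 seek=16384 conv=notrunc
# the initramfs must be able to assemble the VG before root mounts:
# add lvm2 to HOOKS before 'filesystems' in /etc/mkinitcpio.conf, rebuild
$ grep ^HOOKS /etc/mkinitcpio.conf
HOOKS=(base udev autodetect modconf block lvm2 filesystems keyboard fsck)
$ sudo mkinitcpio -P
# then boot with: root=/dev/VG1/LV1 rw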

In principle, LVM should be a great fit for an all-solid-state system like the PBP. On flash, logically contiguous blocks aren't physically contiguous anyway, so a partitioning scheme designed for rotating disks, which forces each partition to be completely contiguous rather than just reasonably unfragmented (which is all that allocation in extents is meant to guarantee), doesn't make much sense.
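(For concreteness: LVM carves each PV into fixed-size physical extents, 4 MiB by default, and an LV is just an ordered list of them. You can inspect the granularity and per-segment layout like this:)

$ sudo vgs -o vg_name,vg_extent_size
$ sudo lvs --segments -o lv_name,seg_start_pe,seg_size_pe,devices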

But what about in practice? I had a hard time finding any performance numbers for LVM beyond "works for me" anecdotes, some synthetic benchmarks, and an ancient study using lvm1 which found that performance with small files was terrible.

Life on ARM means compiling a lot, and I don't want LVM slowing down big builds.

I've been benchmarking by building haveno's git repository, which is a convenient size: it grows from 260M to 5.1G over the course of a build and takes about 7-10 minutes to do it. It's 98% Java code, with the build managed by Gradle.

Testing `make skip-tests` starting with all dependencies resolved & cached on disk, a gradle daemon running, parallel builds enabled, build caching off, and the Linux disk cache freshly flushed (the per-run procedure is sketched below), it took about 7m 20s to build on the old eMMC without LVM, and 7m 3s on the new eMMC with lvm2. Still to test are building on the new eMMC in the non-LVM partition I left for comparison, and building from heavily fragmented logical volumes.
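For the record, each run went roughly like this; the repo path is from my setup, and echoing 3 to drop_caches is the stock Linux way to start with a cold page cache:

$ sync
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ cd ~/haveno
$ du -sh .              # 260M before
$ time make skip-tests  # gradle daemon already warm, build cache off
$ du -sh .              # 5.1G after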
#2
Second run on LVM, ext4: BUILD SUCCESSFUL in 7m 7s (427 s)
Non-LVM partition, same MMC, ext4: BUILD SUCCESSFUL in 6m 57s (417 s)
So LVM took 2.4% longer.

The old eMMC might have been slower just because it was older. Flash is supposed to get slower as it wears.

The amount of writing that this build does is still concerning. I found one thread on serverfault where a comment said,
https://serverfault.com/questions/238033...nder-linux
Quote:"BEWARE: The ext4 lifetime_write_kbytes and session_write_kbytes are not representative of SSD memory cell writes. I've seen one system report 15TB for ext4 lifetime_write_kbytes, while the SSD itself reported only 1TB via SMART attribute 241. I suspect ext4 might be adding the entire size of the filesystem's free space to those statistics every time fstrim runs, even though the SSD controller obviously wouldn't repeatedly erase all those available blocks if they hadn't been touched since the last erase. tune2fs -l output appears to be similarly misleading."

And that is not my observation at all. I had 'watch -n5 iostat -h' running during the builds and saw kB_wrtn go up by 5.8-5.9 GB, which is in line with the change in size of the build directory. ext4's lifetime-writes field is only updated on remount; since the non-LVM filesystem was on a separate partition, I could easily check it, and it showed +6 GB written, rounded to the GB.
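If you want to reproduce the numbers: iostat's kB_wrtn column is a per-device running total, and ext4's counter can be read from the superblock once the filesystem has been unmounted (device and mount names here are assumptions):

$ watch -n5 iostat -h
# after the build, unmount so the superblock counter updates, then:
$ sudo umount /mnt/nonlvm
$ sudo dumpe2fs -h /dev/mmcblk2p3 | grep -i 'lifetime writes'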

What write-amplification factor does the eMMC multiply that by? There doesn't appear to be any way to know. Real SSDs support SMART monitoring, which lets you compare NAND writes against host writes; eMMCs don't have any public interface for that.
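(For contrast, on a real SSD you can pull the controller's own write counter and compare it with what the host wrote; attribute names and numbers are vendor-specific, so this is illustrative only.)

$ sudo smartctl -A /dev/sda | grep -i -E 'total.*written|241'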

Short version: the PBP 2.0 should have a built-in M.2 slot so you don't have to try to cram an adapter in there, and it needs more RAM. The PineTab's SoC is supposed to support up to 32 GB of RAM; with even 8 GB, you could do the build on a tmpfs and still have enough left for the compiler to squeak through. With 16+ GB it would be comfortable. Modern mega-projects like this show very clearly why 64 GB of eMMC & 4 GB of RAM aren't enough if you want to do actual software development rather than just occasionally building packages for personal use. The PBP's processor is actually powerful enough to do the build in under 4 minutes with build caching enabled, but you shouldn't have to worry about how many build cycles you're doing per day.

The .jar files are already compressed, so using zram to make a compressed tmpfs, while easy to set up, didn't help.
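The tmpfs experiment is just pointing the working tree at RAM, roughly like this; the size and paths are illustrative, and on a 4 GB machine a 6G tmpfs obviously can't fully materialize, which is exactly the problem:

$ sudo mkdir -p /mnt/rambuild
$ sudo mount -t tmpfs -o size=6G tmpfs /mnt/rambuild
$ cp -a ~/haveno /mnt/rambuild/
$ cd /mnt/rambuild/haveno && time make skip-tests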
#3
Whilst it doesn't go as far as you'd like, the best thing I ever did (from the compiling-a-lot perspective) was buy a RockPro64 and stop doing large compiles on the PBP.
:wq




