07-27-2019, 12:31 PM
(This post was last modified: 08-07-2019, 07:00 PM by Arwen.)
I like the snapshots, built-in mirroring, and other features of ZFS & BTRFS. By preference I'd use ZFS; it's more mature and has more features than BTRFS. But either one would give me alternate boot environments, so OS upgrades can be less painful. I've been using ZFS on Linux for 4.5 years, and I like it. (I used BTRFS before then, and alternate boot partitions before BTRFS.)
Anyway, with a dedicated "/boot" partition, ZFS on Linux allows all the features, including native ZFS encryption (pool-wide, or per dataset / zvol).
Has anyone here used ZFS or BTRFS on ARM64?
I guess it's time to unpack my old media server (a slower, low-power quad-core AMD64) and practice / test the latest OpenZFS features, like pool or dataset encryption. (After all, the PBP is a laptop, made for travel...)
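As a taste, here is roughly what native encryption looks like with OpenZFS 0.8; the pool and dataset names are just placeholders:
# create an encrypted dataset, prompting for a passphrase
zfs create -o encryption=on -o keyformat=passphrase rpool/secure
# after a reboot, load the key and mount it again
zfs load-key rpool/secure
zfs mount rpool/secure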
--
Arwen Evenstar
Princess of Rivendale
I like Butter for mount -o compress. Never tried anything other than ext3/4 for the root partition though. F2FS works well on my Raspberry Pi too... I even made a video:
https://youtu.be/UZ6R4zciWeM
(07-27-2019, 02:22 PM)InsideJob Wrote: I like Butter for mount -o compress. Never tried anything other than ext3/4 for the root partition though. F2FS works well on my Raspberry Pi too... I even made a video:
https://youtu.be/UZ6R4zciWeM
BTRFS worked reasonably well as the root FS.
Initially I had to use the sub-volume's numeric ID for booting (when booting a non-default root FS). But within a year (2012 or 2013) they started supporting sub-volume names for the root FS. That made things easier: I could add the grub entry first and then make the snapshot, so both boot environments had the updated grub entry. Otherwise I had to copy it over manually, because I could not predict the snapshot's sub-volume numeric ID beforehand.
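For reference, the kind of grub entry I mean looks roughly like this (trimmed down; the device path, kernel paths and sub-volume name are made up):
menuentry 'Linux (root snapshot)' {
# point the kernel at a named sub-volume instead of a numeric ID
linux /vmlinuz root=/dev/sda2 rootflags=subvol=@root-snapshot ro
initrd /initrd.img
}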
--
Arwen Evenstar
Princess of Rivendale
I'm not an expert, but for ZFS you need 8 GB of RAM per terabyte of storage just for it.
ZFS is not for standard laptop use.
(07-29-2019, 03:13 AM)erchache2000 Wrote: I'm not an expert, but for ZFS you need 8 GB of RAM per terabyte of storage just for it.
ZFS is not for standard laptop use.
That 8GB of RAM per 1TB of disk is for servers, like file servers, and it was never a hard & fast rule. Sun's original recommendation for installing Solaris 10 with a ZFS root was 512MB for SPARC and a bit higher for x64. My old laptop (x64) has 2GB of memory and runs with a ZFS root just fine.
All that said, a bit more RAM helps with file system caching.
Now, de-duplication is a real memory eater. Never use de-dup unless you are a hard-core server admin.
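If memory really is tight, the ARC can also be capped on ZFS on Linux; a rough sketch (the 512MB figure is just an example):
# cap the ARC at 512MB right now (value is in bytes)
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max
# and make it stick across reboots
echo "options zfs zfs_arc_max=536870912" > /etc/modprobe.d/zfs.conf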
--
Arwen Evenstar
Princess of Rivendale
(07-27-2019, 12:31 PM)Arwen Wrote: Has anyone here used ZFS or BTRFS on ARM64?
I've been using a Sailfish OS-based smartphone for quite a while, and thus BTRFS a lot.
(The Jolla 1 phone uses BTRFS for its root + home, exactly for the snapshotting reasons, as it enables a simpler "roll-back".)
(BTRFS is still an option for microSD storage on Sailfish X on the Sony Xperia X and XA2, and I use that.)
I also use it a lot on the Raspberry Pi (but it's not built into the kernel there, so the root partition is still limited to F2FS for booting reasons) (also, technically the RPis still run 32-bit kernels, even on chipsets where ARM64 is supported).
It basically works, especially on more modern kernels where the bugs have been ironed out.
Compared to ZFS it has the advantage of being supported in the mainline kernel.
Compared to ZFS it also uses a lot less RAM.
The caveats:
- We're speaking of embedded devices without a lot of eMMC space. Especially on older 16GB phones it was possible to run out of allocatable chunks. It's good advice to use "single" instead of "dup" for metadata, and to regularly do some maintenance (balancing).
64GB and 128GB eMMC should be less prone to that, and more modern kernels have better auto-balancing and auto-defrag capabilities.
Choose a distro that has spent the time to develop the necessary maintenance tools, e.g. openSUSE: check their "btrfs-maintenance" package.
(You can set it to periodically "scrub", "balance" and TRIM the free space.)
(Though I've carved my own scripts for the Debian-based Raspbian; a minimal sketch of that maintenance follows after this list.)
- It's a CoW system. You DO NOT run fsck on a CoW system. You either simply roll back to an older, still-good copy (modern kernels semi-automate that for you), or fall back to "btrfs restore -sxmS" to extract still-readable data if the filesystem has become completely unmountable. Forget about fsck / btrfs check --repair; it's not worth your time. (CoW and, in one case of hardware corruption, btrfs restore have saved me big time.)
- It's a CoW system: with large files that get random writes (like databases, VM images, or torrents), the file can pretty quickly become a large maze of pointers to new copies. Recent kernels offer auto-defrag as a Btrfs option, though it's (supposedly) not as good as ZFS'. On the other hand, "chattr +C" is your friend when creating such files, to flag them as "CoW disabled / in-place modification". (Note: currently you can't flag an already-existing file; you need to create a new one, flag it, and then transfer the data over - see the sketch after this list.)
- It's an embedded system, so you might get stuck with a slightly oldish kernel. Check which features are actually available *and considered stable* in that kernel (according to the tables on the BTRFS wiki) before turning them on. E.g. Zstd might not be stable in the LTS kernel we'll get on the PineBook Pro at the beginning.
- As of now, RAID-5/6 still ARE NOT considered stable (according to the tables on the BTRFS wiki), though it's not easy to have enough devices to begin with on a laptop.
Don't be scared of the list; I'm just nit-picking on details.
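To make the maintenance and "chattr +C" points concrete, here is roughly what my hand-rolled Raspbian scripts boil down to (the mount point, file names and usage thresholds are only examples):
# periodic maintenance: verify checksums, compact half-used chunks, TRIM free space
btrfs scrub start /
btrfs balance start -dusage=50 -musage=50 /
fstrim -v /
# creating a "CoW disabled" file for a database / VM image: flag it while it is still empty
touch /srv/vm/disk.img
chattr +C /srv/vm/disk.img
cat old-disk.img > /srv/vm/disk.img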
@DrYak, do you have any experience with BTRFS Mirroring?
I've heard that in the past, if one mirror died, and you rebooted, that caused BTRFS to be un-mountable.
Kinda silly for RAID-1 behaviour, so I am assuming it was a bug and is being / has been fixed.
My thought was to mirror the eMMC with an NVMe SSD, as I am a bit paranoid about data loss.
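If I go through with it, I'd expect the conversion to look roughly like this (device names are guesses, assuming the eMMC root is already BTRFS and mounted at /):
# add the NVMe partition to the existing filesystem on the eMMC
btrfs device add /dev/nvme0n1p2 /
# then convert data and metadata to the RAID1 profile across both devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /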
--
Arwen Evenstar
Princess of Rivendale
It's difficult to argue the use case for either file system given the characteristics of the PineBook Pro laptop.
These file systems are meant for scenarios where there is redundancy in the form of multiple drives.
Trying to apply ZFS or Btrfs to the eMMC alone may be worse than not using either: it could all be working fine one moment, but corruption in the wrong place means total loss of everything with no way to fix it at all, whereas in similar situations other file systems could recover.
Adding a single NVMe to the mix improves matters a little, but only a little, and it's still far from ideal. Obviously, you could hold two copies of everything in a pool on the NVMe drive, for half the effective storage capacity with every block stored twice in the pool, but when the drive fails everything will be lost.
It is much better to implement these file systems on a separate computer with multiple drives, take regular backups from the PineBook Pro to that computer, and run a lightweight file system on the PineBook Pro that can recover from errors. This is a much safer scenario.
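For example, a nightly one-liner from the laptop to that machine already goes a long way (the host name and paths are made up):
# push the laptop's home directory to the multi-drive backup box
rsync -aHAX --delete /home/ backupbox:/backups/pinebookpro/home/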
08-01-2019, 08:29 AM
(This post was last modified: 08-03-2019, 03:06 AM by Arwen. Edit Reason: Corrected spelling, and re-worded travel section.)
(07-31-2019, 10:26 AM)lot378 Wrote: It's difficult to argue the use case for either file system given the characteristics of the PineBook Pro laptop.
These file systems are meant for scenarios where there is redundancy in the form of multiple drives.
Trying to apply ZFS or Btrfs to the eMMC alone may be worse than not using either: it could all be working fine one moment, but corruption in the wrong place means total loss of everything with no way to fix it at all, whereas in similar situations other file systems could recover.
Adding a single NVMe to the mix improves matters a little, but only a little, and it's still far from ideal. Obviously, you could hold two copies of everything in a pool on the NVMe drive, for half the effective storage capacity with every block stored twice in the pool, but when the drive fails everything will be lost.
It is much better to implement these file systems on a separate computer with multiple drives, take regular backups from the PineBook Pro to that computer, and run a lightweight file system on the PineBook Pro that can recover from errors. This is a much safer scenario.
A couple of things about ZFS. All metadata (directory entries, etc.) has a minimum of 2 copies, even without any Mirroring. With Mirroring, I'd end up with 2 copies per sub-mirror, 4 in total. Critical metadata has even more copies. So file system corruption is less likely with ZFS than with most file systems.
I've actually booted ZFS on Linux from a MicroSDXC card. Quite usable. Yes, it's slow, but it was only for on-line backup and recovery media.
I already have a NAS with ZFS for backups and general long-term storage.
The reasons to use ZFS outweigh the negatives. I WANT to detect file system corruption. I've been bitten a few times in the past when either bad disk blocks or damaged file system blocks caused unknown corruption, and then I had trouble recovering. With ZFS I'd at least know about it, even if I could not recover that pool.
Yes, Mirroring across eMMC & NVMe may seem a bit silly. But at present, 2 of my Linux computers mirror an SSD to part of a hard disk. It works; bad blocks or a complete failure of either device is not catastrophic. My current laptop (now slow due to Intel's Meltdown & friends mitigations) uses a single 1TB SATA SSD with 2 partitions mirrored for the OS.
Naturally, this would be an experiment. If I find ZFS or BTRFS unreliable (either because of ARM64 or the slower CPU), I can go back to EXT4 or XFS. If I travel before I consider the experiment done, then I'd likely take my old laptop along as a backup device.
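For the curious, the single-disk, two-partition mirror plus the corruption check boils down to something like this (pool and device names are placeholders):
# mirror two partitions of the same SSD (protects against bad blocks, not a dead disk)
zpool create ospool mirror /dev/sda2 /dev/sda3
# periodically re-read every block against its checksum and report any damage
zpool scrub ospool
zpool status -v ospool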
--
Arwen Evenstar
Princess of Rivendale
(07-31-2019, 10:26 AM)lot378 Wrote: These file systems are meant for scenarios where there is redundancy in the form of multiple drives.
Trying to apply ZFS or Btrfs to the eMMC alone may be worse than not using either: it could all be working fine one moment, but corruption in the wrong place means total loss of everything with no way to fix it at all, whereas in similar situations other file systems could recover.
I can't speak for ZFS, but BTRFS is not only meant for redundancy. RAID1 and DUP are among the possibilities, but it's also about providing subvolumes, snapshots, CoW, compression, checksums on everything, etc.
All of these are valid features even on small devices (Jolla even used it for that exact purpose on their first smartphone).
Thankfully, due to the way BTRFS lays out its data, there is no such thing as a single wrong place that loses everything. I've been through some flash media corruption (including high-quality SD cards from reputable brand names - but you know, bad luck happens), and each time I've still been able to recover nearly everything (btrfs restore is your friend).
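For the record, that recovery amounts to pointing btrfs restore at the raw device of the unmountable filesystem and copying out whatever is still intact (the device and target paths here are just examples):
# pull snapshots, xattrs, file metadata and symlinks out of an unmountable BTRFS
btrfs restore -sxmS /dev/mmcblk0p2 /mnt/rescue/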
(07-31-2019, 07:41 AM)Arwen Wrote: @DrYak, do you have any experience with BTRFS Mirroring?
I've heard that in the past, if one mirror died, and you rebooted, that caused BTRFS to be un-mountable.
Kinda silly for RAID-1 behaviour, so I am assuming it was a bug and is being / has been fixed.
Sadly, I don't have enough experience for a definitive answer. I do use the RAID1 profile on multiple BTRFS installations (and it's the default for metadata on multi-device filesystems), but I haven't had a drive die on any of those installations.
On the other hand, the official BTRFS documentation now considers RAID0 and RAID1 stable (as opposed to RAID5 and 6, which aren't yet), so I expect that RAID1-related corruption is a thing of the past.
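Going by the documentation rather than personal experience, the recovery path after a dead mirror member should be roughly as follows (device names and the devid are assumptions):
# mount the surviving device read-write despite the missing one
mount -o degraded /dev/sda2 /mnt
# replace the missing device (assuming it was devid 2) with a new drive and rebuild onto it
btrfs replace start 2 /dev/sdb2 /mnt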