08-07-2019, 04:32 AM
(07-31-2019, 10:26 AM)lot378 Wrote: These file systems are meant for scenarios where there is redundancy in the form of multiple drives.
Trying to apply ZFS or Btrfs to eMMC alone may be worse than not using either system - it could be all working fine one moment but corruption in the wrong place means total loss of everything and no way to fix it at all - in similar situations, other file systems could recover.
I can't speak for ZFS, but BTRFS is not only meant for redundancy. RAID1 and DUP are among the possible profiles, but it's also about providing subvolumes, snapshots, CoW, compression, checksums on everything, etc.
All of these are valid features even on small devices (Jolla even used it for that exact purpose on their first smartphone).
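For instance, on a single eMMC or SD partition you get all of that with just a few commands - a rough sketch, where the device name and subvolume names are only examples to adapt to your own layout:

Code:
# Single-device filesystem on an eMMC partition (device name is just an example)
mkfs.btrfs /dev/mmcblk2p2
mount -o compress=zstd /dev/mmcblk2p2 /mnt

# Separate subvolumes so root and home can be snapshotted independently
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home

# Cheap read-only snapshot before an upgrade, thanks to CoW
btrfs subvolume snapshot -r /mnt/@ /mnt/@_before_upgrade

# Read back and verify the checksums of everything on the device
btrfs scrub start /mnt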
Thankfully, due to the way BTRFS lays out its data (multiple superblock copies, checksummed metadata trees), there is no such thing as a single wrong place that loses everything. I've been through some flash media corruption (including high-quality SD cards from reputable brands - but you know, bad luck happens), and each time I've still been able to recover nearly everything (btrfs restore is your friend).
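If it ever happens to you, the restore tool is used roughly like this - it only reads from the broken filesystem and copies files out to somewhere healthy (device and target paths below are just examples):

Code:
# Pull files off a filesystem that no longer mounts, without writing to it
mkdir -p /mnt/rescue
btrfs restore -v /dev/mmcblk1p1 /mnt/rescue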
(07-31-2019, 07:41 AM)Arwen Wrote: @DrYak, do you have any experience with BTRFS Mirroring?
I've heard that in the past, if one mirror died, and you rebooted, that caused BTRFS to be un-mountable.
Kinda silly for RAID-1 behaviour, so I am assuming it was a bug and is being / has been fixed.
Sadly, I don't have enough experience for a definitive answer. I do use the RAID1 profile on multiple BTRFS installations (and it's the default for metadata on multi-device filesystems), but I haven't had a drive die on any of those installations.
On the other hand, the official BTRFS documentation now considers RAID0 and RAID1 stable (as opposed to RAID5 and RAID6, which aren't yet), so I expect RAID1-related corruption to be a thing of the past.
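For reference, the whole mirror workflow is only a handful of commands - a rough sketch, where the device names and the devid are just examples and not from an actual failure I've handled:

Code:
# Two-device RAID1 for both data and metadata
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
mount /dev/sda /mnt

# Periodic scrub reads everything and repairs bad copies from the good mirror
btrfs scrub start /mnt

# If /dev/sdb dies: mount degraded, then replace the missing device (devid 2 here)
mount -o degraded /dev/sda /mnt
btrfs replace start 2 /dev/sdc /mnt
btrfs replace status /mnt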