06-16-2020, 01:20 PM
First, let me mention that I've been successfully running Manjaro on top of a BTRFS root partition for quite some time now.
What's needed:
- edit /etc/mkinitcpio.conf to add btrfs into the initrd:
MODULES=(btrfs)
BINARIES=("/usr/bin/btrfs")
- then run mkinitcpio to create a new initrd with the necessary drivers (see the command sketch after this list).
- back up your old EXT4 partition (if it is not on a different device, e.g.: SD card vs eMMC).
- create and format a new BTRFS partition
- (optionally: create subvolumes to your liking)
- modify /boot/extlinux/extlinux.conf so that it points to the new partition created before (e.g.: I labelled mine ROOT_MNJRO):
root=LABEL=ROOT_MNJRO
- copy all the files from the old partition to the new one, preserving all permissions, ACLs, xattrs, hardlinks, etc.:
rsync -avPSHAX --inplace /mnt/sdcard/ /mnt/emmc/
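
For reference, here is a minimal sketch of the whole sequence. The device /dev/mmcblk2p1, the mount point /mnt/emmc and the subvolume name @ are just examples for illustration; substitute your own:

mkfs.btrfs -L ROOT_MNJRO /dev/mmcblk2p1
mount /dev/mmcblk2p1 /mnt/emmc
btrfs subvolume create /mnt/emmc/@    # optional, if you want a subvolume root
rsync -avPSHAX --inplace /mnt/sdcard/ /mnt/emmc/
mkinitcpio -P    # regenerate the initrds for all installed presets

If you do put the root in a subvolume, remember to also add rootflags=subvol=@ to the APPEND line in extlinux.conf, so the kernel mounts the right subvolume as root.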
(08-09-2019, 02:39 PM)Arwen Wrote: Oh, I forgot another way BTRFS and ZFS differ, BTRFS stores the checksum with the blocks, (data or metadata). {...}
This difference is not too important until you run across mis-written data. {...} With BTRFS, a scrub of the data may not detect this.
Actually, BTRFS *will detect it, too*.
For BTRFS to not detect it, the corrupted data would need to somehow magically match the corrupted checksum on the corrupted disk block. That's not theoretically impossible, but it is *EXTREMELY unlikely* if you picked a decent checksum algorithm when formatting your BTRFS drive.
You'll get error messages such as "checksum error, expected 'deadbeefh' got '00000000h' instead"
(source: my experience with a miswritten flash block on an SD card due to power-failure on my Raspberry Pi).
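
If you'd rather find such mismatches proactively instead of stumbling on them during a read, run a scrub; the mount point / below is just an example:

btrfs scrub start /
btrfs scrub status /    # shows progress plus a count of checksum errors found

Any mismatch also ends up in the kernel log.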
(08-09-2019, 02:39 PM)Arwen Wrote: This became important to me when I selected the first 8TB disk of reasonable price, the Seagate Archive SMR, (Shingled Magnetic Recording). This type of disk ALWAYS relocates data blocks on write, (somewhat like flash does). Using ZFS was a no-brainer on that disk. It gave me both data validation, (for the more complex firmware Seagate had to use), and snapshots so I can store and release incremental backups easily.
Cue the current backlash against WD for silently sneaking SMR drives into their NAS line-up, which behave poorly in ZFS RAIDz2. :-D
(Of course, this is because RAIDz2 and RAID6 need to do read/recompute-parity/write cycles whenever they write something, which leads to write amplification by the underlying SMR relocation scheme. It's not inherent to ZFS; it's common to anything with parity (including mdadm RAID6) that doesn't have tiered caching like BCacheFS to group everything into a single append-write to the backend.)
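To put a rough number on it: a small partial-stripe write on mdadm-style RAID6 with read-modify-write means reading the old data block plus the old P and Q parity blocks, recomputing, then writing all three back: 3 reads and 3 writes for one logical write, and each of those rewrites-in-place is exactly the pattern an SMR drive handles worst.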