05-04-2025, 01:50 PM
(This post was last modified: 05-04-2025, 02:30 PM by Dendrocalamus64.)
My PBP is still on a brand-new Hardkernel 256GB eMMC, so it's 0x01.
My Rockpro64 has a 128GB eMMC from Pine64. The filesystem on it dates from August 2021; lifetime writes: 1409 GB.
eMMC Life Time Estimation A [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A]: 0x02
eMMC Life Time Estimation B [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B]: 0x01
eMMC Pre EOL information [EXT_CSD_PRE_EOL_INFO]: 0x01
https://www.cnx-software.com/2019/08/16/...sh-memory/
Quote: The Extended Device Specific Data, formerly named Extended Card Specific Data, is where health reports are made available. It contains:
- a vendor proprietary health report, 32 bytes long
- device life time estimation type A, providing health status in increments of 10% (this refers to the SLC blocks in our eMMC)
- device life time estimation type B, providing health status in increments of 10% (this refers to the MLC blocks in our eMMC)
- pre-EOL info, reflecting device lifetime by average reserved blocks; returns values normal, warning (80% of reserved blocks consumed) and urgent (90% of reserved blocks consumed)
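Those 10% bands can be decoded mechanically from the raw value that `mmc extcsd read` (or the kernel's sysfs `life_time` attribute) reports. A minimal sketch; the helper name is made up, and the band arithmetic assumes the standard encoding where 0x01-0x0A are successive 10% bands and 0x0B means the estimated lifetime is exceeded:

```shell
# Hypothetical helper: map a raw EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_{A,B}
# value (e.g. 0x02) to the 10% band it represents.
life_time_band() {
    v=$(( $1 ))   # shell arithmetic accepts the 0x prefix
    if [ "$v" -ge 1 ] && [ "$v" -le 10 ]; then
        echo "$(( (v - 1) * 10 ))-$(( v * 10 ))% of device lifetime used"
    elif [ "$v" -eq 11 ]; then
        echo "estimated lifetime exceeded"
    else
        echo "undefined"
    fi
}

life_time_band 0x02   # the Rockpro64's type A value above
```

So the 0x02 for type A means the SLC blocks are somewhere in the 10-20% band, and the 0x01 for type B means the MLC blocks are still under 10%.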
I don't think the one I had fail died from excessive writes; I think a bug in the eMMC's on-board firmware corrupted its internal data structures, leading to sudden failure. There does not have to be any warning, so data on eMMCs needs to be kept backed up at all times.
Back then I hadn't set up zram yet and was swapping to the eMMC. Now that I have, iostat -h shows that kB_wrtn/s for zram0 is lower than for mmcblk2: 9.9k vs 14.5k. With my usage, swapping to the eMMC would less than double the data written.
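The "less than double" claim follows directly from those two iostat figures: if the pages currently written to zram0 instead went to the eMMC, its write rate would be the sum of the two. A quick check:

```shell
# Sanity check using the iostat -h figures quoted above:
# (current eMMC writes + zram writes redirected to eMMC) / current eMMC writes
awk 'BEGIN {
    emmc = 14.5   # kB_wrtn/s for mmcblk2
    zram = 9.9    # kB_wrtn/s for zram0
    printf "%.2fx\n", (emmc + zram) / emmc
}'
# prints 1.68x
```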
This is with the following config changes applied:
https://wiki.archlinux.org/title/Zram#Op...ap_on_zram
That is, a higher swappiness (180) than the default (60). In addition, vm.watermark_boost_factor = 0 prevents the periodic "swap storms".
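For reference, the two values discussed here as a persistent sysctl drop-in (the filename is arbitrary; only these two settings come from this post, the linked Arch wiki page lists a few more):

```
# /etc/sysctl.d/99-zram.conf (hypothetical filename)
vm.swappiness = 180
vm.watermark_boost_factor = 0
```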
I don't have any data on how much the higher swappiness increased write volume; I guess I could run with the default again to collect it.
The default value of watermark_boost_factor is terrible: it's what causes the PBP to lock up in out-of-memory conditions instead of letting the OOM killer actually work.
Fwd: Debian 11: Tuning kernel parameters swappiness and watermark_boost_factor to stop SWAP Storm