4 better solutions than DD Images
Hi forum,

I guess this is more of a suggestion to the release manager for pine64 and less of a question.

While I appreciate the dd image format's simplicity, it would make a lot more sense to distribute images less wastefully.

Part of the reason I mention this is that the distributed dd image has a corrupt partition table, so it can't be copied to an SD card by more efficient means.

I run into this problem regularly, and I'm personally a fan of a couple of methods:

1) the bash binary


This is more useful if you want to add obscurity (not security) to the images you distribute.

2) a zip file, along with 2 bash scripts that implement *an API*

This method is a bit more complex.

script 1: flash.sh

usage: flash.sh [--device|-d <device>] [--tmpdir|-t <tempdir>]

device  - a block device e.g. a default could be /dev/mmcblk0, exported as $DEVICE
tempdir - a temporary directory e.g. see man mktemp, exported as $TMPDIR

In turn, script 1 sources script 2, and script 2 exposes a few shell functions.

script 2: details.sh

fetch():   fetches all source files into $TMPDIR (set by script 1)
           URLs are encoded in the details.sh file
           small files (e.g. patch files) can be stored locally in the .zip file
           large files can be fetched from the internet

prepare(): applies patches
           partitions the device
           resizes partitions as appropriate
           sfdisk offers a really compact, scriptable way to specify a partition layout that fills the disk to the end (so you don't need separate 8GB, 16GB, and 32GB images)

flash():   could use dd, or not
           a more efficient approach for Pine64 would be to distribute .ext4 or .vfat filesystem images that can be mounted as loop devices (using losetup)

3) fastboot

This is arguably the standard way of distributing Android factory images and is described in detail at the link below
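For context, the usual fastboot flow looks something like this (the image file names are just the conventional ones from Android factory archives; the device has to be booted into fastboot mode first):

```shell
# Flash individual partitions, then reboot into the new system.
fastboot flash bootloader bootloader.img
fastboot reboot-bootloader
fastboot flash boot boot.img
fastboot flash system system.img
fastboot reboot
```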


4) There is always yocto too.. ¯\_(ツ)_/¯

Really, there must be a better alternative to sitting here waiting for 26 billion zeros to be written to an SD card.


It's more than just simple; it's raw. In most Linux distros you can loop-mount a dd image directly from the GUI file manager by simply right-clicking the file, no CLI required. And you can write the image using standard GUI tools like gnome-disk-utility.

What I do when imaging my SD cards is put the card in my Linux laptop and use gparted to shrink the last partition on the disk. I then use gparted again to get the last sector number of the last partition and add one to it (to account for sector zero). Pass this number to dd as the count= parameter and you won't have to image the unallocated space at the end. If you want to gzip the .img output file, you can also cat /dev/zero > zerofile and then delete that file to zero the free space in your shrunk partition; that should improve your compression ratio.

Of course, I only do this for my "clean" initial/base image. After that, backing up the files in my /home dir is usually all I need, but you might need more than that.
