10-20-2020, 05:37 PM
(This post was last modified: 10-20-2020, 05:59 PM by ab1jx.)
This is probably not the normal way to do it, but I installed the mrfixit image to an SD, updated it to Buster, and it seems to work fine. Now I'm trying to copy it to my 1 TB Intel SSD.
I made a 1 GB (generous) FAT32 partition and a 200 GB ext4 with gparted on the SSD. Then following piclone's example I used cp -ar to copy the files in the partitions on the SD to the SSD. Then edited the root= specification in extlinux.conf on it to the nvme path.
But it doesn't boot, it boots the eMMC instead. My notes:
Code: ssd:
Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 2048 2050047 2048000 1000M b W95 FAT32
/dev/nvme0n1p2 2050048 411650047 409600000 195.3G 83 Linux
sd:
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 32768 163839 131072 64M c W95 FAT32 (LBA)
/dev/sda2 * 262144 62333951 62071808 29.6G 83 Linux
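One visible difference in the listings above: the SD's partitions carry the boot flag (*) and the SSD's don't. The flag probably doesn't matter to u-boot on this machine, but to make the two layouts match, a dry-run sketch (device name assumed from the fdisk listing above):

```shell
# Print the command rather than running it; review, then run for real
# with: eval "$PLAN"
DEV=/dev/nvme0n1            # assumed from the fdisk listing above
PLAN="parted $DEV set 1 boot on"
echo "$PLAN"
```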
Mounted the sd on /sd
mounted the ssd on /mnt
cp -ar /sd/* /mnt
umount /sd
umount /mnt
mount /dev/nvme0n1p2 /mnt
mount /dev/sda2 /sd
cp -ar /sd/* /mnt
[wait]
root@pbp:/usr# umount /mnt
root@pbp:/usr# umount /sd
root@pbp:/usr#
mount /dev/nvme0n1p1 /mnt
joe /mnt/extlinux/extlinux.conf
root=/dev/nvme0n1p2 twice including the bak, important
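The notes above can be rolled into one script. This is a rough sketch, not a tested recipe: the device names are assumed to match the listings (SD reader at /dev/sda, SSD at /dev/nvme0n1), and the `.bak` filename is a guess at what "including the bak" refers to. Double-check the devices first; `cp -ar` onto the wrong target is unrecoverable.

```shell
#!/bin/sh
set -eu

fix_root() {  # rewrite the root= argument of an extlinux.conf append line
    sed 's|root=[^ ]*|root=/dev/nvme0n1p2|'
}

if [ "${REALLY_RUN:-0}" = 1 ]; then
    mkdir -p /sd /mnt
    mount /dev/sda1 /sd  && mount /dev/nvme0n1p1 /mnt
    cp -ar /sd/* /mnt    && umount /sd /mnt        # boot partition
    mount /dev/sda2 /sd  && mount /dev/nvme0n1p2 /mnt
    cp -ar /sd/* /mnt    && umount /sd /mnt        # root filesystem
    mount /dev/nvme0n1p1 /mnt
    sed -i 's|root=[^ ]*|root=/dev/nvme0n1p2|' \
        /mnt/extlinux/extlinux.conf /mnt/extlinux/extlinux.conf.bak
    umount /mnt
fi

# Quick check of the rewrite on a sample line (device name is made up):
echo 'append root=/dev/mmcblk1p2 rw' | fix_root
```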
I copied a Raspberry Pi SD to a USB hard drive and got it booting this way once. Maybe I forgot something here.
Duh, forgot to update the /etc/fstab on the SSD. Working on that, but it's time to eat.
On a non-booted image, dev, proc, sys, and run are empty directories.
Right, traditionally there are fstab entries for the mount points. I just tried
Code: /dev/nvme0n1p1 /boot vfat defaults 0 2
/dev/nvme0n1p2 / ext4 defaults,noatime 0 1
over an ssh connection and rebooted it, but I'm still seeing the emmc.
I don't know anything about uboot but I was hoping to fake it by copying a working sd.
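One thing that might make the fstab more robust: entries keyed on filesystem UUID instead of the device path still match if the device names shift between boots. A sketch with a placeholder UUID (the real ones come from `blkid /dev/nvme0n1p2`):

```shell
# Placeholder UUID -- substitute the real one from: blkid /dev/nvme0n1p2
ROOT_UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
printf 'UUID=%s / ext4 defaults,noatime 0 1\n' "$ROOT_UUID"
```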
10-21-2020, 06:27 AM
(This post was last modified: 10-21-2020, 08:25 AM by ab1jx.)
OK, this seems odd. I created the partitions in gparted but remembered that I hadn't specified mountpoints for them, and there's no place in gparted to do that. There is for the eMMC; I don't see why one has them and the other doesn't.
Notice this doesn't show mount point options
The exclamation point has to do with not finding superblocks. Gparted isn't quite right in places, I knew that.
There are superblocks:
Code: mke2fs -n /dev/nvme0n1p2
mke2fs 1.43.4 (31-Jan-2017)
/dev/nvme0n1p2 contains a ext4 file system labelled 'Linux'
last mounted on /mnt on Wed Oct 21 00:39:06 2020
Proceed anyway? (y,N) y
Creating filesystem with 51200000 4k blocks and 12804096 inodes
Filesystem UUID: 8b8f8fc5-edb8-465f-b3d3-cc7d3907bbbc
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
There's nothing I have to do to set the boot order as I recall.
Actually I'm not sure what mke2fs -n is really doing here. I don't think it's actually finding superblocks; it's calculating the locations from the drive size, block size, etc. It's a calculation, not an observation.
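That guess is right: with the sparse_super feature, backup superblocks sit at the start of block groups 1 and the powers of 3, 5, and 7, and mke2fs just multiplies the group number by the blocks per group (32768 for 4k-block ext4). Reproducing the first few numbers it printed:

```shell
# Backup superblocks live in groups 1, 3, 5, 7, 9, 25, 27, 49, ...
# (powers of 3, 5, and 7); each group here is 32768 blocks.
bpg=32768
for g in 1 3 5 7 9 25 27 49; do
    printf '%d ' $((g * bpg))
done
echo
```

Those come out as 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, matching the list above, so it really is arithmetic from the filesystem geometry rather than a scan of the disk.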
Probably the right thing to do is study the man pages, blow away this partition, write a little script that feeds all the right arguments to mke2fs (and fdisk), then copy the contents in from the SD again. It usually takes me a few tries before I get a drive set up the way I want it. Seeing that eMMCs are starting to fail ups the priority; I can't use one of those forever.
10-21-2020, 01:42 PM
(This post was last modified: 10-21-2020, 04:32 PM by ab1jx. Edit Reason: add wiki link)
Well, that didn't help, it still boots the eMMC. I have a serial console adapter somewhere. Seems like there should be some log files.
The official answer of course is at the wiki, I just hadn't looked at it in a while. https://wiki.pine64.org/index.php?title=...root_drive
I was so impressed with the 180 year MTBF that I bought another of these Intel SSDs, the 2 TB version, and put it in a USB housing I got from NewEgg. That's probably as close to permanent storage as you can get since even CDs and DVDs develop bad spots after years. It's plugged into a Raspberry Pi downstairs. I have 3 or 4 1-2 TB hard drives I've put in USB adapters, they can float around. And when they aren't mounted they aren't spinning or wearing out. I can stick any of my old hard drives into a USB adapter temporarily to retrieve a file so I've let my dinosaur computers die off.
10-21-2020, 06:53 PM
(This post was last modified: 10-21-2020, 07:16 PM by ab1jx. Edit Reason: looking in /dev)
Tinkering, I see there's an nvme-cli deb out there, so I installed it. And duh, it has a man page. It looks quite well evolved:
Code: nvme-1.0
usage: nvme <command> [<device>] [<args>]
The following are all implemented sub-commands:
list List all NVMe devices and namespaces on machine
id-ctrl Send NVMe Identify Controller
id-ns Send NVMe Identify Namespace, display structure
list-ns Send NVMe Identify List, display structure
create-ns Creates a namespace with the provided parameters
delete-ns Deletes a namespace from the controller
attach-ns Attaches a namespace to requested controller(s)
detach-ns Detaches a namespace from requested controller(s)
list-ctrl Send NVMe Identify Controller List, display structure
get-ns-id Retrieve the namespace ID of opened block device
get-log Generic NVMe get log, returns log in raw format
fw-log Retrieve FW Log, show it
smart-log Retrieve SMART Log, show it
smart-log-add Retrieve additional SMART Log, show it
error-log Retrieve Error Log, show it
get-feature Get feature and show the resulting value
set-feature Set a feature and show the resulting value
format Format namespace with new block format
fw-activate Activate new firmware slot
fw-download Download new firmware
admin-passthru Submit arbitrary admin command, return results
io-passthru Submit an arbitrary IO command, return results
security-send Submit a Security Send command, return results
security-recv Submit a Security Receive command, return results
resv-acquire Submit a Reservation Acquire, return results
resv-register Submit a Reservation Register, return results
resv-release Submit a Reservation Release, return results
resv-report Submit a Reservation Report, return results
dsm Submit a Data Set Management command, return results
flush Submit a Flush command, return results
compare Submit a Compare command, return results
read Submit a read command, return results
write Submit a write command, return results
write-zeroes Submit a write zeroes command, return results
write-uncor Submit a write uncorrectable command, return results
reset Resets the controller
subsystem-reset Resets the controller
show-regs Shows the controller registers. Requires admin character device
discover Discover NVMeoF subsystems
connect-all Discover and Connect to NVMeoF subsystems
connect Connect to NVMeoF subsystem
disconnect Disconnect from NVMeoF subsystem
version Shows the program version
help Display this help
See 'nvme help <command>' for more information on a specific command
The following are all installed plugin extensions:
intel Intel vendor specific extensions
lnvm LightNVM specific extensions
memblaze Memblaze vendor specific extensions
See 'nvme <plugin> help' for more information on a plugin
nvme list shows me:
Code: nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 BTNH93841ZZZ1P0B INTEL SSDPEKNW010T8 1 1.02 TB / 1.02 TB 512 B + 0 B 002C
From what I'd read I was thinking I'd have to download and build something. It even has SMART. Don't know if there's any support for USB-connected nvme. There's also nothing in there for booting from an nvme device so I'll probably still have to build something.
Oh, OK, in /dev I see:
Code: nvme0
nvme0n1
nvme0n1p1
nvme0n1p2
nvme0n1p5
Those correspond to my partitions, so /boot is on /dev/nvme0n1p1 and the root (/) is on /dev/nvme0n1p5. But I knew that before installing nvme-cli, and they're already in /boot/extlinux/extlinux.conf and /etc/fstab. I just did a locate nvme and it was some sort of revelation.
My SMART log:
Code: Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning : 0
temperature : 26 C
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 0%
data_units_read : 5,584
data_units_written : 295,839
host_read_commands : 112,035
host_write_commands : 250,270,384
controller_busy_time : 57
power_cycles : 159
power_on_hours : 205
unsafe_shutdowns : 30
media_errors : 0
num_err_log_entries : 0
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 0 C
Temperature Sensor 2 : 0 C
Temperature Sensor 3 : 0 C
Temperature Sensor 4 : 0 C
Temperature Sensor 5 : 0 C
Temperature Sensor 6 : 0 C
Temperature Sensor 7 : 0 C
Temperature Sensor 8 : 0 C
10-31-2020, 04:42 PM
(This post was last modified: 10-31-2020, 04:45 PM by ab1jx. Edit Reason: add url)
But I didn't get much farther. I have my nvme in my fstab as noauto (mount) and I boot from my eMMC, which I made a backup of. I guess that's good enough for now. I'm on the arm debian mailing list and I think I could install official Debian on here if I rummage around and find my serial console cable. Supposedly once it's installed it can drive the video, the standard Debian installer works. Not sure about the nvme though.
https://www.debian.org/releases/buster/a...01.en.html
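For reference, a noauto entry like the one described keeps the drive out of the automatic mounts at boot but still mountable by name. The device and mountpoint here are assumptions, not copied from the real fstab:

```shell
# noauto: listed in fstab but only mounted on request (mount /data),
# so a missing or slow drive can't stall the boot.
dev=/dev/nvme0n1p2
mnt=/data
printf '%s %s ext4 defaults,noauto,noatime 0 2\n' "$dev" "$mnt"
```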
11-29-2020, 05:16 PM
(This post was last modified: 12-23-2020, 09:18 AM by ab1jx.)
Just tinkering with something else, if I boot from a mrfixit image, I can see the nvme and mount and use the partitions on it. I was quite surprised by this. I don't know if it contains a driver or if I managed to install the driver somewhere else.
The same thing also happens with Daniel Thompson's Bullseye installed on an SD. What I'd really like is Bullseye booting from the nvme. But I don't want GPT and I don't want umpteen partitions. And I'd like to not have any Manjaro in there. But for now I mount my 700 GB partition on the nvme as /data and it's fairly workable. Maybe I should manually do a debootstrap onto it. I left a 1 GB partition for /boot, and I have a 10 GB swap and a 4 GB hibernate partition. That leaves 196 GB to install the rest of Linux into while leaving my 700 GB alone.
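The debootstrap idea might look something like this. An untested sketch: partition numbers are assumed from the layout described above (/boot on p1, root on p5 as in the earlier /dev listing), it needs the debootstrap package and network access, and it's gated so nothing destructive happens by accident.

```shell
#!/bin/sh
set -eu
MSG="debootstrap sketch loaded; set REALLY_RUN=1 to execute"

if [ "${REALLY_RUN:-0}" = 1 ]; then
    mount /dev/nvme0n1p5 /mnt                    # assumed root partition
    debootstrap --arch=arm64 bullseye /mnt http://deb.debian.org/debian
    mount /dev/nvme0n1p1 /mnt/boot
    # Still manual from here: chroot in, install a kernel, and write
    # /etc/fstab and /boot/extlinux/extlinux.conf by hand.
fi
echo "$MSG"
```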