Hey, I just wanted to add some data toward answering the original question "will an SSD noticeably improve performance?"
Code:
rock64@pinebookpro:~$ iostat -h mmcblk1
Linux 4.4.190-1233-rockchip-ayufan-gd3f1be0ed310 (pinebookpro) 12/11/2019 _aarch64_ (6 CPU)
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          22.9%    0.7%    8.9%    0.2%    0.0%   67.3%

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
mmcblk1          26.51       128.4k       259.7k      58.5G     118.3G
rock64@pinebookpro:~$ uptime
22:58:42 up 5 days, 12:32, 3 users, load average: 0.48, 0.42, 0.42
So we're looking at 5 days of uptime booted off the eMMC, doing a variety of things from the desktop. The iowait % is the big number to pay attention to here: it's the percentage of time the machine spent with a CPU core sitting idle while waiting on the disk to complete an I/O operation.
That was the case only two tenths of a percent of the time, which pretty clearly shows that disk I/O isn't the performance bottleneck on the PBP under typical desktop use. If your usage pattern involves extremely heavy disk access (constantly compiling, for example) you'll benefit more from an NVMe drive; otherwise you really won't see much improvement most of the time.
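If you want to check this on your own machine, here are a couple of ways to watch iowait live. These are just standard sysstat/procps commands, nothing PBP-specific; the device name and intervals are whatever suits you:
Code:
# CPU breakdown including %iowait, refreshed every 5 seconds
iostat -c 5

# Per-device stats (including %util) for the eMMC, refreshed every 5 seconds
iostat -xh mmcblk1 5

# vmstat's "wa" column is the same iowait figure
vmstat 5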
There are also a handful of ways to mitigate the need for an SSD even if you do require fast disk access. As an example, chromium seems to like fast storage for its user cache, so I've set up a 550MB zram mountpoint for that. If you need persistence for the fast cache data you can combine tmpfs (or zram, which will be a tad slower but much more memory efficient) with overlayfs, using the memory device as the upper layer and storage on the eMMC as a persistent lower layer.
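For reference, here's a rough sketch of both setups. The sizes, the chromium cache path, and the ~/.cache-chromium-lower directory are just illustrative choices for this example, not anything blessed by chromium or the PBP images; adjust to taste. The overlay variant uses tmpfs for the upper layer to keep it short, but a formatted and mounted zram device works the same way if you point upperdir/workdir at it:
Code:
# zram-backed cache directory (what I use for chromium's user cache)
sudo modprobe zram
ZDEV=$(sudo zramctl --find --size 550M)        # allocates e.g. /dev/zram0
sudo mkfs.ext4 -q "$ZDEV"
sudo mount "$ZDEV" "$HOME/.cache/chromium"

# Alternative with persistence: overlayfs with a tmpfs upper layer over a
# lower layer that lives on the eMMC. upperdir and workdir must sit on the
# same filesystem, hence both live on the tmpfs.
sudo mkdir -p /mnt/cache-ram "$HOME/.cache-chromium-lower"
sudo mount -t tmpfs -o size=550M tmpfs /mnt/cache-ram
sudo mkdir -p /mnt/cache-ram/upper /mnt/cache-ram/work
sudo mount -t overlay overlay \
    -o lowerdir=$HOME/.cache-chromium-lower,upperdir=/mnt/cache-ram/upper,workdir=/mnt/cache-ram/work \
    "$HOME/.cache/chromium"
Reads come from the eMMC-backed lower layer, writes land in RAM; anything you want to keep across reboots has to be copied back down to the lower directory yourself.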
Anyway, long story short: 99.8% of the time, over 5 days of desktop use with heavy browsing, code editing, watching Netflix, etc., my PBP wouldn't have seen any real improvement from using NVMe over eMMC.