Hi All,
So I've written another install tutorial, this time using SkiffOS; hopefully the first one helped a few users. I discovered SkiffOS on the RockPro64 software release page, but I was surprised to see literally no posts in the forum about it. I contacted the developer on Discord, and with his endless help and patience I now have a working server running Docker and several containers.
UPDATE 19 JUNE 2022: After a lot of debugging and a lot of hard work on the part of @paralin1, the RockPro64 is on the 5.18 kernel, and great news: for anyone who uses the Marvell 88SE9230 SATA card, it now works perfectly out of the box with SkiffOS. The udev rule is no longer required, as it is included in the SkiffOS RockPro64 config.
UPDATE: @paralin1 is currently working on providing pre-built images for users who want to try SkiffOS without compiling their own build; a link will be attached when the images are available.
SkiffOS GitHub page
The Docker containers I have are:
- a simple Samba container for a NAS
- a Radicale container for CalDAV calendar and contact sync
- a Home Assistant container for home automation
- an ESPHome container for managing my home-automation Sonoff light switches
- a Jellyfin container as a media server
- a Caddy server container for my reverse proxy and Let's Encrypt certificate renewal
- a Snikket server container for an XMPP server that's simple and easy to set up
- and finally a Portainer container to manage all my containers with a nice UI
Now, this sounds like a lot of containers for an ARM board, but at idle, which is most of the time, the RockPro64 sits at 1% CPU use.
Now before I begin, SkiffOS describes itself as:
Quote:SkiffOS is a lightweight operating system for any Linux-compatible computer, ranging from RPi, Odroid, NVIDIA Jetson, to Desktop PCs, Laptops (i.e. Apple MacBook), Phones (PinePhone), Containers, or Cloud VMs. It is:
- Adoptable: any userspace can be imported/exported to/from container images.
- Familiar: uses simple Makefile and KConfig language for configuration.
- Flexible: supports all major OS distributions inside containers.
- Portable: containers can be moved between machines of similar CPU type.
- Reliable: changes inside user environments cannot break the host boot-up.
- Reproducible: a given Skiff Git tree will always produce identical output.
Uses Buildroot to produce a minimal "single-file" host OS as a standardized base cross-platform operating system "shim" for hosting containers. Most Linux platforms have widely varying requirements for kernel, firmware, and additional hardware support packages. The immutable SkiffOS host system contains everything needed to support the hardware, cleanly separated from the applications
The main thing to focus on, I think, is that it is based on Buildroot. This is an immutable OS, so you compile the build on your daily machine (i.e. laptop or desktop) and add any additional software packages and configuration options at build time. The benefits of an immutable system are best described above, but the main one is that once set up, you can reproduce the build with ease and it will barely need any tinkering once it's up and running; plus it provides the security that, should anything go wrong, a quick reboot will restore the system back to its initial boot state.
SkiffOS has a "persist" partition that stores all the files and folders you want to persist across reboots. This is where all the Docker configuration files and state can reside.
SkiffOS also has the option of a Core container. This is a container that provides an OS environment more familiar to most, making it easier to interact with the persist partition; you can choose between many core environments such as Alpine, Debian, Gentoo, or even Ubuntu with a desktop environment. All this is best explained on the GitHub page. I, however, didn't need a core container, as once everything is set up the only interaction I need is with the containers themselves.
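For illustration, a core environment is just another config added to the SKIFF_CONFIG list used later in this tutorial. The exact config name below is an assumption on my part; list the available ones with ls configs/core inside the SkiffOS checkout:
Code:
# hypothetical example: board config plus a Debian core environment
export SKIFF_CONFIG=pine64/rockpro64,core/debian_stable,apps/compose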
So let's begin! To start with, I would create a Projects directory on your daily machine, preferably in your home directory, and then git clone the repo from GitHub into the Projects directory.
Code:
mkdir Projects
cd Projects/
git clone https://github.com/skiffos/SkiffOS.git
Now, I created a configuration directory that holds all my Docker container configs, files, and scripts; I called it my_docker_config. This allows me to export it as an option when compiling SkiffOS at build time. Of course, you could create these files and folders and copy and paste their contents one at a time into the persist partition after first boot, but this defeats the benefit of having a reproducible build. So a little work now to create a config directory will really pay off.
I have a udev rule that allows my Marvell 88SE9230 SATA card to work and a fan script that starts the fan and keeps it at a constant speed; these are best explained in my other tutorial, but they are the types of things I would also want in my configuration directory. I also have a couple of systemd .mount files to mount my SSDs on startup, similar to how an "fstab" file would work.
This is how my my_docker_config directory tree looks:
Code:
my_docker_config
└── root_overlay
    ├── etc
    │   ├── skiff
    │   │   └── authorized_keys
    │   │       └── my-key.pub
    │   ├── systemd
    │   │   └── system
    │   │       ├── fan-speed.service
    │   │       ├── mnt-ssd1.mount
    │   │       ├── mnt-ssd2.mount
    │   │       └── multi-user.target.wants
    │   │           ├── fan-speed.service -> ../fan-speed.service
    │   │           ├── mnt-ssd1.mount -> ../mnt-ssd1.mount
    │   │           └── mnt-ssd2.mount -> ../mnt-ssd2.mount
    │   └── udev
    │       └── rules.d
    │           └── 99-marvell.rules
    └── opt
        ├── mydocker
        │   ├── caddy
        │   │   ├── caddy_data
        │   │   └── Caddyfile
        │   ├── docker-compose.yml
        │   ├── esphome
        │   │   └── config
        │   ├── homeassistant
        │   │   ├── automations.yaml
        │   │   ├── configuration.yaml
        │   │   ├── groups.yaml
        │   │   └── www
        │   │       ├── picone.PNG
        │   │       └── pictwo.PNG
        │   ├── jellyfin
        │   ├── radicale
        │   │   ├── config
        │   │   │   └── config
        │   │   ├── data
        │   │   └── log
        │   └── snikket
        │       ├── acme_challenges
        │       ├── snikket.conf
        │       └── snikket_data
        └── rockpro64_fan
            └── fan_script.sh
You can see I keep my Docker configuration files in mydocker, a fan script under "opt", and the rest of my config in "etc". This is what's called a root overlay: all these files and directories will be overlaid upon boot and placed in their respective folders in the live environment. You can also see I have "authorized_keys" under skiff; this is where you place your public SSH keys, so you have SSH access upon boot.
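If you want to build the same skeleton from scratch, something like the following will create it (a sketch; the key filename my-key.pub stands in for whatever your public key is actually called):
Code:
cd ~/Projects
mkdir -p my_docker_config/root_overlay/etc/skiff/authorized_keys
mkdir -p my_docker_config/root_overlay/etc/systemd/system/multi-user.target.wants
mkdir -p my_docker_config/root_overlay/etc/udev/rules.d
mkdir -p my_docker_config/root_overlay/opt/mydocker
mkdir -p my_docker_config/root_overlay/opt/rockpro64_fan
cp ~/.ssh/id_rsa.pub my_docker_config/root_overlay/etc/skiff/authorized_keys/my-key.pub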
My udev rule, named 99-marvell.rules, looks like this
(this udev rule is no longer required, as it is now included in the SkiffOS config for the RockPro64; it's shown for information only):
Code:
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x1b4b", ATTR{device}=="0x9230", RUN+="/bin/bash -c 'echo %k > /sys/bus/pci/drivers/ahci/bind'"
My fan_script.sh looks like this:
Code:
#!/bin/bash
# unload the kernel fan driver so we can drive the PWM directly
rmmod pwm-fan
# expose pwm0 on pwmchip1 via sysfs
echo 0 > /sys/class/pwm/pwmchip1/export
# set the duty cycle and period, then enable the fan at a constant speed
echo 110 > /sys/class/pwm/pwmchip1/pwm0/duty_cycle
echo 500 > /sys/class/pwm/pwmchip1/pwm0/period
echo 1 > /sys/class/pwm/pwmchip1/pwm0/enable
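One thing to check: systemd will refuse to run the script unless it is executable, so make sure the executable bit is set inside your overlay before you build:
Code:
chmod +x my_docker_config/root_overlay/opt/rockpro64_fan/fan_script.sh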
This will of course need a "fan speed" service to start it at boot, plus a symlink to the service in the multi-user.target.wants directory, as shown in the directory tree.
The service, aptly named fan-speed.service, looks like this:
Code:
[Unit]
Description=change fan speed at startup
After=basic.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/rockpro64_fan/fan_script.sh
[Install]
WantedBy=multi-user.target
One of my SSD mount files looks like this:
(Please note that the name of the mount file must be the path to the mount point with each folder separated by a hyphen, i.e. /mnt/mydrive/here would be mnt-mydrive-here.mount.)
(Also note that a symlink to the .mount file must also be placed in the multi-user.target.wants directory.)
Code:
[Unit]
Description=mounting Files from SSD
[Mount]
Where=/mnt/ssd1
What=/dev/disk/by-label/storage1
[Install]
WantedBy=multi-user.target
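If you're ever unsure of the hyphen-escaping rule, systemd can generate the correct unit name for you, and the wants symlinks can be created with ln -s (a quick sketch, run from inside the overlay's etc/systemd/system directory):
Code:
systemd-escape -p --suffix=mount /mnt/mydrive/here
# prints: mnt-mydrive-here.mount
cd my_docker_config/root_overlay/etc/systemd/system
ln -s ../mnt-ssd1.mount multi-user.target.wants/mnt-ssd1.mount
ln -s ../fan-speed.service multi-user.target.wants/fan-speed.service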
Here is the basic config for Radicale:
Code:
[server]
hosts = 0.0.0.0:5232
max_connections = 15
# 100 Megabyte
max_content_length = 100000000
# 20 seconds
timeout = 20
[auth]
# Average delay after failed login attempts in seconds
delay = 600
type = htpasswd
htpasswd_filename = /etc/radicale/users
# encryption method used in the htpasswd file
htpasswd_encryption = md5
[storage]
filesystem_folder = /data/collections
And the snikket.conf file; please edit it with your email address and the domain name for Snikket (please leave the ports commented out until later):
Code:
# The primary domain of your Snikket instance
SNIKKET_DOMAIN=mysnikket.duckdns.org
# An email address where the admin can be contacted
# (also used to register your Let's Encrypt account to obtain certificates)
SNIKKET_ADMIN_EMAIL=myemailaddress@mail.com
#SNIKKET_TWEAK_HTTP_PORT=5080
#SNIKKET_TWEAK_HTTPS_PORT=5443
And the Caddyfile required for Caddy (please edit it with the domain names you have chosen for your services):
Code:
(log_common) {
    log {
        output file /var/log/caddy/{args.0}.access.log
    }
}

myradicale.duckdns.org {
    handle_path /radicale* {
        reverse_proxy localhost:5232 {
            header_up X-Script-Name /radicale
        }
    }
    import log_common myradicale.duckdns.org
}

myhomeassistant.duckdns.org {
    reverse_proxy localhost:8123
    import log_common myhomeassistant.duckdns.org
}

http://mysnikket.duckdns.org,
http://groups.mysnikket.duckdns.org,
http://share.mysnikket.duckdns.org {
    reverse_proxy localhost:5080
    request_body {
        max_size 20M
    }
}

mysnikket.duckdns.org,
groups.mysnikket.duckdns.org,
share.mysnikket.duckdns.org {
    tls /snikket/letsencrypt/live/mysnikket.duckdns.org/fullchain.pem /snikket/letsencrypt/live/mysnikket.duckdns.org/privkey.pem
    reverse_proxy https://localhost:5443 {
        transport http {
            tls_insecure_skip_verify
        }
    }
    request_body {
        max_size 20M
    }
}
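Before baking this into the build you can run the Caddyfile through Caddy's formatter, which chokes on unbalanced braces and so is a cheap way to catch typos (an optional sketch; run it on any machine with Docker, from the directory containing the Caddyfile):
Code:
docker run --rm -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" caddy:latest \
    caddy fmt /etc/caddy/Caddyfile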
And finally, my docker-compose.yml file for all my containers apart from Portainer, which needs to access the Docker daemon via a Unix socket
(please edit to suit your needs):
Code:
version: '3.7'
services:
  caddy:
    depends_on:
      - radicale
      - homeassistant
    image: caddy:latest
    container_name: caddy_reverse_proxy
    volumes:
      - /mnt/persist/mydocker/caddy/caddy_data:/data
      - /mnt/persist/mydocker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /mnt/persist/mydocker/snikket/snikket_data:/snikket:ro
      - /mnt/persist/mydocker/caddy/logs:/var/log/caddy/
    environment:
      - TZ=Europe/London
    restart: unless-stopped
    network_mode: host
  radicale:
    image: tomsquest/docker-radicale
    container_name: radicale
    ports:
      - 127.0.0.1:5232:5232
    init: true
    read_only: true
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - SETUID
      - SETGID
      - CHOWN
      - KILL
    healthcheck:
      test: curl -f http://127.0.0.1:5232 || exit 1
      interval: 30s
      retries: 3
    restart: unless-stopped
    volumes:
      - /mnt/persist/mydocker/radicale/data:/data
      - /mnt/persist/mydocker/radicale/config:/config:ro
      - /mnt/persist/mydocker/radicale/users:/etc/radicale/users
      - /mnt/persist/mydocker/radicale/log:/var/log/radicale/log
    environment:
      - TZ=Europe/London
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /mnt/persist/mydocker/homeassistant:/config
    environment:
      - TZ=Europe/London
    restart: unless-stopped
    network_mode: host
  esphome:
    image: esphome/esphome
    volumes:
      - /mnt/persist/mydocker/esphome/config:/config:rw
    environment:
      - TZ=Europe/London
    network_mode: host
    restart: unless-stopped
  snikket_proxy:
    container_name: snikket-proxy
    image: snikket/snikket-web-proxy:beta
    env_file: /mnt/persist/mydocker/snikket/snikket.conf
    network_mode: host
    volumes:
      - /mnt/persist/mydocker/snikket/snikket_data:/snikket
      - /mnt/persist/mydocker/snikket/acme_challenges:/var/www/html/.well-known/acme-challenge
    restart: unless-stopped
  snikket_certs:
    container_name: snikket-certs
    image: snikket/snikket-cert-manager:beta
    env_file: /mnt/persist/mydocker/snikket/snikket.conf
    volumes:
      - /mnt/persist/mydocker/snikket/snikket_data:/snikket
      - /mnt/persist/mydocker/snikket/acme_challenges:/var/www/.well-known/acme-challenge
    restart: unless-stopped
  snikket_portal:
    container_name: snikket-portal
    image: snikket/snikket-web-portal:beta
    network_mode: host
    env_file: /mnt/persist/mydocker/snikket/snikket.conf
    restart: unless-stopped
  snikket_server:
    container_name: snikket
    image: snikket/snikket-server:beta
    network_mode: host
    volumes:
      - /mnt/persist/mydocker/snikket/snikket_data:/snikket
    env_file: /mnt/persist/mydocker/snikket/snikket.conf
    restart: unless-stopped
  samba:
    image: servercontainers/samba
    container_name: samba-server
    restart: unless-stopped
    network_mode: host
    environment:
      WSDD2_DISABLE: 1
      AVAHI_DISABLE: 1
      ACCOUNT_user1: "user1:1000:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:DF56ET792A681E1A23E0F75695:[U ]:LCT-4502WT5:"
      UID_user1: 1020
      GROUPS_user1: users
      ACCOUNT_user2: "user2:1000:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:29395D100D389TEB9E295A747E420:[U ]:LCT-521D03D5:"
      UID_user2: 1021
      GROUPS_user2: users
      GROUP_users: 100
      SAMBA_VOLUME_CONFIG_Foldername: "[Foldername Share]; path=/shares/Foldername; valid users = user1, user2; guest ok = no; force group = users; read only = no; browseable = yes"
      SAMBA_VOLUME_CONFIG_media: "[media Share]; path=/shares/media; valid users = user1, user2; guest ok = no; force group = users; read only = no; browseable = yes"
    volumes:
      - /mnt/ssd1/Foldername:/shares/Foldername
      - /mnt/ssd2/media:/shares/media
  jellyfin:
    image: linuxserver/jellyfin:latest
    container_name: jellyfin
    network_mode: host
    volumes:
      - /mnt/persist/mydocker/jellyfin/config:/config
      - /mnt/persist/mydocker/jellyfin/cache:/cache
      - /mnt/ssd2/media:/media
    restart: unless-stopped
    environment:
      - TZ=Europe/London
After you have your configuration directory, move it to ~/Projects/SkiffOS/configs/apps/; this will make it an exportable option in the next step.
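Assuming you created my_docker_config alongside the SkiffOS checkout in ~/Projects, the move is just:
Code:
mv ~/Projects/my_docker_config ~/Projects/SkiffOS/configs/apps/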
So on to the installation!
Hopefully you're still in the Projects directory, with the freshly cloned SkiffOS repo and your custom config directory moved into place in the apps folder. As per the instructions on the SkiffOS GitHub page, you need to export all the options you want for the build; I have my board config, Docker Compose (which also installs Docker), and my custom config directory.
It looks like this:
Code:
cd SkiffOS
export SKIFF_CONFIG=pine64/rockpro64,apps/compose,apps/my_docker_config
OK, so I should mention that if you require any additional software, such as "htop", now is the time to select it. This is done by typing:
Code:
make br/menuconfig
Please note! All these commands will only work if you are in the SkiffOS directory located in the previously created Projects directory.
This make command will open an ncurses menu that allows you to select additional software, system configs, kernel options, etc.
The "Target packages" option will allow you to select additional packages such as htop.
The / key will let you search all the options, and space will select the option currently highlighted. (Remember to save before you exit the ncurses menu.)
Now to finish up the build process.
Code:
make configure
make compile
This last command will begin the compile process; this can take some time depending on your hardware, so now is the time to get a coffee!
(Please note that when you recompile, for example after an update to SkiffOS, it will take a lot less time, as a previously compiled image will exist and only changed options and packages will be compiled.)
Now to copy the compiled image to an SD card or eMMC (although I currently haven't tested whether eMMC works).
You will need to be root to perform these next commands:
Code:
sudo bash
blkid                           # to see where the SD card is located
export PINE64_SD=/dev/mmcblk0   # edit this to match where your SD card is located
make cmd/pine64/common/format   # formats the card; if this errors and you're using Ubuntu, see the note below
make cmd/pine64/common/install  # copies the build to the SD card
(Additional note: whilst using Arch I had no issue with the format command above; however, after switching to an Ubuntu-based OS I noticed it failed with an error. The solution was to edit the formatting script and add a "p" to every partition and to the fatlabel.)
Code:
nano ~/Projects/SkiffOS/configs/pine64/common/scripts/format_sd.sh
Note the addition of the "p" after every closing curly bracket:
Code:
echo "Formatting boot partition..."
mkfs.vfat -F 32 ${PINE64_SD_SFX}p1
fatlabel ${PINE64_SD_SFX}p1 boot
echo "Formatting rootfs partition..."
$MKEXT4 -L "rootfs" ${PINE64_SD_SFX}p2
echo "Formatting persist partition..."
$MKEXT4 -L "persist" ${PINE64_SD_SFX}p3
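After formatting and installing, you can sanity-check the result before booting (optional; assumes the SD card is at /dev/mmcblk0 as exported above):
Code:
lsblk -o NAME,LABEL,SIZE /dev/mmcblk0
# expect three partitions labelled boot, rootfs and persist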
Now you're ready to boot! But before you do, I would create the DuckDNS (or other DDNS provider) entries for the services you will expose to the internet (please see my other tutorial for assistance); in my case Home Assistant, Radicale, and the Snikket XMPP server, so three entries in total.
I will point out, for completeness' sake, that if you're following along with this tutorial and want to trial Snikket as an XMPP server, it is not recommended to use a dynamic DNS service; preferably you will want a static IP address from your internet provider, and you should use the DNS providers recommended on the Snikket website. I have a static IP but am using DuckDNS as a free DNS, which works perfectly for my use case.
When you boot for the first time, it may take a few minutes for SSH to become available. If SSH is not up after 15 minutes, either the SSH keys provided in the config directory are incorrect or you have an issue and the SD card has failed to boot. (Connecting a serial console could help you resolve the issue; see here.)
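To find the board and log in, something like the following works (a sketch; the subnet and IP address are assumptions, so adjust them to your network):
Code:
nmap -p 22 --open 192.168.1.0/24   # find hosts with SSH open on your LAN
ssh root@192.168.1.50              # log in with the key you placed in authorized_keys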
After you have successfully booted, move your mydocker config directory to the persist partition:
Code:
mv /opt/mydocker /mnt/persist/
Now to get Portainer up and running:
Code:
docker run -d -p 9443:9443 --name portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
cr.portainer.io/portainer/portainer-ce:latest
You will need to set up Portainer to your requirements (this Docker command will only get it installed and running).
Next, pull all the container images for your services (this command will only work if you're in the directory where your docker-compose file is located); in my case:
Code:
cd /mnt/persist/mydocker/
docker-compose pull   # this pulls all images and can take some time
So, if you're following along and are using the Radicale container, you will need to run a one-time command to create a user password using htpasswd. This can be done with a temporary container; the container is removed after the command completes.
Code:
docker run -v /mnt/persist/mydocker/radicale:/radicale --rm -it alpine:edge sh -c "apk add apache2-utils && htpasswd -c /radicale/users calendar"
(This creates the password for the user "calendar"; the --rm removes the container after completion.)
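You can confirm the users file was written where the compose file expects it (an optional check):
Code:
cat /mnt/persist/mydocker/radicale/users
# should show a line like: calendar:$apr1$...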
Next, if you're using the Samba container for a NAS and would prefer to use password hashes rather than plain-text passwords, run this Docker command for each user and fill in the requested details.
It will print to screen a hash for the specified user, which needs to be added to the docker-compose.yml file where it says ACCOUNT_username, as shown above.
Code:
docker run -ti --rm --entrypoint create-hash.sh servercontainers/samba
Now is the time to open the required ports on your router. Caddy only requires 80 and 443 for the reverse-proxy service; however, Snikket requires quite a few ports, as shown here.
Snikket requires a configuration file, as shown above; however, to use it with a reverse proxy (because Snikket gets its own Let's Encrypt certificates), you will need to change the ports in the .conf file (i.e. uncomment them) to something different from 80 and 443, which are in use by the Caddy container.
Before you uncomment these ports in the config, I would start the Snikket containers without the reverse proxy running; that way the Snikket server can obtain its own Let's Encrypt certs without issue. The certs will also renew without issue on the newly altered ports (in my case 5080 and 5443), so from then on there is no need to edit again.
Run these commands, then edit the .conf file and uncomment the alternative ports:
Code:
docker-compose up snikket_proxy snikket_certs snikket_portal snikket_server
# press Ctrl+C to stop the containers
nano /mnt/persist/mydocker/snikket/snikket.conf
OK, so you're almost there. If you have a backup of your Home Assistant install, now is the time to copy it to the homeassistant directory; otherwise the Home Assistant container will start from scratch and will need to be set up as new.
Finally!
Code:
docker-compose up -d
This will start the containers in detached mode (i.e. in the background).
If all is working, you should see the Home Assistant login screen, the Radicale login screen, and the Snikket admin login screen at the addresses you specified with your DNS. Of course, you will also see all the other services via their local network address/IP.
I would now create an admin invite for yourself to access the Snikket admin page:
Code:
docker exec snikket create-invite --admin --group default
This will generate a URL for your first admin password setup.
All services will need configuring to suit your needs, but all containers should be running and healthy; this can be checked in the Portainer UI.
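You can also check container health from the command line rather than the Portainer UI:
Code:
docker ps --format '{{.Names}}: {{.Status}}'
# healthy containers show "Up ... (healthy)" where a healthcheck is defined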
Congrats!!
As a final note!
Here's how to update SkiffOS if there has been a new release, or if you have changed the config of your build or require additional packages:
Code:
cd Projects/SkiffOS
git pull --recurse-submodules
Note! You may need to rebuild fully if too much has changed
(in which case the make clean command should be used before compiling):
Code:
make clean
Then compile:
Code:
make compile
Then push the updated image:
Code:
./scripts/push_image.sh root@SKIFFOS_IP_ADDRESS