PINE64
Kubernetes on SoPine cluster part.1




Kubernetes on SoPine cluster part.1 - maya.b - 12-31-2017

So this is a quick guide on how I got Kubernetes installed and running on 5 nodes on a prototype SoPine clusterboard. This should also work on Rock64s, Pine64(+)/LTS, or anything else really. I'm assuming you already know how to flash OS SD card images using your favourite host OS, and can configure a router, in particular assigning manual DHCP addresses to hosts.

This isn't step by step, and if you need clarification, then perhaps go learn about whatever it is first. This is not a beginner's guide.

Background:
I recently got the book "Kubernetes Up & Running" (KUAR) by Hightower, Burns & Beda from O'Reilly. It came out in September 2017 so it's fairly current, although, as with all tech books, some things are already out of date.

I read the intro and chapter 1 - nice and informative.

Chapter 2 - "you may want to start experimenting" it suggested. Uhhm, yes. "You should use an online service offering" it advised. Uhmm, if I have to ... "If you really want to set it up yourself ... appendix A has a guide" it revealed. I skipped the rest of Chapter 2 for now. Chapters 3 through Appendix A, I'll get back to later as well.

NB* THIS IS SO FAR FROM PRODUCTION READY.

The steps that follow are a rough paraphrase of Appendix A in the KUAR book, with a few changes (i.e. the already-out-of-date stuff).

Step 1- Get hardware.
  • Two or more sopines
  • sopine clusterboard (takes care of power cabling, network cabling (it has a built-in GigE Ethernet switch), cable management and all the other hassles of DIY clusters)
Step 2- make a bunch of OS SD cards
  • I used an Armbian 5.37 BSP Xenial server build; any Xenial build should work, but stick with the minimal builds, as you won't need most of what's even in minimal, never mind anything else
Step 3- network config
  • Rather than installing DHCP tools on the host node, I just plugged them in and booted each sopine up sequentially, assigned them the hostnames sopine[0..4], and on my router gave them the IP addresses nnn.nnn.nnn.200 through 204 (where nnn.nnn.nnn is whatever subnet your router uses, e.g. 192.168.1.200 or 10.10.10.200)
  • The book assumes the cluster will be isolated and hidden from the world, so there are some extra network forwarding steps and configs I completely ignored, since the clusterboard has its own switch directly connected to the 'net and all nodes can see the world. Again, this is *not* a production-ready setup, where you'd likely have one exposed node and all your worker nodes hidden/protected in a VLAN or similar. (Have I mentioned this is *not* a production-ready setup?)
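  • (Not from the book, just a convenience, and the names/addresses are assumptions based on my setup above: if you want the nodes to resolve each other by name without any DNS tricks, a minimal /etc/hosts sketch on every node could look like this, adjusted to your subnet.)
    Code:
    192.168.1.200 sopine0
    192.168.1.201 sopine1
    192.168.1.202 sopine2
    192.168.1.203 sopine3
    192.168.1.204 sopine4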
Step 4- Install Kubernetes stuff
  • (optional) install cssh(X) - Cluster SSH lets you blow up numerous systems at once by ssh'ing into all of them simultaneously and running the same commands everywhere. Since I haven't bothered (yet) with config management to set up the sopines, this is a more efficient flavour of #BadDevOps than ssh'ing into each of them individually. (Thank you Xalius for introducing this to me!) (csshX is the OSX brew version)
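    (A rough example invocation, assuming the sopine[0..4] hostnames resolve and you log in as root; swap in whatever user you actually use, and use csshX instead on a Mac.)
    Code:
    cssh root@sopine0 root@sopine1 root@sopine2 root@sopine3 root@sopine4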
  • install docker.io - this is the default container engine
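    (For reference, that's just the stock Ubuntu package; something like this, run as root, should do it.)
    Code:
    apt install -y docker.io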
  • install kubernetes package encryption key:
    Code:
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  • Add the Kubernetes repo to your apt sources list
    Code:
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
  • Update and upgrade the nodes
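    (As with the other apt commands here, this assumes you're running as root.)
    Code:
    apt update && apt upgrade -y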
  • install actual kubernetes stuff
    Code:
    apt install -y kubelet kubeadm kubectl kubernetes-cni
  • this installed version 1.9 on my sopines
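  • (optional) to double-check what actually landed on each node, something like this should work:
    Code:
    kubeadm version
    kubectl version --client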
Step 5- Set up the cluster!
  • On the master node (sopine0 in my case) 
    Code:
    kubeadm init --pod-network-cidr 192.168.1.0/24 --apiserver-advertise-address 192.168.1.200
  • Make sure to use the correct network range, as you may not be using the same subnet as I am. This will take a while. It will also generate the <token> you need to initialise the slave nodes, and will actually print out the command for the slave nodes to join the cluster
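  • kubeadm init also prints a few commands for pointing kubectl at the new cluster; on the master, for a non-root user, it's roughly this (copied from what kubeadm 1.9 prints, so treat it as a sketch and prefer whatever your own init output says)
    Code:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config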
  • on the slave nodes (sopine[1..n])
    Code:
    kubeadm join --token=<token> 192.168.1.200:6443 --discovery-token-ca-cert-hash sha256:<hash>
  • if it all played nice, then to see your embryonic Kubernetes cluster's details
    Code:
    kubectl get nodes
That's it for now; there are more things to do of course, but that's what it takes to get started with a (non-production-ready) Kubernetes cluster on a SoPine clusterboard.
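(One extra sanity check I'd suggest, not from the book: the system pods give a quicker picture of cluster health than the node list, and the nodes may well report NotReady until a pod network add-on is installed, which is one of those "more things to do".)

Code:
kubectl get pods --all-namespaces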


RE: Kubernetes on SoPine cluster part.1 - jbk - 12-31-2017

Cool, I've been wanting to try this for a while - thanks for the write-up :)


RE: Kubernetes on SoPine cluster part.1 - hexalyn - 04-18-2018

I encountered an error with the last two Kubernetes versions (v1.10.0 and v1.10.1). I was only able to make it work with v1.9.6.

So if anyone has the same issue, you can replace @maya.b's Kubernetes install command with:

Code:
apt-get -y install kubectl=1.9.6-00 kubelet=1.9.6-00 kubernetes-cni=0.6.0-00 kubeadm=1.9.6-00

I had no issues with anything else :)
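(An untested suggestion: if you pin versions like this, you may also want to hold the packages so a routine apt upgrade doesn't pull in the broken 1.10.x later.)

Code:
apt-mark hold kubelet kubeadm kubectl kubernetes-cni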


RE: Kubernetes on SoPine cluster part.1 - PigLover - 04-18-2018

Yes - kubeadm currently has a blocking issue with Kubernetes releases above 1.9.8 (including 1.10.x). Note that this affects ALL bare-metal installs of Kubernetes, not just Pine boards.

The issue is with the installer (kubeadm) and not Kubernetes itself. You should be able to install/launch your cluster and then upgrade to 1.10.x using “apt upgrade” on each node in the cluster (suggest upgrading the master node last).
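(A minimal sketch of that per-node upgrade, assuming the kube packages aren't held back; run it on each node in turn, master last.)

Code:
apt update && apt upgrade -y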