Clusterboard Guide
I'm a CS student at the University of Arizona.  I have a $500 budget for educational supplies to kill this semester and decided I wanted to take a dive into cluster computing.  After careful research and deliberation, I decided that the Clusterboard would be the best balance of power, price, and desk space.  That research was hindered by the fact that I could not find a good step-by-step guide to setting up a Clusterboard, so I've decided to make my own.  I have a reasonable amount of experience with hardware, software, and networking, so (with plenty of chat support) I was able to figure this out.  I'm still in the process of updating this post, but the machine is up and running!

Parts List

Compute Modules - (7)

---Optional extras---

64GB eMMC (for NAS/high speed disk space)

USB eMMC interface (for debugging)

SOPINE baseboard (backup interface for testing compute modules)

USB TTY (allows terminal access through GPIO for debugging)


16GB SD Card 10 pack - (3 left over)

80mm fans - (2 x exhaust)

Fan power adapter - (1 for the 200mm fan that comes with the case, 2 for the exhaust fans in back)

Case - (This case is oversized but extra space is used for air flow)

Heatsinks 8 pack - (3 packs; each compute module should get a heatsink on the CPU, memory, and power management unit.  21 total)
These pads can replace the missing adhesive

Power Supply - (12V 12.5A 150W.  Provides more power than the one from the Pine store, enough to run the fans.)

160W power header - (splits power to the board and fans)

- continuous switch from my local electrical supply store
- clear acrylic to block the extra vents and create a wind tunnel.


There are lots of ways to power the board.  The Pine Store sells a power supply, but you'll have to modify your board to power fans.  You can use the ATX port on the board, but you'll need to solder some pretty large resistors.  If you don't make the special ATX mods, you'll need to use a continuous switch to keep it on.
Gateway/Management Computer

The internal networking switch on the Clusterboard does not provide DHCP.  As an alternative to running it on my router, I've decided to build a gateway out of a first edition Pine64 from the Kickstarter.  It acts as a DHCP server, a router allowing the cluster to access the internet via wifi, a terminal for accessing the cluster, and a time/file/DNS/etc. server.  While this is certainly something I recommend doing, how you implement it will depend on what kind of device you use.  In this guide I'll refer to things you need to do, but I'll only provide specific steps when dealing with the Clusterboard itself.
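For reference, here's the rough shape a gateway's DHCP/DNS setup could take with dnsmasq.  This is a hypothetical sketch, not my exact config; the interface name, address range, and MAC address are all placeholders:

```
# /etc/dnsmasq.conf (sketch) - eth0 faces the clusterboard's switch
interface=eth0
# hand out dynamic leases in this range
dhcp-range=192.168.1.50,192.168.1.100,12h
# example static reservation for one compute module (placeholder MAC)
dhcp-host=aa:bb:cc:dd:ee:01,192.168.1.10,cluster0
# basic DNS hygiene
domain-needed
bogus-priv
```

Whatever DHCP server you use, the important parts are the same: a dynamic range for first contact and static reservations for the modules.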

Ideally, the SOPine LTS kit would make the best gateway.  However, its network port is only 10/100, and gigabit is what you need.  Most newer boards have this; just make sure to research the specs before you buy.  If you want to go all out, the ROCKPro64 has a NAS setup that would work nicely.  With any luck, the same Linux distro on any Cortex-A53 should be able to compile programs and act as part of the cluster, but no promises until I can do some more testing.

I've added a lot of extra case space and hardware for cooling.  I've been told this is overkill but if you want experience building clusters, you should probably know that cooling takes up a good part of the budget.
Bench Testing

Before we fully assemble the machine, we're going to do all of the initial setup.  Once we know everything runs, we can put it together and put it to work.
Prepare the SD cards

The best image (as of this post) is Armbian because of the continuous support.  However, I had to use the 3.10 xenial image (found under "other options") instead of Stretch (Debian) or Bionic (Ubuntu).

The Armbian site recommends this software for writing the images to the SD cards.  Feel free to use whatever software you like.  I don't think it matters.
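If you'd rather do it from the command line, the write step boils down to this.  The helper below is a sketch (the image filename is an example, and the target device is up to you); double-check the device name, because dd will happily overwrite the wrong disk:

```shell
# Sketch: decompress a .xz image and write it to a target device.
# Usage (destructive!):  write_image Armbian_xenial.img.xz /dev/sdX
write_image() {
    xz -dc "$1" | dd of="$2" bs=4M conv=fsync
}
```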
For each compute module

(Notice the ATX-style power switch connection just below the ATX power connector.  Helps when you want to turn it on!)
[Image: 1532y4x.jpg]

You need to plug each module in one at a time (DO NOT HOT SWAP) and do the following steps:

-Assign it a static IP address from your DHCP server
--find it when it pulls an IP address
--use the MAC address to make the assignment from the DHCP server configuration
--be sure to give it an address outside of the automatic range (i.e. give them *.10-16 if your DHCP server hands out *.50-100)
--make a list of the static assignments to help later
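That list of static assignments is worth keeping in a file.  A quick sketch (the 192.168.1.1x addresses are examples; substitute whatever your DHCP server actually reserves):

```shell
# Generate a list of the static assignments, one "address hostname" per line.
for i in 0 1 2 3 4 5 6; do
    printf '192.168.1.1%d\tcluster%d\n' "$i" "$i"
done > cluster_hosts
```

You can paste these lines straight into /etc/hosts on the gateway so the nodes resolve by name.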

-Login (ssh client from your gateway or management computer) with root access (if using Armbian: username root, password 1234)
--complete any first time setup
--this may include generating a user account (you'll probably want to make it the same username on all of them)
--use "hostnamectl set-hostname [myhostname]" to give each a unique hostname (i.e. cluster[0-6])
--be sure to shut down before turning the power off, and test each module in the slot you plan to use it in.

Once each module has been tested, we can move on to testing the cluster as a whole.
csftp and cssh

These next steps will require special software on the computer you're using to access the cluster.  On Linux, the clusterssh package offers an easy way to connect to all clients at once.  See your distribution's pacman or apt-get information for details on how to install it if it isn't already there.

Alternate Windows software:

I don't know of an FTP option for Windows at this point.  However, if you set up RSA keys, FileZilla will at least make it easy on you.
RSA keys

The next task involves generating RSA keys so the gateway and compute modules can talk to each other without login authorization.  Linux users can use "ssh-keygen -t rsa".  With default options, that should create two files in "~/.ssh/" named "id_rsa" and "id_rsa.pub".  If you're already using RSA keys, you can direct these files to be generated elsewhere.  Copy "id_rsa.pub" to a file named "authorized_keys".  Then you can SFTP those files into the "~/.ssh/" folder of the account you would like to use on all the compute modules.  Personally, "sudo" has never saved me from doing anything stupid, so I like to live dangerously and use root.
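The whole key dance looks like this (the "keys" directory is just an example staging spot; point -f wherever you like):

```shell
# Generate a passphrase-less RSA keypair into a staging directory.
mkdir -p keys
ssh-keygen -t rsa -N "" -q -f keys/id_rsa    # creates keys/id_rsa and keys/id_rsa.pub
# The public half becomes authorized_keys on every node.
cp keys/id_rsa.pub keys/authorized_keys
# Then SFTP id_rsa, id_rsa.pub, and authorized_keys into ~/.ssh/ on each module.
```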

Windows users can generate the keys while logged in to the first node in the cluster, or generate "id_rsa" (private) and "authorized_keys" (public) using an online tool like this:

You'll need to configure your ssh/sftp client software to use the keys.
cssh/csftp clusters file

Users of cssh can create a file "/etc/clusters" containing a text label followed by the list of node hostnames, which lets you tap in to all the compute modules at once just by typing "cssh [label]".
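Mine looks roughly like this (the "pine64" label is just what I picked; use whatever hostnames you assigned earlier):

```
# /etc/clusters - a label followed by the hosts it expands to
pine64 cluster0 cluster1 cluster2 cluster3 cluster4 cluster5 cluster6
```

After that, "cssh pine64" opens a terminal on every node and mirrors your keystrokes to all of them.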
[Image: 2nqzn9v.jpg]

I'm sure other clients have a way of doing this.  For now, just make sure you can with whatever software you're using.  Now that we know everything works, time to box it up.
Disassembling the case

[Image: 2dvu3j7.jpg]

All the sides and the top are only connected by obvious screws.  I also took out and set aside the HDD mounting brackets.  The front can be pulled off gently.  I had to take off the big fan and take apart the mounting plate for the ATX switch to put in the continuous switch.
Adhesive Thermal Pad/Heatsinks

[Image: zmdsat.jpg]

Because the heatsinks didn't come with any adhesive the way they're supposed to, I got up close and personal with the X-Acto knife.  The goal is to have a heatsink on all 3 of the major parts (CPU, memory, and power management).

[Image: 15pjymb.jpg]

Steer clear of the white border or you'll have trouble installing the compute modules.

[Image: 2r6mn2r.jpg]

This is also the point of no return for GPIO headers.  Once the heatsinks are on, they're hard to get at.  Make sure everything is running so you don't need to get at them with the serial interface.

[Image: 1z5k6yc.jpg]
[Image: 2ugcil4.jpg]

Once they're all set up, sandwich the chips between some bubble wrap and put a softcover book on top for a couple hours / overnight to let the adhesive set.
Motherboard mounting (or how I learned to stop worrying and love zip ties)

[Image: 2jg3vwk.jpg]

I can't say enough bad things about the mounting holes on this board.  The placement complies with Mini-ITX, but their size is just a tad smaller than the holes on the case.  Not to mention that one is right up against the ATX molex.  The case came with rubber feet and zip ties, so I improvised.  Afterwards, I installed the case fans and routed power.

I put some clear plastic over most of the vents to create a wind tunnel.  I took a picture but it wasn't very clear.
Setting up servers

From here, what software you set up will depend greatly on what task you plan to do with it.  The good news is that from here on out, every other guide for setup should be valid.  You have 7 quad-core ARM64 processors running a basic setup of Linux that will allow you to do anything any other cluster would do (assuming they don't require a different processor architecture).  As a CS student who will be doing parallel programming in the near future, I wanted to set up Open MPI.  I wound up setting up my gateway as a time server and a file server.  I was able to use apt-get to update software on the compute modules and install packages needed to access server functionality.  As long as you can figure out how to apt-get or compile the software you want to run, you should be good to go.
Open MPI

I used the steps in this guide to get the basics of Open MPI going:

But I made my own version of the "hello world" program:

This is what it looks like when it runs:
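Since my version isn't reproduced here, a minimal stand-in shows the idea.  This is not my original program, just the classic MPI hello world; it assumes the Open MPI packages are installed and that "hostfile" lists your node hostnames:

```shell
# Write a minimal MPI hello-world (illustrative stand-in, not the original).
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
    MPI_Get_processor_name(host, &len);     /* which node we're on */
    printf("Hello from rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
EOF
# Compile and run across the cluster (7 modules x 4 cores = 28 ranks):
#   mpicc hello_mpi.c -o hello_mpi
#   mpirun -np 28 --hostfile hostfile ./hello_mpi
```

Each rank prints its ID and the hostname it landed on, which is an easy way to confirm jobs are actually spreading across the modules.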

Final thoughts

The board itself has some hardware quirks, but the processing power is real.  Setting up a server/gateway solves lots of problems and can offload what would normally be cluster0 tasks.  This build is more challenging on the software side than it is on the hardware side.  I wouldn't recommend it for novice Linux users / network admins.  That being said, it's not overly difficult if you can manage to work your way around Linux.  If you want to build a small ARM cluster capable of running serious tasks, this is definitely the way to go.  The board itself consolidates what would be a massive amount of space as well as preventing wiring headaches.
Thank you for reading my post.  If you have any input or suggestions (including information that needs to be updated), please leave a comment to let me know.

Messages In This Thread
Clusterboard Guide - by AZClusterboard - 01-20-2019, 09:48 PM