PINE64
Rock64 Cluster. - Printable Version

+- PINE64 (https://forum.pine64.org)
+-- Forum: ROCK64 (https://forum.pine64.org/forumdisplay.php?fid=85)
+--- Forum: General Discussion on ROCK64 (https://forum.pine64.org/forumdisplay.php?fid=86)
+--- Thread: Rock64 Cluster. (/showthread.php?tid=4927)

Pages: 1 2


RE: Rock64 Cluster. - digitaldaz - 09-13-2017

One thing to consider too is your storage. For example, I'm also planning a 5-node cluster and I'll be giving each node an SSD. If each is a 5W SSD at 5V, that's 1A per drive, so that's 5A just for the storage.
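To make the arithmetic explicit, a quick back-of-the-envelope sketch (the wattage and rail voltage are assumed figures, not measurements):

    # rough cluster power-budget sketch; all figures are assumed examples
    NODES = 5
    SSD_WATTS = 5.0    # assumed worst-case draw per SSD
    RAIL_VOLTS = 5.0   # drives fed from the 5V rail

    amps_per_ssd = SSD_WATTS / RAIL_VOLTS  # I = P / V
    total_amps = NODES * amps_per_ssd
    print(f"{amps_per_ssd:.1f}A per SSD, {total_amps:.1f}A for storage alone")
    # -> 1.0A per SSD, 5.0A for storage alone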


RE: Rock64 Cluster. - stuartiannaylor - 09-15-2017

Are any of you guys going to cluster your storage as well?

Been interested in https://lizardfs.com/, http://ceph.com/ & http://www.xtreemfs.org/


RE: Rock64 Cluster. - digitaldaz - 09-16-2017

I would say Ceph may well be viable. I already have some 100Mb USB-to-Ethernet adaptors, so I would probably use those for general traffic and keep the onboard gigabit for the Ceph. I certainly run small Proxmox clusters for small solutions with a single gigabit interface for the Ceph.
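For anyone wanting to try the same split, a minimal ceph.conf sketch of the idea; the subnets here are made-up examples, not my actual layout:

    [global]
    # client traffic over the 100Mb USB-to-Ethernet adaptors (assumed subnet)
    public network  = 192.168.1.0/24
    # replication and heartbeat traffic over the onboard gigabit (assumed subnet)
    cluster network = 10.10.0.0/24

The OSDs then replicate between themselves on the cluster network while clients talk to the public network, so the slow adaptors never carry the heavy traffic.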


RE: Rock64 Cluster. - pitsch - 09-23-2017

Clustering HDDs or SSDs depends on the use case, and what kind of existing storage infrastructure needs to be integrated or migrated.

On my storage side: 2 x 8TB HDDs, one 4TB HDD, and a 128GB Samsung EVO SSD. The SSD is for smaller files, databases and the system, and I'm aiming to network boot via SPI NOR flash. The main storage mechanism should be able to cache between the three elements to optimize read/write IO; I'm evaluating OMV and various setups and add-ons. In a cluster setup I'd rather have one dedicated storage node and one dedicated master/gateway node, keeping 3 (or more) nodes for distributed computing, Kubernetes, Docker Swarm etc.

Of course it's great to have SSDs (especially for local database applications), but the sweet spot might rather be giving each node a smaller 1TB external 2.5" HDD, selecting one with low idle power. The gigabit connection would cap the added throughput of SSDs, while with clustered HDDs it will be harder to saturate the network bandwidth.
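To put rough numbers on that saturation point, a small sketch (the per-drive throughput figures are assumptions for typical drives, not benchmarks):

    # rough sketch: gigabit caps an SSD, a single HDD barely fills the link
    GIGABIT_MBS = 1000 / 8 * 0.95  # ~119 MB/s usable after protocol overhead

    drives = {"1TB 2.5in HDD": 100, "SATA SSD": 500}  # assumed sequential MB/s
    for name, local_mbs in drives.items():
        over_wire = min(local_mbs, GIGABIT_MBS)
        print(f"{name}: {local_mbs} MB/s local, ~{over_wire:.0f} MB/s over gigabit")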

interesting distributed storage solutions:
https://tahoe-lafs.org/trac/tahoe-lafs
https://ipfs.io/


RE: Rock64 Cluster. - stuartiannaylor - 09-23-2017

(09-23-2017, 11:45 AM)pitsch Wrote: Clustering HDDs or SSDs depends on the use case, and what kind of existing storage infrastructure needs to be integrated or migrated. [...]

I have been doing a lot of research on this and was on the Ceph IRC channel, and they actually suggested LizardFS rather than Ceph.
It's in most mainline distros already, and it's easier to set up and admin.

Have a look at https://lizardfs.com/