PINE64

Full Version: Rock64 NAS project
(03-17-2018, 10:53 AM)pedroz Wrote: [ -> ]
(03-07-2018, 04:24 PM)Leapo Wrote: [ -> ](..) between 90 and 100 MB/s while running backups over the network (..)
Hey Leapo, could you tell which image and kernel are you using? And what is the network protocol you are using for network file transfer?
With ayufan's jessie-omv 0.5.15 (kernel 4.4.77-rockchip-ayufan-136) I'm getting good I/O rates in the iozone benchmark for the HDD, but I'm stuck at ~40-45 MB/s with SMB, AFP, or even WebDAV, while with FTP I'm getting ~90 MB/s, which is strange.
Iperf for eth0 shows stable ~920 Mbit/s.
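For reference, a quick sanity check on what that iperf figure implies for file transfers; the numbers are from this thread, and the arithmetic is just bits to bytes:

```shell
# Convert the measured iperf line rate (920 Mbit/s) into MB/s to see
# the practical ceiling for any file transfer on this link.
line_rate_mbit=920
max_mb_per_s=$(( line_rate_mbit / 8 ))  # 8 bits per byte
echo "theoretical max: ${max_mb_per_s} MB/s"
```

So FTP at ~90 MB/s is reasonably close to line rate, while SMB at ~40-45 MB/s leaves more than half the link idle, which points at protocol or CPU overhead rather than the network itself.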

What OS are you using on your PC(s) ?
(03-17-2018, 05:41 PM)Luke Wrote: [ -> ]What OS are you using on your PC(s) ?
I've tested on two OSes: first, a MacBook Pro with macOS High Sierra; second, the same hardware booted from a Manjaro Linux live USB. In both cases I'm connected over gigabit Ethernet. On the second setup I'm getting ~50-60 MB/s read/write with SMB. With FTP I'm still able to get ~100-110 MB/s read and ~90 MB/s write (which is also the I/O limit of the HDD connected to my Rock64).
I was watching CPU usage while using SMB and it sits at ~50% of one core, so I don't think it's a CPU bottleneck.
(03-18-2018, 02:52 AM)pedroz Wrote: [ -> ]
(03-17-2018, 05:41 PM)Luke Wrote: [ -> ]What OS are you using on your PC(s) ?
I've tested on two OSes: first, a MacBook Pro with macOS High Sierra; second, the same hardware booted from a Manjaro Linux live USB. In both cases I'm connected over gigabit Ethernet. On the second setup I'm getting ~50-60 MB/s read/write with SMB. With FTP I'm still able to get ~100-110 MB/s read and ~90 MB/s write (which is also the I/O limit of the HDD connected to my Rock64).
I was watching CPU usage while using SMB and it sits at ~50% of one core, so I don't think it's a CPU bottleneck.

Ok, the reason I ask is that I get terrible performance (~10 MB/s) when mounting the SMB share through Linux Mint's Cinnamon DE.
Mounting the share from the CLI, however, yields the sort of performance one would expect (80-90 MB/s) from the attached drive and the network.
I ended up adding it to fstab using the values below, and it works great:

Code:
//192.168.1.72/Main /media/OMV cifs username=lukasz,password=xxxx,rw,uid=1000,iocharset=utf8,sec=ntlm  0  0
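A small hardening note: the same mount works with the password kept out of fstab by pointing `credentials=` at a root-owned file (the path `/etc/cifs-credentials` below is illustrative, not from this thread):

```
//192.168.1.72/Main /media/OMV cifs credentials=/etc/cifs-credentials,rw,uid=1000,iocharset=utf8,sec=ntlm  0  0
```

where `/etc/cifs-credentials` (chmod 600) contains:

```
username=lukasz
password=xxxx
```

This keeps the password out of the world-readable `/etc/fstab`.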
Thanks for the hint, I may try to benchmark mount.cifs and see the results.
(03-07-2018, 04:24 PM)Leapo Wrote: [ -> ]
(02-28-2018, 07:40 PM)yours_david Wrote: [ -> ]Have you tried with massive HDD copying tasks on your setup? How's its stability? Thanks.

Speed and stability have been great. I currently have two 2TB hard disks connected to the controller, with the controller performing hardware RAID 1 (mirroring), and have it connected to my Rock64 (running Open Media Vault) via USB 3.0. I'm sharing an NTFS formatted volume using SAMBA.

I use it as a target for daily backups of my homelab, with backups of an ESXi host taken by Veeam Backup & Replication. I usually see transfer speeds between 90 and 100 MB/s while running backups over the network.

I know I might get slightly better performance if I were to reformat the disks as EXT4, but I wanted to be able to plug the controller + disks into a Windows machine if need-be.

Thanks for your reply, Leapo. I have received mine. With the Rock64 connected to a two-bay HDD dock, I get about 100 MB/s total read bandwidth in a Windows large-file copy. Copying between the two HDDs with rsync, the speed drops to about 50 MB/s, so the overall bottleneck seems to be USB 3.0. Also, on a few occasions my HDDs dropped offline, so stability may need to improve a little.
(03-17-2018, 10:53 AM)pedroz Wrote: [ -> ]
(03-07-2018, 04:24 PM)Leapo Wrote: [ -> ](..) between 90 and 100 MB/s while running backups over the network (..)
Hey Leapo, could you tell which image and kernel are you using? And what is the network protocol you are using for network file transfer?
With ayufan's jessie-omv 0.5.15 (kernel 4.4.77-rockchip-ayufan-136) I'm getting good I/O rates in the iozone benchmark for the HDD, but I'm stuck at ~40-45 MB/s with SMB, AFP, or even WebDAV, while with FTP I'm getting ~90 MB/s, which is strange.
Iperf for eth0 shows stable ~920 Mbit/s.

I'm using the "jessie-openmediavault-rock64-0.5.15-136-armhf" image from here with the kernel it comes with.

Like I said in my previous post, network transfers are being done over SMB.

I didn't really do anything special to get these speeds, it just kinda worked out of the box... Connected my storage device via USB 3.0, mounted the pre-existing NTFS volume and created shares using the OpenMediaVault WebUI, good to go. Still 100% reliable, handling gigabytes of backups from Veeam, every night.
Thanks Leapo for your answer.
At the moment I think I'll stay with stretch-openmediavault 0.6.25 on kernel 4.4.114, which seems to me to offer the best balance of stability and performance (in terms of software, I/O performance, crypto feature support, where performance is way better than with jessie, and CPU thermal throttling).
What do guys think about this: http://www.hotway.com.tw/portfolio/portf...h82-su3s2/

It's just an enclosure that supports multiple bays, with no RAID functionality. I'm thinking of using it with OMV and software RAID.
(02-18-2018, 03:48 PM)Leapo Wrote: [ -> ]I'm using one of these with my Rock64 to turn it into a NAS, it's a 5-port SATA to USB 3.0 bridge with hardware RAID:
https://www.amazon.com/Oodelay-eSATA-Por...B00PZ7347E

There's also a (much cheaper) 2-port version, if all you need is two disks:
https://www.amazon.com/Oodelay-eSATA-6Gb...B00T22JUT4

In hardware RAID mode, the Rock64 just sees one disk, and operates without any further configuration. If you switch to JBOD mode, the Rock64 sees all disks individually, allowing for the use of software RAID.
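For anyone taking the JBOD route mentioned above, a sketch of what software RAID might look like with mdadm. The device names /dev/sda and /dev/sdb are assumptions, and this needs root and real disks, so treat it as an outline rather than a recipe:

```shell
# Assumes the bridge is in JBOD mode, so both disks appear individually.
# Device names are illustrative -- check lsblk before running anything.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0                                 # format the mirror
mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # persist the array config
update-initramfs -u                                # assemble at boot (Debian)
```

The trade-off versus the bridge's hardware RAID is that mdadm arrays are portable to any Linux box, at the cost of a little CPU on the Rock64.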

Looks cool. Do you know of any external two-bay drive enclosure this can fit in?
I am successfully running 16 drives on my 4GB Rock64: two 8-bay USB 3.0/eSATA enclosures (in USB 3.0 mode) with ZFS on Bionic. Two 12TB RAIDZ2 arrays that I have been using since the early days of ZFS: never reformatted, replaced many dead drives with no data loss, and mounted the same arrays on almost every OS possible, including within VMs with both PCI/USB passthrough and direct access. Extremely happy with it all, and far fewer issues than my previous (~2005) mdraid/XFS setup.
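For anyone curious what a RAIDZ2 layout like that involves, here is a sketch using file-backed vdevs for safe experimentation. The pool name and paths are made up; a real pool would be built on whole disks, and zpool needs root:

```shell
# Create six sparse files to stand in for disks (experimentation only).
for i in 1 2 3 4 5 6; do truncate -s 256M /tmp/vdev$i; done
# RAIDZ2: any two of the six "disks" can fail without data loss,
# which matches the many-dead-drives-no-data-loss experience above.
zpool create testpool raidz2 /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 \
                             /tmp/vdev4 /tmp/vdev5 /tmp/vdev6
zpool status testpool
```

Destroy a test pool like this with `zpool destroy testpool` before deleting the backing files.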

2x Mediasonic H82-SU3S2, purchased in 2011/2013 - https://www.amazon.com/gp/product/B005GYDMYG
- These are nice because they also have a JMicron eSATA interface, compatible with port multipliers (PMP), so a single eSATA cable serves all 8 drives.

Uspeed Superspeed USB 3.0 HUB 4 port - also from around 2011, not available anymore but it looks like this:
http://mojomojo.co/wp-content/uploads/20...G_3087.jpg

I am using my Rock64 as a NAS and full-time work/dev PC (12-16 hours/day: VS Code, SQL admin, etc.) and have not booted up any x86 devices in a couple of weeks.

Zero stability issues other than Chromium locking up the machine at times (Firefox 60 is more responsive and video seems to work better anyway). I'm using ZFS-DKMS 0.7.5 from the Ubuntu Bionic repo (with a slight build fix).

On the Rock64 I'm seeing around 40-50 MB/s transfer speeds over Ethernet to a Windows desktop with Samba. I can get around 180-280 MB/s in optimal conditions with eSATA and the old 2TB drives I'm using. We'll see what the Rock960 Pro has in store, but this is fantastic and plenty fast/stable as a starting point for my needs.