LAN/NIC problem
I'd say not, tkaiser... I just booted up and updated my Armbian... it's now "Armbian 5.20 Pine64 Debian jessie default", and the temperature readout is still a nice cool 25C Undecided Plus there are no commits to the pertinent file. Looks like the fix didn't make this release. ;-)
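For anyone else wanting to check what their board reports, the raw value can be read straight from sysfs (the thermal zone index below is an assumption -- list the directory first if your kernel exposes things differently):

    # show which thermal zones the kernel exposes, then read the first one
    ls /sys/class/thermal/
    cat /sys/class/thermal/thermal_zone0/temp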

So longsleep's Ubuntu image is the one that offers the best out-of-the-box experience/configuration. Nice stats on the H3... sounds promising for the Pine64 once things are fixed up.

(09-15-2016, 04:36 PM)tkaiser Wrote:
(09-15-2016, 03:46 PM)amc2012 Wrote: For anyone using Armbian, there appears to be a huge update today, including some pine64 specific modules. Wonder if any of it affects some of the things being discussed here?

Nope, why should it?

The problems discussed here are a hardware problem and a 'community' problem. Everything any of the Armbian team members did for/with Pine64+ in the past went back to the linux-sunxi community immediately and has been incorporated by longsleep in his source trees (whether this has been picked up by the various distro bakers is unfortunately another question).

The huge Armbian update to 5.20 affects many of the boards we support (especially the Allwinner based ones; Pine64 explicitly not, since so much moronic 'the Mali' hype has been generated here). I fear even the little bug concerning wrong temperature readouts has not been fixed (can't test right now since no boards are available).

Regarding Pine64/Pine64+ nothing has changed for months. We still use the settings that work best.
(09-15-2016, 07:18 PM)pfeerick Wrote: So longsleep's Ubuntu image is the one that offers the best out-of-the-box experience/configuration.

I wouldn't agree. Sure, the temperature fix didn't make it into the release yet; I didn't want to poke Igor now (please keep in mind that we support 40+ SBCs and that there are important and less important things to be done before such a major release), but this will come in the next days (just do the usual apt-get update/upgrade, everything in Armbian will be fixed that way, no scripts needed).

In Armbian we have a few more tweaks that might improve NAS performance: the so-called IRQ affinity (we send the IRQs for the USB ports to dedicated CPU cores, which helps in situations where high IO and network activity happen at the same time) and the so-called IO scheduler (we treat HDDs and flash/SSDs differently since... they are different; just look through /etc/init.d/armhwinfo). We also do a lot of logging and provide a mechanism to help us help you ('sudo armbianmonitor -u' will upload extensive debug info).
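To give an idea what these tweaks look like (this is just a simplified sketch of the concept, not the actual armhwinfo code; the IRQ number and device names are assumptions, check /proc/interrupts and /sys/block/ on your own board):

    # pin the USB host controller's IRQ to CPU1 so it doesn't fight with
    # network IRQs on CPU0 (the IRQ number 114 is only an example)
    echo 2 > /proc/irq/114/smp_affinity
    # use different IO schedulers for flash media and rotating disks
    echo noop     > /sys/block/mmcblk0/queue/scheduler
    echo deadline > /sys/block/sda/queue/scheduler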

But if you now compare longsleep's original Ubuntu image with your Armbian Jessie installation, longsleep's will show better performance as long as you rely on iperf3. Why? Because iperf/iperf3 are CPU bound and, in one mode, run multi-threaded on Xenial but single-threaded on Jessie, so Jessie will show lower numbers.

The whole iperf/iperf3 game played here does not take into account what iperf/iperf3 are primarily measuring: CPU performance. If you want to test the network instead, you need to ensure that the other host is beefy enough to always exceed 935 Mbits/sec; if this is not the case, you always get random results.

So testing between two Pine64s with unknown settings just produces numbers without meaning. If you test against a PC (it should be a PC since many Windows laptops, for example, throttle their network performance when running on battery), then your local settings on the Pine64 matter.
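To make that concrete (the address is obviously just a placeholder), run 'iperf3 -s' on the PC and then test both directions from the Pine64 while watching htop there; if one core sits at 100% it's the CPU and not the network that is the limit:

    iperf3 -c 192.168.1.100 -t 30        # Pine64 -> PC
    iperf3 -c 192.168.1.100 -t 30 -R     # PC -> Pine64 (reverse mode)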

Our default cpufreq settings for A64 allow the CPU cores to jump between 480 and 1152 MHz. When they remain at 480 MHz (that's the 'ondemand' problem, since this governor doesn't realize that network activity should also mean increasing clockspeed!) you get pretty low scores, since the CPU becomes the bottleneck. I tested exactly this for the audience here already, see post #2: http://forum.armbian.com/index.php/topic...s/?p=14673
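Checking whether you are affected is trivial (standard cpufreq sysfs paths; if they are missing your kernel was built without cpufreq support):

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq   # in kHz, so 480000 = 480 MHz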

So when Debian Jessie from pine64.pro shows just 410 / 595 Mbits/sec against a PC (the direction matters, you always have to test both!), you can imagine what you get when you let two Pine64s remaining at 480 MHz run against each other: something in the 300 Mbits/sec range, while your network connection is working perfectly and would allow 940 Mbits/sec. Because iperf/iperf3 are CPU bound! When one of the two Pine64s is set to 1152 MHz it still matters which distro is used, since iperf/iperf3 are still CPU bound and, depending on the distro, act sometimes multi-threaded and sometimes not (see the link above; I tested and explained this already in detail).

And I already tried to explain all of this weeks ago when we started 'exploring' this fun stuff: http://forum.pine64.org/showthread.php?t...7#pid18687

TL;DR: If you want to test network throughput of your individual Pine64 in benchmark mode, better switch to the performance governor and test against a device that is known to exceed 935 Mbits/sec. If a switch is in between it's necessary to know whether the switch allows exceeding 935 Mbits/sec or not (all my dumb GbE switches are fine; one of my two expensive managed ones sometimes shows weird behaviour). Testing with unknown cpufreq settings between two Pine64s is absolutely useless, since the numbers are just the result of CPU cores running at 480 MHz bottlenecking everything, and variation in results might be caused by anything. For real world scenarios switch to the interactive governor, since ondemand fails miserably at detecting when to increase clockspeeds for network and/or storage activity. The performance governor might show even better numbers, but then the CPU cores run at 1152 MHz all the time (not that much of a problem, since only consumption will increase slightly).
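For the benchmark-mode part of the TL;DR above, switching the governor is a one-liner (run as root; the loop is just a sketch -- on A64 all four cores share one cpufreq policy anyway):

    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"     # or 'interactive' for day-to-day use
    done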

If you want a deeper understanding of this stuff you need to monitor what's happening in parallel (at least using htop, armbianmonitor or the pine64_health.sh script; just read the last link above).
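In practice that just means two terminals (armbianmonitor's -m switch prints a monitoring line every few seconds; the pine64_health.sh path and the address are assumptions, adjust to your install):

    # terminal 1: watch clockspeed/temperature/load while the test runs
    sudo armbianmonitor -m      # or: watch -n1 sudo /usr/local/sbin/pine64_health.sh
    # terminal 2: the actual benchmark
    iperf3 -c 192.168.1.100 -t 60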
(09-15-2016, 11:49 PM)tkaiser Wrote:
(09-15-2016, 07:18 PM)pfeerick Wrote: So longsleep's Ubuntu image is the one that offers the best out-of-the-box experience/configuration.

<snip>

But if you now compare longsleep's original Ubuntu image with your Armbian Jessie installation, longsleep's will show better performance as long as you rely on iperf3. Why? Because iperf/iperf3 are CPU bound and, in one mode, run multi-threaded on Xenial but single-threaded on Jessie, so Jessie will show lower numbers.

The whole iperf/iperf3 game played here does not take into account what iperf/iperf3 are primarily measuring: CPU performance. If you want to test the network instead, you need to ensure that the other host is beefy enough to always exceed 935 Mbits/sec; if this is not the case, you always get random results.

<snip>

I can affirm the above also.  I'm using the T61 ThinkPad (AC power) as the iperf/iperf3 host. The Ubuntu image running on the good board easily gets double the speed, up into the low to mid 800s, while the Debian image on the same board rarely gets above the mid 400s. Also, the Ubuntu image gives less random numbers than the Debian image.  Tomorrow I'll play around with the 'performance' governor.
marcushh777    Cool

please join us for a chat @  irc.pine64.xyz:6667   or ssl  irc.pine64.xyz:6697

( I regret that I am not able to respond to personal messages;  let's meet on irc! )
(09-16-2016, 12:34 AM)MarkHaysHarris777 Wrote: The Ubuntu image running on the good board easily gets double the speed, up into the low to mid 800s, while the Debian image on the same board rarely gets above the mid 400s.

Well, IMHO it's kinda useless to talk about 'the' Ubuntu image since there are a few in the wild. Let's take Ubuntu/Xenial for example:
  • longsleep's original based on kernel 3.10.102
  • Something called 'Ubuntu Base Longsleep (3.10.65 BSP)' (available from pine64.pro -- if this is really still based on old, ugly and outdated 3.10.65 then that's just another reason to immediately shut the pine64.pro site down)
  • Something called 'Ubuntu Linux Image [20160530] based on Longsleep build, updated by Pine64' (no idea which kernel version and settings are used here)
  • Armbian / Xenial based on kernel 4.7
  • Armbian / Xenial based on kernel 3.10.102

The first and the last one should perform nearly identically (longsleep boots AFAIK with the performance governor set while we use interactive instead, so his image might win the boot race by 0.2 seconds, but then both images switch to interactive by default anyway Wink )

What the other images do in between is unknown (it depends on kernel version and settings). And performance might vary for absolutely unrelated reasons. As already said, when the 'ondemand' governor is used the kernel might not realize that network activity needs an increase in CPU clockspeed (in Allwinner BSP kernels you would have to set a few other tweaks to get reliable behaviour -- see the sketch below -- but since ondemand is broken anyway, simply switching to interactive is the better idea).
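For completeness, the kind of ondemand tweaks I mean look roughly like this (whether your BSP kernel exposes these tunables at all is an assumption, and the values are only illustrative -- which is exactly why switching to interactive is the simpler fix):

    # make ondemand treat IO wait as load and react faster to bursts
    echo 1  > /sys/devices/system/cpu/cpufreq/ondemand/io_is_busy
    echo 10 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor
    echo 25 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold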

That means someone with a distro relying on ondemand might get worse results from a headless image (no GUI running) than from one with a desktop environment: the desktop generates more background activity, the kernel increases clockspeeds, and the network activity added on top then leads to CPU cores running at 1152 MHz in the end. That's why I said it's necessary to always monitor in parallel (htop and longsleep's health script are enough!). And by looking at htop output it's pretty easy to understand why low network throughput seems to occur at low CPU clockspeeds: the iperf/iperf3 thread is maxing out one CPU core at 100%. With monitoring the problem becomes immediately visible (and the problem is the tool in question being CPU bound, not the network).

Another note: when I did some testing for longsleep back in April to improve DVI support I used, exactly one time, an Ubuntu image from the pine64 wiki. It was configured to run the default Ubuntu screensavers after 10 minutes of inactivity, so CPU activity increased from 2% to 400% 10 minutes after each reboot. Testing with iperf then led to a dramatic decrease in numbers, but since monitoring is mandatory it was quite easy to find the reason for this (an OS image thrown together without care -- those silly 3D screensavers should only run on systems where GPU acceleration is available, which is not the case for most SBCs).

Regarding your ThinkPad: it seems to be ok when you get above 800 Mbits/sec against it, but I was involved in an incident a few years ago where the customer's network staff used ThinkPads for iperf tests (they had poor performance through cascaded Cisco switches due to the wrong algorithm used for the EtherChannels and wrong QoS settings), and the one they used invalidated all results since it acted as a bottleneck. Only after exchanging it for something better could the network staff be convinced that their switch config was the problem (the 'database boy', as you called me, does network/storage performance optimization for a living).
(09-15-2016, 04:36 PM)tkaiser Wrote:
(09-15-2016, 03:46 PM)amc2012 Wrote: For anyone using Armbian, there appears to be a huge update today, including some pine64 specific modules. Wonder if any of it affects some of the things being discussed here?

Nope, why should it?

The problems discussed here are a hardware problem and a 'community' problem. Everything any of the Armbian team members did for/with Pine64+ in the past went back to the linux-sunxi community immediately and has been incorporated by longsleep in his source trees (whether this has been picked up by the various distro bakers is unfortunately another question).

The huge Armbian update to 5.20 affects many of the boards we support (especially the Allwinner based ones; Pine64 explicitly not, since so much moronic 'the Mali' hype has been generated here). I fear even the little bug concerning wrong temperature readouts has not been fixed (can't test right now since no boards are available).

Regarding Pine64/Pine64+ nothing has changed for months. We still use the settings that work best.

I wasn't so much referring to the GbE hardware issue that seems to be present, but more to the tuning issues for those who actually have working boards (as we've been discussing theoretical GbE limits and how different builds and settings can affect the real speeds people could get). But there's no point in discussing further since you've explained it here.
(09-16-2016, 08:29 AM)amc2012 Wrote: I wasn't so much referring to the GbE hardware issue that seems to be present, but more to the tuning issues for those who actually have working boards (as we've been discussing theoretical GbE limits and how different builds and settings can affect the real speeds people could get). But there's no point in discussing further since you've explained it here.

Just as information: @pfeerick did some tests today with Pine64+ and exceeded 80 MB/s after removing IO bottlenecks (just to show what's possible with A64's GbE implementation). I think he's currently preparing some stuff and will add it to this thread.

And on a related note: the mainline kernel (4.x) doesn't use the BSP kernel driver we're currently using with most OS images. Montjoie started to write a new driver from scratch (for H3 devices, but since the Ethernet implementation is almost the same as on A64, apritzel and our other linux-sunxi devs also rely on his driver).

Since I'm currently pineless and montjoie recently released v4 of his new Ethernet driver, I gave it a try on a GbE-equipped H3 device instead (the one I showed benchmark numbers for before, but with the BSP kernel). Looked too good to be true (starting with 940 Mbits/sec, but then the kernel panicked nicely and the board restarted): http://pastebin.com/84zHgLc5

That's the nice thing about these Allwinner SoC similarities: montjoie codes on H3, I test on H3, others on A64, code review happens on our mailing list, and when bugs are fixed by the linux-sunxi community the code will work on Pine64/Pine64+ too. And at least the performance numbers I got look promising (especially since I forgot to patch u-boot and the H3 was running at some weirdly low clockspeed).

In other words: It really looks promising for all those use cases where Pine64+ is a really nice device, benefitting from high GbE performance, virtualization and so on Smile
(09-16-2016, 09:48 AM)tkaiser Wrote:
(09-16-2016, 08:29 AM)amc2012 Wrote: I wasn't so much referring to the GbE hardware issue that seems to be present, but more to the tuning issues for those who actually have working boards (as we've been discussing theoretical GbE limits and how different builds and settings can affect the real speeds people could get). But there's no point in discussing further since you've explained it here.

Just as information: @pfeerick did some tests today with Pine64+ and exceeded 80 MB/s after removing IO bottlenecks (just to show what's possible with A64's GbE implementation). I think he's currently preparing some stuff and will add it to this thread.

--- snip ---

In other words: It really looks promising for all those use cases where Pine64+ is a really nice device, benefitting from high GbE performance, virtualization and so on Smile

Excellent. TL is giving one more try at getting me a good board, and I'm sending him the two bad ones I have now. I really hope the third time's the charm as I'd love to stick with the PINE family, especially since we know now we can get decent speeds over the GbE port.
(09-16-2016, 10:00 AM)amc2012 Wrote: TL is giving one more try at getting me a good board, and I'm sending him the two bad ones I have now. I really hope the third time's the charm as I'd love to stick with the PINE family, especially since we know now we can get decent speeds over the GbE port.

Well, that's the problem with 'micro communities': they tend to create 'micro realities' Wink In fact the GbE port is the A64's fastest interface, and the whole Linux software story started because GbE performance is as expected (longsleep, for example, uses the Pine64+ for a specific use case where good GbE performance is mandatory, amongst other things like virtualization).

Great to hear that your board gets replaced one more time. But I really hope the Pine64 folks start to implement an improved replacement process for the affected boards (getting a couple of boards from the shipping factory, testing them, and shipping them as replacements only if they're not affected by the issue -- but I fully understand that an extensive QA test booting every board is not possible given the low prices; Pine64 folks already have to suffer from rip-offs sold on Taobao).
(09-15-2016, 11:49 PM)tkaiser Wrote:
(09-15-2016, 07:18 PM)pfeerick Wrote: So longsleep's Ubuntu image is the one that offers the best out-of-the-box experience/configuration.

I wouldn't agree. Sure, the temperature fix didn't make it into the release yet; I didn't want to poke Igor now (please keep in mind that we support 40+ SBCs and that there are important and less important things to be done before such a major release), but this will come in the next days (just do the usual apt-get update/upgrade, everything in Armbian will be fixed that way, no scripts needed).

My apologies... I should have been clearer... Blush  I meant of the ones offered by pine64!  Big Grin If we were talking about the best Linux image (so far!!  Tongue ) for the Pine64 for headless use... I wouldn't argue there... Armbian is just perfect from what I have seen so far. And a little issue like the wrong temperature readout, which is easily fixed until it is officially corrected, is a non-issue!


(09-16-2016, 09:48 AM)tkaiser Wrote: Just as information: @pfeerick did some tests today with Pine64+ and exceeded 80 MB/s after removing IO bottlenecks (just to show what's possible with A64's GbE implementation). I think he's currently preparing some stuff and will add it to this thread.

Well, I wasn't planning on anything just yet; it will be a few more days until I have some time to redo the videos I'd already done with some speed tests (unlisted, as I'd already decided to redo them before making them public). In the meantime, some other tests we were talking about on the Armbian forum the other day are still worth mentioning.

Now, anyone who has been following my attempts at getting some data on the sort of performance you can expect from the pine64 (a Pine64+ 1GB) without any exotic setup knows I've gone to the point of powering it from c#$ppy power supplies with un-optimised settings... where it can certainly still work at 100T-ish speeds (that's a measly 10MB/s)... simply to prove it doesn't need an exotic setup to make it really fly... though that certainly helps. So, this is a GbE board... why didn't the network speeds fly? Well, on the pine64.pro debian image I ran the network-tune script by longsleep, and got more like 30MB/s to the pine64, and 25MB/s from the pine (that's megabytes, not megabits!). Now, this was with a USB3 flash drive (I know, the pine64 only has USB2 - but it's a faster drive than the pine64 can max out on one port) in the lower USB port (so the true USB host port), and it was formatted with ExFAT (not the best for linux!). And this was transferring a 1GB video file from an SSD on a desktop computer running Ubuntu. So... not the best settings, but with a minor tweak it got some reasonable performance. And since it made no difference in my setup whether it was 'dodgy' power or not, I've stuck with a battery for the remainder of the tests.
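I won't reproduce longsleep's script from memory here, but the sort of thing that kind of network tuning usually does is along these lines (illustrative values only, not necessarily what his script actually sets):

    # bigger socket buffers so a single GbE TCP stream isn't starved
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_max=8388608
    sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"
    # a longer transmit queue can help with sustained transfers
    ip link set dev eth0 txqueuelen 1000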

Now, tkaiser said that to overcome the limitation of the USB controller and push the GbE nearer to its limits, I should try a btrfs filesystem with compression on and transfer a 1GB file of zeros... which, since it is zeros, is pretty much a file the pine64 can read instantly, making the GbE the bottleneck in the transfer. So, I tried that the other day while Armbian was loaded and I was doing some other stuff with the pine64. Hence, I was on Windows this time (my desktop is dual-boot), so I need to try this again on Ubuntu for consistency. However, the results were still telling. That 1GB file was now transferring at around 80MB/s in both directions. I also ran the HELIOS LanTest suite against the pine64 and got speeds in the order of 55MB/s+ in both directions (see the screenshots below for the stats). Conclusion... it may be a really, really nice NAS board once both USB ports can be used in a RAID configuration at full speed. That is, for the ones with working GbE... guess you guys all hate me now!  Tongue  Angel
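For anyone wanting to repeat that test, the setup is roughly this (the device name is an assumption, and mkfs wipes the drive, so double-check with lsblk first!):

    sudo mkfs.btrfs -f /dev/sda1
    sudo mount -o compress=lzo /dev/sda1 /mnt/test
    # a 1GB file of zeros compresses to nearly nothing, so the USB/flash side
    # drops out of the picture and GbE/CPU become the limiting factors
    dd if=/dev/zero of=/mnt/test/zeros.bin bs=1M count=1024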

Now, as tkaiser said over on the Armbian forum... these numbers only indicate the transfer speeds on my particular network (which is nothing special - consumer grade, cheapish networking equipment - we're not talking Cisco enterprise grade stuff here) and aren't realistic for NAS use, i.e. transferring lots of small files. As you can see from a later post over on the Armbian forum, with some more typical Word and PDF docs I got some good fast speeds for 200-250MB data sets... but some really ugly speeds when transferring 2.1GB of data including zillions of small text files..... that slowed things down a bit! However, since my primary use will be transferring audio and video files, I'm expecting some reasonable speeds from the pine64!

[attachments: HELIOS LanTest screenshots]
Just a small note. I'm currently testing montjoie's new Ethernet driver. Since I sent all 'my' Pine64+ to different people I'm doing the tests on an H3 board (the H3 is pretty similar to the A64 but not as performant). This is the BSP kernel (3.4.112 in this case) on an Orange Pi Plus 2E:

[screenshot: iperf results with the BSP kernel]

And this is montjoie's new Ethernet driver (v4):

[screenshot: iperf results with montjoie's v4 driver]

Currently the board reboots due to a kernel panic (reproducible, which is good news), but the results already look very promising. Really looking forward to doing tests with FreeBSD and the mainline Linux kernel when the new Pine64+ arrive (since they're faster than H3 boards and we get even better network performance when appropriate settings are used Smile )

