Slow Ethernet in Linux
#11
Hi, I do have the same problem. I am getting around 1 MB/s, with 2 MB/s peaks (wget -O /dev/null from the Internet for a specific big file), while another computer on the same network gets 30-45 MB/s, and maybe 90 MB/s at a good time of day. I will probably do some strictly local-network testing, but I have done some ssh tests, and it isn't looking pretty.
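The test was essentially a download straight to /dev/null, so local disk speed doesn't matter (the URL here is just a placeholder for the actual big file):

Code:
wget -O /dev/null http://example.com/bigfile.bin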

Forcing 100 Mbit/s with autoneg off using ethtool makes it even worse: I get 50-100 KB/s! Forcing 1000 Mbit/s with autoneg off makes it not work at all.
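For reference, the forcing was done roughly like this (a sketch; eth0 and full duplex are assumptions):

Code:
# force 100 Mbit/s with autonegotiation off (made things worse here)
ethtool -s eth0 speed 100 duplex full autoneg off
# go back to autonegotiation
ethtool -s eth0 autoneg on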

ip -s l shows no errors:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 7a:6e:63:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    2277867123 3481188  0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    6628900774 4981672  0       0       0       0


In dmesg I see a lot of these messages, every few seconds, especially during heavy network use:

[21189.979887] RTL871X: nolinked power save leave
[21191.275452] RTL871X: nolinked power save enter

OK. Somehow it magically improved.

echo on > /sys/class/net/eth0/power/control

made it possible to copy data over ssh from another Linux box (one that can easily do 100 MB/s) at about 22 MB/s (tested by reading a 1.2 GB file from an sshfs mount with dd bs=64k).
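The read test looked roughly like this (mount point and file name are placeholders):

Code:
# mount the remote box via sshfs and read a 1.2 GB file from it
sshfs otherbox:/data /mnt/remote
dd if=/mnt/remote/bigfile of=/dev/null bs=64k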

I turned it back to auto:

echo auto > /sys/class/net/eth0/power/control

and it now shows 25.6 MB/s.

Not bad.
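(To check which value is currently active, the sysfs file can simply be read back:)

Code:
cat /sys/class/net/eth0/power/control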
#12
(04-23-2016, 03:11 PM)baryluk Wrote: Hi, I do have the same problem. [...] echo on > /sys/class/net/eth0/power/control made it possible to copy data over ssh from another Linux box at about 22 MB/s [...] and it now shows 25.6 MB/s. Not bad.

Hi, I have the same issue as you. With autoneg on I get only 3-5 kB/s; forcing 100 Mbit/s with autoneg off improves it to 120 kB/s, and forcing 1000 Mbit/s with autoneg off does not work at all.

I tried echo on > /sys/class/net/eth0/power/control as you suggested, but it's not changing anything, so it's probably not related to the issue. Did you change anything else in the meantime?
Can you tell me which kernel you use? Mine is:

pine64user@debianpine64:~/Downloads$ uname -a
Linux debianpine64 3.10.65-5-pine64-longsleep #19 SMP PREEMPT Fri Apr 15 19:55:17 CEST 2016 aarch64 GNU/Linux


Does anyone know a working fix for the terribly slow Ethernet?
#13
(04-23-2016, 03:11 PM)baryluk Wrote: made it possible to copy data over ssh from another Linux box (one that can easily do 100 MB/s) at about 22 MB/s (tested by reading a 1.2 GB file from an sshfs mount with dd bs=64k)

So you did not test network speed but CPU encryption performance instead. Why exactly?
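To take encryption out of the picture entirely, one can push raw data through netcat instead; a minimal sketch (host name is a placeholder; the OpenBSD netcat variant omits the -p):

Code:
# on the Pine64: listen and discard
nc -l -p 5001 > /dev/null
# on the other box: send roughly 1.2 GB of zeros
dd if=/dev/zero bs=64k count=20000 | nc pine64 5001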
#14
(04-28-2016, 09:45 AM)tkaiser Wrote:
(04-23-2016, 03:11 PM)baryluk Wrote: made it possible to copy data over ssh from another Linux box (one that can easily do 100 MB/s) at about 22 MB/s (tested by reading a 1.2 GB file from an sshfs mount with dd bs=64k)

So you did not test network speed but CPU encryption performance instead. Why exactly?

To test the actual speed, I would rather use speedtest_cli:

Code:
sudo wget https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py
sudo chmod a+rx speedtest_cli.py
./speedtest_cli.py

There I get 2 Mbit/s, which for some reason is 2-4x more than any wget shows (60-120 kB/s). On the other host, the speedtest result is 120 Mbit/s.
#15
(04-28-2016, 12:55 PM)piahoo Wrote: To test the actual speed, I would rather use speedtest_cli

Also wrong.

You should try to isolate problems. Get iperf/iperf3/netperf and test between machines on the LAN.

Regarding the many reports of really slow Ethernet performance, I still wonder whether that isn't in reality 'Internet performance' instead (maybe caused by mismatched MTU settings between the Linux kernel running on the Pine64 and the router -- it's known that Allwinner's BSP Ethernet driver is somewhat crappy, or at least not tested that much).
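A minimal iperf3 round between two LAN machines looks like this, plus a quick probe for MTU trouble using non-fragmenting pings (addresses are placeholders):

Code:
# on the other LAN machine
iperf3 -s
# on the Pine64: 30-second TCP test against it
iperf3 -c 192.168.1.10 -t 30
# MTU probe: 1472 bytes of payload + 28 bytes of headers = 1500
ping -M do -s 1472 192.168.1.1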
#16
I tested over ssh because I had no time to set up an unencrypted local server of any kind. The information is still useful. Also, during the test none of the cores hit 100%, so I don't think I was CPU constrained. Maybe there are some issues with IRQs.
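A quick way to check is to look at how the Ethernet interrupts are distributed across the cores (the row name in /proc/interrupts depends on the driver):

Code:
grep -i eth /proc/interrupts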

I also said that I used wget over unencrypted HTTP from the Internet (and mind you, I do have a 1 Gbps Internet connection and a router capable of handling it, with other computers hitting 90 MB/s for the same big file download).

I do not know what I have done, but playing with the power/control file made it faster. I will reboot the board and see if it is slow again.
#17
Using netperf and multiple streams, courtesy of the wrapper script: https://github.com/akhepcat/BettererSpeedTest

Code:
qdisc is not enabled:
  root@p64:~# lsmod | grep ifb | wc -l
  0

  root@p64:~# BettererSpeedTest
  2016-04-29 10:29:38 Testing against speedtest (ipv4) with 20 simultaneous sessions while pinging oldspeedtest (30 seconds)
  ...............................
   Download:  942 mb/s
    Latency: (in msec, 31 pings, 0.00% packet loss)
        Min: 0.306
      10pct: 0.325
     Median: 0.772
        Avg: 1.032
      90pct: 1.970
        Max: 2.460
  ...................................
     Upload:  575 mb/s
    Latency: (in msec, 24 pings, 0.00% packet loss)
        Min: 0.414
      10pct: 0.424
     Median: 42.400
        Avg: 58.657
      90pct: 137.000
        Max: 253.000
 



And then I enabled qdisc via a custom wrapper script:
Code:
  root@p64:~# tc qdisc show
  qdisc htb 1: dev eth0 root refcnt 2 r2q 625 default 11 direct_packets_stat 0 direct_qlen 1000
  qdisc fq_codel 8002: dev eth0 parent 1:11 limit 10240p flows 1024 quantum 300 target 5.0ms interval 100.0ms
  qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
  qdisc htb 1: dev ifb0 root refcnt 2 r2q 625 default 11 direct_packets_stat 0 direct_qlen 32
  qdisc fq_codel 8001: dev ifb0 parent 1:11 limit 10240p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 
 
  root@p64:~# BettererSpeedTest
  2016-04-29 10:31:54 Testing against speedtest (ipv4) with 20 simultaneous sessions while pinging oldspeedtest (30 seconds)
  ..............................
   Download:  505 mb/s
    Latency: (in msec, 31 pings, 0.00% packet loss)
        Min: 0.333
      10pct: 0.365
     Median: 0.457
        Avg: 0.626
      90pct: 0.546
        Max: 5.740
  ..............................
     Upload:  845 mb/s
    Latency: (in msec, 29 pings, 0.00% packet loss)
        Min: 0.368
      10pct: 0.416
     Median: 0.750
        Avg: 2.315
      90pct: 2.820
        Max: 26.500
 


As you can see, my queuing still needs some tweaking for the download speeds to recover to pre-queuing levels, but the upload speeds are quite nice.
#18
I recently added all the network scheduling modules to my kernel configuration (https://github.com/longsleep/linux-pine6...99631139ce). Care to share your bootstrap script to set up qdisc?
#19
As mentioned in the readme, this was originally cribbed from the Gentoo wiki, and then I've extended it to make it easier for me to use.

https://github.com/akhepcat/qdisc


** followup **

If I only enable queuing on the upload side, my results are about 900 Mb/s down / 800 Mb/s up, which is good enough for me. :-)
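For reference, the heart of such a setup, matching the qdisc layout shown above, is roughly this (interface name and rate are assumed values, not taken from the repo):

Code:
IFACE=eth0
RATE=800mbit   # shape slightly below line rate (assumed value)

tc qdisc del dev $IFACE root 2>/dev/null
tc qdisc add dev $IFACE root handle 1: htb default 11
tc class add dev $IFACE parent 1: classid 1:11 htb rate $RATE
tc qdisc add dev $IFACE parent 1:11 fq_codel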
#20
Netperf tests with the standard pfifo_fast qdisc:

Code:
$ netperf -H wielkiczarny.local -l 60 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to wielkiczarny.local () port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

87380  16384  16384    60.00     695.63

Code:
$ netperf -H wielkiczarny.local -l 60 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to wielkiczarny.local () port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

87380  16384  16384    60.00     702.86

Code:
$ netperf -H wielkiczarny.local -l 60 -t TCP_MAERTS
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to wielkiczarny.local () port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

87380  16384  16384    60.00     733.93

Code:
$ netperf -H wielkiczarny.local -l 60 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to wielkiczarny.local () port 0 AF_INET : demo
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992   65507   60.00      101426      0     885.87
212992           60.00        4087             35.70

Code:
$ netperf -H wielkiczarny.local -l 60 -t UDP_RR
MIGRATED UDP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to wielkiczarny.local () port 0 AF_INET : demo : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate        
bytes  Bytes  bytes    bytes   secs.    per sec  

212992 212992 1        1       60.00    8486.10  
212992 212992

Code:
$ ip l show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 7a:6e:63:xx:xx:xx brd ff:ff:ff:ff:ff:ff

Still, it is unclear to me why wget from the Internet is pretty slow at ~4-6 MB/s, and maxes out at 10 MB/s at the start of the download.

