Thread: RPi-3+ TW 5.1.7 high ambient background load

  1. #1
    hdtodd (Join Date Feb 2017, Montana, USA & Vermont, USA, Posts 134)

    RPi-3+ TW 5.1.7 high ambient background load

    I just updated 5.1.5 --> 5.1.7, but I've seen the same issue on both installs: high ambient background load with no user tasks active:
    Code:
    hdtodd@Pi-6:~> uname -a
    Linux Pi-6 5.1.7-1-default #1 SMP Tue Jun 4 07:56:54 UTC 2019 (55f2451) aarch64 aarch64 aarch64 GNU/Linux
    hdtodd@Pi-6:~> uptime
     16:11:36  up   0:13,  1 user,  load average: 4.00, 3.82, 2.49
    hdtodd@Pi-6:~>
    If I run "top", I see no culprit -- nothing is using more than 1% CPU, and most processes are using none. But performance is sluggish, so I think it's more than just a load-calculation error.

    Anyone else seeing this? Any idea what's causing it and how to fix it?
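    (Side note: the Linux load average counts tasks in uninterruptible sleep -- state D -- as well as runnable tasks, so a high load on an otherwise idle CPU can come from tasks stuck in I/O. A rough check that might turn up a culprit that top's CPU% hides:)
    Code:
    # Show tasks that are runnable (R) or in uninterruptible sleep (D);
    # both states count toward the load average.
    ps -eo pid,stat,wchan:20,comm | awk 'NR==1 || $2 ~ /^[RD]/'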

  2. #2
    malcolmlewis (Join Date Jun 2008, Podunk, Posts 26,488)

    Re: RPi-3+ TW 5.1.7 high ambient background load

    Quote Originally Posted by hdtodd View Post
    I just updated 5.1.5 --> 5.1.7, but I've seen the same issue on both installs: high ambient background load with no user tasks active ... [snip]
    Hi
    What about iostat, vmstat, iotop ... and are you sure it's not using swap?

    Code:
    iostat -x 5
    vmstat 1
    ps awwlx --sort=vsz    # sorts by virtual size, low to high
    iotop
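    For the swap question, something like this should show whether any swap is configured or in use:
    Code:
    free -h          # memory and swap totals in human-readable units
    swapon --show    # lists active swap devices (no output = no swap)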
    In your output, 15 minutes of uptime haven't passed yet, so the 15-minute average hasn't settled...

    Install glances (python3-Glances); it should give the best overview, but it will add some load of its own...
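    Something like:
    Code:
    sudo zypper in python3-Glances
    glances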

    What about the disk I/O scheduler?

    Code:
    cat /sys/block/<disk_eg_mmcblk0>/queue/scheduler
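    The active one shows in brackets. If you want to experiment, it can be changed on the fly (not persistent across reboots; mmcblk0 below is just an example device name):
    Code:
    # temporary until reboot; substitute your actual disk for mmcblk0
    echo bfq | sudo tee /sys/block/mmcblk0/queue/scheduler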
    Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
    SUSE SLE, openSUSE Leap/Tumbleweed (x86_64) | GNOME DE
    If you find this post helpful and are logged into the web interface,
    please show your appreciation and click on the star below... Thanks!

  3. #3
    hdtodd (Join Date Feb 2017, Montana, USA & Vermont, USA, Posts 134)

    Re: RPi-3+ TW 5.1.7 high ambient background load

    Quote Originally Posted by malcolmlewis View Post
    What about iostat, vmstat, iotop ... and are you sure it's not using swap? ... [snip]
    Thanks for the quick response, Malcolm!

    After an hour ...
    Code:
    Pi-6:/home/hdtodd # uptime
     17:08:41  up   1:11,  1 user,  load average: 4.00, 4.00, 4.00
    Virtual sizes ...
    Code:
    Pi-6:/home/hdtodd # ps awwlx --sort=-vsz
    F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
    5   474   559     1  20   0 721304  3828 ep_pol Ssl  ?          0:00 /usr/sbin/nscd
    5   502  1952  1951  20   0 171992  3212 do_sig S    ?          0:00 (sd-pam)
    4     0     1     0  20   0 166620  9928 ep_pol Ss   ?          0:22 /usr/lib/systemd/systemd --switched-root --system --deserialize 32
    4     0   383     1  20   0  86688  1324 do_sel Ss   ?          0:00 /usr/sbin/lvmetad -f
    5   472  1600     1  20   0  78148  2104 do_sel S    ?          0:00 /usr/sbin/chronyd
    4     0   367     1  20   0  49624 16336 ep_pol Ss   ?          0:02 /usr/lib/systemd/systemd-journald
    4     0   394     1  20   0  29812  8176 ep_pol Ss   ?          0:03 /usr/lib/systemd/systemd-udevd
    4     0  1591     1  20   0  29492  7816 ep_pol Ss   ?          0:00 /usr/sbin/cupsd -l
    4     0  2113  1958  20   0  17708  5944 do_sys S    pts/0      0:00 sudo -s
    5     0   530     1  16  -4  17672  1640 ep_pol S<sl ?          0:00 /sbin/auditd
    4   502  1951     1  20   0  16868  8132 ep_pol Ss   ?          0:00 /usr/lib/systemd/systemd --user
    4     0  1947  1634  20   0  16212  7532 do_sys Ss   ?          0:00 sshd: hdtodd [priv]
    5   502  1957  1947  20   0  16212  4508 -      S    ?          0:00 sshd: hdtodd@pts/0
    4     0   614     1  20   0  15092  5880 ep_pol Ss   ?          0:01 /usr/lib/systemd/systemd-logind
    4   499   550     1  20   0  13548  5860 ep_pol Ss   ?          0:02 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
    4     0  1634     1  20   0  12784  6504 do_sel Ss   ?          0:00 /usr/sbin/sshd -D
    4     0   664     1  20   0  12096  6944 do_sel Ss   ?          0:00 /usr/sbin/wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant.conf -u -t -f /var/log/wpa_supplicant.log
    0   502  1958  1957  20   0  10396  6864 do_wai Ss   pts/0      0:00 -bash
    4     0  2114  2113  20   0  10340  6808 do_wai S    pts/0      0:00 /bin/bash
    4     0   660     1  20   0   9860  6036 do_sys SLs  ?          0:00 /usr/sbin/wickedd-nanny --systemd --foreground
    4     0   655     1  20   0   9816  6252 do_sys SLs  ?          0:00 /usr/sbin/wickedd --systemd --foreground
    4     0   551     1  20   0   9700  6096 do_sys SLs  ?          0:00 /usr/lib/wicked/bin/wickedd-dhcp6 --systemd --foreground
    4     0   552     1  20   0   9700  6132 do_sys SLs  ?          0:00 /usr/lib/wicked/bin/wickedd-dhcp4 --systemd --foreground
    4     0   556     1  20   0   9696  5852 do_sys SLs  ?          0:00 /usr/lib/wicked/bin/wickedd-auto4 --systemd --foreground
    4     0  1658     1  20   0   7384  2444 hrtime Ss   ?          0:00 /usr/sbin/cron -n
    0     0  2236  2114  20   0   7184  2884 -      R+   pts/0      0:00 ps awwlx --sort=-vsz
    4     0  1597     1  20   0   2560   464 do_sel Ss+  tty1       0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
    1     0     2     0  20   0      0     0 kthrea S    ?          0:00 [kthreadd]
    (the rest are kernel threads, with VSZ = 0)
    VMSTAT:
    Code:
    Pi-6:/home/hdtodd # vmstat 1
    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0      0 525328  50456 291296    0    0    21     3 2046   34  1  0 99  0  0
     0  0      0 525328  50456 291296    0    0     0     0 8158  144  0  0 100  0  0
     0  0      0 525328  50456 291296    0    0     0     0 8329  316  0  0 100  0  0 ...

    Not using MMC; the system disk is sda:
    Code:
    Pi-6:/home/hdtodd # cat /sys/block/sda/queue/scheduler
    [none] kyber bfq
    iotop shows no activity.

    And then, finally:
    Code:
    Pi-6:/home/hdtodd # uptime
     17:22:48  up   1:25,  1 user,  load average: 4.21, 4.14, 4.05
    Pi-6:/home/hdtodd # iostat -x 5
    Linux 5.1.7-1-default (Pi-6)     06/16/19     _aarch64_    (4 CPU)
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.75    0.26    0.38    0.04    0.00   98.56
    
    Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
    sda              6.11    0.78     75.53     17.44     0.00     0.58   0.00  42.64    0.86    1.15   0.00    12.37    22.22   1.46   1.01
    
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.00    0.00    0.00    0.00    0.00  100.00
    
    Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
    sda              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
    Note: 100% idle, but load > 4 (so maybe it really is just the load calculation!), yet the system is very sluggish.
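    For what it's worth, /proc/loadavg exposes the kernel's own task counts directly, which makes the mismatch easy to spot:
    Code:
    # Fields: 1-, 5- and 15-minute averages, runnable/total task counts,
    # and the most recently created PID. On a truly idle system the
    # fourth field should read 1/NNN (the 1 being this cat itself).
    cat /proc/loadavg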

    If I'm the only one seeing this, don't worry about tracking it down; it might get fixed in the next release. It just seemed odd, and I couldn't find any sign of what was causing the problem ... I thought it might be an issue in the kernel.

    David

  4. #4
    malcolmlewis (Join Date Jun 2008, Podunk, Posts 26,488)

    Re: RPi-3+ TW 5.1.7 high ambient background load

    Quote Originally Posted by hdtodd View Post
    [snip] ... Note: 100% idle, but load > 4 (so maybe it really is just the load calculation!), yet the system is very sluggish.
    Hi
    I'm guessing it's a bug in the uptime/load-average calculation for these CPUs...

    I get
    Code:
    uptime
    19:09:01  up   1:12,  1 user,  load average: 4.00, 4.00, 4.00
    Which IMHO should all be zero...
    Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
    SUSE SLE, openSUSE Leap/Tumbleweed (x86_64) | GNOME DE
    If you find this post helpful and are logged into the web interface,
    please show your appreciation and click on the star below... Thanks!

  5. #5
    hdtodd (Join Date Feb 2017, Montana, USA & Vermont, USA, Posts 134)

    Re: RPi-3+ TW 5.1.7 high ambient background load

    Quote Originally Posted by malcolmlewis View Post
    I'm guessing it's a bug in the uptime/load-average calculation for these CPUs ... which IMHO should all be zero. [snip]
    Yes, I'm concluding the same thing. Over time, the load reported by "uptime" levels out at 4.00 (on a quad-core), while other tools report 100% idle. It still seems sluggish over SSH, but I agree that it looks like a bug in the load calculation.
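    If anyone wants to double-check that the stuck 4.00 is bogus accounting rather than hidden work, one rough experiment: add a single known busy task and watch whether the 1-minute average climbs toward 5.00.
    Code:
    # run one CPU-bound loop for 60 s, then re-check the load;
    # an average heading toward 5.00 suggests the 4.00 baseline
    # is a calculation artifact, not real work
    timeout 60 sh -c 'while :; do :; done' &
    sleep 60
    uptime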
