Hi, and a Merry Christmas! For a couple of days now my hard drives have been working relentlessly doing… what? I have a big load on my openSUSE 13.1 system and I don’t know what’s causing it (I ran rkhunter and clamav just in case; they found nothing, of course):
linux-hpbh:/home/fakemoth # uname -a
Linux linux-hpbh 3.11.10-25-desktop #1 SMP PREEMPT Wed Dec 17 17:57:03 UTC 2014 (8210f77) x86_64 x86_64 x86_64 GNU/Linux
linux-hpbh:/home/fakemoth # fdisk -l
Disk /dev/sda: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfa0a17e2
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 576716799 288357376 fd Linux raid autodetect
/dev/sda2 576716800 586072063 4677632 82 Linux swap / Solaris
Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfa0a17e2
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 576716799 288357376 fd Linux raid autodetect
/dev/sdb2 576716800 586072063 4677632 82 Linux swap / Solaris
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes, 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00036db5
Device Boot Start End Blocks Id System
/dev/sdc1 2048 2920585215 1460291584 fd Linux raid autodetect
Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes, 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006c42f
Device Boot Start End Blocks Id System
/dev/sdd1 2048 2920585215 1460291584 fd Linux raid autodetect
Disk /dev/md0: 295.3 GB, 295277756416 bytes, 576714368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md1: 1495.3 GB, 1495338385408 bytes, 2920582784 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
linux-hpbh:/home/fakemoth # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdd1[1] sdc1[0]
1460291392 blocks super 1.0 [2/2] [UU]
bitmap: 0/11 pages [0KB], 65536KB chunk
md0 : active raid1 sda1[0] sdb1[1]
288357184 blocks super 1.0 [2/2] [UU]
bitmap: 1/3 pages [4KB], 65536KB chunk
unused devices: <none>
linux-hpbh:/home/fakemoth # top
top - 08:56:50 up 1:04, 3 users, load average: 1.96, 2.02, 1.52
Tasks: 222 total, 3 running, 218 sleeping, 0 stopped, 1 zombie
%Cpu(s): 5.5 us, 1.4 sy, 0.0 ni, 82.2 id, 10.6 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem: 8190656 total, 3938248 used, 4252408 free, 742564 buffers
KiB Swap: 9355256 total, 0 used, 9355256 free, 1535452 cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13265 fakemoth 20 0 108.1m 3.3m 2.2m R 22.39 0.041 4:00.50 gvfsd-metadata
1560 fakemoth 20 0 3287.5m 204.0m 61.9m S 7.630 2.551 5:36.14 plasma-desktop
1551 fakemoth 20 0 3021.9m 100.1m 69.9m S 4.253 1.252 4:06.05 kwin
1116 root 20 0 392.3m 182.4m 137.4m S 2.001 2.280 6:37.09 Xorg
1920 fakemoth 20 0 1906.7m 592.1m 57.8m S 2.001 7.403 11:54.69 firefox
278 root 20 0 0.0m 0.0m 0.0m D 1.126 0.000 0:12.64 jbd2/md0-8
189 root 0 -20 0.0m 0.0m 0.0m S 0.876 0.000 0:10.10 kworker/5:1H
10 root 20 0 0.0m 0.0m 0.0m S 0.625 0.000 0:12.31 rcu_preempt
16561 fakemoth 20 0 543.1m 35.2m 21.8m S 0.625 0.439 0:04.17 konsole
96 root 20 0 0.0m 0.0m 0.0m R 0.375 0.000 0:15.25 kworker/5:1
1574 fakemoth 20 0 9.6m 1.3m 0.8m S 0.375 0.016 0:10.94 ksysguardd
15618 root 20 0 0.0m 0.0m 0.0m S 0.375 0.000 0:01.08 kworker/0:2
12 root 20 0 0.0m 0.0m 0.0m S 0.250 0.000 0:02.42 rcuop/1
36 root 20 0 0.0m 0.0m 0.0m S 0.250 0.000 0:09.85 kworker/1:0
100 root 20 0 0.0m 0.0m 0.0m S 0.250 0.000 0:10.31 kworker/2:1
101 root 20 0 0.0m 0.0m 0.0m S 0.250 0.000 0:09.92 kworker/3:1
204 root 20 0 0.0m 0.0m 0.0m S 0.250 0.000 0:02.83 md0_raid1
1009 root 20 0 157.0m 12.5m 3.3m S 0.250 0.156 0:04.08 teamviewerd
1599 fakemoth 20 0 2217.2m 86.0m 6.5m S 0.250 1.075 0:06.78 mysqld
4913 root 20 0 0.0m 0.0m 0.0m S 0.250 0.000 0:08.36 kworker/4:0
17108 root 20 0 15.0m 1.6m 1.1m R 0.250 0.020 0:00.04 top
11 root 20 0 0.0m 0.0m 0.0m S 0.125 0.000 0:02.41 rcuop/0
14 root 20 0 0.0m 0.0m 0.0m S 0.125 0.000 0:01.69 rcuop/3
59 root 20 0 0.0m 0.0m 0.0m S 0.125 0.000 0:01.56 ksoftirqd/5
195 root 0 -20 0.0m 0.0m 0.0m S 0.125 0.000 0:00.32 kworker/4:1H
1114 root 20 0 28.3m 0.9m 0.6m S 0.125 0.012 0:02.23 atieventsd
1461 fakemoth 20 0 1450.3m 65.3m 27.3m S 0.125 0.817 0:02.26 kded4
17143 fakemoth 20 0 0.0m 0.0m 0.0m Z 0.125 0.000 0:00.01 aticonfig
You can see that some jbd2 process is running (I read it is the ext4 journaling thread), but why? And why for days on a 300 GB partition?
Take a look here at my I/O graph:
http://i59.tinypic.com/swz5na.png
And here is my iotop output:
http://i59.tinypic.com/29p5bud.png
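From what I understand, jbd2 itself doesn’t generate data; it only commits to the journal whatever some other process keeps dirtying, so the real question is who the writer is. Here is a rough sketch of what I was planning to run next to pin it down (the strace/ls lines are guesses based on my reading about gvfsd-metadata, not verified on this box):

```shell
#!/bin/sh
# jbd2/md0-8 only flushes the ext4 journal on /dev/md0; the real writer
# is whichever process keeps dirtying pages. Watch the kernel counters
# while the load is happening:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# If gvfsd-metadata is the suspect, attach to it and watch its writes
# (needs root; uncomment while the load is happening):
# strace -f -e trace=write,fsync -p "$(pgrep gvfsd-metadata)"

# gvfsd-metadata keeps its database under this path (per what I read,
# not verified here); check whether it is being rewritten constantly:
# ls -l ~/.local/share/gvfs-metadata/
```

If the Dirty/Writeback numbers stay high the whole time, that would at least confirm the writes are continuous and not just periodic journal commits.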