How to identify what is happening to my IOs? HDDs working non-stop, gvfsd-metadata?

Hi and a Merry Christmas! For a couple of days my hard drives have been working relentlessly doing… what? I have a big load on my openSUSE 13.1 and I don’t know what’s causing it (I used rkhunter and clamav just in case; nothing, of course):


linux-hpbh:/home/fakemoth # uname -a
Linux linux-hpbh 3.11.10-25-desktop #1 SMP PREEMPT Wed Dec 17 17:57:03 UTC 2014 (8210f77) x86_64 x86_64 x86_64 GNU/Linux
linux-hpbh:/home/fakemoth # fdisk -l

Disk /dev/sda: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfa0a17e2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   576716799   288357376   fd  Linux raid autodetect
/dev/sda2       576716800   586072063     4677632   82  Linux swap / Solaris

Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfa0a17e2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048   576716799   288357376   fd  Linux raid autodetect
/dev/sdb2       576716800   586072063     4677632   82  Linux swap / Solaris

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes, 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00036db5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  2920585215  1460291584   fd  Linux raid autodetect

Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes, 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006c42f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  2920585215  1460291584   fd  Linux raid autodetect

Disk /dev/md0: 295.3 GB, 295277756416 bytes, 576714368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 1495.3 GB, 1495338385408 bytes, 2920582784 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

linux-hpbh:/home/fakemoth # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid1 sdd1[1] sdc1[0]
      1460291392 blocks super 1.0 [2/2] [UU]
      bitmap: 0/11 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdb1[1]
      288357184 blocks super 1.0 [2/2] [UU]
      bitmap: 1/3 pages [4KB], 65536KB chunk

unused devices: <none>

linux-hpbh:/home/fakemoth # top
top - 08:56:50 up  1:04,  3 users,  load average: 1.96, 2.02, 1.52
Tasks: 222 total,   3 running, 218 sleeping,   0 stopped,   1 zombie
%Cpu(s):  5.5 us,  1.4 sy,  0.0 ni, 82.2 id, 10.6 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem:   8190656 total,  3938248 used,  4252408 free,   742564 buffers
KiB Swap:  9355256 total,        0 used,  9355256 free,  1535452 cached

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                  
13265 fakemoth  20   0  108.1m   3.3m   2.2m R 22.39 0.041   4:00.50 gvfsd-metadata                                                           
 1560 fakemoth  20   0 3287.5m 204.0m  61.9m S 7.630 2.551   5:36.14 plasma-desktop                                                           
 1551 fakemoth  20   0 3021.9m 100.1m  69.9m S 4.253 1.252   4:06.05 kwin                                                                     
 1116 root      20   0  392.3m 182.4m 137.4m S 2.001 2.280   6:37.09 Xorg                                                                     
 1920 fakemoth  20   0 1906.7m 592.1m  57.8m S 2.001 7.403  11:54.69 firefox                                                                  
  278 root      20   0    0.0m   0.0m   0.0m D 1.126 0.000   0:12.64 jbd2/md0-8                                                               
  189 root       0 -20    0.0m   0.0m   0.0m S 0.876 0.000   0:10.10 kworker/5:1H                                                             
   10 root      20   0    0.0m   0.0m   0.0m S 0.625 0.000   0:12.31 rcu_preempt                                                              
16561 fakemoth  20   0  543.1m  35.2m  21.8m S 0.625 0.439   0:04.17 konsole                                                                  
   96 root      20   0    0.0m   0.0m   0.0m R 0.375 0.000   0:15.25 kworker/5:1                                                              
 1574 fakemoth  20   0    9.6m   1.3m   0.8m S 0.375 0.016   0:10.94 ksysguardd                                                               
15618 root      20   0    0.0m   0.0m   0.0m S 0.375 0.000   0:01.08 kworker/0:2                                                              
   12 root      20   0    0.0m   0.0m   0.0m S 0.250 0.000   0:02.42 rcuop/1                                                                  
   36 root      20   0    0.0m   0.0m   0.0m S 0.250 0.000   0:09.85 kworker/1:0                                                              
  100 root      20   0    0.0m   0.0m   0.0m S 0.250 0.000   0:10.31 kworker/2:1                                                              
  101 root      20   0    0.0m   0.0m   0.0m S 0.250 0.000   0:09.92 kworker/3:1                                                              
  204 root      20   0    0.0m   0.0m   0.0m S 0.250 0.000   0:02.83 md0_raid1                                                                
 1009 root      20   0  157.0m  12.5m   3.3m S 0.250 0.156   0:04.08 teamviewerd                                                              
 1599 fakemoth  20   0 2217.2m  86.0m   6.5m S 0.250 1.075   0:06.78 mysqld                                                                   
 4913 root      20   0    0.0m   0.0m   0.0m S 0.250 0.000   0:08.36 kworker/4:0                                                              
17108 root      20   0   15.0m   1.6m   1.1m R 0.250 0.020   0:00.04 top                                                                      
   11 root      20   0    0.0m   0.0m   0.0m S 0.125 0.000   0:02.41 rcuop/0                                                                  
   14 root      20   0    0.0m   0.0m   0.0m S 0.125 0.000   0:01.69 rcuop/3                                                                  
   59 root      20   0    0.0m   0.0m   0.0m S 0.125 0.000   0:01.56 ksoftirqd/5                                                              
  195 root       0 -20    0.0m   0.0m   0.0m S 0.125 0.000   0:00.32 kworker/4:1H                                                             
 1114 root      20   0   28.3m   0.9m   0.6m S 0.125 0.012   0:02.23 atieventsd                                                               
 1461 fakemoth  20   0 1450.3m  65.3m  27.3m S 0.125 0.817   0:02.26 kded4                                                                    
17143 fakemoth  20   0    0.0m   0.0m   0.0m Z 0.125 0.000   0:00.01 aticonfig       

You can see below that some jbd2 process is running (I read that it is the ext4 journaling system), but why, and why for days on a 300 GB partition?

Take a look here at my IOs
http://i59.tinypic.com/swz5na.png

And here is my iotop
http://i59.tinypic.com/29p5bud.png
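
For reference, another way to see which process is actually generating the writes that jbd2 then commits (jbd2 only flushes the journal on behalf of whatever is doing the metadata writes). A minimal sketch, assuming the sysstat package, which provides pidstat, is installed:

pidstat -d 5
# prints per-process read/write rates (kB_rd/s, kB_wr/s) every 5 seconds;
# whichever user process shows a steady kB_wr/s is the one feeding jbd2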

Since you seem to have a software RAID, what do:

cat /proc/mdstat
and
mdadm -D /dev/md0

say?
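
A quick extra check, in case the arrays are quietly resyncing or running the periodic RAID check (the sysfs paths below are the standard md interface; just a suggestion):

cat /proc/mdstat                    # an ongoing resync/check shows a progress bar here
cat /sys/block/md0/md/sync_action   # "idle" means no check/resync/repair is in progress
cat /sys/block/md1/md/sync_action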

You have the cat /proc/mdstat output above; as for the mdadm info, here it is, and it looks fine to me:

linux-hpbh:/home/fakemoth # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Thu Apr 24 21:11:42 2014
     Raid Level : raid1
     Array Size : 288357184 (275.00 GiB 295.28 GB)
  Used Dev Size : 288357184 (275.00 GiB 295.28 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Dec 25 08:38:59 2014
          State : active 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : linux:0
           UUID : 235ae0da:aeb0961f:27153520:1a2a141a
         Events : 2004

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
linux-hpbh:/home/fakemoth # mdadm -D /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Tue Nov 19 22:10:55 2013
     Raid Level : raid1
     Array Size : 1460291392 (1392.64 GiB 1495.34 GB)
  Used Dev Size : 1460291392 (1392.64 GiB 1495.34 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Dec 25 09:58:40 2014
          State : active 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : linux-5235:md1
           UUID : 15a244b1:aaf03a2f:63f41b15:30b20e95
         Events : 32917

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1


Have you tried nuking the gvfs metadata and killing the process? You’ll most likely find it in ~/.local/share/
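
If you try it, a minimal sketch, assuming the store is in the usual location, ~/.local/share/gvfs-metadata (the daemon recreates it on demand):

pkill gvfsd-metadata                  # stop the daemon; it respawns when something needs it
rm -rf ~/.local/share/gvfs-metadata   # delete its metadata store; it will be rebuilt from scratch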

… nope… Should I do that in the first place, though? Why? Is it safe?

On 2014-12-25 08:06, fakemoth wrote:
>
> Hi and a Merry Christmas! For a couple of days my hard drives have been
> working relentlessly doing… what? I have a big load on my openSUSE 13.1
> and I don’t know what’s causing it (I used rkhunter and clamav just in
> case; nothing, of course):

> Take a look here at my IOs
> [image: http://i59.tinypic.com/swz5na.png]
>
> And here is my iotop
> [image: http://i59.tinypic.com/29p5bud.png]

Close to unreadable, sorry.

To find out what is using your disk, just run “iotop -o” as root in a terminal.
If you need to, paste the text here inside a code-tags block.
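
If a screenshot is awkward, a capture to plain text along these lines should work (flags as documented in iotop’s man page; adjust the interval and count to taste):

iotop -o -b -t -qqq -d 5 -n 12 > /tmp/iotop.txt
# -o: only processes actually doing IO    -b: batch (plain text) mode
# -t: timestamp each line                 -qqq: no column headers or I/O summary lines
# -d 5 -n 12: twelve samples, 5 seconds apart; the result lands in /tmp/iotop.txt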


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

I don’t understand what you mean by that. Also, I already provided the iotop output, so I am more interested now in the “why” / “what is this stuff, in fact”. Do you people even read these posts before replying? One asks for cat /proc/mdstat, the other for iotop, when both were provided right from the beginning!

PS: I did kill/delete everything related, and it’s just starting again. Are you quite sure it has nothing to do with the latest ext4 improvements in recent kernels? I read something about that.

Since you’re a dick, I’ll just ignore you from now on. Bye bye forever.

On 2014-12-26 06:56, fakemoth wrote:
>
> robin_listas;2685173 Wrote:
>>
>> Close to unreadable, sorry.
>>
> I don’t understand what you mean by that.

That the photos you posted are so small that I cannot see anything
in them. Only a nice BSG photo.

> Also, I already provided the
> iotop output,

No, you posted a photo that is so small that it is impossible to see
anything in it.

> so I am more interested now in the
> “why” / “what is this stuff, in fact”.
>
> PS: I did kill/delete everything related, and it’s just starting again.
> Are you quite sure it has nothing to do with the latest ext4
> improvements in recent kernels? I read something about that.

I repeat: just run “iotop -o” in a terminal, and it will tell you exactly
what is using the disk.

I’ll refrain from guessing at the “whys” until you do, because we still do
not know “what” is doing it.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)


@robin_listas - thanks for clearing this up: in my case the screenshots are fine, full HD and scaled to the width of the forum; I can see them just fine, I don’t know what happened. [jbd2/md0-8] is using > 85% of IO in iotop, and gvfsd-metadata is writing to disk at around 100 K/s.

I would have fixed/replaced the screenshots if the forum allowed editing. I also changed the kernel; no difference, so it wasn’t the update.

Not everyone uses the web interface :open_mouth:

On 2014-12-27 10:36, fakemoth wrote:

> @robin_listas - thanks for clearing this up: in my case the screenshots
> are fine, full HD scaled as big as the forum, I can see them just fine,
> don’t know what happened.

I don’t know; today they show up big and without commercials. The other
day what I got were links to a page with commercials and a smallish view
of the screenshots in the middle.

> [jbd2/md0-8] is using > 85% IOs in iotop,
> gvfsd-metadata writes on disk at around 100K/s.

In the screenshot, jbd2/md0-8 is at 0 B/s and gvfsd-metadata is at 2.12 MB/s,
which is not that much, but surprising if it is non-stop. However, there
is more going on than what is listed, because the totals are bigger.
“iotop -o” is better.

What gvfsd-metadata can be doing, I can’t imagine.
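
If you want to peek, one rough way to see which files it is actually touching (assuming lsof and strace are installed; the pgrep lookup is just a convenience):

lsof -p "$(pgrep -x gvfsd-metadata)"                            # files the daemon has open right now
strace -f -e trace=file,write -p "$(pgrep -x gvfsd-metadata)"   # follow its file activity live; Ctrl-C to stop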


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

On 2014-12-27 18:56, gogalthorp wrote:
>
> Not everyone uses the web interface :open_mouth:

I was going to mention that, but I’m very confused, because now the link
I get via nntp produces the full zoomable image. Previously I got a web
page (tinypic) in which the photo was just a smallish preview. Even when
zoomed it had insufficient resolution.

Hey! Now I’m getting that problem again.

Info: 1,600px × 900px (scaled to 645px × 363px)

clicking on it, it zooms a bit, but insufficient:

1,600px × 900px (scaled to 1,184px × 666px)

There is a “view raw image” link. Hitting it, I do get the full-resolution
image:

(-) 1,600px × 900px (scaled to 1,334px × 750px)
(+) 1,600px × 900px

Maybe visiting the forum (http) activated a cookie which allowed me to
see the full-resolution image. Next time I’ll look for that “view raw
image” link, because it is not the first time I have had this problem.

One thing more I learned… :wink:


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

It still does that… thing :slight_smile: omg it scares the hell out of me, as I am paranoid about my data. So… I decided to remove the gvfs packages :beat-up: . And I did; as I understand it, gvfs has something to do with GNOME and Nautilus, but I am a KDE user, no silly GNOME here. Is this package really necessary for anything (let’s call it !!!critical!!!, like the integrity of my data / the performance of my system / overall stability / etc.), given that its list of dependencies was really short and closely related? Everything seems to work.
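
Roughly, the removal amounted to something like this (the exact package names are a guess for 13.1; check what is actually installed first):

zypper se -i gvfs              # list the installed gvfs* packages
rpm -q --whatrequires gvfs     # see what, if anything, still needs the main package
zypper rm gvfs gvfs-backends   # zypper shows the full list of dependent packages before removing anything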

Will report back here on this mysterious issue…

On 2015-01-05 15:36, fakemoth wrote:
>
> It still does that… thing :slight_smile: omg it scares the hell out of me, as
> I am paranoid about my data. So… I decided to remove the gvfs packages
> :beat-up: . And I did; as I understand it, gvfs has something to do with
> GNOME and Nautilus, but I am a KDE user, no silly GNOME here. Is this
> package really necessary for anything (let’s call it !!!critical!!!, like
> the integrity of my data / the performance of my system / overall
> stability / etc.), given that its list of dependencies was really short
> and closely related? Everything seems to work.

You have the gvfs packages because you may have some GTK-based application,
and so the toolchain is installed (maybe incompletely).


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)