Needless disk activity.

With Leap 42.1 and 42.2 I am experiencing never-ending disk activity when the machine is virtually idle. I am using VMware ESXi 6 with the latest patches. If I stop the OS, the disk chatter stops as well. When I use dstat there is no significant disk activity; however, I hear the drive heads moving, and VMware shows the following activity.

<How do I add a local image to a message?>

Patrick

Here is the image: http://susepaste.org/95188557

Is this when you first boot up your machines?
If you’re running KDE, Apper will scan your OS, download a list of patches, and offer to update, which can take well over 15 minutes at times. And if you have multiple Guests and the HostOS starting up in short order, your machine might see enormous disk activity for a very long time.

I recommend…

  • Create Guests running KDE, GNOME, or Enlightenment only rarely, if at all. Instead, use LXDE/LXQt, Xfce, MinimalX, or Server (text-only) Guests.
  • Avoid Tumbleweed Guests except rarely, if at all; Tumbleweed updates are distribution upgrades, which are very large.
  • Update your systems less often, and only manually, so that you can control what is being updated at any one time.
  • Don’t run BTRFS, particularly on your HostOS volume where your Guests are stored, or otherwise turn off snapshotting.

If you feel your disk activity isn’t related to system updates, verify by disabling network connections to isolate your machines from the Internet.

Of course, this assumes that you know how to evaluate the effect of Server workloads if your Guests are providing network services.

TSU

If I isolate the OS with:

ifdown eth0

the network connection is closed, but the disk chatter continues. Again, if I shut the OS down, the disk chatter stops. I can’t seem to identify the process or service that is causing all the chatter.

Thanks,

Where are your VMware storage volumes located?
And, what is the HostOS file system type for those volumes?

And, whenever you describe some action you’ve done, you need to specify whether it was done in the HostOS or in a Guest, and, if in a Guest, sometimes some additional detail depending on the action.

TSU

I have three physical drives: one SSD (1 TB) and two 5 TB hard drives (7200 rpm).

(/etc/fstab)
The OS boot volume is on the SSD and the default install configuration was used:
UUID=e5dab08d-7f28-40fe-b08d-f1ec22b1c305 swap swap defaults 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b / btrfs defaults 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /boot/grub2/i386-pc btrfs subvol=@/boot/grub2/i386-pc 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /boot/grub2/x86_64-efi btrfs subvol=@/boot/grub2/x86_64-efi 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /opt btrfs subvol=@/opt 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /srv btrfs subvol=@/srv 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /tmp btrfs subvol=@/tmp 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /usr/local btrfs subvol=@/usr/local 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/cache btrfs subvol=@/var/cache 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/crash btrfs subvol=@/var/crash 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/lib/libvirt/images btrfs subvol=@/var/lib/libvirt/images 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/lib/machines btrfs subvol=@/var/lib/machines 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/lib/mailman btrfs subvol=@/var/lib/mailman 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/lib/mariadb btrfs subvol=@/var/lib/mariadb 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/lib/mysql btrfs subvol=@/var/lib/mysql 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/lib/named btrfs subvol=@/var/lib/named 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/lib/pgsql btrfs subvol=@/var/lib/pgsql 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/log btrfs subvol=@/var/log 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/opt btrfs subvol=@/var/opt 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/spool btrfs subvol=@/var/spool 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /var/tmp btrfs subvol=@/var/tmp 0 0
UUID=5357b58e-e5b8-43a5-b0da-993f0ad52d1b /.snapshots btrfs subvol=@/.snapshots 0 0
UUID=12f065a9-8199-41b1-b121-28b6e375ac7a /home xfs defaults 1 2

The two hard drives are mounted as follows:
/dev/sdb /mnt/diskp ext4 defaults 0 0
/dev/sdc /mnt/disk1 ext4 defaults 0 0

All commands are run over a remote SSH connection to the host OS. What specifically do you mean by “done in a Guest”?

Thanks,

The command ‘df’ reports the following:

df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G  4.0K  3.9G   1% /dev
tmpfs           3.9G  116K  3.9G   1% /dev/shm
tmpfs           3.9G  2.6M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2        41G  8.5G   32G  22% /
/dev/sdb        4.5T  890G  3.7T  20% /mnt/diskp
/dev/sdc        4.5T  890G  3.6T  20% /mnt/disk1
/dev/sda3        78G  2.3G   76G   3% /home
/dev/sda2        41G  8.5G   32G  22% /var/log
/dev/sda2        41G  8.5G   32G  22% /var/lib/mariadb
/dev/sda2        41G  8.5G   32G  22% /tmp
/dev/sda2        41G  8.5G   32G  22% /var/lib/machines
/dev/sda2        41G  8.5G   32G  22% /opt
/dev/sda2        41G  8.5G   32G  22% /var/lib/pgsql
/dev/sda2        41G  8.5G   32G  22% /var/lib/named
/dev/sda2        41G  8.5G   32G  22% /var/lib/mysql
/dev/sda2        41G  8.5G   32G  22% /.snapshots
/dev/sda2        41G  8.5G   32G  22% /usr/local
/dev/sda2        41G  8.5G   32G  22% /var/tmp
/dev/sda2        41G  8.5G   32G  22% /boot/grub2/i386-pc
/dev/sda2        41G  8.5G   32G  22% /var/lib/mailman
/dev/sda2        41G  8.5G   32G  22% /var/spool
/dev/sda2        41G  8.5G   32G  22% /var/crash
/dev/sda2        41G  8.5G   32G  22% /var/opt
/dev/sda2        41G  8.5G   32G  22% /boot/grub2/x86_64-efi
/dev/sda2        41G  8.5G   32G  22% /var/lib/libvirt/images
/dev/sda2        41G  8.5G   32G  22% /srv
/dev/sda2        41G  8.5G   32G  22% /var/cache
tmpfs           799M   16K  799M   1% /run/user/1000
/dev/sr0        4.1G  4.1G     0 100% /run/media/patrick/openSUSE-Leap-42.2-DVD-x86_64028



Please post the results of the following command

esxcli storage filesystem list

TSU

Here you go:

 esxcli storage filesystem list
Mount Point                                        Volume Name  UUID                                 Mounted  Type             Size          Free
-------------------------------------------------  -----------  -----------------------------------  -------  ------  -------------  ------------
/vmfs/volumes/54a7e899-bc983bbb-965e-0cc47ad98b02  sandisk960   54a7e899-bc983bbb-965e-0cc47ad98b02     true  VMFS-5   959925190656  615848607744
/vmfs/volumes/582a2334-2d4e0d6f-6e7c-0cc47ad98b02  HGST_0       582a2334-2d4e0d6f-6e7c-0cc47ad98b02     true  VMFS-5  5000952545280   52106887168
/vmfs/volumes/582a22f6-e4de49b3-7c3c-0cc47ad98b02  HGST_1       582a22f6-e4de49b3-7c3c-0cc47ad98b02     true  VMFS-5  5000952545280   52106887168
/vmfs/volumes/54a7d5ca-1a870fed-d1be-0cc47ad98b02               54a7d5ca-1a870fed-d1be-0cc47ad98b02     true  vfat        299712512      86212608
/vmfs/volumes/0bc982e6-e4649822-bce7-c565d49af921               0bc982e6-e4649822-bce7-c565d49af921     true  vfat        261853184      84099072
/vmfs/volumes/a99fcded-830b396d-6c82-a1ace8d4c9e3               a99fcded-830b396d-6c82-a1ace8d4c9e3     true  vfat        261853184      84111360

OK,
It looks like your VMware manager is configured to store your Guests on all three of your disks, and, if I were to guess, your Guests are physically on your root partition (your SSD).

Your root partition file system is BTRFS, and likely with snapshots turned on.
This is not a recommended setup because

  • You’ll likely never want to restore to a snapshot that wipes out changes not only in your HostOS but in every Guest as well. Ordinarily, people want to restore a particular machine to a point in time, not multiple machines.
  • Snapshots capture every disk change. Why would you want to capture the disk activity of multiple machines running simultaneously?

So, you have a few options…

  • Move the physical location of your Guests off your SSD (which contains root, formatted BTRFS and is also used for some VMware storage), then remove your VMware storage volume on the SSD
  • Turn off BTRFS snapshots in your HostOS.
  • Since this is a relatively new machine, you can save your Guests and rebuild your HostOS, specifying a file system other than BTRFS, such as ext4.
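
If you go the second route and turn off snapshots rather than reformat, and assuming snapper is what manages the root snapshots (the openSUSE default), the relevant settings live in /etc/snapper/configs/root. A minimal sketch of that config fragment:

```shell
# /etc/snapper/configs/root -- stop automatic snapshotting of /
TIMELINE_CREATE="no"     # no more hourly timeline snapshots
NUMBER_CLEANUP="yes"     # still prune any old number-based snapshots
```

The same change can be made with “snapper -c root set-config TIMELINE_CREATE=no”. Note that YaST/zypper package operations also create pre/post snapshots via the snapper-zypp-plugin, so removing or disabling that plugin may be needed as well if you want snapshotting fully off.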

Whatever you decide to do, I always advise basing decisions on your own personal preference and on long-range objectives. Never do something simply because it’s easier today if it might cost you time and effort down the road.

Some other things, off the top of my head, you should consider when building a Production system (not simply a personal laboratory):

  • Run minimal applications in the HostOS, and strip it down as much as possible, but only to the extent it won’t cause you pain.
  • Plan and design your storage to accommodate your objectives: performance, security, expansion (future provisioning), and more.
  • Choose your HostOS primarily for reliability and stability. Stay away from anything that sacrifices reliability.

HTH,
TSU

if I were to guess your Guests are physically on your root partition (your SSD).

This is correct, and I have multiple VMs on this Datastore. Are you saying the Datastore is a BTRFS partition?

You’ll likely never want to restore to a snapshot that wipes out changes in not only your HostOS but every Guest as well

Are you suggesting the openSUSE VM’s BTRFS volume is monitoring the entire Datastore? How would a VM even know about the Datastore? The Datastore is completely transparent to any VM volumes. A Datastore is a function of VMware, not of the VM OS.

Turn off BTRFS snapshots in your HostOS.

Are you suggesting the OS BTRFS partition on the SSD is monitoring non-BTRFS partitions on the hard drives, such as ext4 partitions? Why would BTRFS impact other partition types?

Thanks,

Yes. This is in your fstab, which was in this thread’s fourth post:
https://forums.opensuse.org/showthread.php/522469-Needless-disk-activity?p=2809700#post2809700

In a default installation, everything except the /home directory, swap, and various in-memory mounts is located in your first partition, and that partition is formatted BTRFS by default. BTRFS has certain desirable features over other file systems for most uses, but it is a very bad choice for storing active virtual machine images, at least with snapshotting enabled.

No. From what you posted, you installed your entire openSUSE on your SSD… Which means that your SSD has a BTRFS root partition, a /home partition which is likely formatted XFS and a swap partition.

You should also know that all applications install all their parts in the root and /home partitions by default, because from a Developer’s point of view those are the only locations guaranteed to exist (every installed system must have at least one disk and these basic locations). Any additional disks are entirely optional, installed by a User, and from the Developer’s view are not guaranteed to exist.

So, by default, even storage for Guests will be in the locations guaranteed to exist, even when that is not optimal. It’s up to you, as the “designer” of your own system, to modify the application to fit your specific hardware resources.

TSU

TSU,

This is all fine; however, it doesn’t answer my original question. Why would a BTRFS partition installed on a Datastore SSD, a drive that has no moving parts and makes no noise, cause my other mechanical hard drives, each their own Datastore with an ext4 partition, to make continuous noise (disk chatter)? I don’t understand the correlation between a BTRFS partition on an SSD and the rapid head movement of other mechanical hard drives.

Thanks…

What I described up to now only applies to “disk activity” in general, which can be observed or monitored by lights or a monitoring application.
If you’re talking about mechanical disk sounds, that’s another matter.

I would generally suspect that the system is indexing your disks, and this would be natural with each bootup.

In any case, you can run iotop to list disk reads and writes as they happen, and see which application is responsible for the activity.
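
If you want per-disk numbers before reaching for iotop, /proc/diskstats will show which physical device is actually being written to. A rough sketch (field 3 is the device name, field 10 is sectors written; sectors are 512 bytes):

```shell
# Sample /proc/diskstats twice, five seconds apart, and report how
# many KiB were written to each block device during the interval.
awk '{print $3, $10}' /proc/diskstats | sort > /tmp/ds1
sleep 5
awk '{print $3, $10}' /proc/diskstats | sort > /tmp/ds2
# Join the two samples on the device name and print the delta in KiB.
join /tmp/ds1 /tmp/ds2 | awk '$3 >= $2 { printf "%-10s %d KiB written\n", $1, ($3 - $2) / 2 }'
```

Once one disk shows steady writes while the system is “idle”, “iotop -obPa” (run as root) lists only the processes actually doing I/O, with accumulated totals, which should name the offender.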

TSU