AWS EC2 Mounts and block devices confusion

Hello all

First post on these forums, hoping someone can shed some light on this issue I came across:

I ran the following two commands on an EC2 Suse Instance, but the output was unexpected:

ip-xxxx:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.8G  4.4G  4.9G  48% /
devtmpfs        483M  100K  483M   1% /dev
tmpfs           498M     0  498M   0% /dev/shm
tmpfs           498M   79M  420M  16% /run
/dev/sda1       9.8G  4.4G  4.9G  48% /

So sda1 appears to be the rootfs, so far so good…

ip-xxxx:~ # lsblk
hda      3:0   0   10G  0
└─hda1   3:1   0  9.9G  0
sda      8:0   0   10G  0
└─sda1   8:1   0  9.9G  0  /
xvdf   202:80  0    2G  0  [SWAP]

But lsblk reveals an hda1 device, which I don't have in my AWS control panel. Is it possible to get rid of this?

What you do should probably start with wherever you obtained this openSUSE image.
Removing that configuration would probably not cause a problem, but IMO you should research its origins before doing something that might break the image.

One thing you might do is inspect your syslog for any entries mentioning hda1; you might see the drive mounted successfully, or related errors:


grep hda1 /var/log/messages
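If the messages file has already rotated, the same search can be run against the journal instead (assuming a systemd-based image, which recent openSUSE releases are):

```shell
# Search the full systemd journal for any mention of hda1;
# grep exits nonzero if nothing matches.
journalctl | grep hda1
```

Either way, you're looking for mount successes, I/O errors, or nothing at all.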

If you find only errors and can't determine a reason for keeping hda1, then IMO the next step is to look for how it's mounted…
A starting point might be simply to unmount hda1 and see what the result is, before trying to remove it permanently.
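That check-then-unmount step might look like the sketch below (findmnt and umount are standard util-linux tools; note that in the lsblk output above, hda1 shows no mountpoint, so findmnt may simply report it is not mounted):

```shell
# Nothing destructive here: findmnt exits nonzero when the device
# is not mounted, so umount only runs when there is a mount to remove.
if findmnt /dev/hda1; then
    umount /dev/hda1 && echo "hda1 unmounted"
else
    echo "hda1 is not currently mounted"
fi
```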

  • It might be mounted with an entry in /etc/fstab.
  • It might be mounted using the systemd mount service.
  • It might be mounted manually, possibly from a script.
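Each possibility above has a corresponding check; a rough sketch (the script search paths in the last line are only examples, adjust them for your system):

```shell
grep hda1 /etc/fstab                                  # 1. static fstab entry
systemctl list-units --type=mount | grep -i hda       # 2. systemd .mount units
grep -rl hda1 /etc/init.d /usr/local/bin 2>/dev/null  # 3. boot/admin scripts
```

Whichever of these turns up a match is where the hda1 configuration lives, and removing or commenting out that entry is the place to start.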