Does a mounted local drive take up physical space on the system disk?

Device viewer indicates that the system drive has approximately 88 GB free out of approximately 467 GB; ~81% of the disk is in use. The size of the root directory (/root) is 79 GB; VirtualBox files contribute to this size.

Running the du command, I see that various mounted local drives located under the directory /z supposedly occupy 839 GB:

linux-5:~ # du -hs /*
1.6M    /bin
166M    /boot
0       /dev
18M     /etc
46M     /home
1.7G    /lib
9.8M    /lib64
16K     /lost+found
4.0K    /mnt
4.0K    /opt
du: cannot access . . .
0       /proc
79G     /root
9.7M    /run
12M     /sbin
4.0K    /selinux
24K     /srv
0       /sys
1.5G    /tmp
6.5G    /usr
1.1G    /var
839G    /z
linux-5:~ # 

The 839 GB figure, of course, well exceeds the total disk capacity (467 GB).
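As an aside: du follows mount points by default, which is why the mounted drives under /z are counted at all. The -x (--one-file-system) flag keeps du on a single file system, so the totals reflect only what is stored on the root disk:

```shell
# -x / --one-file-system stops du from descending into other file systems,
# so the mounted drives under /z are skipped and only the root disk is counted.
du -hsx /*
du -hsx /
```

With -x, /z itself still appears in the per-directory listing, but only the few kilobytes the mount-point directories themselves occupy on the root file system.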

Here is the fstab file:

UUID=82c65205-7768-4135-8ef0-eb131bfbfcaf  /         ext4  acl,user_xattr  0  1
UUID=12A82857A8283C1B                      /z/G      ntfs  defaults        0  0
UUID=9E18F8FA18F8D1EF                      /z/F      ntfs  defaults        0  0
UUID=2A08A9D808A9A2F5                      /z/D      ntfs  defaults        0  0
LABEL=WD750                                /z/WD750  ntfs  defaults,nofail 0  0

On another machine (linux-3), the D and F drives are mounted under the /mnt directory (390 GB per du), yet the total usage of the root file system is only 5%.

linux-3:~ # du -hs /*
26M     /etc
250G    /.snapshots
168M    /boot
39M     /home
520M    /opt
1.4M    /srv
226M    /tmp
8.7G    /usr
3.4G    /var
du: cannot access . . .
0       /proc
0       /sys
1.5M    /run
2.2M    /bin
1.7G    /lib
14M     /lib64
4.0G    /root
11M     /sbin
390G    /mnt
0       /selinux
0       /dev
658G    /
linux-3:~ #

Since the drives accessible through the /z directory are physically separate disks, do they actually take up space on the system drive?

Would mounting them through the mnt directory (vs. /z on the root) result in a lower disk usage figure in device viewer?

  1. You probably mean “the root directory (/)”. /root is something different.
  2. Using du is not very useful for finding out how large file systems are and how full they are. Better:
df -h
  3. When you mount something, it does not take up space on the file system that holds the mount point; whatever was already stored beneath the mount point is simply hidden.
  4. Mounting in /z or /mnt or wherever is basically the same; using /mnt is only a convention. For permanent use, choosing your own place in the directory tree (as you did with /z) is fine, and it keeps /mnt free for e.g. temporary use or tests.
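One way to see this, assuming util-linux's findmnt is available: the mount point is just a directory name, and the space always lives on the mounted device, wherever it happens to be attached:

```shell
# Show which device backs a given mount point; whether the drive is
# attached at /z/D or /mnt/D makes no difference to where the data lives.
findmnt -no SOURCE,FSTYPE,TARGET /z/D
findmnt -no SOURCE,FSTYPE,TARGET /      # the root file system itself
```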

Henk -

Thank you. I used “root” in two contexts: the main tree (“root”) and the directory ("/root").

The command df -h yields the same result as device viewer (the linux system is installed on sde2):

linux-5:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        5.9G  4.0K  5.9G   1% /dev
tmpfs           5.9G  178M  5.7G   3% /dev/shm
tmpfs           5.9G   10M  5.9G   1% /run
tmpfs           5.9G     0  5.9G   0% /sys/fs/cgroup
/dev/sde2       467G  357G   87G  81% /
/dev/sdd2       650G  199G  452G  31% /z/F
/dev/sdc1       932G  182G  750G  20% /z/D
tmpfs           1.2G   28K  1.2G   1% /run/user/0
/dev/sdb1       699G  192G  507G  28% /z/G
/dev/sdf1       674G  267G  407G  40% /z/WD750
linux-5:~ # 

See the entry at the lower right-hand corner of the device viewer:

https://susepaste.org/images/90805046.png

What is contributing to the 357 GB? Are the mounted local drives physically occupying space on sde2? Is only 86 GB unoccupied?

The directory /root has nothing to do with your problem. It is the home directory of user root. It is not a mount point. Mentioning it is, at a minimum, confusing.

When the command df -h tells the same as the “device viewer” then let us forget the device viewer and concentrate on the easier to produce and read CLI.

Most probably the root partition (sde2) is a btrfs file system. For btrfs, df does not give trustworthy results. There is a special btrfs command that shows how much is free on a btrfs file system. Sorry, I do not use btrfs and cannot produce that command from memory (I tried browsing through the man pages, but could not find it).

I found something. You could try

btrfs filesystem df /

Henk -

If I add up the sizes of all of the other directories listed in the du response (thus excluding /z), the total comes in around 90 GB.
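If du of everything except /z accounts for only ~90 GB while df reports 357 GB used, two usual suspects are files sitting underneath the /z mount points (copied there at some point while the drives were not mounted, and now hidden by the mounts) and deleted files still held open by a running process. A bind mount makes hidden files visible again; this is a generic sketch (the /mnt/rootonly path is illustrative) and needs root:

```shell
# Bind-mount / to a second location: there the /z mount points appear as
# plain directories, exposing whatever is stored beneath them on sde2.
mkdir -p /mnt/rootonly
mount --bind / /mnt/rootonly
du -hs /mnt/rootonly/z        # space hidden under the mount points
umount /mnt/rootonly
rmdir /mnt/rootonly

# Deleted-but-still-open files also count toward "Used" until closed:
lsof +L1 | head
```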

Joel

The file system on the system in question (linux-5) is ext4; the other machine (linux-3) is using btrfs. (I’ve stopped using btrfs on new installs.)

I am still trying to determine how much free space I have on sde2: 88 GB, or 377 GB (467 less the ~90 GB I mentioned in my last post)?

I don’t understand why mounting would “hide” the difference. I note that df -h has several columns, including Size, Used, and Avail. I also don’t fully understand the meaning of the entry in the Used column and why that number is not considerably smaller.

What is driving these questions is a new PC, where I wish to set up a very similar configuration, but now with 15.3. The new PC has two drives about the same size as sde on linux-5 (marketed as 480 and 500 GB, respectively).

Given the 81% usage figure for sde2, should I install a 1TB drive to provide some breathing room? Or, can I safely use the ~500 GB drive?

Please note - data will be stored elsewhere; just OS and applications. Therefore, the new installation should not be appreciably larger - perhaps smaller since I plan to locate the VirtualBox vdi files (~75-80 GB) on a separate drive.

On my current desktop, using Leap 15.3, I have 50G assigned to the root file system:


# df /
Filesystem                 1K-blocks     Used Available Use% Mounted on
/dev/mapper/nwr2wdc2-root2  51343840 12996288  35709728  27% /

As you can see, that 50G is more than enough. Yes, I use “ext4”. I have home and swap on different logical volumes, so those are not part of the 50G.

How much space you actually need depends on what you install.

nrickert -

Thank you. In my case, it appears that the file system is larger by a factor of ten (489,134,104 vs. 51,343,840 1K-blocks):

linux-5:~ # df /
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sde2      489134104 373969116  90248612  81% /
linux-5:~ # 

That’s what puzzles me.
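One detail the df output does explain, assuming sde2 is ext4 with default settings: Size, Used, and Avail do not add up, because ext4 reserves about 5% of the blocks for the root user. Working from the numbers above:

```shell
# Size - Used - Avail = blocks reserved for root (ext4 default is ~5%)
awk 'BEGIN {
  size = 489134104; used = 373969116; avail = 90248612   # 1K-blocks from df
  res = size - used - avail
  printf "reserved: %d KB (%.1f%% of the file system)\n", res, 100 * res / size
}'
# → reserved: 24916376 KB (5.1% of the file system)

# The reserved count can be checked (and changed) with tune2fs, e.g.:
#   tune2fs -l /dev/sde2 | grep -i 'reserved block count'
```

So roughly 24 GB of the gap is reserved space, not mystery data; it does not, however, account for the large Used figure itself.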

I have not been following this thread very closely.

Why you are using that much space depends on what you are doing. You mention VirtualBox. I’m using KVM for virtualization, but I have my VM images stored in a different partition (using symbolic links). If I were to put those in the root file system, I would need a lot more space.
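The symbolic-link approach is straightforward; a sketch with illustrative paths (adjust to wherever the VM images actually live, and to the separate drive's mount point):

```shell
# Move the large VM images to a file system on another drive and link
# them back, so the hypervisor still finds them at the original path.
# /data is an illustrative mount point on the separate drive.
mkdir -p /data/vm-images
mv "$HOME/VirtualBox VMs" /data/vm-images/
ln -s "/data/vm-images/VirtualBox VMs" "$HOME/VirtualBox VMs"
```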

Show what you have, preferably as below. That could be more than is actually needed, but it avoids tedious polling for information.

Partitions:

3400G:~ # fdisk -l
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors 
Disk model: ST2000DM001-1CH1 
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 4096 bytes 
I/O size (minimum/optimal): 4096 bytes / 4096 bytes 
Disklabel type: gpt 
Disk identifier: 609FED7F-FC4F-43A7-9D25-44783253D69C 

Device          Start        End    Sectors  Size Type 
/dev/sda1        2048      34815      32768   16M Microsoft reserved 
/dev/sda2    33761280 3770382335 3736621056  1.7T Linux filesystem 
/dev/sda3  3770382336 3770587135     204800  100M EFI System 
/dev/sda4  3770587136 3905975094  135387959 64.6G Microsoft basic data 
/dev/sda5  3905976320 3907026943    1050624  513M Windows recovery environment 


Disk /dev/sdb: 465.76 GiB, 500107862016 bytes, 976773168 sectors 
Disk model: Samsung SSD 850  
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes 
I/O size (minimum/optimal): 512 bytes / 512 bytes 
Disklabel type: gpt 
Disk identifier: BEEDF98F-DA82-4488-A275-F581FA13B9F8 

Device        Start       End   Sectors   Size Type 
/dev/sdb1      2048    206847    204800   100M EFI System 
/dev/sdb2    206848  63121407  62914560    30G Linux filesystem 
/dev/sdb3  63121408 976773134 913651727 435.7G Linux filesystem 


Disk /dev/sdc: 232.89 GiB, 250059350016 bytes, 488397168 sectors 
Disk model: Samsung SSD 850  
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes 
I/O size (minimum/optimal): 512 bytes / 512 bytes 
Disklabel type: gpt 
Disk identifier: 8DDF3D13-6D21-462C-8B35-7F46FD561E45 

Device         Start       End  Sectors  Size Type 
/dev/sdc1       2048    206847   204800  100M EFI System 
/dev/sdc2     206848  63121407 62914560   30G Linux filesystem 
/dev/sdc3   63121408 147007487 83886080   40G Linux filesystem 
/dev/sdc4  147007488 208447487 61440000 29.3G Linux filesystem 
/dev/sdc5  208447488 267042815 58595328 27.9G Linux filesystem 
/dev/sdc6  267042816 329506815 62464000 29.8G Linux filesystem 
/dev/sdc8  329506816 392421375 62914560   30G Linux filesystem 
/dev/sdc9  392421376 455335935 62914560   30G Linux filesystem 
/dev/sdc10 455335936 488396799 33060864 15.8G Linux filesystem 
3400G:~ # 

File systems:

3400G:~ # lsblk -f
NAME    FSTYPE FSVER LABEL     UUID                                 FSAVAIL FSUSE% MOUNTPOINT 
sda                                                                                 
├─sda1                                                                              
├─sda2  ext4   1.0   Home-HDD  08fb3e4e-133d-4b2d-96a0-0a1e0a3381d8  190.8G    84% /HDD 
├─sda3  vfat   FAT32           D2F6-FE4C                                            
├─sda4  ntfs                   B6E420F9E420BE0D                                     
└─sda5  ntfs                   9E66B9EA66B9C377                                     
sdb                                                                                 
├─sdb1  vfat   FAT16           7739-823F                                            
├─sdb2  btrfs        tw-new    10726d74-53da-41e8-a3ed-7af130722783                 
└─sdb3  ext4   1.0   Home      18e63751-b483-4422-b10d-6b896681ee64   51.1G    83% /home 
sdc                                                                                 
├─sdc1  vfat   FAT16           404C-1EC8                              68.5M    31% /boot/efi 
├─sdc2  btrfs        Leap-15.3 4f975ab4-e072-4590-a2cf-69efaa8fa43f                 
├─sdc3  btrfs        TW-Btrfs  2b54b9ff-84c9-4db2-841b-aff657a64325    9.9G    74% / 
├─sdc4  btrfs        Fedora    95a1cc9a-3a30-455d-b3bc-764202e94522                 
├─sdc5  ext4   1.0   Manjaro   172e21e3-9964-4708-9651-e6c52906c58e                 
├─sdc6  btrfs                  5ad46063-a72e-4cac-aff5-10947d8b1357                 
├─sdc8  btrfs        Leap-15.2 36e8edd4-d99e-4e13-a998-8c101d9cba75                 
├─sdc9  xfs          CentOS-8  ba97568e-452b-4a42-a24b-253e087425da                 
└─sdc10 ext4   1.0   Kubuntu   8226cbe9-6836-47a9-a5cd-9dfa223b51fa                 
3400G:~ #

Usage of mounted partitions:

3400G:~ # du -xhd1 -t1M /boot/efi / /home
32M     /boot/efi/EFI
32M     /boot/efi
19M     /etc
152M    /boot
10G     /usr
11G     /
214G    /home/Albums
126G    /home/karl
114M    /home/neu
47M     /home/tester
5.7G    /home/lars
24M     /home/kmistelberger
239M    /home/default
233M    /home/guest
1.3G    /home/erika
547M    /home/jalbum
76M     /home/root
7.7G    /home/charlemagne
355G    /home
3400G:~ #

If you want to know what lives in the various spaces, and how much space it uses, open up ncdu in the / directory and take a “walk” through the directories. The OS need not take much space:

# inxi -Sy
System:
  Host: 00srv Kernel: 5.3.18-lp152.75-default x86_64 bits: 64
  Desktop: KDE 3  Distro: openSUSE Leap 15.2
# parted -l | grep Disk
Disk /dev/sda: 120GB
# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda10       18G  9.3G  7.2G  57%
# inxi -Sy
System:
  Host: g5eas Kernel: 5.3.18-57-default x86_64 bits: 64
  Desktop: Trinity R14.0.10 Distro: openSUSE Leap 15.3
# parted -l | grep Disk
Disk /dev/sda: 250GB
# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda27      6.2G  3.3G  2.7G  56% /

Note the device name on the latter. That disk currently has 23 Linux distro installations, each on a 6.2G filesystem, and 82GB in unallocated disk space for more partitions. Stuff accumulates according to the space available for it to fill. Small disks get backed up and restored faster, thus it’s easier to do backups often enough to risk little.