Mounting 1TB drive as /home on 500GB root system

Leap 15.4 is currently installed on a 500 GB SSD. A 1 TB drive was given the mount point /home during installation. Occupied space on the 1 TB drive appears under the root system, since /home is a subdirectory of /. Does this mean that the available space on the 1 TB drive is constrained to be under 500 GB?
The OS was installed on the 500 GB drive (with Secure Boot) as an encrypted file system formatted Btrfs, using the YaST Partitioner’s ‘Guided Setup’.
The 1 TB drive was pre-formatted with a LUKS-encrypted ext4 file system and mounted as /home with the Partitioner’s ‘Guided Setup’.
How should this have been installed? Does anyone have recommendations on what I should do differently when I get around to installing Leap 15.5? RTFM?

Hi ricaard and welcome to this forum.

I have to admit I have no idea how the encryption could be affecting things, as I don’t use it, but:

Certainly not.

It is not quite clear how you got this result or impression. Can you please run the following in a terminal and provide the output:

df -h

Please use the </> button to format the output, and post the full command together with its full output.
You should see the mount point and size of your /home drive.

equinox:~> df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               4.0M  8.0K  4.0M   1% /dev
tmpfs                   16G  4.0K   16G   1% /dev/shm
tmpfs                  6.3G   26M  6.2G   1% /run
tmpfs                  4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/mapper/cr_root    464G  215G  249G  47% /
/dev/mapper/cr_root    464G  215G  249G  47% /.snapshots
/dev/mapper/cr_root    464G  215G  249G  47% /boot/grub2/i386-pc
/dev/mapper/cr_root    464G  215G  249G  47% /srv
/dev/mapper/cr_root    464G  215G  249G  47% /var
/dev/mapper/cr_root    464G  215G  249G  47% /boot/grub2/x86_64-efi
/dev/mapper/cr_root    464G  215G  249G  47% /tmp
/dev/mapper/cr_root    464G  215G  249G  47% /usr/local
/dev/mapper/cr_root    464G  215G  249G  47% /root
/dev/mapper/cr_root    464G  215G  249G  47% /opt
/dev/nvme0n1p1         511M  5.2M  506M   2% /boot/efi
/dev/mapper/cr-auto-3  916G  497G  374G  58% /home
tmpfs                  3.2G  164K  3.2G   1% /run/user/1000

Interesting: yesterday the Use% column remained at 79% after I MOVED ~100 GB from a subdirectory of / to a subdirectory of /home. This is my first check of df -h since logging in today, and I see that Use% for the root file system on the SSD is now 47%. Does this mean that freed space is not immediately made available on the SSD?

Yes, that is possible. But it depends on the file system you are using. Btrfs in particular behaves a bit “strangely” with freed space. (Strange only at first sight, of course.)
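To see this concretely, here is a sketch (assuming the snapper and btrfs tools that Leap ships; the `command -v` guards simply skip each check where the tool is absent):

```shell
# After deleting or moving files on Btrfs, the old blocks may still be
# referenced by snapshots, so df only drops once those snapshots go away.
# The command -v guards skip each check on systems without the tool.
if command -v snapper >/dev/null 2>&1; then
  snapper list                          # snapshots that can still pin old data
fi
if command -v btrfs >/dev/null 2>&1; then
  sudo btrfs subvolume list / || true   # includes the .snapshots subvolumes
fi
```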

OK. I should read up a bit more on Btrfs, as I know it is somewhat more complicated than the systems I grew up with. But my takeaway is that there is nothing fundamentally wrong or limiting about mounting the large drive at the /home mount point under root.

That’s correct. I must admit that I only vaguely understand the mechanisms affecting the freed space; one of them, for sure, is snapshots.
Here are a couple of threads in the forum that explain a lot. You can use the search function. This should also be useful:

Thanks a lot. This was useful to me. The manuals are quite good, and I should spend time reading more about partitioning. My experience has been with rather simple partitioning schemes. It’s the usual case of not spending the time unless there is a ‘need to know’.

Most welcome. Have a lot of fun! :slightly_smiling_face:

This is all rather vague. Please provide hard facts:

lsblk -f

And no, the fact that the container of / is only 500 GB does not mean that the container of /home is also restricted to 500 GB. They are different things, in different places.

And of course the mount point of /home is inside /. All mount points (except / itself) are inside the directory tree starting at /.

The whole idea of mounting a file system somewhere is to add storage space at that place in the directory tree, beyond the space already present.
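As an illustration with util-linux’s findmnt (the device names mentioned in the comments are the ones from this machine):

```shell
# findmnt resolves which mounted filesystem actually provides the space
# for a given path. On the system above, / resolves to the Btrfs device
# /dev/mapper/cr_root, while /home resolves to the 1 TB ext4 device
# /dev/mapper/cr-auto-3, independent of the 500 GB root.
findmnt --target /
findmnt --target /home
```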

equinox:~> lsblk -f
NAME          FSTYPE      FSVER LABEL  UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda
└─sda1        crypto_LUKS 1            e0cb72fa-9485-4d50-a6cc-afcea3e84b45
  └─cr-auto-3 ext4        1.0   ardisk 71c3582a-9194-48dd-8026-9be217da3b16    373G    54% /home
nvme0n1
├─nvme0n1p1   vfat        FAT32        C257-AB53                             505.8M     1% /boot/efi
├─nvme0n1p2   crypto_LUKS 1            e11908ca-814f-471a-b051-84ca6af8a04d                
│ └─cr_root   btrfs                    c9d9e227-9106-41d9-bf95-e3d4b4fc73eb  248.1G    46% /opt
│                                                                                          /root
│                                                                                          /usr/local
│                                                                                          /tmp
│                                                                                          /boot/grub2/x86_64-efi
│                                                                                          /var
│                                                                                          /srv
│                                                                                          /boot/grub2/i386-pc
│                                                                                          /.snapshots
│                                                                                          /
└─nvme0n1p3   crypto_LUKS 1            0a0b066c-9ade-45f3-b942-f5a933b911f5                
  └─cr_swap   swap        1            45fdc0be-f8ba-4ad4-96c3-3eb9a67696a4                [SWAP]

Makes sense now.
I was thrown off by these output lines from df -h:

Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/cr_root    464G  215G  249G  47% /
/dev/mapper/cr-auto-3  916G  497G  374G  58% /home

After a file move to /home, the Use% of the 1 TB drive containing /home changed as expected, but the Use% of the top level remained at 79%. Today, as seen above, it has dropped to 47% as expected. I assume this delay is due to a combination of Btrfs and the SSD.

When you remove a file on Btrfs, it still stays there in an older snapshot. However, btrfs maintenance periodically cleans this up by removing the oldest snapshots. So the free space eventually shows up, but not immediately after removing a file.


No, “btrfs maintenance” (whatever that means) does not remove any snapshot. Snapshots created by snapper are removed by snapper according to the configured policy. Snapshots created manually (which also includes those created by some other program) are never removed by snapper. It is up to the creator of those snapshots to remove them.

btrfsmaintenance-refresh.service sets up periodic scrub, balance, trim and defragmentation according to the configured policies. None of these is related to snapshot removal.
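For the record, a sketch of where snapper’s removal policy lives (the path and config keys are openSUSE defaults; adjust to your own snapper configs):

```shell
# Snapper prunes its own snapshots according to the cleanup limits in its
# per-config file; nothing in btrfsmaintenance touches snapshots.
if command -v snapper >/dev/null 2>&1; then
  grep -E 'NUMBER_LIMIT|TIMELINE_LIMIT' /etc/snapper/configs/root || true
  sudo snapper cleanup number || true   # prune per the "number" algorithm
fi
```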

Does this mean that you now understand how your mass storage is organized?

I don’t regard ‘understanding’ as a binary quantity. My understanding has increased, at least regarding the availability of space on the 1 TB drive mounted as /home. However, I am not clear about the btrfs maintenance. Does the delay before disk space appears (in df) after files are moved or deleted mean that the space is unavailable until maintenance runs? I obviously need to learn more about file system operation, particularly Btrfs. These files are transient in that they move from a RAM disk to the SSD and then to the mechanical drive mounted as /home. Thank you for checking up.

I mean more the understanding of what disks you have and how they are named by the system (like sda, nvme0n1), how they are partitioned and what the partition names are (BTW, you cannot see in the lsblk output exactly how the partitions are laid out on the disk with respect to place and size; for that you can use fdisk -l).

You can also see what these partitions are used for (what is “on” them), e.g. an ext4 file system or swap space. And you can see how the file systems are put together to create the directory tree (starting at /) of your system’s mass storage as presented to the users.

The df program cannot really determine what is going on inside a Btrfs file system. Thus its output is trustworthy for many file system types (ext2/3/4, ReiserFS, XFS, ZFS, …), but not for Btrfs. Btrfs has its own tools.
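A sketch of the comparison (the btrfs commands need root and a Btrfs mount, hence the guard):

```shell
# Generic df view next to Btrfs's own accounting.
df -h /                                   # can lag or mislead on Btrfs
if command -v btrfs >/dev/null 2>&1; then
  sudo btrfs filesystem df / || true      # per-profile data/metadata/system usage
  sudo btrfs filesystem usage / || true   # allocated vs. unallocated detail
fi
```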

Yes, I misunderstood your question, and the short answer is yes. I relied on the YaST Partitioner to get an overview of how the mass storage is organized.