Btrfs: is it possible to expand a subvolume by mounting additional partitions or HDs to it?

I’m using btrfs for the first time. My question is: if I have a partition mounted on a subvolume, say “@/opt”, and later run out of space, can I expand it by mounting another partition, so that the existing subvolume gets more space while keeping the current data?

I did a quick search, but everyone just keeps talking about how btrfs has snapshots and what it is; no one actually shows how to perform such tasks.

Any links to good how-to docs would also be appreciated.

That is not possible. You may mean something other than what you wrote. Better to show us what you see, so we can help you understand it. E.g.

cat /etc/fstab



What is impossible, growing a partition like with LVM?
I moved @/opt to another partition on another drive; by default @/opt was mounted as a subvolume of the same partition, like @/var currently is.

$ cat /etc/fstab 
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /                       btrfs  defaults                      0  0
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /var                    btrfs  subvol=/@/var                 0  0
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /usr/local              btrfs  subvol=/@/usr/local           0  0
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /srv                    btrfs  subvol=/@/srv                 0  0
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /root                   btrfs  subvol=/@/root                0  0
UUID=78e802c5-11af-4fbf-abae-191d6835fd1d  /home                   ext4   data=ordered                  0  2
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /boot/grub2/x86_64-efi  btrfs  subvol=/@/boot/grub2/x86_64-efi  0  0
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /boot/grub2/i386-pc     btrfs  subvol=/@/boot/grub2/i386-pc  0  0
UUID=18E5-D15F                             /boot/efi               vfat   utf8                          0  2
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /.snapshots             btrfs  subvol=/@/.snapshots          0  0
UUID=ecbf6022-aa55-4879-97f9-60372325f1b1  /opt                    btrfs  subvol=/@/opt                 0  0

UUID=2265eae6-3eac-440a-9250-1ef2983d8a85  /home/MacSSD            ext4   data=ordered                  0 2
UUID=e34098d9-eca0-4c53-9953-3a6b4022d2ba  /home/CR1TB             ext4   data=ordered                  0 2
UUID=9bc93ded-69e8-42d7-83bb-651d0a7026be  /home/SG8TB             ext4   data=ordered                  0 2

$ df
Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/nvme0n1p2  314048512   64004832  248918528  21% /
devtmpfs             4096          4       4092   1% /dev
tmpfs            32879764     345032   32534732   2% /dev/shm
efivarfs              128         44         80  36% /sys/firmware/efi/efivars
tmpfs            13151908       2208   13149700   1% /run
tmpfs            32879768     537876   32341892   2% /tmp
/dev/nvme0n1p2  314048512   64004832  248918528  21% /.snapshots
/dev/nvme0n1p2  314048512   64004832  248918528  21% /boot/grub2/i386-pc
/dev/nvme0n1p2  314048512   64004832  248918528  21% /boot/grub2/x86_64-efi
/dev/nvme0n1p2  314048512   64004832  248918528  21% /root
/dev/nvme0n1p2  314048512   64004832  248918528  21% /srv
/dev/nvme0n1p2  314048512   64004832  248918528  21% /usr/local
/dev/nvme0n1p2  314048512   64004832  248918528  21% /var
/dev/nvme0n1p3  661664452    4238360  655370216   1% /opt
/dev/nvme0n1p1    1046512       5960    1040552   1% /boot/efi
/dev/sda1       983378656  111763680  821588416  12% /home
/dev/sde        240158724   27902848  199983624  13% /home/MacSSD
/dev/sdc1       960302804  479461944  431986400  53% /home/CR1TB
/dev/sdb       7751273220  824083976 6536471532  12% /home/SG8TB
tmpfs             6575952         60    6575892   1% /run/user/1000
/dev/sdh1      1922656748  950742396  874222448  53% /mnt/SG2TB
/dev/sdi1      3844550452 3307585716  341597476  91% /mnt/SG4TB
/dev/sdf1      3844550452  935942912 2713240280  26% /mnt/WD4TB
/dev/sdg1      3844520768 1616988036 2032167060  45% /mnt/Docs4TB

In short, yes.
You can add more devices (or partitions) to an existing btrfs filesystem, which effectively increases your usable space according to your btrfs RAID profile.
See btrfs device --help.

Note that this increase is for all the filesystem’s subvolumes, not just @/opt.
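
As a minimal sketch of what that looks like (the device name /dev/sdX1 is a placeholder, the filesystem is assumed to be mounted at /, and these commands need root and modify your filesystem, so adjust before running):

```shell
# Add a new partition to the btrfs filesystem mounted at /
# (replace /dev/sdX1 with your actual device).
sudo btrfs device add /dev/sdX1 /

# Optionally rebalance so existing data and metadata chunks are
# redistributed across both devices per the filesystem's RAID profile.
sudo btrfs balance start -d -m /

# Verify: both devices now back the single filesystem.
sudo btrfs filesystem show /
sudo btrfs filesystem usage /
```

After the `device add`, all subvolumes of that filesystem share the enlarged pool of space.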


@/opt is a subvolume; it is not “mounted as a subvolume”.

If you look at the /etc/fstab again, you will notice the UUID for /opt is on a different partition on another drive, with another subvolume that I ended up adding an entry for to do the mount.

If you moved opt to another partition, then it is not a subvolume of the root filesystem; the new filesystem simply gets mounted at /opt.
Before, opt was a subvolume of the btrfs root filesystem (/) and was therefore mounted from /@/opt.
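
The difference is visible on the running system; a quick sketch (read-only commands, paths assumed to match the fstab above):

```shell
# List the subvolumes belonging to the root btrfs filesystem;
# @/var, @/srv etc. appear here, but the relocated /opt does not:
sudo btrfs subvolume list /

# Show which filesystem (UUID) each btrfs mount comes from;
# /opt will report a different UUID than / and /var:
findmnt -t btrfs -o TARGET,SOURCE,UUID
```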

Because /opt is no longer a subvolume of the root filesystem, it would no longer be covered by the root Snapper configuration. If you formatted the new partition as btrfs, it may be possible to set up a separate Snapper configuration for it.



I guess the posts above give you an idea of the confusion you created by explaining that you moved a btrfs subvolume outside the btrfs root file system.

While I can understand that one wants e.g. /opt on a separate file system (of whatever type), what you did is quite different.

I do not have much knowledge of btrfs, but what you did, as far as I can see, was to create a new btrfs file system on that other partition, mount it on mount point /opt, and declare the whole of its contents a subvolume of that new file system.

This new btrfs file system has no relation with the original btrfs file system that is mounted on / and has several sub-volumes.
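
This is visible in the fstab quoted earlier: the subvolume mounts all reuse the root filesystem’s UUID, while the relocated /opt carries its own:

```
# One btrfs filesystem, mounted at / and (via subvol=) at /var, /srv, ...
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /     btrfs  defaults       0  0
UUID=0210ae2f-a491-4d7f-ab81-1bc931d456ff  /var  btrfs  subvol=/@/var  0  0

# A second, independent btrfs filesystem: different UUID, no space shared with /
UUID=ecbf6022-aa55-4879-97f9-60372325f1b1  /opt  btrfs  subvol=/@/opt  0  0
```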

Thanks for sharing this. I still find the btrfs system totally confusing; it’s going to take me some time to figure out just the basics.

Thanks, you cleared up one confusion for me about creating a new btrfs file system that is separate from the root. I had assumed btrfs would somehow just merge the two into a unified filesystem after I mounted it.

I ended up giving up on btrfs after learning there is a performance hit, and I couldn’t find good reading material on how to use this filesystem effectively. I was seeing a noticeable delay after logging in, and the trade-off for having snapshots was not worth it for me. So I just went back to ext4.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.