BTRFS root file system size and non-functioning boot from snapshot

Hi folks,

So two questions.

I installed openSUSE 13.2 with the recommended BTRFS for the root partition. The installer's default recommendation also set the root partition to 20 GB.

Note my Home partition is 210 GB (the remainder of the disk) and I went with EXT4.

I also changed the default Grub boot settings to install Grub2 to the MBR not on the root partition.

Now, I have not had any success booting from one of the BTRFS snapshots. When I choose this option in the Grub menu, select a specific snapshot and press Enter, nothing happens. Is this because I have Grub in the MBR rather than on the / partition?

Secondly, is 20 GB too small for BTRFS snapshots? How many snapshots does Snapper keep in total? Can I reduce the number of snapshots?

Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0006a4b6

Device     Boot    Start       End   Sectors   Size Id Type
/dev/sda1           2048   4208639   4206592     2G 82 Linux swap / Solaris
/dev/sda2  *     4208640  46153727  41945088    20G 83 Linux
/dev/sda3       46153728 488396799 442243072 210.9G 83 Linux

Disk is SSD (Samsung evo 840)

Thanks in advance.

With btrfs on the root partition (/), Snapper is installed by default and provides snapshot management via YaST or the command line. Sounds as if you need some reading material, so try this:

Well, 20 GB may be too small, definitely if, for example, you run Tumbleweed and keep the default settings, since pre/post snapshots are automatically taken for changes made via YaST and zypper. Hourly (timeline) snapshots of the root file system are also taken by default.

Even on the standard release the default level of snapshotting is aggressive and consumes increasing amounts of disk space, depending on retention time.

Daily “Cleanup” (by automatic cron job) will reduce the snapshots held, depending on the control file at /etc/snapper/configs/root. IMO you will need to reduce the default settings quite drastically to avoid filling your 20 GB. You can also remove snapshots manually using snapper commands at any time. If running out of space, get rid of the oldest snapshots first (they’re usually the larger ones).
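For the manual route, a quick sketch (run as root; the snapshot numbers here are just made-up examples):

```shell
# Illustrative snapper session (requires root):
snapper list          # show all snapshots with their numbers
snapper delete 42     # delete snapshot number 42
snapper delete 40-45  # snapper also accepts a number range
```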

Yes, you can edit that file to reduce the default settings, which in my experience are not aggressive enough to keep within 20 GB, certainly not on 12.3 and 13.1 (both Tumbleweed, the worst case). They needed 25-30 GB to cover the big updates over time, even with around a 50% reduction in the default config (root). For the new Factory-based Tumbleweed I have so far had to prune snapshots down to keeping 30 for pre/post. For timeline snapshots it now retains only the most recent 2 hourly, 5 daily, 3 monthly, and 0 yearly. That’s now down to just under 20 GB used of a 35 GB partition, which should allow for the largest update (around 2000 packages), but it’s very early days to be certain of it!
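For reference, here is a sketch of the relevant part of /etc/snapper/configs/root. The key names are real snapper options; the values simply mirror the retention I described above:

```shell
# Keep at most 30 number (pre/post) snapshots, cleaned by the daily job:
NUMBER_CLEANUP="yes"
NUMBER_LIMIT="30"
# Timeline snapshots: keep 2 hourly, 5 daily, 3 monthly, 0 yearly:
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="2"
TIMELINE_LIMIT_DAILY="5"
TIMELINE_LIMIT_MONTHLY="3"
TIMELINE_LIMIT_YEARLY="0"
```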

Sorry, someone else will need to comment on SSD-specific recommendations. I don’t have one, yet. :)

No, it should not matter. Please give more details about what happens when you try to boot into a snapshot: what error messages, at which stage, etc.

  1. The old df does not work on btrfs; it fails to account for the space used by filesystem metadata.
    You need to use:
    btrfs fi df <path>
    in this case: btrfs fi df /
    Or just:
    btrfs fi show /

which prints:
Label: none uuid: xxxx-xxx-xxx-xxx
Total devices 1 FS bytes used 14.00GiB
devid 1 size 20.00GiB used 20.00GiB path /dev/mapper/system-root

Which tells you the disk is full: all 20 GiB of the device are allocated.

To make room, deleting normal files might not help. Most likely the filesystem snapshots have filled up the disk.
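If you want to script that check, a small sketch (it assumes the `btrfs fi show` output format above; `check_full` is a made-up helper name of mine):

```shell
# Parse the "devid" line of `btrfs fi show` output and report whether
# the whole device is already allocated ($4 = size, $6 = used):
check_full() {
    awk '/devid/ {
        if ($4 == $6) print "FULL: all " $4 " allocated"
        else          print "OK: " $6 " of " $4 " allocated"
    }'
}

# Usage idea: btrfs fi show / | check_full
```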

  2. Check your snapshots:
    btrfs subvolume list /

ID 257 gen 4959678 top level 5 path opt
ID 258 gen 4959651 top level 5 path srv
ID 259 gen 4959726 top level 5 path tmp
ID 260 gen 4959642 top level 5 path usr/local
ID 261 gen 4959642 top level 5 path var/crash
ID 262 gen 4959642 top level 5 path var/lib/mailman
ID 263 gen 4959642 top level 5 path var/lib/named
ID 264 gen 4959642 top level 5 path var/lib/pgsql
ID 265 gen 4959726 top level 5 path var/log
ID 266 gen 4959642 top level 5 path var/opt
ID 267 gen 4959726 top level 5 path var/spool
ID 268 gen 4959723 top level 5 path var/tmp
ID 274 gen 4959700 top level 5 path .snapshots
ID 561 gen 4959687 top level 274 path .snapshots/260/snapshot
ID 788 gen 4959687 top level 274 path .snapshots/396/snapshot
ID 789 gen 4959687 top level 274 path .snapshots/397/snapshot
ID 790 gen 4959687 top level 274 path .snapshots/398/snapshot
ID 815 gen 4959698 top level 274 path .snapshots/402/snapshot
ID 816 gen 4959699 top level 274 path .snapshots/403/snapshot

  3. Now delete useless snapshots with
    btrfs subvolume delete -c /.snapshots/398/snapshot

    Repeat that, deleting one at a time, or you can use wildcards: btrfs subvolume delete -c /.snapshots/39*/snapshot

Leave the ones you might need (I usually keep the oldest and the 2 latest; the latest is the one most likely to be corrupted).
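To pick out just the snapshot subvolumes from a listing like the one above, something like this works (a sketch assuming that output format; `list_snapshot_paths` is my own helper name):

```shell
# Print .snapshots/NNN/snapshot paths from `btrfs subvolume list`
# output, sorted oldest-first by snapshot number:
list_snapshot_paths() {
    awk '/\.snapshots\/[0-9]+\/snapshot$/ { print $NF }' | sort -t/ -k2 -n
}

# Usage idea (run as root; deletes all but the oldest and two newest):
#   btrfs subvolume list / | list_snapshot_paths | sed '1d' | head -n -2 \
#     | while read -r p; do btrfs subvolume delete -c "/$p"; done
```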
After that, run:
btrfs fi sync /

and lastly

btrfs balance start /

NB: this might take 15 minutes or more; let it run.

Now check that there is free space:

btrfs fi show /
Label: none uuid: f8a54449-f368-489d-b60c-f22fd9490ce8
Total devices 1 FS bytes used 14.00GiB
devid 1 size 20.00GiB used 16.31GiB path /dev/mapper/system-root

You might need to reboot to get the space freed.