Tumbleweed, snapshots prevent booting (by using all disk space)

I’m running Tumbleweed (alongside Mageia) with limited disk space. Tumbleweed uses snapshots, which can take up huge amounts of disk space, and when I update Tumbleweed I have at times found the system unbootable because there is no space left on the partition. Previously I have worked around this by repartitioning to free up the needed disk space and then using YaST to remove the unwanted snapshots.
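
For anyone hitting the same thing, a quick way to see where the space has gone once the system is reachable (a sketch assuming the default openSUSE btrfs layout, with the snapshots under /.snapshots):

# overall allocation on the btrfs root filesystem
btrfs filesystem usage /
# snapshots snapper currently knows about
snapper -c root list
# rough per-snapshot usage, shared vs. exclusive data (this can take a while)
btrfs filesystem du -s /.snapshots/*/snapshot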

My questions are:
1. How do I properly delete the snapshots (without using YaST) from a separate Linux OS?
2. Snapshots do not work for me, so any clues on limiting the number of snapshots would be useful.

My questions are:
1. How do I properly delete the snapshots (without using YaST) from a separate Linux OS?

Well, from what I can tell it looks like I can’t; it needs to be done from the (currently non-bootable) OS itself. If that is correct, I need to find enough other deletable material on the root partition, or resize partitions to get the space on the root partition that I need to boot and fix the problem.
Interestingly, however, gparted finds a problem with the home partition and refuses to move it, yet I can access all the data on that partition from Mageia.

Hi
You can change the number configuration in /etc/snapper/configs/root. Also, with Tumbleweed, make sure you manually run the daily cronjob /etc/cron.daily/suse.de-snapper, and check that the btrfs maintenance cron jobs are present, since it is not just the snapshots but also space recovery:


systemctl start btrfsmaintenance-refresh.service
systemctl status btrfsmaintenance-refresh.service

Then look at running the weekly cronjob /etc/cron.weekly/btrfs-balance.

Since numerous Tumbleweed snapshots can be released in quick succession, the only real option is to run the cronjob manually; it may need a couple of runs to clean everything up and can also take some time.
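
The number limits themselves are ordinary key/value settings in /etc/snapper/configs/root. As a rough sketch (the values below are examples, not recommendations, and set-config needs a reasonably recent snapper):

# excerpt of /etc/snapper/configs/root with example values
NUMBER_CLEANUP="yes"            # enable number-based cleanup
NUMBER_LIMIT="2-10"             # keep between 2 and 10 regular snapshots
NUMBER_LIMIT_IMPORTANT="4-10"   # keep between 4 and 10 "important" snapshots

# the same keys can be changed without editing the file
snapper -c root set-config "NUMBER_LIMIT=2-10" "NUMBER_LIMIT_IMPORTANT=4-10"

# then run the cleanup and maintenance jobs by hand rather than waiting for cron
snapper -c root cleanup number   # roughly what the daily cronjob does
/etc/cron.weekly/btrfs-balance   # reclaims the space afterwards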

Thank you for the informative reply. When I get the system up and running again I will follow your recommendations; I am sure they will help to minimize the main issue.

Thanks again!

It’s always possible to delete btrfs snapshots manually (snapper creates them below …/.snapshots). It won’t update the snapper configuration, so you may end up with some “dangling” references. I am not sure whether snapper offers an option to “reconcile” its config in this case, but it can always be done by hand, and it should in any case be better than a non-booting system.
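
For the record, a rough sketch of doing it from another Linux installation; the device name, the @/.snapshots layout and the snapshot number 42 are assumptions based on a default openSUSE install, so check with subvolume list (and get-default) before deleting anything:

# mount the top of the btrfs filesystem (subvolid=5), not the default subvolume
mount -o subvolid=5 /dev/sda2 /mnt
# see what is actually there, and note which subvolume is the default (do not delete that one)
btrfs subvolume list /mnt
btrfs subvolume get-default /mnt
# each snapper snapshot is a subvolume under .../.snapshots/<number>/snapshot
btrfs subvolume delete /mnt/@/.snapshots/42/snapshot
# remove the leftover directory so snapper no longer has a dangling reference to snapshot 42
rm -r /mnt/@/.snapshots/42
umount /mnt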

Thanks for your reply. I have since deleted enough log files and so on to boot the system to a point where I could open a terminal and run the recommended commands for deleting the snapshots from there:

snapper -c root list
snapper -c root delete #
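
Snapper also accepts number ranges, so the deletions do not have to be done one at a time; 5-26 below is only an example taken from the list output, and --sync (in newer snapper versions) makes it wait until the space is actually released:

# delete snapshots 5 through 26 in one go and free the space immediately
snapper -c root delete --sync 5-26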

It is interesting that there were 27 snapshots, and over 14 GB of space was freed using the delete command above; that is on a 46 GB partition.
At least for the root partition I will probably eventually go back to ext4.

Thanks again to both who replied for your help.

Interestingly, however, gparted finds a problem with the home partition and refuses to move it, yet I can access all the data on that partition

Another interesting thing about this: gparted reported irreparable file system errors on the “/home” partition (sda3) and would not allow me to resize or move it so that I could resize “/” (sda2). Yet once I had made room on “/”, I resized “/” (leaving free space between sda2 and sda3) and the problem disappeared.

I left this issue alone until I had the other one fixed, and was surprised to find it had fixed itself.

I have not seen this before and wonder whether it is a quirk of btrfs or of how gparted handles btrfs.

It would be interesting to know the likely cause of this.