Since I installed openSUSE Leap 42.1, my btrfs root partition has been inching toward its limit of 22 GB with all the updates that have been raining in since then! I am now getting warnings about the lack of available space (19.8 GB of 22 GB used).
Who can tell me what snapper configurations I can ditch and what to keep? Some of them are marked ‘important = yes’, so I have kept them, but I suspect that these are the ones taking disk space.
What does ‘important’ mean in this context? The bit of documentation I found wasn’t really helpful (e.g. it says I should get rid of the early ones, but they are all ‘important’). Am I right in thinking that cleanup is done ‘automatically’ (i.e. as part of the openSUSE Leap configuration) at boot or shutdown, so that I don’t need to change anything there?
PS: I asked this question in the applications forum, but got no answer there, so I have opened this thread here. This seems to me to be a subject that might drop through the cracks, so forum admin should delete this thread here, if this doesn’t belong here.
Here is my snapper list for info
snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+-----+-------+------------------------------+------+---------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Sun 08 Nov 2015 00:41:40 CET | root | | first root filesystem |
pre | 290 | | Wed 02 Dec 2015 11:38:55 CET | root | number | zypp(y2base) | important=yes
post | 291 | 290 | Wed 02 Dec 2015 11:40:21 CET | root | number | | important=yes
pre | 320 | | Wed 09 Dec 2015 10:19:11 CET | root | number | zypp(y2base) | important=yes
post | 321 | 320 | Wed 09 Dec 2015 10:28:00 CET | root | number | | important=yes
pre | 390 | | Mon 28 Dec 2015 18:09:36 CET | root | number | zypp(y2base) | important=yes
post | 391 | 390 | Mon 28 Dec 2015 18:11:17 CET | root | number | | important=yes
pre | 550 | | Sat 30 Jan 2016 09:23:55 CET | root | number | zypp(y2base) | important=yes
post | 551 | 550 | Sat 30 Jan 2016 09:28:00 CET | root | number | | important=yes
pre | 628 | | Thu 18 Feb 2016 07:56:42 CET | root | number | zypp(y2base) | important=yes
post | 629 | 628 | Thu 18 Feb 2016 08:02:57 CET | root | number | | important=yes
pre | 639 | | Tue 23 Feb 2016 14:52:00 CET | root | number | yast sw_single |
pre | 640 | | Tue 23 Feb 2016 14:52:42 CET | root | number | zypp(y2base) | important=no
post | 641 | 640 | Tue 23 Feb 2016 14:54:24 CET | root | number | | important=no
post | 642 | 639 | Tue 23 Feb 2016 14:54:34 CET | root | number | |
pre | 643 | | Thu 25 Feb 2016 10:54:05 CET | root | number | yast sw_single |
pre | 644 | | Thu 25 Feb 2016 10:55:14 CET | root | number | zypp(y2base) | important=no
post | 645 | 644 | Thu 25 Feb 2016 11:03:04 CET | root | number | | important=no
post | 646 | 643 | Thu 25 Feb 2016 11:06:13 CET | root | number | |
pre | 647 | | Fri 26 Feb 2016 10:24:47 CET | root | number | yast sw_single |
pre | 648 | | Fri 26 Feb 2016 10:25:49 CET | root | number | zypp(y2base) | important=no
post | 649 | 648 | Fri 26 Feb 2016 10:27:04 CET | root | number | | important=no
post | 650 | 647 | Fri 26 Feb 2016 10:27:56 CET | root | number | |
pre | 651 | | Sat 27 Feb 2016 19:34:28 CET | root | number | yast snapper |
post | 652 | 651 | Sat 27 Feb 2016 19:35:30 CET | root | number | |
Really? Last time I asked (openSUSE 13.1 or 13.2), some said that my 12 GB was too little and told me that 20 GB was minimum. I thought I was being generous in allocating 22.2 GB this time!
What should I turn off? Do you mean ‘ditch BTRFS for, say, ext4’? Or do you mean ‘ditch all the snapshots and deinstall snapper so that no snapshots are made at all’? I’m guessing you mean the latter, so I wonder what the recommended size of the root partition would be with just btrfs and no snapshots?
Hi
Nooooo, the default config is not that big, unless you’re busy installing one package at a time over a whole day… By default Leap sets the limit to 10 with number cleanup only. I suggest 2 and 1 in /etc/snapper/configs/root for a 20 GB partition if you want to use it, else turn it off; or, if you do multiple package installs, manually run the daily cleanup cron job /etc/cron.daily/suse.de-snapper.
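For reference, the relevant lines in /etc/snapper/configs/root would then look something like this (the variable names are the stock snapper ones; the values shown are the 2 and 1 suggested above):

```
# /etc/snapper/configs/root (excerpt)
NUMBER_CLEANUP="yes"          # enable the "number" cleanup algorithm
NUMBER_LIMIT="2"              # ordinary snapshots to keep
NUMBER_LIMIT_IMPORTANT="1"    # important=yes snapshots to keep
```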
Note: I run virtual machines with BtrFS and snapper enabled with just 10-12 GiB for the root partition and they work just fine even after many update rounds.
Hi
The problem is that things have changed a lot in the three-plus years since the article you linked to was written. The default these days is only number snapshots, keeping the last 10; I tend to use 4 and 2. Plus there is the daily cron job (suse.de-snapper) and the weekly cron job btrfs-balance.sh (and btrfs-trim.sh for SSD users). Note that on Tumbleweed this doesn’t seem to be linked to the btrfs maintenance tools at present, so it needs to be created manually. If you configure snapper, then manually run the daily job and then the balance one, things should come right.
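To make the earlier ‘important’ question concrete, here is a toy sketch (plain shell, made-up snapshot numbers, not snapper’s actual code) of what the number cleanup does: ordinary and important=yes snapshots are counted against separate limits, and only the newest of each kind survive.

```shell
#!/bin/sh
# Toy illustration of snapper's "number" cleanup policy (not snapper itself):
# keep the newest NUMBER_LIMIT ordinary snapshots and, separately, the
# newest NUMBER_LIMIT_IMPORTANT snapshots marked important=yes.
NUMBER_LIMIT=2
NUMBER_LIMIT_IMPORTANT=1

# "number:importance" pairs, oldest first (made-up data)
snapshots="290:yes 320:yes 640:no 644:no 648:no"

important=""
ordinary=""
for s in $snapshots; do
    case $s in
        *:yes) important="$important ${s%%:*}" ;;
        *)     ordinary="$ordinary ${s%%:*}" ;;
    esac
done

# keep_tail N "list": the last N items of a space-separated list
keep_tail() {
    echo $2 | tr ' ' '\n' | tail -n "$1" | xargs
}

echo "keep important: $(keep_tail "$NUMBER_LIMIT_IMPORTANT" "$important")"
echo "keep ordinary:  $(keep_tail "$NUMBER_LIMIT" "$ordinary")"
```

With the limits above, only important snapshot 320 and ordinary snapshots 644 and 648 would be kept; everything older gets deleted.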
–
Cheers Malcolm °¿° LFCS, SUSE Knowledge Partner (Linux Counter #276890)
SUSE Linux Enterprise Desktop 12 SP1|GNOME 3.10.4|3.12.53-60.30-default
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!
But now, I seem to have screwed up snapper, having deleted all the snapshots (0-999). On starting snapper and clicking on ‘create’ a new snapshot, I enter a description and click on OK. Then comes the error: “Failed to create new snapshot: .snapshots is not a btrfs subvolume”. ‘Current configuration’ is ‘root’.
How can I ‘reactivate’ snapper?
Hi
So do the snapper config file and /.snapshots exist now?
/etc/snapper/configs/root
drwxr-x--- 1 root root 70 Mar 1 20:15 .snapshots
Yes, both exist: the ‘root’ file (which seems to be identical to the default in the /config-templates) and the /.snapshots directory (empty), with the following rights
If you haven’t done any BTRFS operations outside of snapper, I don’t know that you can actually damage how BTRFS snapshots are created.
You need to post your exact command creating a snapshot, either your command line (best) or a screenshot of the GUI tool. More than likely your syntax or a value is incorrect.
So it seems that the entry in fstab disappeared some time around
snapper delete 0-999
and
snapper cleanup number
After adding an fstab entry and mounting /.snapshots, I no longer have an empty snapper list.
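For anyone hitting the same problem, the fstab entry looks something like this (the UUID is a placeholder for your own root filesystem’s UUID, e.g. from blkid, and the subvolume path assumes the stock openSUSE layout with an @ prefix):

```
# /etc/fstab (excerpt) -- UUID and subvolume path are placeholders
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /.snapshots  btrfs  subvol=@/.snapshots  0 0
```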
snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------------------------------+------+---------+-----------------------+---------
single | 0 | | | root | | current |
single | 1 | | Sun 08 Nov 2015 00:41:40 CET | root | | first root filesystem |
pre | 2 | | Fri 04 Mar 2016 17:15:43 CET | root | number | yast snapper |
post | 3 | 2 | Fri 04 Mar 2016 17:17:18 CET | root | number | |
It looks like the problem is solved. It now remains to decide how many of which kind of snapshot to keep (in order to avoid reaching the 22 GB limit again).
22 GB is really too small, but if you are very conservative and don’t do lots and lots of updates the way Tumbleweed does, it is probably doable. The actual space used depends on the changes to the files being managed, so it is hard to guess usage.
Great stuff! Did all that. NUMBER_LIMIT had been 50 by default and NUMBER_LIMIT_IMPORTANT was 10, so I have reduced them. The timeline section had also been set to “yes”.
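For the record, assuming the 2 and 1 values suggested earlier in the thread, the changed lines in /etc/snapper/configs/root now read roughly:

```
# /etc/snapper/configs/root (excerpt) -- after the change
NUMBER_LIMIT="2"              # default was 50
NUMBER_LIMIT_IMPORTANT="1"    # default was 10
TIMELINE_CREATE="no"          # had been "yes"
```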
Let’s see what happens