Snapper Policy

I don’t believe Snapper is following the policies I laid out in /etc/snapper/configs/root. Am I mistaken in believing there are more snapshots than the configuration file should allow?

snapper -c root list:

Type   | #  | Pre # | Date                            | User | Cleanup | Description           | Userdata
-------+----+-------+---------------------------------+------+---------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Tue 06 Jun 2017 07:44:27 PM CDT | root | | first root filesystem |
single | 2 | | Tue 06 Jun 2017 07:59:46 PM CDT | root | number | after installation | important=yes
pre | 59 | | Thu 08 Jun 2017 01:59:57 PM CDT | root | number | zypp(zypper) | important=yes
post | 60 | 59 | Thu 08 Jun 2017 02:01:33 PM CDT | root | number | | important=yes
pre | 61 | | Thu 08 Jun 2017 04:16:51 PM CDT | root | number | yast sw_single |
pre | 62 | | Thu 08 Jun 2017 04:17:42 PM CDT | root | number | zypp(y2base) | important=no
post | 63 | 62 | Thu 08 Jun 2017 04:17:45 PM CDT | root | number | | important=no
post | 64 | 61 | Thu 08 Jun 2017 04:17:48 PM CDT | root | number | |
pre | 65 | | Thu 08 Jun 2017 05:46:30 PM CDT | root | number | yast sw_single |
pre | 66 | | Thu 08 Jun 2017 05:49:09 PM CDT | root | number | zypp(y2base) | important=no
post | 67 | 66 | Thu 08 Jun 2017 05:49:15 PM CDT | root | number | | important=no
post | 68 | 65 | Thu 08 Jun 2017 05:49:21 PM CDT | root | number | |
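One thing worth knowing here: the number cleanup that NUMBER_LIMIT governs is not applied at snapshot creation time. On a stock openSUSE install it runs from the daily cron job (or the snapper-cleanup timer), so numbered snapshots can pile up past the limit until that job fires. A minimal sketch to trigger the cleanup by hand (assumes snapper is installed with a “root” config; the guard keeps it harmless elsewhere):

```shell
#!/bin/sh
# Run the number-based cleanup manually instead of waiting for the
# daily cron job / snapper-cleanup timer (assumed default setup).
if command -v snapper >/dev/null 2>&1; then
    snapper -c root cleanup number   # prune per NUMBER_LIMIT*
    snapper -c root list             # re-check what survived
else
    echo "snapper not installed; nothing to clean"
fi
```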

/etc/snapper/configs/root:

# create hourly snapshots
TIMELINE_CREATE="no"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="0"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="7"
TIMELINE_LIMIT_MONTHLY="7"
TIMELINE_LIMIT_YEARLY="7"

# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="2-7"
NUMBER_LIMIT_IMPORTANT="2-7"
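A note on the "2-7" values: as I read the snapper-configs man page, a min-max range behaves differently from a single number. With quota support snapper may keep up to the max and only reduces toward the min when the snapshots use too much space; a plain number is a hard limit. A tiny illustrative sketch of splitting such a range (the helper logic is mine, not snapper’s):

```shell
#!/bin/sh
# Split a snapper NUMBER_LIMIT range like "2-7" into min and max
# (a single number would mean min = max).
limit="2-7"
min=${limit%%-*}
max=${limit##*-}
echo "keep at least $min, at most $max numbered snapshots"
```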

It would be handy if you posted the “snapper -c root list” output between CODE tags so the table is more readable.

On the cleanup policy, see “man snapper-configs” and have a look at what is written under NUMBER_LIMIT_IMPORTANT.
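The point there, as I understand it, is that snapshots tagged important=yes count against NUMBER_LIMIT_IMPORTANT, a pool separate from NUMBER_LIMIT, so the two limits add up. A rough sketch of counting the two pools from the Userdata column of the list above (the sample lines are condensed from that listing; the counting logic is illustrative, not snapper’s own):

```shell
#!/bin/sh
# Count numbered snapshots per pool from `snapper list`-style rows.
# Rows condensed from the listing in this thread (numbered-cleanup
# candidates only, i.e. snapshots with Cleanup = number).
list='single | 2 | important=yes
pre | 59 | important=yes
post | 60 | important=yes
pre | 61 |
pre | 62 | important=no
post | 63 | important=no
post | 64 |
pre | 65 |
pre | 66 | important=no
post | 67 | important=no
post | 68 |'
imp=$(printf '%s\n' "$list" | grep -c 'important=yes')
reg=$(printf '%s\n' "$list" | grep -vc 'important=yes')
echo "important pool: $imp (vs NUMBER_LIMIT_IMPORTANT=2-7)"
echo "regular pool: $reg (vs NUMBER_LIMIT=2-7)"
# → important pool: 3, regular pool: 8
```

The regular pool sitting one over its max of 7 would be consistent with the cleanup job simply not having run yet.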

Snapper is a serious problem in 42.3. It creates snapshots until the / partition is full. This happened after three complete reinstallations using default values during partitioning (40 GB root partition) and the default snapper configuration. This time, I used a 500 GB root partition and reconfigured snapper as follows:


Key                    | Value
-----------------------+------
ALLOW_GROUPS           |      
ALLOW_USERS            |      
BACKGROUND_COMPARISON  | yes  
EMPTY_PRE_POST_CLEANUP | yes  
EMPTY_PRE_POST_MIN_AGE | 1800 
FSTYPE                 | btrfs
NUMBER_CLEANUP         | yes  
NUMBER_LIMIT           | 1-5  
NUMBER_LIMIT_IMPORTANT | 2-6  
NUMBER_MIN_AGE         | 1800 
QGROUP                 | 1/0  
SPACE_LIMIT            | 0.1  
SUBVOLUME              | /    
SYNC_ACL               | no   
TIMELINE_CLEANUP       | yes  
TIMELINE_CREATE        | no   
TIMELINE_LIMIT_DAILY   | 5    
TIMELINE_LIMIT_HOURLY  | 5    
TIMELINE_LIMIT_MONTHLY | 5    
TIMELINE_LIMIT_WEEKLY  | 0    
TIMELINE_LIMIT_YEARLY  | 5    
TIMELINE_MIN_AGE       | 1800 

My root partition is now using 64.7 GB with 138 snapshots. No wonder my last three installations ran out of space on the root partition within a day of installation!
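To see where that 64.7 GB actually sits, the qgroup snapper uses (QGROUP=1/0 in the config above) can be inspected directly; the exclusive column is roughly what deleting a snapshot would free. A sketch, assuming a btrfs root with quota enabled (guarded so it is harmless on other setups):

```shell
#!/bin/sh
# Inspect per-subvolume space on a btrfs root with quota enabled.
# In the qgroup output, the exclusive column is the space a
# snapshot holds on its own, i.e. what deleting it would free.
if command -v btrfs >/dev/null 2>&1 && [ -d /.snapshots ]; then
    btrfs qgroup show -p /    # per-snapshot referenced/exclusive
    btrfs filesystem df /     # overall allocation
else
    echo "no btrfs snapshot setup found"
fi
```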

How do I get snapper under control without disabling it completely? This didn’t happen with Leap 42.2.

OK, I removed all but the first two snapshots (“first root filesystem” and “after installation”) and am now using a much more reasonable 14.2 GB of the root partition. Now I need to watch the creation of new snapshots.
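For anyone repeating this, snapper accepts a number range in a single delete command, which beats removing 130-odd snapshots one by one. A sketch (the range 3-138 is illustrative; check your own list first, and note snapshot 0, “current”, cannot be deleted):

```shell
#!/bin/sh
# Bulk-delete numbered snapshots, keeping 0 ("current"), 1 and 2.
# Adjust the range to match your own `snapper -c root list` output.
if command -v snapper >/dev/null 2>&1; then
    snapper -c root delete 3-138
    snapper -c root list
else
    echo "snapper not installed"
fi
```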

Well, by the end of the day the usage of the root partition kept climbing. I don’t know why, and I’ve reached the point where I don’t much care. The installation defaults for either snapper or partitioning (or both) are insane, at least for a home user.

Perhaps it will be corrected for the final release (I know these kinds of things can happen in a beta or release candidate). For now, I’ve reinstalled using ext4 on root. I’m not convinced that btrfs/snapper are ready for prime time.