The snapshots seem to fill up my root directory. Can I delete them?
My Tumbleweed root partition “/” is 25 GB, which I thought was plenty since my home partition is separate. I believe only about half of it was used after installation a few months ago. Now it is nearly full.
Hi
Since you’re using btrfs, you need to configure snapshots (Tumbleweed tends to fill up since updates are frequent), then run the relevant cron jobs to clean up snapshots made by snapper…
Also, df is not btrfs-friendly, so you need to use the btrfs tools.
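For reference, these are the btrfs-native space views (a sketch; the `btrfs_space_report` helper is made up here, the two commands themselves are standard btrfs-progs and need root):

```shell
# `df` does not understand btrfs space accounting (shared extents,
# data/metadata split), so use the btrfs-native commands instead.
btrfs_space_report() {
  # hypothetical helper: print both views for a given mount point
  sudo btrfs filesystem usage "$1"   # overall allocation and free space
  sudo btrfs filesystem df "$1"      # per-block-group (data/metadata) view
}
# usage: btrfs_space_report /
```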
First, configure snapper by editing /etc/snapper/configs/root. I use:
# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="2-3"
NUMBER_LIMIT_IMPORTANT="3-4"
It seems to me the configs are nearly in line with your suggestion (see below); furthermore, disk usage is limited to 0.5 of the filesystem space (this should be around 12.5 GB in my case). Also, suse.de-snapper exists in /etc/cron.daily and btrfs-balance in /etc/cron.weekly.
However, crontab -l currently gives me
# crontab -l
no crontab for root
So, do I understand this correctly: the cron jobs are prepared in /etc, but they have never been activated on my system (I have not yet run your suggested /etc/cron… commands)?
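A quick way to check that those system jobs are actually installed (a sketch; the `check_cron_jobs` helper is made up, the paths are the ones mentioned in this thread):

```shell
# The snapper cleanup jobs are system cron scripts, not per-user crontab
# entries, so `crontab -l` stays empty even when they are active.
check_cron_jobs() {
  for job in "$@"; do
    if [ -x "$job" ]; then
      echo "$job: installed"
    else
      echo "$job: missing or not executable"
    fi
  done
}
check_cron_jobs /etc/cron.daily/suse.de-snapper /etc/cron.weekly/btrfs-balance
```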
Here’s the complete /etc/snapper/configs/root:
# subvolume to snapshot
SUBVOLUME="/"
# filesystem type
FSTYPE="btrfs"
# btrfs qgroup for space aware cleanup algorithms
QGROUP="1/0"
# fraction of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"
# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""
# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"
# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"
# run daily number cleanup
NUMBER_CLEANUP="yes"
# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="2-10"
NUMBER_LIMIT_IMPORTANT="4-10"
# create hourly snapshots
TIMELINE_CREATE="no"
# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"
# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"
# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"
Hi
Not sure, but my understanding is that the number/timeline limits may override the space limit… anyway, if you have 10 configured, it will keep that many until the minimum age passes.
How many snapshots do you have?
snapper list
Run the jobs with your config and check;
btrfs fi usage /
/etc/cron.daily/suse.de-snapper
/etc/cron.weekly/btrfs-balance
snapper list
btrfs fi usage /
If you want to leave it as is, you still need to run the cron jobs to clean up. They are system jobs under /etc, so crontab -l will be empty. If your computer is off when the daily job is due, it should run 15 minutes after you turn it on; likewise if you miss the weekly cleanup. Because of the nature of Tumbleweed (last week alone there were multiple updates), space will fill up. Since you’re only running a minimal partition (40 GB is recommended), you do need to wind the limits back, or disable snapper completely if you don’t need or want it.
Each snapshot is a logical representation of the whole filesystem at the time the snapshot was taken; if you have 10 snapshots, you see 10 times the size of your root filesystem. Most of this size is shared data that physically exists just once but is accounted to each snapshot.
No, you do not. These cron jobs are not configured in a per-user crontab, so they are not shown by “crontab -l”. They are still active by default once installed.
OK. So I waited three months, hoping the new configuration with a maximum of 6 snapshots instead of 10 would improve things. However, things got even worse. I now have:
btrfs fi du -s .
Total Exclusive Set shared Filename
87.04GiB 6.29GiB 10.79GiB .
Perhaps the new configuration never got used?
snapper list
gives me 10 snapshots plus the current one.
Do I have to stop the old cron jobs and start them anew? But how can I do that if they are not “per-user jobs”, as arvidjaar said, and they don’t show up in
crontab -l
?
As
btrfs fi usage /
/etc/cron.daily/suse.de-snapper
/etc/cron.weekly/btrfs-balance
snapper list
btrfs fi usage /
does not seem to change anything either, I am back to my original question:
Can I delete the older subdirectories in /.snapshots without causing problems for the stability of my system? (Actually, my system has not been running very smoothly since November: the kernel is fine, but the Plasma 5 desktop only shows a black background, no taskbar, no menu, etc.)
DO NOT delete anything in /.snapshots directly. You will break the file system. Only use btrfs and/or snapper commands to manage snapshots.
It looks like you have a rather small partition, so it is best to just remove all the snapshots you can and turn snapshots off. Or reinstall with at least a 40 GB root partition.
It seems to me you have a space problem and are jumping to conclusions about snapshots. There are many other possibilities: core dumps, log files…
If you have quotas enabled, you can check with sudo btrfs qgroup show /; if not, delete all snapshots with sudo snapper rm 2-10000 and then check free space to definitively confirm or exclude snapshots as the cause.
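To turn the qgroup output into a single number, the “excl” column can be summed; that total is roughly what deleting all snapshots would free. A sketch (the `sum_exclusive` helper is made up; the column layout assumes `btrfs qgroup show --raw`, which prints bytes):

```shell
# capture the quota report first (quotas must be enabled):
#   sudo btrfs qgroup show --raw / > qgroups.txt
sum_exclusive() {
  # skip the two header lines; column 3 is "excl", in bytes with --raw
  awk 'NR > 2 { total += $3 } END { printf "%d\n", total }' "$1"
}
# usage: sum_exclusive qgroups.txt
```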
I have now reduced the number of snapshots further to 2-3 in /etc/snapper/configs/root:
# subvolume to snapshot
SUBVOLUME="/"
# filesystem type
FSTYPE="btrfs"
# btrfs qgroup for space aware cleanup algorithms
QGROUP="1/0"
# fraction of the filesystems space the snapshots may use
SPACE_LIMIT="0.2"
# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""
# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"
# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"
# run daily number cleanup
NUMBER_CLEANUP="yes"
# limit for number cleanup
NUMBER_MIN_AGE="1800"
# changed 2-10 and 4-10 to 2-3 and 3-3 (Flo)
NUMBER_LIMIT="2-3"
NUMBER_LIMIT_IMPORTANT="3-3"
# create hourly snapshots
TIMELINE_CREATE="no"
# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"
# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
# changed 10 to 3 below (Flo)
TIMELINE_LIMIT_HOURLY="3"
TIMELINE_LIMIT_DAILY="3"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="3"
TIMELINE_LIMIT_YEARLY="3"
# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"
# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"
As a result, the number of snapshots really did decrease. Finally:
snapper list
Type   | #   | Pre #    | Date                         | User     | Cleanup    | Description           | Userdata
-------+-----+----------+------------------------------+----------+------------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Di 23 Aug 2016 22:32:34 CEST | root | | first root filesystem |
pre | 85 | | Di 13 Sep 2016 23:55:10 CEST | root | number | zypp(packagekitd) | important=yes
post | 86 | 85 | Mi 14 Sep 2016 00:44:07 CEST | root | number | | important=yes
pre | 105 | | Sa 22 Okt 2016 21:08:29 CEST | root | number | zypp(packagekitd) | important=yes
pre | 112 | | Mo 31 Okt 2016 23:58:15 CET | root | number | zypp(packagekitd) | important=yes
pre | 149 | | Mo 27 Feb 2017 21:44:16 CET | root | number | zypp(zypper) | important=no
post | 150 | 149 | Mo 27 Feb 2017 21:45:30 CET | root | number | | important=no
However, there are still more snapshots than specified in the config file. And the partition is still fully used:
I checked the directories on the root partition using du -s [directory], and .snapshots was the only directory containing much; see my very first post. If “du” doesn’t work, I could do the same thing using a btrfs command, but the syntax is quite complicated for me. So far, I couldn’t figure out what to write as “filesystem” in
btrfs filesystem du -s [directory].
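For what it’s worth, the argument is just an ordinary mounted path, not a device or filesystem identifier. A sketch (the `exclusive_of` helper is made up; the column order is taken from the output shown earlier in this thread):

```shell
# run as root, e.g.:
#   sudo btrfs filesystem du -s /            # whole root subvolume
#   sudo btrfs filesystem du -s /.snapshots  # all snapshots together
exclusive_of() {
  # $1: file holding the two-line `btrfs filesystem du -s <path>` output;
  # line 2, column 2 is the "Exclusive" figure
  awk 'NR == 2 { print $2 }' "$1"
}
```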
Your problem: snapshot space is determined more by age than by number (you can have 1000 snapshots that take 1 MB). Looking at the dates of your snapshots, there is something wrong. Changing the config file will get you nowhere (in fact, better to go back and change it to something more sensible).
Use “sudo snapper rm 85-145”, and if this does not fix things, there is something else wrong.
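snapper rm accepts both single numbers and ranges, so it helps to see which numbers exist first. A sketch for pulling the “#” column out of saved `snapper list` output (the `snapshot_numbers` helper is made up; the ‘|’-separated layout is as shown in this thread):

```shell
snapshot_numbers() {
  # $1: file holding `snapper list` output; skip the two header lines,
  # strip spaces from the "#" field, and never touch snapshots 0 and 1
  # (current and "first root filesystem")
  awk -F'|' 'NR > 2 { gsub(/ /, "", $2); if ($2 + 0 > 1) print $2 }' "$1"
}
# usage (sketch): sudo snapper rm $(snapshot_numbers list.txt)
```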
PS: have you ever done a rollback?
(du is not to be relied on, even if it is shorter to type.)
Extra:
Consider not doing any updates until this is fixed (running out of space is not pleasant on btrfs).
Run sudo btrfs qgroup show / and check the exclusive use; it is the only way to know snapshot usage.
btrfs fi du -s .
Total Exclusive Set shared Filename
26.90GiB 3.32GiB 7.99GiB .
snapper list
Type   | #   | Pre #    | Date                         | User     | Cleanup    | Description           | Userdata
-------+-----+----------+------------------------------+----------+------------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Di 23 Aug 2016 22:32:34 CEST | root | | first root filesystem |
pre | 149 | | Mo 27 Feb 2017 21:44:16 CET | root | number | zypp(zypper) | important=no
post | 150 | 149 | Mo 27 Feb 2017 21:45:30 CET | root | number | | important=no
So I can always delete old snapshots this way. Of course, it would be more comfortable if the .snapshots directory were kept small automatically. The question now is how to prevent my system from filling up again. Do you have any suggestions for changes to my config file?
PS: I have not done a rollback yet. It’s for getting an older version of the OS back (from one of the snapshots), isn’t it?
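Roughly, yes. A sketch of what a rollback does (the `rollback_to` wrapper is made up; `snapper rollback` itself is the real command, see snapper(8)):

```shell
rollback_to() {
  # make a writable copy of snapshot $1 the new default root subvolume;
  # the change takes effect on the next boot
  sudo snapper rollback "$1" && echo "reboot to complete the rollback"
}
# usage (only if you really want to roll back): rollback_to 85
```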
So the cause of the problem (retained old snapshots) is not clear (that’s why I asked about rollbacks). There is a chance that snapshots were made without specifying a cleanup algorithm, in which case snapper would never remove them. This looks likely, since snapshots must have been removed (you had only a few), with some very old (and large) ones remaining. [Be careful not to take manual snapshots without specifying a cleanup algorithm.]
So I would keep an occasional eye on free space and snapper list, and write back here if it becomes clear the same problem occurs again.
AFAIK you can now change your config to whatever sensible values you like (the original problem had nothing to do with the configs).
The cause is strange. At first I thought maybe the old ones were manual snapshots without a cleanup algorithm specified, but they are clearly marked “number”. Is it the case that you update very infrequently?
There is no longer enough information to determine the cause, so I would keep an occasional eye on free space and snapper list, and write back here with the full “snapper list” output if it becomes clear the same problem occurs again.
I had the feeling the updates weren’t really working sometimes, because I often had about 1500 updates available. I would run them, but it didn’t take more than maybe 15 minutes, and a few days later I again had about 1500 updates available.
Maybe I should manually start a full update of the system, now that I have space available. Is there a way to do it from the command line?
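On Tumbleweed the usual command-line update is a full distribution upgrade. A sketch (the `tw_full_update` wrapper is made up; the zypper commands themselves are standard):

```shell
tw_full_update() {
  sudo zypper refresh   # re-read repository metadata
  sudo zypper dup       # distribution upgrade, the recommended update on Tumbleweed
}
```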
Tumbleweed can have lots and lots of updates because it is a rolling release; that is its nature. If you want a more stable OS, use Leap.
Snapper is recommended for btrfs root partitions of at least 40 GB (IMHO 50 GB for Tumbleweed). Any less and you do run into out-of-space problems. You can also make some adjustments to snapper.
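For those adjustments, newer snapper versions can change config values without hand-editing the file (a sketch; the `tune_snapper` wrapper is made up, the key names are the ones from /etc/snapper/configs/root):

```shell
tune_snapper() {
  # apply KEY=value pairs to the "root" config via snapper set-config
  sudo snapper -c root set-config "$@"
}
# e.g.: tune_snapper NUMBER_LIMIT=2-3 NUMBER_LIMIT_IMPORTANT=3-3 TIMELINE_CREATE=no
```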