Hi
Tumbleweed 20170712 [at the moment], Root = 50 GB Btrfs.
I had a shock today when I serendipitously checked Dolphin for something else and noticed that my 50 GB / partition had only 3.3 GB of free space remaining. Wondering what had happened, I discovered that at some point [I don't know when or how] I had accidentally duplicated a large [~30 GB] directory from my separate /home partition into / as well. Realising that was redundant, I deleted it via Root Actions in Dolphin, but was confused to see that my / free space remained at only 3.3 GB.
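[In hindsight, the likely explanation is that deleting a file does not free space while snapshots still reference its extents. A check like the following would have shown which subvolumes still held the data; a sketch, assuming quota support is enabled, which the QGROUP="1/0" setting in my snapper config further down suggests it is:]
sudo btrfs subvolume list /   # list subvolumes, including the .snapshots children
sudo btrfs qgroup show /      # per-subvolume usage; only meaningful with quotas enabled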
Searching this forum for clues, as I suspected that Btrfs & Snapper might be the cause via excessive snapshots, I found many helpful threads & posts, including:
- https://forums.opensuse.org/showthread.php/524929-Disk-claims-to-be-full-even-after-deleting-files?p=2823621#post2823621
- https://forums.opensuse.org/showthread.php/525180-Regularly-Running-Outa-Disk-Space?p=2825196#post2825196
- https://forums.opensuse.org/showthread.php/521072-Running-out-of-space-in-root-partition?p=2800310#post2800310
- https://forums.opensuse.org/showthread.php/525459-Tumbleweed-snapshots-prevent-booting-(by-using-all-disk-space)?p=2826854#post2826854
My initial Snapshot status:
linux-763v:~> sudo snapper list
[sudo] password for root:
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+-----+-------+-------------------------------+------+---------+-----------------------+--------------
single | 0 | | | root | | current |
pre | 144 | | Thu 22 Jun 2017 14:10:21 AEST | root | number | zypp(zypper) | important=yes
post | 145 | 144 | Thu 22 Jun 2017 14:25:28 AEST | root | number | | important=yes
pre | 335 | | Fri 30 Jun 2017 11:37:45 AEST | root | number | zypp(zypper) | important=yes
post | 336 | 335 | Fri 30 Jun 2017 12:03:17 AEST | root | number | | important=yes
pre | 351 | | Sun 02 Jul 2017 14:58:19 AEST | root | number | zypp(zypper) | important=yes
post | 352 | 351 | Sun 02 Jul 2017 15:07:50 AEST | root | number | | important=yes
single | 422 | | Tue 04 Jul 2017 18:32:26 AEST | root | number | rollback backup of #1 | important=yes
single | 423 | | Tue 04 Jul 2017 18:32:27 AEST | root | | |
pre | 426 | | Tue 04 Jul 2017 18:47:57 AEST | root | number | zypp(zypper) | important=yes
post | 427 | 426 | Tue 04 Jul 2017 19:01:28 AEST | root | number | | important=yes
pre | 481 | | Mon 10 Jul 2017 16:34:42 AEST | root | number | zypp(zypper) | important=yes
post | 482 | 481 | Mon 10 Jul 2017 17:03:20 AEST | root | number | | important=yes
pre | 523 | | Sun 16 Jul 2017 16:07:06 AEST | root | number | yast firewall |
post | 524 | 523 | Sun 16 Jul 2017 16:14:14 AEST | root | number | |
pre | 525 | | Sun 16 Jul 2017 16:14:27 AEST | root | number | yast firewall |
post | 526 | 525 | Sun 16 Jul 2017 16:14:36 AEST | root | number | |
pre | 527 | | Sun 16 Jul 2017 16:15:42 AEST | root | number | yast sw_single |
pre | 528 | | Sun 16 Jul 2017 16:17:47 AEST | root | number | zypp(ruby) | important=no
post | 529 | 527 | Sun 16 Jul 2017 16:17:55 AEST | root | number | |
pre | 530 | | Sun 16 Jul 2017 16:18:43 AEST | root | number | yast sw_single |
pre | 531 | | Sun 16 Jul 2017 16:19:35 AEST | root | number | zypp(ruby) | important=no
post | 532 | 531 | Sun 16 Jul 2017 16:19:39 AEST | root | number | | important=no
post | 533 | 530 | Sun 16 Jul 2017 16:19:51 AEST | root | number | |
pre | 534 | | Sun 16 Jul 2017 22:06:18 AEST | root | number | yast sw_single |
post | 535 | 534 | Sun 16 Jul 2017 22:09:01 AEST | root | number | |
pre | 536 | | Mon 17 Jul 2017 17:52:46 AEST | root | number | yast snapper |
linux-763v:~>
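[An aside from my reading of those threads: if the automatic cleanup ever fails to reclaim enough, snapshots can apparently also be deleted by number or by range; a sketch, with the numbers purely illustrative:]
sudo snapper delete 144-145   # remove a pre/post pair by range
sudo snapper delete 335 336   # or list several numbers individually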
Looking in
/etc/cron.weekly
I was [& still am] surprised that it was empty. Therefore I checked:
linux-763v:~> zypper info btrfsmaintenance
Loading repository data...
Reading installed packages...
Information for package btrfsmaintenance:
-----------------------------------------
Repository : Main Repository (OSS)
Name : btrfsmaintenance
Version : 0.3.1-2.1
Arch : noarch
Vendor : openSUSE
Installed Size : 46.3 KiB
Installed : Yes
Status : up-to-date
Source package : btrfsmaintenance-0.3.1-2.1.src
Summary : Scripts for btrfs periodic maintenance tasks
Description :
Scripts for btrfs maintenance tasks like periodic scrub, balance, trim or defrag
on selected mountpoints or directories.
Question: How can this package already be installed, yet the preceding directory be empty?
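[A check I only thought of later: asking RPM what the package actually owns, since the cron entries might be generated at runtime rather than shipped in the package itself; a sketch:]
rpm -ql btrfsmaintenance   # list every file the package owns
rpm -V btrfsmaintenance    # verify the installed files against the RPM database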
Following the referenced links, I then forced its reinstallation:
linux-763v:~> sudo zypper in -f btrfsmaintenance
[sudo] password for root:
Loading repository data...
Reading installed packages...
Forcing installation of 'btrfsmaintenance-0.3.1-2.1.noarch' from repository 'Main Repository (OSS)'.
Resolving package dependencies...
The following package is going to be reinstalled:
btrfsmaintenance
1 package to reinstall.
Overall download size: 30.6 KiB. Already cached: 0 B. No additional space will be used or freed after the operation.
Continue? [y/n/...? shows all options] (y):
Retrieving package btrfsmaintenance-0.3.1-2.1.noarch (1/1), 30.6 KiB ( 46.3 KiB unpacked)
Retrieving: btrfsmaintenance-0.3.1-2.1.noarch.rpm .................................................................[done (4.7 KiB/s)]
Checking for file conflicts: ..................................................................................................[done]
(1/1) Installing: btrfsmaintenance-0.3.1-2.1.noarch ...........................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/btrfsmaintenance ...
Refresh script btrfs-scrub.sh for monthly
Refresh script btrfs-defrag.sh for none
Refresh script btrfs-balance.sh for weekly
Refresh script btrfs-trim.sh for none
…& was gratified to see that btrfs-balance now appeared in
/etc/cron.weekly
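[For completeness, listing all of the cron period directories shows which btrfs-* scripts were linked in; a trivial sketch:]
ls -l /etc/cron.daily/ /etc/cron.weekly/ /etc/cron.monthly/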
I confirmed that, yes, my / really is almost full:
linux-763v:~> sudo btrfs filesystem show /
[sudo] password for root:
Label: none uuid: 59c063db-fa0d-4e1e-baa2-df255f4262fb
Total devices 1 FS bytes used 46.54GiB
devid 1 size 50.00GiB used 49.96GiB path /dev/sda3
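[As I understand it from the threads above, the "used" figure on the devid line means space allocated to chunks, not space actually occupied by data; these commands should separate the two; a sketch I had not yet run at this point:]
sudo btrfs filesystem df /      # per-profile totals: allocated vs actually used
sudo btrfs filesystem usage /   # overall picture, including unallocated device space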
The first of these commands, after a lag, presumably completed OK [given no error message], but the second one worries me:
linux-763v:~> systemctl start btrfsmaintenance-refresh.service
linux-763v:~> systemctl status btrfsmaintenance-refresh.service
● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; disabled; vendor preset: disabled)
Active: inactive (dead)
linux-763v:~>
Question: WHY would it be inactive / disabled / dead, & how should I remedy this?
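[One thing I have since read: if this is a Type=oneshot unit, as a refresh job typically is, then "inactive (dead)" after a successful run is apparently normal. A sketch of how to confirm, assuming the unit file ships with the package:]
systemctl cat btrfsmaintenance-refresh.service | grep -i '^Type'   # expect Type=oneshot
sudo systemctl enable btrfsmaintenance-refresh.service             # ensure it runs at boot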
Before next manually running the two applicable cron jobs to get back my free space, I inspected /etc/snapper/configs/root. Here it is, still as found, apart from the obvious small edit I made as per another of Malcolm's links above:
# subvolume to snapshot
SUBVOLUME="/"
# filesystem type
FSTYPE="btrfs"
# btrfs qgroup for space aware cleanup algorithms
QGROUP="1/0"
# fraction of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"
# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""
# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"
# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"
# run daily number cleanup
NUMBER_CLEANUP="yes"
# limit for number cleanup
NUMBER_MIN_AGE="1800"
#NUMBER_LIMIT="2-10"
#NUMBER_LIMIT_IMPORTANT="4-10"
# 17/7/17: I reduced the preceding as per Malcolm's https://forums.opensuse.org/showthread.php/521072-Running-out-of-space-in-root-partition?p=2800310#post2800310
NUMBER_LIMIT="2-3"
NUMBER_LIMIT_IMPORTANT="3-5"
# create hourly snapshots
TIMELINE_CREATE="no"
# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"
# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"
# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"
Question: Should I change any of the other default settings in that file, please? [This installation is just on a personal laptop, with a ~250 GB SSD.]
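[As I understand it, the daily snapper cron job applies these limits via snapper's cleanup algorithms, which can also be invoked directly; a sketch:]
sudo snapper cleanup number           # apply NUMBER_LIMIT / NUMBER_LIMIT_IMPORTANT
sudo snapper cleanup timeline         # apply the TIMELINE_* limits
sudo snapper cleanup empty-pre-post   # drop pre/post pairs with no differences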
Next I manually ran both cron jobs:
linux-763v:~> sudo /etc/cron.daily/suse.de-snapper
[sudo] password for root:
linux-763v:~> sudo /etc/cron.weekly/btrfs-balance
Before balance of /
Data, single: total=37.17GiB, used=14.32GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=768.00MiB, used=379.11MiB
GlobalReserve, single: total=41.16MiB, used=0.00B
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 54G 16G 38G 30% /
Done, had to relocate 0 out of 43 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=1
Done, had to relocate 5 out of 43 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=5
Done, had to relocate 8 out of 38 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=10
Done, had to relocate 0 out of 30 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 30 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=30
Done, had to relocate 2 out of 29 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=40
Done, had to relocate 0 out of 27 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=50
Done, had to relocate 1 out of 27 chunks
Done, had to relocate 0 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=1
SYSTEM (flags 0x2): balancing, usage=1
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=5
SYSTEM (flags 0x2): balancing, usage=5
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=10
SYSTEM (flags 0x2): balancing, usage=10
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=20
SYSTEM (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=30
SYSTEM (flags 0x2): balancing, usage=30
Done, had to relocate 1 out of 26 chunks
After balance of /
Data, single: total=20.17GiB, used=14.32GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=768.00MiB, used=377.98MiB
GlobalReserve, single: total=39.06MiB, used=0.00B
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 54G 16G 38G 30% /
linux-763v:~>
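[To double-check the result, re-running the earlier commands should show the devid "used" figure now well below the 50 GiB device size; a sketch:]
sudo btrfs filesystem show /   # chunk allocation at the device level
sudo btrfs filesystem df /     # allocated vs used, per profile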
Question: Do those outputs look OK, please?
<<continued in follow-on post>>