Recently the disk usage on the root partition has increased by 3 GB unexpectedly. I noticed that there seem to be more kernel versions than expected. The zypp.conf has the following:
## Default: Do not delete any kernels if multiversion = provides:multiversion(kernel) is set
multiversion.kernels = latest,latest-1,running
This means to me that if the latest is currently running, there should be 2 in the /boot directory. However, I see 4, as shown below:
localhost:/boot # ls -l vmlinuz*
lrwxrwxrwx 1 root root 32 Nov 10 18:05 vmlinuz -> vmlinuz-4.12.14-lp150.11-default
-rw-r--r-- 1 root root 7028944 May 14 2018 vmlinuz-4.12.14-lp150.11-default
-rw-r--r-- 1 root root 7057520 Oct 5 04:47 vmlinuz-4.12.14-lp150.12.19-default
-rw-r--r-- 1 root root 7057520 Oct 13 11:24 vmlinuz-4.12.14-lp150.12.22-default
-rw-r--r-- 1 root root 7061616 Nov 2 03:44 vmlinuz-4.12.14-lp150.12.25-default
localhost:/boot # uname -a
Linux localhost 4.12.14-lp150.12.25-default #1 SMP Thu Nov 1 06:14:23 UTC 2018 (3fcf457) x86_64 x86_64 x86_64 GNU/Linux
localhost:/boot #
Since I seem to be running the latest (so running and latest are the same), the cleanup specified in zypp.conf does not seem to be in effect, as there are more than 2 kernels. Also, the vmlinuz link seems to be pointing to the oldest kernel. Why, and what is it used for?
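For cross-checking, here is one way to compare what is actually installed against what libzypp is configured to keep (a hedged sketch, assuming a standard openSUSE layout):
# kernel packages actually installed; multiversion keeps several side by side
rpm -q kernel-default
# the retention policy libzypp is configured with
grep -E '^\s*multiversion' /etc/zypp/zypp.conf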
Also, is there a way to easily find the total size of each snapshot? The number of snapshots seems to be correct and matches the specification in the snapper conf file.
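The closest thing I have found so far, assuming btrfs quota groups are enabled on the filesystem (snapper can also set them up), is the per-subvolume exclusive usage:
# enable quota accounting if it is not already active
btrfs quota enable /
# the "excl" column approximates the space unique to each snapshot subvolume
btrfs qgroup show /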
The btrfsmaintenance scripts seem to be periodically running a balance operation, so that seems to be working okay.
Are there any other maintenance scripts that I should look at to see whether they are operating as expected?
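For reference, one way to see what is scheduled (a sketch, assuming the usual layout of the btrfsmaintenance package):
# systemd timers related to btrfs maintenance and snapper
systemctl list-timers | grep -Ei 'btrfs|snapper'
# the maintenance scripts themselves (balance, scrub, trim, defrag)
ls /usr/share/btrfsmaintenance
# the periods and mount points they use are configured here
cat /etc/sysconfig/btrfsmaintenance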
Not sure about most of your questions, but I can comment on your request about snapshot sizes…
I seem to remember reading that snapshots do some kind of deduplication, so any “absolute size” number would likely be misleading. It’s also why it’s so important to use snapper to view and remove/purge unwanted snapshots manually and safely; if you instead attempted to delete individual files using ordinary file operations, you’d likely wreck your snapshot system.
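For example, the usual snapper workflow for inspecting and removing snapshots looks roughly like this (the snapshot numbers are placeholders, not a recommendation for this system):
snapper list             # show all snapshots for the root config
snapper delete 150-151   # remove a range of snapshots by number (example numbers)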
Something is not in good shape in the kernel update process on your system. An up-to-date system with your setup should show:
bruno@LT_B:~> ls -l /boot/vmlinuz*
lrwxrwxrwx 1 root root 35 8 nov 11.17 /boot/vmlinuz -> vmlinuz-4.12.14-lp150.12.25-default
-rw-r--r-- 1 root root 7057520 13 ott 17.24 /boot/vmlinuz-4.12.14-lp150.12.22-default
-rw-r--r-- 1 root root 7061616 2 nov 08.44 /boot/vmlinuz-4.12.14-lp150.12.25-default
bruno@LT_B:~>
So apparently the vmlinuz link was not updated after the first kernel update, and then the purge-kernels command was not run after the second update.
Maybe you can run the purge-kernels command in a superuser terminal, like:
su -
root password:
purge-kernels
and see what happens; the two oldest kernels should be purged from your system.
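Afterwards you can verify the result, for example:
# should now list only the latest and latest-1 kernels
ls -l /boot/vmlinuz*
# installed kernel packages, oldest first
rpm -q kernel-default | sort -V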
I cannot comment on snapshots or btrfs maintenance since I’m not using btrfs here.
Here is the output of “snapper list”, “btrfs fi us / -T”, “btrfs su li /”, and “btrfs qgroup show -p --sync /”:
localhost:~ # snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+-----+-------+--------------------------+------+---------+-----------------------+--------------
single | 0 | | | root | | current |
single | 1 | | Sat Jul 28 15:36:07 2018 | root | | first root filesystem |
pre | 150 | | Sun Aug 26 16:25:00 2018 | root | number | zypp(packagekitd) | important=yes
post | 151 | 150 | Sun Aug 26 16:25:50 2018 | root | number | | important=yes
pre | 317 | | Tue Oct 9 17:57:07 2018 | root | number | zypp(packagekitd) | important=yes
post | 318 | 317 | Tue Oct 9 18:02:55 2018 | root | number | | important=yes
pre | 323 | | Mon Oct 15 18:14:10 2018 | root | number | zypp(packagekitd) | important=yes
post | 324 | 323 | Mon Oct 15 18:19:48 2018 | root | number | | important=yes
pre | 363 | | Sat Oct 27 15:44:38 2018 | root | number | zypp(packagekitd) | important=yes
post | 364 | 363 | Sat Oct 27 15:50:37 2018 | root | number | | important=yes
pre | 365 | | Sat Nov 10 17:55:07 2018 | root | number | zypp(packagekitd) | important=yes
post | 366 | 365 | Sat Nov 10 18:08:05 2018 | root | number | | important=yes
pre | 377 | | Wed Nov 21 20:03:45 2018 | root | number | zypp(packagekitd) | important=no
post | 378 | 377 | Wed Nov 21 20:03:56 2018 | root | number | | important=no
pre | 379 | | Fri Nov 23 17:05:52 2018 | root | number | yast printer |
post | 380 | 379 | Fri Nov 23 17:11:54 2018 | root | number | |
pre | 381 | | Fri Nov 23 17:48:39 2018 | root | number | yast printer |
post | 382 | 381 | Fri Nov 23 17:56:44 2018 | root | number | |
pre | 383 | | Sat Nov 24 18:56:44 2018 | root | number | zypp(zypper) | important=no
post | 384 | 383 | Sat Nov 24 18:57:05 2018 | root | number | | important=no
pre | 385 | | Mon Nov 26 12:05:41 2018 | root | number | zypp(zypper) | important=no
post | 386 | 385 | Mon Nov 26 12:11:00 2018 | root | number | | important=no
pre | 387 | | Mon Nov 26 12:23:11 2018 | root | number | zypp(packagekitd) | important=no
post | 388 | 387 | Mon Nov 26 12:31:43 2018 | root | number | | important=no
pre | 389 | | Mon Nov 26 13:21:48 2018 | root | number | yast snapper |
localhost:~ # btrfs fi us / -T
Overall:
Device size: 40.00GiB
Device allocated: 24.55GiB
Device unallocated: 15.45GiB
Device missing: 0.00B
Used: 19.36GiB
Free (estimated): 20.29GiB (min: 20.29GiB)
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 62.39MiB (used: 0.00B)
Data,single: Size:23.01GiB, Used:18.17GiB
/dev/mapper/eui.0025385281b244f7-part2 23.01GiB
Metadata,single: Size:1.51GiB, Used:1.19GiB
/dev/mapper/eui.0025385281b244f7-part2 1.51GiB
System,single: Size:32.00MiB, Used:16.00KiB
/dev/mapper/eui.0025385281b244f7-part2 32.00MiB
Unallocated:
/dev/mapper/eui.0025385281b244f7-part2 15.45GiB
ERROR: cannot access '-T': No such file or directory
localhost:~ # btrfs -T fi us /
Unknown global option: -T
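(Side note: the table option has to come before the path, i.e. “btrfs fi us -T /”; with the option after the path it is taken for a filename, hence the two errors above.)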
localhost:~ # btrfs su li /
ID 257 gen 32 top level 5 path @
ID 258 gen 23964 top level 257 path @/var
ID 259 gen 23710 top level 257 path @/usr/local
ID 260 gen 2204 top level 257 path @/tmp
ID 261 gen 23292 top level 257 path @/srv
ID 262 gen 23939 top level 257 path @/root
ID 263 gen 23292 top level 257 path @/opt
ID 264 gen 19205 top level 257 path @/boot/grub2/x86_64-efi
ID 265 gen 18120 top level 257 path @/boot/grub2/i386-pc
ID 266 gen 23870 top level 257 path @/.snapshots
ID 267 gen 23964 top level 266 path @/.snapshots/1/snapshot
ID 460 gen 7243 top level 266 path @/.snapshots/150/snapshot
ID 461 gen 7243 top level 266 path @/.snapshots/151/snapshot
ID 648 gen 13472 top level 266 path @/.snapshots/317/snapshot
ID 650 gen 15532 top level 266 path @/.snapshots/318/snapshot
ID 658 gen 15572 top level 266 path @/.snapshots/323/snapshot
ID 660 gen 15588 top level 266 path @/.snapshots/324/snapshot
ID 717 gen 19354 top level 266 path @/.snapshots/363/snapshot
ID 718 gen 19353 top level 266 path @/.snapshots/364/snapshot
ID 719 gen 19663 top level 266 path @/.snapshots/365/snapshot
ID 721 gen 20129 top level 266 path @/.snapshots/366/snapshot
ID 733 gen 23664 top level 266 path @/.snapshots/377/snapshot
ID 734 gen 23664 top level 266 path @/.snapshots/378/snapshot
ID 737 gen 23664 top level 266 path @/.snapshots/379/snapshot
ID 738 gen 23664 top level 266 path @/.snapshots/380/snapshot
ID 739 gen 23664 top level 266 path @/.snapshots/381/snapshot
ID 740 gen 23664 top level 266 path @/.snapshots/382/snapshot
ID 745 gen 23664 top level 266 path @/.snapshots/383/snapshot
ID 746 gen 23664 top level 266 path @/.snapshots/384/snapshot
ID 748 gen 23700 top level 266 path @/.snapshots/385/snapshot
ID 749 gen 23712 top level 266 path @/.snapshots/386/snapshot
ID 751 gen 23744 top level 266 path @/.snapshots/387/snapshot
ID 754 gen 23772 top level 266 path @/.snapshots/388/snapshot
ID 755 gen 23869 top level 266 path @/.snapshots/389/snapshot
localhost:~ # btrfs qgroup show -p --sync /
qgroupid rfer excl parent
-------- ---- ---- ------
0/5 16.00KiB 16.00KiB ---
0/257 16.00KiB 16.00KiB ---
0/258 951.30MiB 951.30MiB ---
0/259 16.00KiB 16.00KiB ---
0/260 1020.00KiB 1020.00KiB ---
0/261 16.00KiB 16.00KiB ---
0/262 175.83MiB 175.83MiB ---
0/263 16.00KiB 16.00KiB ---
0/264 3.39MiB 3.39MiB ---
0/265 16.00KiB 16.00KiB ---
0/266 9.03MiB 9.03MiB ---
0/267 10.04GiB 244.86MiB ---
0/460 6.87GiB 62.32MiB ---
0/461 6.87GiB 61.37MiB ---
0/648 8.79GiB 263.34MiB ---
0/650 9.26GiB 47.94MiB ---
0/658 8.91GiB 51.79MiB ---
0/660 9.35GiB 83.77MiB ---
0/717 9.12GiB 72.38MiB ---
0/718 9.14GiB 15.58MiB ---
0/719 9.14GiB 34.03MiB ---
0/721 9.91GiB 99.68MiB ---
0/733 10.07GiB 9.52MiB ---
0/734 10.07GiB 6.34MiB ---
0/737 10.07GiB 544.00KiB ---
0/738 10.07GiB 252.00KiB ---
0/739 10.07GiB 188.00KiB ---
0/740 10.07GiB 160.00KiB ---
0/745 10.07GiB 1008.00KiB ---
0/746 10.09GiB 3.01MiB ---
0/748 10.08GiB 8.59MiB ---
0/749 10.43GiB 20.14MiB ---
0/751 10.43GiB 2.44MiB ---
0/754 10.46GiB 1.11MiB ---
0/755 10.46GiB 1.31MiB ---
1/0 16.88GiB 8.14GiB 0/460,0/461,0/648,0/650,0/658,0/660,0/717,0/718,0/719,0/721,0/733,0/734,0/737,0/738,0/739,0/740,0/745,0/746,0/748,0/749,0/751,0/754,0/755
localhost:~ #
How recent is “recently”? Your output does not show any jump in the last couple of months; the roughly 3GB jump apparently happened somewhere between the end of August and the beginning of October; otherwise it was relatively steady growth, which is expected assuming you actually install updates and keep old snapshots.
Currently half of the used space is consumed by snapshots. You may want to check whether the snapper cleanup timers are enabled (in particular, why the snapshot dated July was not removed). This could be normal if you configured snapshot retention depending on available space (of which you still have plenty).
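One way to check the retention settings, assuming the default root config, is:
# effective settings for the root config
snapper get-config
# or look at the raw config file
grep -E 'NUMBER_|TIMELINE_' /etc/snapper/configs/root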
8.14/19.36 = 42%, if I understand what is in the above output correctly. Yes, the jump was from August to October.
I checked the timers:
localhost:/usr/lib/systemd/system # systemctl status snapper-cleanup.timer
● snapper-cleanup.timer - Daily Cleanup of Snapper Snapshots
Loaded: loaded (/usr/lib/systemd/system/snapper-cleanup.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Thu 2018-11-29 13:11:07 EST; 2h 15min ago
Trigger: Fri 2018-11-30 13:21:06 EST; 21h left
Docs: man:snapper(8)
man:snapper-configs(5)
Nov 29 13:11:07 localhost systemd[1]: Started Daily Cleanup of Snapper Snapshots.
localhost:/usr/lib/systemd/system # systemctl status snapper-timeline.timer
● snapper-timeline.timer - Timeline of Snapper Snapshots
Loaded: loaded (/usr/lib/systemd/system/snapper-timeline.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Thu 2018-11-29 13:11:07 EST; 2h 15min ago
Trigger: Thu 2018-11-29 16:00:00 EST; 32min left
Docs: man:snapper(8)
man:snapper-configs(5)
Nov 29 13:11:07 localhost systemd[1]: Started Timeline of Snapper Snapshots.
localhost:/usr/lib/systemd/system # cat snapper-cleanup.service
[Unit]
Description=Daily Cleanup of Snapper Snapshots
Documentation=man:snapper(8) man:snapper-configs(5)
[Service]
Type=simple
ExecStart=/usr/lib/snapper/systemd-helper --cleanup
IOSchedulingClass=idle
CPUSchedulingPolicy=idle
localhost:/usr/lib/systemd/system # cat snapper-timeline.service
[Unit]
Description=Timeline of Snapper Snapshots
Documentation=man:snapper(8) man:snapper-configs(5)
[Service]
Type=simple
ExecStart=/usr/lib/snapper/systemd-helper --timeline
The timers seem to be running. The services call the systemd-helper, which seems to be a binary file.
The snapper.log file seems to indicate that snapshots are deleted, as the count is 23 now and it has been as high as 27.
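If needed, the cleanup the timer performs can apparently also be triggered by hand, using the same algorithms named in the config:
snapper cleanup number     # apply the number-based retention policy
snapper cleanup timeline   # apply the timeline retention policy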
Since this appears to be operating correctly, the only remaining problem is the extra copies of the old kernels, which I have filed a problem report on. I still have not figured out how it is supposed to work, as the purge-kernels.service in /usr/lib/systemd/system appears to be a one-time operation, and it seems to me that every time a new kernel is added the oldest one beyond what is specified in zypp.conf should be removed.
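For what it is worth, here is one way to inspect how the service is wired up; my understanding (possibly version-dependent, so treat it as an assumption) is that a libzypp plugin drops a flag file that the service checks at the next boot:
# show the unit, including any ConditionPathExists line
systemctl cat purge-kernels.service
# if my understanding is right, the service only runs when this flag file exists
ls -l /boot/do_purge_kernels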
I was also wondering: is there automatic cleanup of the log files in /var/log, or is it expected to be done manually?
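Most of /var/log is typically handled by logrotate; a quick way to check, assuming a standard openSUSE setup:
# per-log rotation rules
ls /etc/logrotate.d
# when rotation is scheduled to run (may be a cron job on older setups)
systemctl list-timers logrotate.timer
# the systemd journal enforces its own size limits
journalctl --disk-usage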