Breaking (zypper dup) distro upgrade into multiple phases due to space constraint of / partition?

Hello,

I currently have a 55GB / partition for my Tumbleweed and recently I ran into a bit of trouble when I was performing

zypper dup

Basically the cache size for the distro upgrade has exceeded the available space in my root partition.

For this particular case, I increased the root partition to 60GB without a problem, but I have other systems with TW root partitions of ~45-55GB that are full, with no room to expand. Is there a way to break the distro upgrade into multiple phases, limiting the cache usage?

I’m not a regular TW user, so others may have better suggestions, but I think that a “zypper dup” in “phases” is not a good idea.
Please consider the --cache-dir global option (or just the --pkg-cache-dir option) to divert the bunch of downloaded packages out of your root directory, possibly to a larger directory where you keep other downloaded stuff.
For more details check:

man zypper

or ask here as usual :wink:
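To make the suggestion concrete, here is a minimal sketch of those options. The `/data/zypp-cache` path is an assumption, standing in for any directory on a filesystem with room to spare:

```shell
# Assumed path on a larger filesystem -- substitute your own.
mkdir -p /data/zypp-cache

# Divert only the downloaded packages (metadata stays in /var/cache/zypp):
zypper --pkg-cache-dir /data/zypp-cache dup

# Or divert the whole cache, metadata and packages alike:
zypper --cache-dir /data/zypp-cache dup
```

Both are global options, so they go before the `dup` subcommand and apply only to that invocation.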

With the aid of symbolic links, I have the package cache on a home NFS server. And I configure the repos to keep downloaded packages.

This has two nice effects:
(1) I don’t run out of space on the package cache;
(2) updates on second or later machines go faster, because the packages have already been downloaded.

Bad effects:
(a) I do have to occasionally clean out the package cache. I remove files which have not been used during the last 14 days (with the “find” command);
(b) Occasionally an update fails, usually because the previous package restarted the network. When given the “retry,ignore,abort” choice, I use “retry” which has always worked.
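A cleanup along those lines could look like the sketch below. The `prune_cache` function name is mine, and `/var/cache/zypp/packages` is the default zypper package cache path; adjust if you relocated the cache via a symlink or `--pkg-cache-dir`:

```shell
#!/bin/sh
# prune_cache DIR DAYS: delete .rpm files in DIR not accessed for DAYS days.
prune_cache() {
    # -atime +N: last accessed more than N days ago; -delete removes matches.
    find "$1" -type f -name '*.rpm' -atime "+$2" -delete
}

# Example (run as root on the machine holding the cache):
# prune_cache /var/cache/zypp/packages 14
```

Note that `-atime` relies on access times being recorded; with the common `relatime` mount option they are updated at most once a day, which is accurate enough for a 14-day cutoff.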

I have some TW installs with / in the range of 20-30 GB, frequently not larger. Didn’t have problems with zypper dup with more than 3200 packages.

Maybe you should use filelight to find what is filling up your root partition and remove some trash?

At least 9 times out of 10, it’s the snapshots.
Just manually remove snapshots you can do without.
I usually keep at least the first snapshot, the first snapshot after the first upgrade, and my most recent upgrades and/or package installs.
If the system is rebooted often, then snapshots associated with shutdowns and bootups can be removed.
You may have your own set of priorities.

Also,
No matter the recommended or minimal root partition sizes, my personal <practical> recommendation is a minimum size of 100 GB.

TSU

I do fully use the ~45GB with packages I need. This also happens on ext4 root partitions on Leap, so it’s not really snapshots.

The idea of defining the cache location sounds better than what I was thinking about: mounting a portion of RAM for the cache (I have 32GB RAM and don’t need more than 16GB most of the time).
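For reference, a RAM-backed cache along those lines could be sketched as below. The cache path is the default zypper location (an assumption if you have moved it), and the contents are lost on reboot, so this defeats any “keep downloaded packages” repo setting:

```shell
# Mount a 16 GiB tmpfs over the package cache (run as root):
mount -t tmpfs -o size=16G,mode=0755 tmpfs /var/cache/zypp/packages

# To make it permanent, an /etc/fstab line along these lines:
# tmpfs  /var/cache/zypp/packages  tmpfs  size=16G,mode=0755  0  0
```

tmpfs only consumes RAM for what is actually stored, so the 16G is a ceiling, not a reservation.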

Hi
Is the / filesystem btrfs or ext4? If btrfs is snapper running and you have snapshots in /.snapshots? Have you moved /tmp to tmpfs yet? If btrfs have you balanced the drive lately?

Only my TWs run BTRFS and snapper functions have been deleted and locked. My /tmp is not separately mounted and I have no /.snapshots.

For the BTRFS roots, it regularly balances itself every 4 days or so (default systemd configuration).

One of my TW installs had /.snapshots full of things; I deleted most of it using the instructions found at (https://www.simplified.guide/suse/snapper-remove-snapshots)

snapper --config root delete 106-163

essentially everything other than 0 and 1.

Also, for good measure, I removed the snapper zypp plugin


# zypper rm snapper-zypp-plugin

Is this enough to prevent automatic snapshot creation? There is nothing related to snapper in the cron directories, and I had no idea how to configure the snapper root config the way I want (https://forums.opensuse.org/showthread.php/503019-How-to-disable-auto-snapshots-in-13-2-or-limit-the-disk-space-used-by-snapshots-files).

You linked to an article about disabling snapshots; I think you meant to link to the one about removing snapper snapshots.

Disabling snapper plugins is not what most users want to do. You may run ‘snapper rollback’ and then delete older snapshots but keep the newer ones.
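A rough sketch of that sequence, assuming the default “root” config (the snapshot range is a placeholder; pick the numbers from your own `snapper list` output):

```shell
# Make a fresh writable snapshot the new default subvolume:
snapper rollback

# Reboot into the new default, then prune the older snapshots, e.g.:
snapper --config root delete 1698-1703
```

The rollback step matters because the current default snapshot cannot be deleted while it is in use.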

I have:

erlangen:~ # snapper list
    # | Type   | Pre # | Date                     | User | Cleanup | Description              | Userdata     
------+--------+-------+--------------------------+------+---------+--------------------------+--------------
   0  | single |       |                          | root |         | current                  |              
1698  | pre    |       | Sat Aug 15 08:11:03 2020 | root | number  | zypp(zypper)             | important=no 
1699  | post   |  1698 | Sat Aug 15 08:11:07 2020 | root | number  |                          | important=no 
1700  | pre    |       | Sat Aug 15 08:12:05 2020 | root | number  | zypp(zypper)             | important=no 
1701  | post   |  1700 | Sat Aug 15 08:12:25 2020 | root | number  |                          | important=no 
1702  | single |       | Sat Aug 15 08:14:35 2020 | root | number  | rollback backup of #1647 | important=yes
1703* | single |       | Sat Aug 15 08:14:36 2020 | root |         | writable copy of #1697   |              
1704  | pre    |       | Sun Aug 16 16:06:51 2020 | root | number  | yast sw_single           |              
1705  | post   |  1704 | Sun Aug 16 16:09:27 2020 | root | number  |                          |              
1708  | pre    |       | Mon Aug 17 10:39:24 2020 | root | number  | zypp(zypper)             | important=yes
1709  | post   |  1708 | Mon Aug 17 10:43:38 2020 | root | number  |                          | important=yes
1710  | pre    |       | Tue Aug 18 06:31:49 2020 | root | number  | zypp(zypper)             | important=no 
1711  | post   |  1710 | Tue Aug 18 06:32:17 2020 | root | number  |                          | important=no 
1712  | pre    |       | Tue Aug 18 18:17:13 2020 | root | number  | zypp(zypper)             | important=no 
1713  | post   |  1712 | Tue Aug 18 18:18:13 2020 | root | number  |                          | important=no 
1714  | pre    |       | Wed Aug 19 12:46:41 2020 | root | number  | zypp(zypper)             | important=no 
1715  | post   |  1714 | Wed Aug 19 12:47:04 2020 | root | number  |                          | important=no 
1716  | pre    |       | Wed Aug 19 12:47:29 2020 | root | number  | zypp(zypper)             | important=no 
1717  | post   |  1716 | Wed Aug 19 12:47:32 2020 | root | number  |                          | important=no 
1718  | pre    |       | Wed Aug 19 12:49:05 2020 | root | number  | zypp(zypper)             | important=no 
1719  | post   |  1718 | Wed Aug 19 12:49:17 2020 | root | number  |                          | important=no 
1720  | pre    |       | Thu Aug 20 02:27:51 2020 | root | number  | zypp(zypper)             | important=no 
1721  | post   |  1720 | Thu Aug 20 02:32:19 2020 | root | number  |                          | important=no 
1722  | pre    |       | Fri Aug 21 08:14:24 2020 | root | number  | zypp(zypper)             | important=no 
1723  | post   |  1722 | Fri Aug 21 08:16:58 2020 | root | number  |                          | important=no 
1724  | pre    |       | Sun Aug 23 06:26:33 2020 | root | number  | zypp(zypper)             | important=no 
1725  | post   |  1724 | Sun Aug 23 06:28:34 2020 | root | number  |                          | important=no 
1726  | pre    |       | Sun Aug 23 07:15:02 2020 | root | number  | yast bootloader          |              
1727  | post   |  1726 | Sun Aug 23 07:15:46 2020 | root | number  |                          |              
1730  | pre    |       | Sun Aug 23 07:33:00 2020 | root | number  | yast bootloader          |              
1731  | post   |  1730 | Sun Aug 23 07:33:24 2020 | root | number  |                          |              
1734  | pre    |       | Mon Aug 24 04:36:15 2020 | root | number  | zypp(zypper)             | important=no 
1735  | post   |  1734 | Mon Aug 24 04:36:33 2020 | root | number  |                          | important=no 
1736  | pre    |       | Tue Aug 25 04:24:22 2020 | root | number  | zypp(zypper)             | important=yes
1737  | post   |  1736 | Tue Aug 25 04:27:23 2020 | root | number  |                          | important=yes
1738  | pre    |       | Wed Aug 26 06:21:39 2020 | root | number  | zypp(zypper)             | important=no 
1739  | post   |  1738 | Wed Aug 26 06:22:33 2020 | root | number  |                          | important=no 
1740  | pre    |       | Thu Aug 27 06:36:15 2020 | root | number  | zypp(zypper)             | important=no 
1741  | post   |  1740 | Thu Aug 27 06:36:25 2020 | root | number  |                          | important=no 
1742  | pre    |       | Fri Aug 28 09:09:22 2020 | root | number  | zypp(zypper)             | important=no 
1743  | post   |  1742 | Fri Aug 28 09:09:25 2020 | root | number  |                          | important=no 
1744  | pre    |       | Fri Aug 28 11:49:42 2020 | root | number  | zypp(zypper)             | important=yes
1745  | post   |  1744 | Fri Aug 28 11:50:16 2020 | root | number  |                          | important=yes
1746  | pre    |       | Sat Aug 29 15:46:55 2020 | root | number  | zypp(zypper)             | important=yes
1747  | post   |  1746 | Sat Aug 29 16:03:45 2020 | root | number  |                          | important=yes
1748  | pre    |       | Sat Aug 29 16:04:25 2020 | root | number  | zypp(zypper)             | important=yes
1749  | post   |  1748 | Sat Aug 29 16:04:31 2020 | root | number  |                          | important=yes
1750  | pre    |       | Sun Sep  6 15:17:52 2020 | root | number  | zypp(zypper)             | important=yes
1751  | post   |  1750 | Sun Sep  6 15:32:23 2020 | root | number  |                          | important=yes
1752  | pre    |       | Sun Sep  6 15:34:32 2020 | root | number  | zypp(zypper)             | important=yes
1753  | post   |  1752 | Sun Sep  6 15:34:38 2020 | root | number  |                          | important=yes
1754  | pre    |       | Mon Sep  7 07:58:58 2020 | root | number  | zypp(zypper)             | important=no 
1755  | post   |  1754 | Mon Sep  7 07:59:45 2020 | root | number  |                          | important=no 
1756  | pre    |       | Tue Sep  8 05:55:28 2020 | root | number  | zypp(zypper)             | important=no 
1757  | post   |  1756 | Tue Sep  8 05:55:48 2020 | root | number  |                          | important=no
erlangen:~ # 

File system usage is:

erlangen:~ # btrfs filesystem usage -T /
Overall:
    Device size:                  59.45GiB
    Device allocated:             35.03GiB
    Device unallocated:           24.42GiB
    Device missing:                  0.00B
    Used:                         31.96GiB
    Free (estimated):             26.43GiB      (min: 26.43GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               90.70MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data     Metadata System              
Id Path      single   single   single   Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/sdb5 32.00GiB  3.00GiB 32.00MiB    24.42GiB
-- --------- -------- -------- -------- -----------
   Total     32.00GiB  3.00GiB 32.00MiB    24.42GiB
   Used      29.99GiB  1.97GiB 16.00KiB            
erlangen:~ # 

btrfsmaintenance automatically runs balance and keeps allocated space reasonably low. Deleting older snapshots may require ‘snapper rollback’ to the latest snapshot. This moves the default snapshot from #1703 to #1757 and enables deletion of #1703.