TW dup issues with latest version

I rebooted the PC. Now I get this error:

zypper: error while loading shared libraries: librpm.so.9: cannot open shared object file: No such file or directory
chris@asus-roc:~>$sudo btrfs filesystem usage -T /
Overall:
    Device size:		  40.00GiB
    Device allocated:		  37.53GiB
    Device unallocated:		   2.47GiB
    Device missing:		     0.00B
    Device slack:		     0.00B
    Used:			  26.24GiB
    Free (estimated):		  13.07GiB	(min: 13.07GiB)
    Free (statfs, df):		  13.07GiB
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		  74.48MiB	(used: 0.00B)
    Multiple profiles:		        no

             Data     Metadata  System                             
Id Path      single   single    single   Unallocated Total    Slack
-- --------- -------- --------- -------- ----------- -------- -----
 1 /dev/sda2 36.00GiB   1.50GiB 32.00MiB     2.47GiB 40.00GiB     -
-- --------- -------- --------- -------- ----------- -------- -----
   Total     36.00GiB   1.50GiB 32.00MiB     2.47GiB 40.00GiB 0.00B
   Used      25.40GiB 861.16MiB 16.00KiB                           
chris@asus-roc:~>$

Dmesg https://paste.opensuse.org/pastes/0a501d3bb2c9

zypper seems dead now!!! :rage:

And YaST2 Software gives me this:

Internal error. Please report a bug report with logs.
Run save_y2logs to get complete logs.

Caller: /usr/lib64/ruby/vendor_ruby/3.2.0/yast/yast.rb:186:in `import_pure'

Details: component cannot import namespace 'Pkg'

The root filesystem has enough free space.

If you captured that dmesg after the reboot, it is already too late. The messages from before the reboot are lost.

There are a lot of topics about this problem. Just browse the forum for the last couple of days or use the search.

Wow. I hope you have backups or can boot a working read-only snapshot and do sudo snapper rollback.
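Roughly, that recovery path looks like this (just a sketch, assuming the read-only snapshot entries in the GRUB menu still boot; pick one from before the broken dup):

# boot the chosen snapshot from GRUB's read-only snapshot submenu, then:
sudo snapper list       # confirm which snapshot you booted into
sudo snapper rollback   # make a writable copy of it the new default subvolume
sudo reboot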

I got similar errors and lost the read-only snapshot boot entries on a VM after experimenting with rather stupid things, like attempting to force-install packages in parallel, circumventing multiple safeguards in the process.

Perhaps some third-party repo/package/script caused this.

This topic has already been discussed several times in the last few days… see the existing threads for the solution:

Normally no rollback is needed to fix this…


The point is: why is it still possible to bork your zypper when the bug has been known for several days?

Could somebody stop this?

Why not simply read the bug report?

For some reason all my snapshots are gone :frowning:. Luckily I do rsync backups of key data to an external drive. I'm thinking about using Clonezilla/Rescuezilla again too.

Reading through them now. Thanks. Yesterday was not a good day for thinking through issues clearly, as my room was at 38°C with high humidity - damn heatwave.

I'll go through the solutions soon. However, I may opt to wipe and reinstall TW from scratch with a larger root partition. I've noticed that over the last 12 months each dup requires a bit more disk space, even though I haven't installed any new software. My 40GB root partition is slowly being chewed up, and I have to delete old snapshots to get a dup done. But that's another issue.
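For context, the cleanup I end up doing before a dup is roughly this (a sketch - the snapshot range is only an example, I check the list first):

sudo snapper list                 # see which old pre/post snapshots are still around
sudo snapper delete 100-150       # example range only - remove snapshots no longer needed
sudo zypper clean --all           # also drop cached packages
sudo btrfs filesystem usage -T /  # confirm there is room before starting the dup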

Thanks for your replies.

@suse_rasputin do you only have the default repositories active? No issues here on two Tumbleweed (GNOME) and two MicroOS (Hyprland/multi-user) systems. No non-standard or third-party repos active here…

All good now. Using @hui's link for the interrupted update, my TW has finally updated ~4100 packages. The cost, though, was ~10GB of snapshot storage, leaving me with ~3GB free on the root partition - that won't cover another dup like this.

I'll roll back to the latest snapshot and then delete the older ones after a few days.

Thanks again.


Did you manage to restore the snapshots in the GRUB menu?

Yes @pavinjoseph they did come back…BUT…I genuinely didn’t trust things.

chris@asus-roc:~>$sudo btrfs filesystem usage -T /
 [sudo] password for root:
 Overall:
     Device size:                  40.00GiB
     Device allocated:             40.00GiB
     Device unallocated:            1.00MiB
     Device missing:                  0.00B
     Device slack:                    0.00B
     Used:                         36.85GiB
     Free (estimated):              2.50GiB      (min: 2.50GiB)
     Free (statfs, df):             2.50GiB
     Data ratio:                       1.00
     Metadata ratio:                   1.00
     Global reserve:               87.97MiB      (used: 0.00B)
     Multiple profiles:                  no

              Data     Metadata System
 Id Path      single   single   single   Unallocated Total    Slack
 -- --------- -------- -------- -------- ----------- -------- -----
  1 /dev/sda2 38.28GiB  1.69GiB 32.00MiB     1.00MiB 40.00GiB     -
 -- --------- -------- -------- -------- ----------- -------- -----
    Total     38.28GiB  1.69GiB 32.00MiB     1.00MiB 40.00GiB 0.00B
    Used      35.78GiB  1.07GiB 16.00KiB

 chris@asus-roc:~>$sudo snapper list
    # | Type   | Pre # | Date                     | User | Cleanup | Description           | Userdata     
 -----+--------+-------+--------------------------+------+---------+-----------------------+--------------
   0  | single |       |                          | root |         | current               |
 179* | single |       | Thu 08 Feb 2024 17:26:30 | root |         | writable copy of #177 |
 188  | pre    |       | Fri 09 Feb 2024 11:25:29 | root | number  | zypp(zypper)          | important=yes
 189  | post   |   188 | Fri 09 Feb 2024 12:11:00 | root | number  |                       | important=yes
 190  | pre    |       | Fri 09 Feb 2024 12:19:04 | root | number  | zypp(zypper)          | important=yes
 191  | post   |   190 | Fri 09 Feb 2024 12:19:59 | root | number  |                       | important=yes
 192  | pre    |       | Fri 09 Feb 2024 12:20:15 | root | number  | yast sw_single        |
 193  | pre    |       | Fri 09 Feb 2024 12:21:57 | root | number  | zypp(ruby.ruby3.3)    | important=no
 194  | post   |   193 | Fri 09 Feb 2024 12:22:21 | root | number  |                       | important=no
 195  | post   |   192 | Fri 09 Feb 2024 12:22:25 | root | number  |                       |
 196  | pre    |       | Fri 09 Feb 2024 12:49:14 | root | number  | yast sw_single        |
 197  | pre    |       | Fri 09 Feb 2024 12:49:47 | root | number  | zypp(ruby.ruby3.3)    | important=no
 198  | post   |   197 | Fri 09 Feb 2024 12:49:54 | root | number  |                       | important=no
 199  | post   |   196 | Fri 09 Feb 2024 12:50:02 | root | number  |                       |
 chris@asus-roc:~>$

Being stuck with only 2.5GB left on the root partition, I booted snapshot #199 from the GRUB menu, did a snapshot rollback from it, and then deleted all the previous snapshots.

chris@asus-roc:~>$sudo snapper list
   # | Type   | Pre # | Date                     | User | Cleanup | Description             | Userdata     
-----+--------+-------+--------------------------+------+---------+-------------------------+--------------
  0  | single |       |                          | root |         | current                 |              
179  | single |       | Thu 08 Feb 2024 17:26:30 | root | number  | writable copy of #177   |              
188  | pre    |       | Fri 09 Feb 2024 11:25:29 | root | number  | zypp(zypper)            | important=yes
189  | post   |   188 | Fri 09 Feb 2024 12:11:00 | root | number  |                         | important=yes
190  | pre    |       | Fri 09 Feb 2024 12:19:04 | root | number  | zypp(zypper)            | important=yes
191  | post   |   190 | Fri 09 Feb 2024 12:19:59 | root | number  |                         | important=yes
192  | pre    |       | Fri 09 Feb 2024 12:20:15 | root | number  | yast sw_single          |              
193  | pre    |       | Fri 09 Feb 2024 12:21:57 | root | number  | zypp(ruby.ruby3.3)      | important=no 
194  | post   |   193 | Fri 09 Feb 2024 12:22:21 | root | number  |                         | important=no 
195  | post   |   192 | Fri 09 Feb 2024 12:22:25 | root | number  |                         |              
196  | pre    |       | Fri 09 Feb 2024 12:49:14 | root | number  | yast sw_single          |              
197  | pre    |       | Fri 09 Feb 2024 12:49:47 | root | number  | zypp(ruby.ruby3.3)      | important=no 
198  | post   |   197 | Fri 09 Feb 2024 12:49:54 | root | number  |                         | important=no 
199  | post   |   196 | Fri 09 Feb 2024 12:50:02 | root | number  |                         |              
200  | single |       | Sun 11 Feb 2024 08:09:22 | root | number  | rollback backup of #179 | important=yes
201* | single |       | Sun 11 Feb 2024 08:09:22 | root |         | writable copy of #199   |              

Then sudo snapper delete 179-200.

Now I have my ~13GB of storage back on the root partition:

chris@asus-roc:~>$sudo btrfs filesystem usage -T /
Overall:
    Device size:		  40.00GiB
    Device allocated:		  40.00GiB
    Device unallocated:		   1.00MiB
    Device missing:		     0.00B
    Device slack:		     0.00B
    Used:			  25.28GiB
    Free (estimated):		  13.71GiB	(min: 13.71GiB)
    Free (statfs, df):		  13.71GiB
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		  87.88MiB	(used: 0.00B)
    Multiple profiles:		        no

             Data     Metadata  System                             
Id Path      single   single    single   Unallocated Total    Slack
-- --------- -------- --------- -------- ----------- -------- -----
 1 /dev/sda2 38.28GiB   1.69GiB 32.00MiB     1.00MiB 40.00GiB     -
-- --------- -------- --------- -------- ----------- -------- -----
   Total     38.28GiB   1.69GiB 32.00MiB     1.00MiB 40.00GiB 0.00B
   Used      24.57GiB 722.33MiB 16.00KiB                           
chris@asus-roc:~>$

@kitman you need to run systemctl start btrfs-balance.service manually…


It's currently inactive. I had a quick read of the man page, but I don't understand it.

What will it do if I enable the service at boot?

Thanks.

@kitman You don't need to enable it; it runs automatically via a systemd timer… Since you deleted the snapshots manually, starting it now is just a quick way to recover the disk space rather than waiting for the timer to run it…

Check the data from btrfs filesystem usage -T / before and after…
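Roughly like this (a sketch; assuming the timer units installed by the btrfsmaintenance package are named btrfs-*):

sudo btrfs filesystem usage -T /             # note Device unallocated / free space before
sudo systemctl start btrfs-balance.service   # run the balance now instead of waiting
systemctl list-timers 'btrfs-*'              # shows when the timer would have run it anyway
sudo btrfs filesystem usage -T /             # compare once the balance has finished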


Ah, I get it. I did notice that the free space didn't increase immediately after the manual snapshot deletes. Noted for next time :+1:

Thanks.

Glad to hear it’s solved :grinning:

Snapper in its default config leaves 20% of the FS free, but for a 40G drive that’s only 8G. You might want to change the option FREE_LIMIT in /etc/snapper/configs/root.
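Something like this should do it (a sketch - the 0.3 is only an example, meaning aim to keep roughly 30%, i.e. ~12GiB on a 40G root, free):

sudo snapper -c root get-config | grep -i limit    # check what is currently set
sudo snapper -c root set-config "FREE_LIMIT=0.3"   # example value only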

@pavinjoseph Did you mean SPACE_LIMIT?

chris@asus-roc:~>$cat /etc/snapper/configs/root | grep LIMIT
SPACE_LIMIT="0.5"
NUMBER_LIMIT="2-10"
NUMBER_LIMIT_IMPORTANT="4-10"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"

To be honest, I'm concerned about the creep in additional space used by EACH dup. Now the next one will use another 1.5GiB! :rage:

The following product is going to be upgraded:
  openSUSE Tumbleweed  20240206-0 -> 20240209-0
.
.
.
.
.
221 packages to upgrade, 7 new, 2 to remove.
Overall download size: 626.5 MiB. Already cached: 0 B. After the operation,
additional 1.5 GiB will be used.

Soon I'll have no disk space left from upgrades alone.
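Next time I'll probably check where the exclusive space is actually going first, something like this (a sketch; needs root and assumes the default /.snapshots layout):

sudo sh -c 'btrfs filesystem du -s /.snapshots/*/snapshot'   # exclusive vs shared data per snapshot
sudo du -xsh /var/cache/zypp                                 # zypper's package cache on the root filesystem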

Time for me to consider a larger NVMe drive with a larger Btrfs root partition soon - a setup where I can dynamically resize partitions.

Sorry, going off original topic. Might start a new topic.

@kitman did you review your space after the balance was run? Is $HOME part of the snapshots?