Some questions after I installed Tumbleweed

Hi, I am missing some functions I had with Leap 15.6:

Can’t opt for single click (KDE)?
Ctrl+Esc does not show the processes => what to use in Tumbleweed?

Missing community repository => Packman
No Packman

GRUB shows only one option and is in command-line format

I installed two 2 TB SSDs
one is /home
the other I would like to use for daily (hourly) automatic backup
what would you recommend?

thanks and best regards

What desktop environment?

@sfalken KDE … just updated the post

System Settings->General Behaviour->Clicking

Shortcuts are freely customizable: System Settings->Keyboard->Shortcuts.
It’s Meta+Esc.

It’s available via Myrlyn->Extras->Configure Repositories->Add->Community Repositories->Packman
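For reference, Packman can also be added without Myrlyn by dropping a repo definition into /etc/zypp/repos.d/. A sketch of such a file, using one commonly used mirror (pick any Packman mirror you prefer):

```
# /etc/zypp/repos.d/packman.repo  (sketch; mirror URL is an example)
[packman]
name=Packman Repository (openSUSE Tumbleweed)
enabled=1
autorefresh=1
baseurl=https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/
priority=90
```

After adding it, `zypper dup --from packman --allow-vendor-change` is the usual way to switch the multimedia packages over to the Packman versions.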

Normal for a fresh install with only one kernel.

This is normal for the new grub-bls standard.

System Settings->Quick Settings. In the middle of the window, you can set single-click with the radio buttons. Plasma changed from single-click by default with Plasma 6.0 (I think).

Meta+Esc should launch the System Monitor. You can change the hotkey in System Settings->Keyboard->Shortcuts if you so desire. (Meta is oftentimes the “windows key”)

I use Borg Backup with Vorta as a front-end. You could also use rsync to do this (I do that as well for local backups, and use Borg for a backup of critical files to a cloud storage solution I use).
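For the “hourly automatic backup” part of the question: whichever tool you pick, a systemd user timer is an easy way to schedule it. A minimal sketch for Borg (the unit names and the repo path /backup/borg are my assumptions for illustration, not anything from this thread):

```
# ~/.config/systemd/user/borg-backup.service  (hypothetical names and paths)
[Unit]
Description=Hourly borg backup of /home

[Service]
Type=oneshot
ExecStart=/usr/bin/borg create --stats /backup/borg::{hostname}-{now} /home

# ~/.config/systemd/user/borg-backup.timer
[Unit]
Description=Run borg-backup.service every hour

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now borg-backup.timer`.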

@hui Thanks for the comprehensive reply
@sfalken Thanks !
@hendersj I will look into it

Please show
ls -l /boot/efi/loader/entries/
cat /boot/efi/loader/entries/*.conf
With grub-bls the GRUB menu text comes from the title= line, which you should be able to edit to something more descriptive if needed.

For example

title   openSUSE Leap 16.0
linux   /boot/vmlinuz-6.8.7-1-default
initrd  /boot/initrd-6.8.7-1-default
options root=UUID=3b62f1d5-2a9c-4c5a-9c4f-0f4a7c1e6b3e ro quiet splash

Hopefully, someone else using grub2-bls can advise further here.

More info:

btrbk is lightweight, ultra fast, easy to use and requires minimal configuration:

erlangen:~ # cat /etc/btrbk/btrbk.conf
snapshot_preserve       7d 4w 12m 100y
target_preserve         7d 4w 12m 100y
snapshot_dir            /Btrbk/btrbk_snapshots/Backup
target                  /Backup/btrbk_snapshots/erlangen
subvolume               /
subvolume               /home
erlangen:~ # 

Total size of the backup is 602 GiB:

erlangen:~ # du -chd0 /.snapshots/3979/snapshot /home
14G     /.snapshots/3979/snapshot
588G    /home
602G    total
erlangen:~ # 

A full backup takes just 30 seconds and consumes about 12 seconds of CPU time.

erlangen:~ # journalctl -S 0:00 -u btrbk
Jan 12 04:28:55 erlangen systemd[1]: Starting btrbk backup of /...
Jan 12 04:29:25 erlangen btrbk[66577]: --------------------------------------------------------------------------------
Jan 12 04:29:25 erlangen btrbk[66577]: Backup Summary (btrbk command line client, version 0.33.0-dev)
Jan 12 04:29:25 erlangen btrbk[66577]:     Date:   Mon Jan 12 04:28:55 2026
Jan 12 04:29:25 erlangen btrbk[66577]:     Config: /etc/btrbk/btrbk.conf
Jan 12 04:29:25 erlangen btrbk[66577]: Legend:
Jan 12 04:29:25 erlangen btrbk[66577]:     ===  up-to-date subvolume (source snapshot)
Jan 12 04:29:25 erlangen btrbk[66577]:     +++  created subvolume (source snapshot)
Jan 12 04:29:25 erlangen btrbk[66577]:     ---  deleted subvolume
Jan 12 04:29:25 erlangen btrbk[66577]:     ***  received subvolume (non-incremental)
Jan 12 04:29:25 erlangen btrbk[66577]:     >>>  received subvolume (incremental)
Jan 12 04:29:25 erlangen btrbk[66577]: --------------------------------------------------------------------------------
Jan 12 04:29:25 erlangen btrbk[66577]: /
Jan 12 04:29:25 erlangen btrbk[66577]: +++ /Btrbk/btrbk_snapshots/Backup/ROOT.20260112T0428
Jan 12 04:29:25 erlangen btrbk[66577]: >>> /Backup/btrbk_snapshots/erlangen/ROOT.20260112T0428
Jan 12 04:29:25 erlangen btrbk[66577]: /home
Jan 12 04:29:25 erlangen btrbk[66577]: +++ /Btrbk/btrbk_snapshots/Backup/home.20260112T0428
Jan 12 04:29:25 erlangen btrbk[66577]: >>> /Backup/btrbk_snapshots/erlangen/home.20260112T0428
Jan 12 04:29:25 erlangen systemd[1]: btrbk.service: Deactivated successfully.
Jan 12 04:29:25 erlangen systemd[1]: Finished btrbk backup of /.
Jan 12 04:29:25 erlangen systemd[1]: btrbk.service: Consumed 12.157s CPU time.
erlangen:~ # 
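If you want btrbk to run hourly rather than daily, one option is a systemd drop-in for the timer (a sketch; I am assuming the packaged unit is called btrbk.timer, matching the btrbk.service shown in the log above):

```
# systemctl edit btrbk.timer
# creates /etc/systemd/system/btrbk.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=hourly
```

The empty OnCalendar= line clears the packaged schedule before the new one is set.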

@deano_ferrari

Output of ls -l /boot/efi/loader/entries/ and cat /boot/efi/loader/entries/*.conf:

production:/ # ls -l /boot/efi/loader/entries/
total 8 
-rwxr-xr-x. 1 root root 508 Jan 11 09:44 snapper-opensuse-tumbleweed-6.18.3-1-default.conf 
-rwxr-xr-x. 1 root root 508 Jan 12 19:59 snapper-opensuse-tumbleweed-6.18.4-1-default.conf 
production:/ # 


production:/ # cat /boot/efi/loader/entries/*.conf
# Boot Loader Specification type#1 entry 
title      openSUSE Tumbleweed 20260108 
version    @6.18.3-1-default 
sort-key   opensuse-tumbleweed 
options    splash=silent quiet security=selinux selinux=1 mitigations=auto root=UUID=b503dd07-328d-4eda-871d-32ad853f6069 systemd.machine_id=3fa8f688a43c46d6a9c9f26d2c2fc590 
linux      /opensuse-tumbleweed/6.18.3-1-default/linux-486a1027722e38ef3daa159cb5c40996322db294 
initrd     /opensuse-tumbleweed/6.18.3-1-default/initrd-3f44bf1189a9675c1f012223134e5f26871a76fa 
# Boot Loader Specification type#1 entry 
title      openSUSE Tumbleweed 20260109 
version    @6.18.4-1-default 
sort-key   opensuse-tumbleweed 
options    splash=silent quiet security=selinux selinux=1 mitigations=auto root=UUID=b503dd07-328d-4eda-871d-32ad853f6069 systemd.machine_id=3fa8f688a43c46d6a9c9f26d2c2fc590 
linux      /opensuse-tumbleweed/6.18.4-1-default/linux-19285b6945bb74be13e1584af7092b3deb21db39 
initrd     /opensuse-tumbleweed/6.18.4-1-default/initrd-cde8ff7fe1ba59d4033fcdbb42af30df1c19fd72 
production:/ # 

@karlmistelberger
Thanks Karl :+1:. In the meantime I have installed Vorta/borgbackup.

It’s probably important, though, to note that a snapshot on the same drive isn’t the same as a backup. You can take a snapshot super quickly, but using a tool like btrbk to also move that snapshot to another device (where it can be used for data recovery in case of a drive failure) will probably take more than 30 seconds for several GB of data; moving data between devices (or systems) is still subject to the available bus bandwidth.

So the titles are human friendly…

title openSUSE Tumbleweed 20260108
title openSUSE Tumbleweed 20260109

… sure! I was just wondering about the “display”, as it is in command-line format
… no problem

Everybody is entitled to their opinion, of course.

Disks:

erlangen:~ # lsscsi -s 
[N:0:4:1]    disk    Samsung SSD 970 EVO Plus 2TB__1            /dev/nvme0n1  2.00TB
[N:1:1:1]    disk    Samsung SSD 990 EVO 2TB__1                 /dev/nvme1n1  2.00TB
erlangen:~ # 

Filesystems:

erlangen:~ # btrfs filesystem show 
Label: 'System'  uuid: 0e58bbe5-eff7-4884-bb5d-a0aac3d8a344
        Total devices 1 FS bytes used 1.20TiB
        devid    2 size 1.82TiB used 1.27TiB path /dev/nvme1n1p2

Label: 'Backup'  uuid: 8a723ba5-c46f-45df-b708-0cf9c541da27
        Total devices 1 FS bytes used 1.24TiB
        devid    2 size 1.79TiB used 1.29TiB path /dev/nvme0n1p2
erlangen:~ # 

btrbk created the following subvolumes on the backup drive:

erlangen:~ # btrfs subvolume show /Backup/btrbk_snapshots/erlangen/ROOT.20260113T0544/
btrbk_snapshots/erlangen/ROOT.20260113T0544
        Name:                   ROOT.20260113T0544
        UUID:                   3f6f4b6c-3ed8-9445-b884-21ab33ded0d4
        Parent UUID:            b41e7bd9-d52f-7841-a4d7-c459d7eced5c
        Received UUID:          14c05794-ddd5-474a-a054-8f0e0797f85b
        Creation time:          2026-01-13 05:44:42 +0100
        Subvolume ID:           2068
        Generation:             16079
        Gen at creation:        16075
        Parent ID:              317
        Top level ID:           317
        Flags:                  readonly
        Send transid:           1891991
        Send time:              2026-01-13 05:44:42 +0100
        Receive transid:        16076
        Receive time:           2026-01-13 05:44:44 +0100
        Snapshot(s):
        Quota group:            0/2068
          Limit referenced:     -
          Limit exclusive:      -
          Usage referenced:     13.25GiB
          Usage exclusive:      114.77MiB
erlangen:~ # 
erlangen:~ # btrfs subvolume show /Backup/btrbk_snapshots/erlangen/home.20260113T0544/
btrbk_snapshots/erlangen/home.20260113T0544
        Name:                   home.20260113T0544
        UUID:                   d85e7d8b-4510-0747-a156-fc3e1706cbf8
        Parent UUID:            8bc4b9c3-a5e6-4447-9ae2-669de7e86bcd
        Received UUID:          c9bafc6f-513b-6d4b-9c9b-32e3c0897c31
        Creation time:          2026-01-13 05:44:46 +0100
        Subvolume ID:           2069
        Generation:             16081
        Gen at creation:        16078
        Parent ID:              317
        Top level ID:           317
        Flags:                  readonly
        Send transid:           1891992
        Send time:              2026-01-13 05:44:46 +0100
        Receive transid:        16079
        Receive time:           2026-01-13 05:44:58 +0100
        Snapshot(s):
        Quota group:            0/2069
          Limit referenced:     -
          Limit exclusive:      -
          Usage referenced:     589.90GiB
          Usage exclusive:      1.09GiB
erlangen:~ # 

Total transfer is 114.77 MiB + 1.09 GiB ≈ 1.20 GiB.

Verifying the full backup:

erlangen:~ # diff -qr --no-dereference /Btrbk/btrbk_snapshots/Backup/home.20260113T0544/ /Backup/btrbk_snapshots/erlangen/home.20260113T0544/
Only in /Btrbk/btrbk_snapshots/Backup/home.20260113T0544/charlemagne: .cache
Only in /Btrbk/btrbk_snapshots/Backup/home.20260113T0544/karl: .cache
erlangen:~ # 

Note: .cache is excluded:

erlangen:~ # ll /Btrbk/btrbk_snapshots/Backup/home.20260113T0544/karl/.cache
total 0
erlangen:~ # ll /Btrbk/btrbk_snapshots/Backup/home.20260113T0544/charlemagne/.cache
total 0
erlangen:~ # 

That’s rather odd: a backup device in the same system? But aside from that, in the event of failure(s), i.e. replacing the devices and rebuilding the system, what is the process?

Backups are fine, but the restore process is the thing to test… That’s why I only care about data: install the OS, run system and user scripts, rsync the data back, and done…

I don’t know that understanding the limitations of the bandwidth in a data bus or network is a matter of “opinion”.

It’s not feasible to move 100 GB of data over a 56 kbit/s link in 30 seconds (as an extreme example). That’s a physical limitation, as I’m sure you’d agree.
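As a back-of-the-envelope check (assuming decimal gigabytes and a sustained 56 kbit/s):

```shell
# Rough transfer-time estimate: 100 GB over a 56 kbit/s link
bytes=$((100 * 1000 * 1000 * 1000))   # 100 GB in bytes (decimal units)
seconds=$((bytes * 8 / 56000))        # total bits / link speed in bit/s
days=$((seconds / 86400))
echo "about ${days} days"             # about 165 days
```

So even running flat out, that link would need roughly half a year, not 30 seconds.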

Years ago, I was involved in the architecture of a backup system that worked over a microwave link between two buildings. We ran into a bandwidth issue that resulted from a lack of understanding of how the software (IBM’s ADSM) would use bandwidth, and how the multichannel microwave link would allocate additional bandwidth (IIRC, there were 24 1.5 Mbps channels available).

ADSM would determine what the connection speed was between the device being backed up and the server, and would throttle itself to 80% of the bandwidth available.

The microwave link wouldn’t bond another channel until 85% (as I recall) of the bandwidth on the current channel was used.

So between the two thresholds, restoring all the data in the event of a catastrophic failure would have taken about 6 weeks without tweaks to the configuration of the microwave link (IIRC, the ADSM configuration didn’t have an option to use more bandwidth).

No amount of wishful thinking or “different opinions” would have sped up that restore operation without configuration changes, and even then, the full restoration time wasn’t reduced to fewer than several days (which would’ve cost millions of dollars in downtime).

Similarly, trying to do a Windows 2000 Server domain controller installation over a 56K MMV VSAT satellite link would also have taken multiple days, even if the latency hadn’t been a primary issue. Again, not a matter of opinion, but a fact based on the bandwidth and latency limitations of the technology at the time.

It’s important for the readers here to understand those limitations - and that your specific results may not reflect their experience because their hardware is different from yours. Setting an expectation that your experiences are valid for everyone sets others up for failure.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.