Upgrade to 15.3 fails

Hi, I tried upgrading from Leap 15.2 to Leap 15.3, but it failed. I suspect it is because of disk space issues.

I used an older snapshot to get back to Leap 15.2, cleared some space and rebalanced, hoping to free enough room to do the update. Given the output below, I suspect I might run out of space again soon.

linux-3ztp:/home/by79 # btrfs filesystem usage -T /
Overall:
    Device size:                  40.00GiB
    Device allocated:             38.56GiB
    Device unallocated:            1.44GiB
    Device missing:                  0.00B
    Used:                         32.63GiB
    Free (estimated):              6.71GiB      (min: 6.71GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               90.53MiB      (used: 0.00B)

             Data     Metadata System              
Id Path      single   single   single   Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/sda3 36.86GiB  1.67GiB 32.00MiB     1.44GiB
-- --------- -------- -------- -------- -----------
   Total     36.86GiB  1.67GiB 32.00MiB     1.44GiB
   Used      31.59GiB  1.04GiB 16.00KiB            
linux-3ztp:/home/by79 # du -xhd1 -t1M / /var/ /tmp/
32M     /etc
9.4G    /usr
672M    /var
2.2M    /bin
93M     /boot
1.5G    /lib
14M     /lib64
57M     /root
11M     /sbin
12G     /
85M     /tmp/YaST2-28694-MpzvCR
88M     /tmp/
linux-3ztp:/home/by79 # snapper list
   # | Type   | Pre # | Date                            | User | Used Space | Cleanup | Description             | Userdata     
-----+--------+-------+---------------------------------+------+------------+---------+-------------------------+--------------
  0  | single |       |                                 | root |            |         | current                 |              
253  | pre    |       | Sat 17 Apr 2021 06:06:57 PM +08 | root |  25.03 MiB | number  | zypp(zypper)            | important=yes
254  | post   |   253 | Sat 17 Apr 2021 06:07:06 PM +08 | root |  26.40 MiB | number  |                         | important=yes
286  | pre    |       | Mon 29 Aug 2022 05:13:16 PM +08 | root |  40.97 MiB | number  | zypp(zypper)            | important=yes
287  | post   |   286 | Mon 29 Aug 2022 05:13:27 PM +08 | root |  15.97 MiB | number  |                         | important=yes
288  | single |       | Fri 02 Sep 2022 07:09:40 PM +08 | root | 253.05 MiB | number  | rollback backup of #250 | important=yes
289* | single |       | Fri 02 Sep 2022 07:09:40 PM +08 | root | 761.24 MiB |         | writable copy of #284   |              
linux-3ztp:/home/by79 # 

Is there anything meaningful I can delete?

TIA.

“Upgrade” is a very generic term. Maybe you could tell us how you are trying to upgrade.

Your partitioning is a plain and utter PITA. Consider installing Leap 15.3 on a single btrfs partition.

**Leap-15-4:~ #** cat /etc/fstab  
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /                       btrfs  defaults                      0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /.snapshots             btrfs  subvol=/@/.snapshots          0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /var                    btrfs  subvol=/@/var                 0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /usr/local              btrfs  subvol=/@/usr/local           0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /tmp                    btrfs  subvol=/@/tmp                 0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /srv                    btrfs  subvol=/@/srv                 0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /root                   btrfs  subvol=/@/root                0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /opt                    btrfs  subvol=/@/opt                 0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /home                   btrfs  subvol=/@/home                0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /boot/grub2/x86_64-efi  btrfs  subvol=/@/boot/grub2/x86_64-efi  0  0 
UUID=85d405ec-d559-49a1-b59c-5c5c9f176724  /boot/grub2/i386-pc     btrfs  subvol=/@/boot/grub2/i386-pc  0  0 
UUID=6B6D-1CDE                             /boot/efi               vfat   utf8                          0  2 
**Leap-15-4:~ #**

**Leap-15-4:~ #** df -h -x tmpfs -x devtmpfs 
Filesystem      Size  Used Avail Use% Mounted on 
/dev/sda3        49G   16G   33G  32% / 
/dev/sda3        49G   16G   33G  32% /.snapshots 
/dev/sda3        49G   16G   33G  32% /boot/grub2/i386-pc 
/dev/sda3        49G   16G   33G  32% /boot/grub2/x86_64-efi 
/dev/sda3        49G   16G   33G  32% /root 
/dev/sda3        49G   16G   33G  32% /opt 
/dev/sda3        49G   16G   33G  32% /srv 
/dev/sda3        49G   16G   33G  32% /home 
/dev/sda3        49G   16G   33G  32% /tmp 
/dev/sda3        49G   16G   33G  32% /usr/local 
/dev/sda3        49G   16G   33G  32% /var 
/dev/sda1       500M   27M  473M   6% /boot/efi 
**Leap-15-4:~ #**
**Leap-15-4:~ #** btrfs filesystem usage -T /           
Overall: 
    Device size:                  48.83GiB 
    Device allocated:             23.05GiB 
    Device unallocated:           25.78GiB 
    Device missing:                  0.00B 
    Used:                         15.38GiB 
    Free (estimated):             32.94GiB      (min: 32.94GiB) 
    Free (statfs, df):            32.94GiB 
    Data ratio:                       1.00 
    Metadata ratio:                   1.00 
    Global reserve:               57.84MiB      (used: 0.00B) 
    Multiple profiles:                  no 

             Data     Metadata  System               
Id Path      single   single    single   Unallocated 
-- --------- -------- --------- -------- ----------- 
 1 /dev/sda3 22.01GiB   1.01GiB 32.00MiB    25.78GiB 
-- --------- -------- --------- -------- ----------- 
   Total     22.01GiB   1.01GiB 32.00MiB    25.78GiB 
   Used      14.85GiB 548.50MiB 16.00KiB             
**Leap-15-4:~ # **
**Leap-15-4:~ #** snapper list 
   # | Type   | Pre # | Date                     | User | Used Space | Cleanup | Description           | Userdata      
-----+--------+-------+--------------------------+------+------------+---------+-----------------------+-------------- 
  0  | single |       |                          | root |            |         | current               |               
  1* | single |       | Fri Aug  6 11:47:32 2021 | root | 198.82 MiB |         | first root filesystem |               
149  | pre    |       | Sun Jul 24 13:13:45 2022 | root | 419.31 MiB | number  | zypp(zypper)          | important=yes 
150  | post   |   149 | Sun Jul 24 13:16:40 2022 | root |  53.67 MiB | number  |                       | important=yes 
157  | pre    |       | Wed Aug  3 00:01:06 2022 | root |   7.75 MiB | number  | zypp(zypper)          | important=yes 
158  | post   |   157 | Wed Aug  3 00:03:00 2022 | root |   7.46 MiB | number  |                       | important=yes 
159  | pre    |       | Wed Aug 10 06:10:23 2022 | root |   6.64 MiB | number  | zypp(zypper)          | important=no  
160  | post   |   159 | Wed Aug 10 06:12:08 2022 | root |  17.09 MiB | number  |                       | important=no  
161  | pre    |       | Thu Aug 25 08:38:48 2022 | root |   9.17 MiB | number  | zypp(zypper)          | important=yes 
162  | post   |   161 | Thu Aug 25 08:43:45 2022 | root |  40.51 MiB | number  |                       | important=yes 
163  | pre    |       | Thu Aug 25 08:44:51 2022 | root | 256.00 KiB | number  | zypp(zypper)          | important=yes 
164  | post   |   163 | Thu Aug 25 08:45:16 2022 | root |  28.64 MiB | number  |                       | important=yes 
165  | pre    |       | Fri Sep  2 08:37:57 2022 | root |   4.06 MiB | number  | zypp(zypper)          | important=no  
166  | post   |   165 | Fri Sep  2 08:38:01 2022 | root | 576.00 KiB | number  |                       | important=no  
167  | pre    |       | Fri Sep  2 08:39:17 2022 | root | 440.00 KiB | number  | zypp(zypper)          | important=yes 
168  | post   |   167 | Fri Sep  2 08:45:22 2022 | root |  36.71 MiB | number  |                       | important=yes 
**Leap-15-4:~ #**

You can remove all snapshots except the current one (which is marked with *) and number 0. “Used space” shown by snapper can be very confusing (it really means “space used exclusively by this subvolume”, which is rather different from “snapshot size”). As you have two pre/post snapshots created more than a year ago, together they can easily consume half of your disk.

Try removing the oldest snapshot (253), then look at (and post) the output of “btrfs fi us -T /” again.
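
As a minimal sketch of that cleanup, run as root (the snapshot numbers are the ones from your snapper list above; keep 0 and the current 289*):

snapper delete 253-254        # the oldest pre/post pair, from April 2021
snapper delete 286-287 288    # the remaining old pair and the rollback backup, once you are sure you no longer need them
btrfs filesystem usage -T /   # re-check the allocation afterwards

Note that btrfs frees the space asynchronously, so the usage numbers may take a short while to drop.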

“Upgrade” is a very generic term. Maybe you could tell us how you are trying to upgrade.

I was using the below:

zypper refresh
zypper update
sed -i 's/15.2/${releasever}/g' /etc/zypp/repos.d/*.repo
zypper --releasever=15.3 refresh

after which something got stuck, and then I used snapper to get back.

Currently,

by79@linux-3ztp:~> zypper lr
Repository priorities are without effect. All enabled repositories share the same priority.

#  | Alias                               | Name                                    | Enabled | GPG Check | Refresh
---+-------------------------------------+-----------------------------------------+---------+-----------+--------
 1 | NVIDIA                              | NVIDIA                                  | Yes     | (r ) Yes  | Yes
 2 | google-chrome                       | google-chrome                           | Yes     | (r ) Yes  | Yes
 3 | http-download.opensuse.org-3d474035 | devel:languages:R:released              | Yes     | (r ) Yes  | Yes
 4 | libdvdcss                           | libdvdcss                               | Yes     | (r ) Yes  | Yes
 5 | openSUSE-Leap-${releasever}-0       | openSUSE-Leap-15.2-0                    | No      | ----      | ----
 6 | packman                             | packman                                 | Yes     | (r ) Yes  | Yes
 7 | repo-debug                          | openSUSE-Leap-15.2-Debug                | No      | ----      | ----
 8 | repo-debug-non-oss                  | openSUSE-Leap-15.2-Debug-Non-Oss        | No      | ----      | ----
 9 | repo-debug-update                   | openSUSE-Leap-15.2-Update-Debug         | No      | ----      | ----
10 | repo-debug-update-non-oss           | openSUSE-Leap-15.2-Update-Debug-Non-Oss | No      | ----      | ----
11 | repo-non-oss                        | openSUSE-Leap-15.2-Non-Oss              | Yes     | (r ) Yes  | Yes
12 | repo-oss                            | openSUSE-Leap-15.2-Oss                  | Yes     | (r ) Yes  | Yes
13 | repo-source                         | openSUSE-Leap-15.2-Source               | No      | ----      | ----
14 | repo-source-non-oss                 | openSUSE-Leap-15.2-Source-Non-Oss       | No      | ----      | ----
15 | repo-update                         | openSUSE-Leap-15.2-Update               | Yes     | (r ) Yes  | Yes
16 | repo-update-non-oss                 | openSUSE-Leap-15.2-Update-Non-Oss       | Yes     | (r ) Yes  | Yes

Your partitioning is a plain and utter PITA. Consider installing Leap 15.3 on a single btrfs partition.

by79@linux-3ztp:~> cat /etc/fstab  
UUID=0868bfa0-99a1-4067-928d-741d5108206b swap swap defaults 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d / btrfs defaults 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /opt btrfs subvol=@/opt 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /srv btrfs subvol=@/srv 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /tmp btrfs subvol=@/tmp 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /usr/local btrfs subvol=@/usr/local 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/cache btrfs subvol=@/var/cache 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/crash btrfs subvol=@/var/crash 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/lib/libvirt/images btrfs subvol=@/var/lib/libvirt/images 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/lib/machines btrfs subvol=@/var/lib/machines 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/lib/mailman btrfs subvol=@/var/lib/mailman 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/lib/mariadb btrfs subvol=@/var/lib/mariadb 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/lib/mysql btrfs subvol=@/var/lib/mysql 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/lib/named btrfs subvol=@/var/lib/named 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/lib/pgsql btrfs subvol=@/var/lib/pgsql 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/log btrfs subvol=@/var/log 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/opt btrfs subvol=@/var/opt 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/spool btrfs subvol=@/var/spool 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /var/tmp btrfs subvol=@/var/tmp 0 0
UUID=b5e24a72-de47-4d48-a152-c0839ffaef8d /.snapshots btrfs subvol=@/.snapshots 0 0
UUID=3b9fe632-938b-4dae-b87e-87653abd14e1 /home                xfs        defaults              1 2
by79@linux-3ztp:~> df -h -x tmpfs -x devtmpfs 
df: /run/user/1000/doc: Operation not permitted
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        41G   33G  6.7G  84% /
/dev/sda3        41G   33G  6.7G  84% /var/opt
/dev/sda3        41G   33G  6.7G  84% /var/log
/dev/sda3        41G   33G  6.7G  84% /var/lib/libvirt/images
/dev/sda3        41G   33G  6.7G  84% /var/lib/machines
/dev/sda3        41G   33G  6.7G  84% /tmp
/dev/sda3        41G   33G  6.7G  84% /var/spool
/dev/sda3        41G   33G  6.7G  84% /usr/local
/dev/sda3        41G   33G  6.7G  84% /var/cache
/dev/sda3        41G   33G  6.7G  84% /var/lib/mailman
/dev/sda3        41G   33G  6.7G  84% /var/lib/pgsql
/dev/sda3        41G   33G  6.7G  84% /var/tmp
/dev/sda3        41G   33G  6.7G  84% /var/crash
/dev/sda3        41G   33G  6.7G  84% /var/lib/mariadb
/dev/sda3        41G   33G  6.7G  84% /var/lib/mysql
/dev/sda3        41G   33G  6.7G  84% /var/lib/named
/dev/sda3        41G   33G  6.7G  84% /srv
/dev/sda3        41G   33G  6.7G  84% /.snapshots
/dev/sda3        41G   33G  6.7G  84% /opt
/dev/sda4       890G   29G  861G   4% /home
/dev/sdc1       121M   65M   56M  54% /run/media/by79/HP Port Rep
by79@linux-3ztp:~> sudo btrfs filesystem usage -T /
[sudo] password for root: 
Overall:
    Device size:                  40.00GiB
    Device allocated:             38.56GiB
    Device unallocated:            1.44GiB
    Device missing:                  0.00B
    Used:                         32.65GiB
    Free (estimated):              6.69GiB      (min: 6.69GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               89.97MiB      (used: 0.00B)

             Data     Metadata System              
Id Path      single   single   single   Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/sda3 36.86GiB  1.67GiB 32.00MiB     1.44GiB
-- --------- -------- -------- -------- -----------
   Total     36.86GiB  1.67GiB 32.00MiB     1.44GiB
   Used      31.61GiB  1.04GiB 16.00KiB            

I have quite a bit of space on sda4…

My current snapper list…

by79@linux-3ztp:~> sudo snapper list
[sudo] password for root: 
   # | Type   | Pre # | Date                            | User | Used Space | Cleanup | Description             | Userdata     
-----+--------+-------+---------------------------------+------+------------+---------+-------------------------+--------------
  0  | single |       |                                 | root |            |         | current                 |              
253  | pre    |       | Sat 17 Apr 2021 06:06:57 PM +08 | root |  25.03 MiB | number  | zypp(zypper)            | important=yes
254  | post   |   253 | Sat 17 Apr 2021 06:07:06 PM +08 | root |  26.40 MiB | number  |                         | important=yes
286  | pre    |       | Mon 29 Aug 2022 05:13:16 PM +08 | root |  40.97 MiB | number  | zypp(zypper)            | important=yes
287  | post   |   286 | Mon 29 Aug 2022 05:13:27 PM +08 | root |  15.97 MiB | number  |                         | important=yes
288  | single |       | Fri 02 Sep 2022 07:09:40 PM +08 | root | 253.05 MiB | number  | rollback backup of #250 | important=yes
289* | single |       | Fri 02 Sep 2022 07:09:40 PM +08 | root | 761.24 MiB |         | writable copy of #284   |             

I am not too confident about relying on #286 onwards if something goes wrong… I deleted a number of snapshots prior to rebalancing, but that didn’t seem to free up a lot of space… but noted, I will delete them after finding out what the issue is…

OK, that then is the on-line method.
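
For reference, the on-line method normally finishes with a distribution upgrade after the repos have been switched; a rough sketch of the remaining steps (not what was run above) would be:

zypper --releasever=15.3 refresh
zypper --releasever=15.3 dup

“zypper refresh” only rebuilds the repository metadata; the actual package upgrade happens in the “dup” step, so if the refresh already fails nothing has been changed on the system yet.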

Does that mean it happened during the last zypper ref? Maybe you should describe in more detail what “got stuck” means.

That does not look too bad, except that in fact it only suggests things, because the important element is not in the list. We need the URLs to know what the repos really are. The Alias and Name columns only show things that are local to your system.
So better is

zypper lr -d

Me too:

**erlangen:~ #** fdisk -l /dev/nvme0n1 
**Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors**
Disk model: Samsung SSD 970 EVO Plus 2TB             
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes 
I/O size (minimum/optimal): 512 bytes / 512 bytes 
Disklabel type: gpt 
Disk identifier: F5B232D0-7A67-461D-8E7D-B86A5B4C6C10 

**Device             Start        End    Sectors  Size Type**
/dev/nvme0n1p1       2048    1050623    1048576  512M EFI System 
/dev/nvme0n1p2    1050624 3702228991 3701178368  1.7T Linux filesystem 

**erlangen:~ #**
**erlangen:~ #** btrfs filesystem usage -T / 
Overall: 
    Device size:                   1.72TiB 
    Device allocated:            465.07GiB 
    Device unallocated:            1.27TiB 
    Device missing:                  0.00B 
    Used:                        461.45GiB 
    Free (estimated):              1.27TiB      (min: 652.74GiB) 
    Free (statfs, df):             1.27TiB 
    Data ratio:                       1.00 
    Metadata ratio:                   2.00 
    Global reserve:              512.00MiB      (used: 0.00B) 
    Multiple profiles:                  no 

                  Data      Metadata System               
Id Path           single    DUP      DUP      Unallocated 
-- -------------- --------- -------- -------- ----------- 
 1 /dev/nvme0n1p2 459.01GiB  6.00GiB 64.00MiB     1.27TiB 
-- -------------- --------- -------- -------- ----------- 
   Total          459.01GiB  3.00GiB 32.00MiB     1.27TiB 
   Used           456.16GiB  2.64GiB 80.00KiB             
**erlangen:~ #**

Then post

btrfs subvolume list /
btrfs qgroup show /

Yes, during the zypper ref there were a number of warnings about memory, and if I am not mistaken there might have been warnings about NVIDIA as well (which I think is normal because of the licensing?). I can’t remember the exact details; during maybe the last 20% of the refresh I had to abort, and when I restarted I got a new login page where I was unable to log in after a long wait (not even the login name and password fields appeared). That is when I used snapper to go back to the older version.

 by79@linux-3ztp:~> zypper lr -d
#  | Alias                               | Name                                    | Enabled | GPG Check | Refresh | Priority | Type   | URI                                                                                         | Service
---+-------------------------------------+-----------------------------------------+---------+-----------+---------+----------+--------+---------------------------------------------------------------------------------------------+--------
 1 | NVIDIA                              | NVIDIA                                  | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://http.download.nvidia.com/opensuse/leap/15.2                                          | 
 2 | google-chrome                       | google-chrome                           | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | https://dl.google.com/linux/chrome/rpm/stable/x86_64                                        | 
 3 | http-download.opensuse.org-3d474035 | devel:languages:R:released              | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/repositories/devel:/languages:/R:/released/openSUSE_Leap_15.2/ | 
 4 | libdvdcss                           | libdvdcss                               | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://opensuse-guide.org/repo/openSUSE_Leap_15.2/                                          | 
 5 | openSUSE-Leap-${releasever}-0       | openSUSE-Leap-15.2-0                    | No      | ----      | ----    |   99     | NONE   | hd:/?device=/dev/disk/by-id/usb-SanDisk_Ultra_Fit_4C5310015.2725109115.2:0-part1            | 
 6 | packman                             | packman                                 | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://packman.inode.at/suse/openSUSE_Leap_15.2/                                            | 
 7 | repo-debug                          | openSUSE-Leap-15.2-Debug                | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.2/repo/oss/                         | 
 8 | repo-debug-non-oss                  | openSUSE-Leap-15.2-Debug-Non-Oss        | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.2/repo/non-oss/                     | 
 9 | repo-debug-update                   | openSUSE-Leap-15.2-Update-Debug         | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.2/oss/                                    | 
10 | repo-debug-update-non-oss           | openSUSE-Leap-15.2-Update-Debug-Non-Oss | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.2/non-oss/                                | 
11 | repo-non-oss                        | openSUSE-Leap-15.2-Non-Oss              | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.2/repo/non-oss/                           | 
12 | repo-oss                            | openSUSE-Leap-15.2-Oss                  | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.2/repo/oss/                               | 
13 | repo-source                         | openSUSE-Leap-15.2-Source               | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.2/repo/oss/                        | 
14 | repo-source-non-oss                 | openSUSE-Leap-15.2-Source-Non-Oss       | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.2/repo/non-oss/                    | 
15 | repo-update                         | openSUSE-Leap-15.2-Update               | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.2/oss/                                          | 
16 | repo-update-non-oss                 | openSUSE-Leap-15.2-Update-Non-Oss       | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.2/non-oss/                                      | 


fdisk -l /dev/nvme0n1 
**Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors**
Disk model: Samsung SSD 970 EVO Plus 2TB             
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 512 bytes 
I/O size (minimum/optimal): 512 bytes / 512 bytes 
Disklabel type: gpt 
Disk identifier: F5B232D0-7A67-461D-8E7D-B86A5B4C6C10 

**Device             Start        End    Sectors  Size Type**
/dev/nvme0n1p1       2048    1050623    1048576  512M EFI System 
/dev/nvme0n1p2    1050624 3702228991 3701178368  1.7T Linux filesystem 

Pardon me, is nvme0n1 the disk name? May I know how to get it, please? As you have already observed, my partitioning is plain…



linux-3ztp:/home/by79 # fdisk -l /dev/nvme0n1 
fdisk: cannot open /dev/nvme0n1: No such file or directory


linux-3ztp:/home/by79 # **fdisk -l /dev/sda3**
Disk /dev/sda3: 40 GiB, 42952818688 bytes, 83892224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

linux-3ztp:/home/by79 # **fdisk -l /dev/sda4**
Disk /dev/sda4: 889.5 GiB, 955088109568 bytes, 1865406464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
linux-3ztp:/home/by79 # **btrfs subvolume list /**
ID 257 gen 31 top level 5 path @
ID 258 gen 2082477 top level 257 path @/.snapshots
ID 260 gen 2082513 top level 257 path @/opt
ID 261 gen 2082412 top level 257 path @/srv
ID 262 gen 2083120 top level 257 path @/tmp
ID 263 gen 2082412 top level 257 path @/usr/local
ID 264 gen 2083038 top level 257 path @/var/cache
ID 265 gen 2082412 top level 257 path @/var/crash
ID 266 gen 1476382 top level 257 path @/var/lib/libvirt/images
ID 267 gen 1476386 top level 257 path @/var/lib/machines
ID 268 gen 1476382 top level 257 path @/var/lib/mailman
ID 269 gen 1476382 top level 257 path @/var/lib/mariadb
ID 270 gen 1476382 top level 257 path @/var/lib/mysql
ID 271 gen 1476382 top level 257 path @/var/lib/named
ID 272 gen 2077993 top level 257 path @/var/lib/pgsql
ID 273 gen 2083123 top level 257 path @/var/log
ID 274 gen 2082412 top level 257 path @/var/opt
ID 275 gen 2083125 top level 257 path @/var/spool
ID 276 gen 2083101 top level 257 path @/var/tmp
ID 1415 gen 1545889 top level 258 path @/.snapshots/244/snapshot
ID 1416 gen 1549964 top level 258 path @/.snapshots/245/snapshot
ID 1417 gen 1549965 top level 258 path @/.snapshots/246/snapshot
ID 1418 gen 1550043 top level 258 path @/.snapshots/247/snapshot
ID 1419 gen 1550044 top level 258 path @/.snapshots/248/snapshot
ID 1426 gen 1677017 top level 258 path @/.snapshots/253/snapshot
ID 1427 gen 1677017 top level 258 path @/.snapshots/254/snapshot
ID 1689 gen 2079954 top level 258 path @/.snapshots/286/snapshot
ID 1690 gen 2079958 top level 258 path @/.snapshots/287/snapshot
ID 1691 gen 2082399 top level 258 path @/.snapshots/288/snapshot
ID 1692 gen 2083125 top level 258 path @/.snapshots/289/snapshot

linux-3ztp:/home/by79 # **btrfs qgroup show /**
qgroupid         rfer         excl 
--------         ----         ---- 
0/5          16.00KiB     16.00KiB 
0/257        16.00KiB     16.00KiB 
0/258         2.84MiB      2.84MiB 
0/260       515.18MiB    515.18MiB 
0/261         3.43MiB      3.43MiB 
0/262       106.12MiB    106.12MiB 
0/263        16.00KiB     16.00KiB 
0/264         4.29GiB      4.29GiB 
0/265        16.00KiB     16.00KiB 
0/266        16.00KiB     16.00KiB 
0/267        16.00KiB     16.00KiB 
0/268        16.00KiB     16.00KiB 
0/269        16.00KiB     16.00KiB 
0/270        16.00KiB     16.00KiB 
0/271        16.00KiB     16.00KiB 
0/272        56.34MiB     56.34MiB 
0/273         2.57GiB      2.57GiB 
0/274        16.00KiB     16.00KiB 
0/275       272.00KiB    272.00KiB 
0/276        29.99MiB     29.99MiB 
0/1415       11.32GiB     54.73MiB 
0/1416       11.31GiB     50.11MiB 
0/1417       11.19GiB      1.45MiB 
0/1418       11.19GiB     16.00KiB 
0/1419       11.19GiB     16.00KiB 
0/1426       11.48GiB     25.03MiB 
0/1427       11.10GiB     26.40MiB 
0/1689       13.04GiB     40.97MiB 
0/1690       12.72GiB     15.97MiB 
0/1691       12.61GiB    253.05MiB 
0/1692       11.50GiB    761.24MiB 
1/0          24.26GiB      9.15GiB 
255/267      16.00KiB     16.00KiB 

Please do not worry. What he shows is simply how it is on his system. Your system is different, and you should not try to make it look the same as his.

The idea is to put everything in one pristine btrfs partition as suggested in post #9. Obviously you would need to back up your data and reformat your drive.
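
A minimal sketch of such a backup, assuming an external disk mounted at /mnt/backup (a hypothetical mount point, adjust to your setup):

rsync -aAXH /home/ /mnt/backup/home/                     # user data (sda4 will also be wiped if the whole drive is repartitioned)
rsync -aAXH /etc/ /mnt/backup/etc/                       # configuration worth keeping for reference
zypper se --installed-only > /mnt/backup/packages.txt    # record of installed packages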

Show your drives:

**erlangen:~ #** inxi -D 
**Drives:**
  **Local Storage:** **total:** 10.92 TiB **used:** 3.96 TiB (36.2%) 
  **ID-1:** /dev/nvme0n1 **vendor:** Samsung **model:** SSD 970 EVO Plus 2TB **size:** 1.82 TiB 
  **ID-2:** /dev/sda **vendor:** Seagate **model:** ST8000VN004-2M2101 **size:** 7.28 TiB 
  **ID-3:** /dev/sdc **vendor:** Crucial **model:** CT2000BX500SSD1 **size:** 1.82 TiB 
**erlangen:~ #**

Those subvolumes are not listed in snapper, which probably means that the metadata was lost. You cannot use snapper to manage them anyway, so you can delete them:

btrfs subvolume delete /.snapshots/245/snapshot
rm -rf /.snapshots/245

Of course it is in principle possible to add the metadata back if you really need to … just look at the files in the existing snapshot directories.
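
If you want to clear all of the orphans in one go, a sketch (244–248 are the snapshot directories from your “btrfs subvolume list” output that snapper does not know about; double-check the list before running this as root):

for n in 244 245 246 247 248; do
    btrfs subvolume delete /.snapshots/$n/snapshot
    rm -rf /.snapshots/$n
done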

linux-3ztp:/home/by79 # **btrfs qgroup show /**
qgroupid         rfer         excl 
--------         ----         ---- 
...
1/0          24.26GiB      9.15GiB
...

So after deleting all snapshots, approximately 9 GiB will be freed. Unfortunately, due to the metadata corruption it is not clear whether this also includes the “orphan” snapshots. Running

sudo btrfs qgroup show -c /

will show which snapshots are included in the total computation. There is a chance that those orphans hold some more space.

Hi,

linux-3ztp:/home/by79 # inxi -D
Drives:    Local Storage: total: 931.63 GiB used: 61.53 GiB (6.6%) 
           ID-1: /dev/sda vendor: Samsung model: SSD 850 EVO 1TB size: 931.51 GiB 
           ID-2: /dev/sdc type: USB model: IT1165 USB Flash Disk size: 120.5 MiB 

linux-3ztp:/home/by79 # btrfs subvolume delete /.snapshots/245/snapshot
Delete subvolume (no-commit): '/.snapshots/245/snapshot'
linux-3ztp:/home/by79 # rm -rf /.snapshots/245
linux-3ztp:/home/by79 # sudo btrfs qgroup show -c /
qgroupid         rfer         excl child                                                   
--------         ----         ---- -----                                                   
0/5          16.00KiB     16.00KiB ---                                                    
0/257        16.00KiB     16.00KiB ---                                                    
0/258         2.84MiB      2.84MiB ---                                                    
0/260       515.18MiB    515.18MiB ---                                                    
0/261         3.43MiB      3.43MiB ---                                                    
0/262       121.02MiB    121.02MiB ---                                                    
0/263        16.00KiB     16.00KiB ---                                                    
0/264         4.29GiB      4.29GiB ---                                                    
0/265        16.00KiB     16.00KiB ---                                                    
0/266        16.00KiB     16.00KiB ---                                                    
0/267        16.00KiB     16.00KiB ---                                                    
0/268        16.00KiB     16.00KiB ---                                                    
0/269        16.00KiB     16.00KiB ---                                                    
0/270        16.00KiB     16.00KiB ---                                                    
0/271        16.00KiB     16.00KiB ---                                                    
0/272        56.34MiB     56.34MiB ---                                                    
0/273         2.57GiB      2.57GiB ---                                                    
0/274        16.00KiB     16.00KiB ---                                                    
0/275       272.00KiB    272.00KiB ---                                                    
0/276        29.99MiB     29.99MiB ---                                                    
0/1415       11.32GiB    308.20MiB ---                                                    
0/1416          0.00B        0.00B ---                                                    
0/1417       11.19GiB      1.45MiB ---                                                    
0/1418       11.19GiB     16.00KiB ---                                                    
0/1419       11.19GiB     16.00KiB ---                                                    
0/1426       11.48GiB     25.03MiB ---                                                    
0/1427       11.10GiB     26.40MiB ---                                                    
0/1689       13.04GiB     40.97MiB ---                                                    
0/1690       12.72GiB     15.97MiB ---                                                    
0/1691       12.61GiB    253.05MiB ---                                                    
0/1692       11.50GiB    761.36MiB ---                                                    
1/0          24.21GiB      9.10GiB 0/1415,0/1416,0/1418,0/1426,0/1427,0/1689,0/1690,0/1691
255/267      16.00KiB     16.00KiB 0/267                                                  

I did the rebalancing as well…


linux-3ztp:/home/by79 # systemctl start btrfs-balance.service 
linux-3ztp:/home/by79 # systemctl status btrfs-balance.service 
● btrfs-balance.service - Balance block groups on a btrfs filesystem
   Loaded: loaded (/usr/lib/systemd/system/btrfs-balance.service; static; vendor preset: disabled)
   Active: inactive (dead) since Sat 2022-09-03 15:27:33 +08; 22s ago
     Docs: man:btrfs-balance
  Process: 32701 ExecStart=/usr/share/btrfsmaintenance/btrfs-balance.sh (code=exited, status=0/SUCCESS)
 Main PID: 32701 (code=exited, status=0/SUCCESS)

Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]:   METADATA (flags 0x2): balancing, usage=30
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]:   SYSTEM (flags 0x2): balancing, usage=30
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: Done, had to relocate 1 out of 50 chunks
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: After balance of /
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: Data, single: total=36.86GiB, used=31.60GiB
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: System, single: total=32.00MiB, used=16.00KiB
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: Metadata, single: total=1.67GiB, used=1.03GiB
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: GlobalReserve, single: total=89.98MiB, used=0.00B
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: Filesystem      Size  Used Avail Use% Mounted on
Sep 03 15:27:33 linux-3ztp.suse btrfs-balance.sh[32701]: /dev/sda3        43G   36G  7.2G  83% /
linux-3ztp:/home/by79 #  btrfs filesystem usage -T /
Overall:
    Device size:                  40.00GiB
    Device allocated:             38.56GiB
    Device unallocated:            1.44GiB
    Device missing:                  0.00B
    Used:                         32.62GiB
    Free (estimated):              6.70GiB      (min: 6.70GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               89.98MiB      (used: 0.00B)

             Data     Metadata System              
Id Path      single   single   single   Unallocated
-- --------- -------- -------- -------- -----------
 1 /dev/sda3 36.86GiB  1.67GiB 32.00MiB     1.44GiB
-- --------- -------- -------- -------- -----------
   Total     36.86GiB  1.67GiB 32.00MiB     1.44GiB
   Used      31.60GiB  1.03GiB 16.00KiB            
linux-3ztp:/home/by79 # 

The 850 EVO is great for btrfs. One of my machines has it too:

**6700K:~ #** inxi -D 
**Drives:**
  **Local Storage:** **total:** 698.65 GiB **used:** 65.93 GiB (9.4%) 
  **ID-1:** /dev/sda **vendor:** Samsung **model:** SSD 850 EVO 500GB **size:** 465.76 GiB 
  **ID-2:** /dev/sdb **vendor:** Crucial **model:** CT250MX500SSD1 **size:** 232.89 GiB 
**6700K:~ #**

Tinkering with Leap 15.2 isn’t fun at all. You may try a net install of Leap 15.4 with exactly two partitions:

  1. /dev/sda1 2048 1026047 1024000 500M EFI System - vfat
  2. /dev/sda2 1026048 ********* ********* ****** Linux filesystem - btrfs

https://doc.opensuse.org/documentation/leap/startup/html/book-startup/cha-install.html#sec-yast-install-partitioning