Snapper - Can't delete snapshots and almost all disk space used.

Looking at the data below, the one conclusion I can draw is that almost all of my space usage is in the /.snapshots directory, yet snapper shows no snapshots. That part is expected, since I deleted them all with:

snapper delete nn-mm
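
For reference, snapper accepts single snapshot numbers as well as nn-mm ranges, so a hypothetical run deleting snapshots 100 through 120 would look like this:

snapper delete 100-120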

**Can someone tell me how to get my space back?**

I’m currently running:

btrfs balance start /.snapshots/

but so far it does not look like it is helping at all.
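
As far as I can tell, balance only repacks allocated chunks into fewer ones; it cannot free extents that are still referenced by a snapshot subvolume, so it probably can't recover this space. If anyone wants to try it anyway, the usual advice is to limit it to mostly-empty data chunks with a usage filter:

btrfs balance start -dusage=50 /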

If I list all the snapshots I get this:

snapper list
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
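
To cross-check snapper's view against what btrfs itself sees, the subvolume list can be printed directly (the -o flag restricts the output to subvolumes below the given path):

btrfs subvolume list -o /.snapshots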

If I show the space used by /.snapshots/* I get this:

btrfs filesystem du -s /.snapshots/* 
     Total   Exclusive  Set shared  Filename
 207.39GiB     3.69GiB   203.70GiB  /.snapshots/2414
 209.52GiB   182.01MiB   209.34GiB  /.snapshots/2415
 206.08GiB    33.93MiB   206.05GiB  /.snapshots/2477
 226.72GiB    60.19MiB   226.67GiB  /.snapshots/2478
 226.73GiB     8.69MiB   226.72GiB  /.snapshots/2480
 229.16GiB     1.62GiB   227.54GiB  /.snapshots/2481
 229.68GiB   114.18MiB   229.57GiB  /.snapshots/2486
 229.79GiB   333.14MiB   229.47GiB  /.snapshots/2487
 250.17GiB     8.33MiB   250.17GiB  /.snapshots/2534
 250.18GiB    20.00KiB   250.18GiB  /.snapshots/2535
 250.18GiB    24.00KiB   250.18GiB  /.snapshots/2536
 249.88GiB    24.54GiB   225.34GiB  /.snapshots/2539
     0.00B       0.00B       0.00B  /.snapshots/2546
     0.00B       0.00B       0.00B  /.snapshots/grub-snapshot.cfg
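
The Exclusive column turned out to be the key: nearly all of the data is shared between the snapshots, so deleting any single one frees almost nothing; the space only comes back once every snapshot referencing those extents is gone. If quotas are acceptable on the system (they add some overhead), qgroup accounting can report the same referenced/exclusive split per subvolume; a sketch:

btrfs quota enable /   # turn on qgroup accounting (the initial scan can take a while)
btrfs qgroup show /    # referenced vs. exclusive bytes per subvolume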

If I summarize the space used by all the main directories on this file system other than /.snapshots, I get this:

btrfs filesystem du -s /boot/grub2/i386-pc /boot/grub2/x86_64-efi /opt /tmp /usr/local /var/crash /var/lib/mailman /var/lib/named /var/lib/pgsql /var/log /var/opt /var/spool /var/tmp
     Total   Exclusive  Set shared  Filename
     0.00B       0.00B       0.00B  /boot/grub2/i386-pc
   2.93MiB     2.93MiB       0.00B  /boot/grub2/x86_64-efi
 781.62MiB   781.62MiB       0.00B  /opt
 592.00KiB   592.00KiB       0.00B  /tmp
   4.62MiB     4.62MiB       0.00B  /usr/local
     0.00B       0.00B       0.00B  /var/crash
  28.00KiB    28.00KiB       0.00B  /var/lib/mailman
   8.00KiB     8.00KiB       0.00B  /var/lib/named
     0.00B       0.00B       0.00B  /var/lib/pgsql
   7.32GiB     7.31GiB     6.15MiB  /var/log
     0.00B       0.00B       0.00B  /var/opt
 984.00KiB   984.00KiB       0.00B  /var/spool
 195.62MiB   195.37MiB   256.00KiB  /var/tmp

Here is the file system in general, to get an overall picture:

df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G   40M   16G   1% /dev/shm
tmpfs            16G   19M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/md2        475G  464G  9.8G  98% /
/dev/md0        195M   82M  114M  42% /boot/efi
/dev/md2        475G  464G  9.8G  98% /var/log
/dev/md2        475G  464G  9.8G  98% /boot/grub2/x86_64-efi
/dev/md3       1000G  187G  813G  19% /home
/dev/md4        6.3T  4.9T  1.5T  78% /srv
/dev/md2        475G  464G  9.8G  98% /var/crash
/dev/md2        475G  464G  9.8G  98% /usr/local
/dev/md2        475G  464G  9.8G  98% /var/spool
/dev/md2        475G  464G  9.8G  98% /var/opt
/dev/md2        475G  464G  9.8G  98% /var/lib/mailman
/dev/md2        475G  464G  9.8G  98% /var/lib/pgsql
/dev/md2        475G  464G  9.8G  98% /boot/grub2/i386-pc
/dev/md2        475G  464G  9.8G  98% /var/lib/named
/dev/md2        475G  464G  9.8G  98% /var/tmp
/dev/md2        475G  464G  9.8G  98% /.snapshots
/dev/md2        475G  464G  9.8G  98% /opt
/dev/md2        475G  464G  9.8G  98% /tmp
tmpfs           3.2G   64K  3.2G   1% /run/user/1000
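
All of those /dev/md2 lines are presumably the same 475G btrfs filesystem mounted once per subvolume, not separate partitions; findmnt makes that clearer by showing the subvol= mount option:

findmnt -t btrfs -o TARGET,SOURCE,OPTIONS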

Looking around, I found another btrfs command that shows some useful information, so I thought I would add it here:

btrfs fi usage /
Overall:
    Device size:                 474.74GiB
    Device allocated:            468.06GiB
    Device unallocated:            6.67GiB
    Device missing:                  0.00B
    Used:                        462.10GiB
    Free (estimated):             10.34GiB      (min: 7.00GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 96.00KiB)

Data,single: Size:456.00GiB, Used:452.34GiB
   /dev/md2      456.00GiB

Metadata,DUP: Size:6.00GiB, Used:4.88GiB
   /dev/md2       12.00GiB

System,DUP: Size:32.00MiB, Used:96.00KiB
   /dev/md2       64.00MiB

Unallocated:
   /dev/md2        6.67GiB

It looks like I found the answer, which is to use the lower-level tool that snapper wraps. The command ended up being:

btrfs subvolume delete /.snapshots/{2415,2477,2478,2480,2481,2486,2487,2534,2535,2536,2539}/snapshot
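
Snapper keeps each snapshot as a btrfs subvolume at /.snapshots/<number>/snapshot, which is why the delete targets end in /snapshot. Subvolume deletion is also processed asynchronously by a cleaner thread, so the freed space can take a while to show up in df; to block until the cleanup has finished:

btrfs subvolume sync /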

After that, rm can clean up what’s left:

rm -r /.snapshots/{2415,2477,2478,2480,2481,2486,2487,2534,2535,2536,2539}

and I went from 1% space left to 51% space left.
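
To keep this from happening again, it is probably worth checking the cleanup limits in snapper's configuration (on most systems /etc/snapper/configs/root) and making sure the cleanup algorithms actually run; a sketch, with values that are a matter of taste:

# /etc/snapper/configs/root (excerpt)
NUMBER_CLEANUP="yes"     # clean up numbered (pre/post) snapshots
NUMBER_LIMIT="10"        # keep at most 10 of them
TIMELINE_CLEANUP="yes"   # clean up timeline snapshots

# run the cleanup algorithms by hand
snapper cleanup number
snapper cleanup timeline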

I see home on the same partition as root. That is what is causing the huge snaps. Note that if you use snapper to roll back the OS with your setup, you also roll back your data. Generally that is not something you want :O. Snapper is not for data backup; it is for system backup/restore. Not the same thing.