Snapper delete error

When doing updates I noticed I was getting low on space on /. I deleted some old snapshots without issue but ran into a problem with one.

snapper delete 239
Deleting snapshot failed.

That was it, nothing else. Here is the list of snapshots.

snapper list
Type   | #   | Pre # | Date                            | User | Cleanup | Description           | Userdata     
-------+-----+-------+---------------------------------+------+---------+-----------------------+--------------
single | 0   |       |                                 | root |         | current               |              
single | 1   |       | Sat 25 Mar 2017 03:31:22 PM CDT | root |         | first root filesystem |              
single | 239 |       | Fri 05 May 2017 08:48:36 PM CDT | root |         |                       |              
pre    | 320 |       | Mon 05 Jun 2017 09:06:52 AM CDT | root | number  | zypp(zypper)          | important=yes
post   | 321 | 320   | Mon 05 Jun 2017 09:18:34 AM CDT | root | number  |                       | important=yes
pre    | 330 |       | Sun 11 Jun 2017 03:17:05 PM CDT | root | number  | zypp(zypper)          | important=yes
pre    | 331 |       | Sun 11 Jun 2017 04:03:51 PM CDT | root | number  | zypp(zypper)          | important=yes
pre    | 332 |       | Sun 11 Jun 2017 04:06:36 PM CDT | root | number  | zypp(zypper)          | important=yes
pre    | 333 |       | Sun 11 Jun 2017 04:08:25 PM CDT | root | number  | zypp(zypper)          | important=yes
pre    | 334 |       | Sun 11 Jun 2017 04:12:44 PM CDT | root | number  | yast snapper          |              
pre    | 335 |       | Sun 11 Jun 2017 04:16:49 PM CDT | root | number  | zypp(zypper)          | important=yes
pre    | 336 |       | Sun 11 Jun 2017 04:26:30 PM CDT | root | number  | zypp(zypper)          | important=no 
post   | 337 | 336   | Sun 11 Jun 2017 04:26:34 PM CDT | root | number  |                       | important=no 
post   | 338 | 334   | Sun 11 Jun 2017 04:28:02 PM CDT | root | number  |                       |              
pre    | 339 |       | Sun 11 Jun 2017 04:28:06 PM CDT | root | number  | yast snapper          |              
pre    | 340 |       | Sun 11 Jun 2017 05:35:25 PM CDT | root | number  | yast snapper          |              
post   | 341 | 340   | Sun 11 Jun 2017 05:35:51 PM CDT | root | number  |                       |              
pre    | 342 |       | Sun 11 Jun 2017 05:44:09 PM CDT | root | number  | yast snapper          |              
post   | 343 | 342   | Sun 11 Jun 2017 05:45:13 PM CDT | root | number  |                       |              
pre    | 344 |       | Sun 11 Jun 2017 05:45:16 PM CDT | root | number  | yast snapper          |              
post   | 345 | 344   | Sun 11 Jun 2017 05:47:26 PM CDT | root | number  |                       |              



Hi
Was that a rollback point? Try a snapper status on that one to see what it comes back with.
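A sketch of that check: snapper's status command compares two snapshots, so diffing the stuck snapshot against the running system (number 0) shows what it still holds. Checking the default subvolume is an extra step I'd add here, since a snapshot that is the current boot default cannot be deleted.

```shell
# Compare snapshot 239 against the running system (0); output lists
# files that were created (+), deleted (-) or changed (c) in between.
snapper status 239..0

# If 239 is the default (booted) subvolume, snapper will refuse to
# delete it -- worth ruling out:
btrfs subvolume get-default /
```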

So, since you're running Tumbleweed (and a lot of snapshots were released this week), you need to look at tuning the snapper config (/etc/snapper/configs/root); also check that the btrfs maintenance cron jobs have been enabled:


systemctl status btrfsmaintenance-refresh.service

You may need to enable it, or at least start it, if the /etc/cron.weekly/btrfs-balance symlink is not present.

Once you have configured snapper, I would run these cron jobs manually (/etc/cron.daily/suse.de-snapper), since it's been a busy week of new Tumbleweed snapshots.
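For reference, the cleanup behaviour lives in that config file. The variable names below are the standard snapper ones; the values are only illustrative examples, not a recommendation for your disk:

```shell
# Example cleanup settings in /etc/snapper/configs/root.
# Values shown are illustrative -- tune to your disk size.
NUMBER_CLEANUP="yes"          # enable number-based cleanup of zypp snapshots
NUMBER_LIMIT="2-10"           # keep between 2 and 10 regular snapshot pairs
NUMBER_LIMIT_IMPORTANT="4-10" # keep between 4 and 10 "important=yes" pairs
TIMELINE_CREATE="no"          # don't create hourly timeline snapshots
```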

I was able to delete problem snapshots by creating a snapshot and rolling back to it.
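A minimal sketch of that workaround, assuming the standard openSUSE snapper/btrfs setup: a rollback makes a new read-write snapshot of the current system the boot default, after which the old stuck snapshot is no longer referenced and can be deleted.

```shell
# Rollback workaround for an undeletable snapshot (run as root).
snapper rollback      # snapshots the current system and sets it as default
reboot                # boot into the new default subvolume
snapper delete 239    # the stuck snapshot should now be removable
```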

I have run:

systemctl enable btrfsmaintenance-refresh.service
systemctl start btrfsmaintenance-refresh.service

Then:

/etc/cron.daily/suse.de-snapper
/etc/cron.weekly/btrfs-balance

snapper list
Type   | #   | Pre # | Date                            | User | Cleanup | Description             | Userdata     
-------+-----+-------+---------------------------------+------+---------+-------------------------+--------------
single | 0   |       |                                 | root |         | current                 |              
single | 1   |       | Sat 25 Mar 2017 03:31:22 PM CDT | root |         | first root filesystem   |              
pre    | 480 |       | Mon 31 Jul 2017 01:49:51 PM CDT | root | number  | yast snapper            |              
single | 481 |       | Mon 31 Jul 2017 01:50:14 PM CDT | root |         | Snapper Cleanup         |              
post   | 482 | 480   | Mon 31 Jul 2017 01:50:25 PM CDT | root | number  |                         |              
single | 483 |       | Mon 31 Jul 2017 01:54:22 PM CDT | root | number  | rollback backup of #479 | important=yes
single | 484 |       | Mon 31 Jul 2017 01:54:23 PM CDT | root |         |                         |              



Running:

btrfs fi usage /
Overall:
    Device size:                  40.00GiB
    Device allocated:             36.96GiB
    Device unallocated:            3.04GiB
    Device missing:                  0.00B
    Used:                         28.79GiB
    Free (estimated):             10.67GiB      (min: 10.67GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              101.06MiB      (used: 0.00B)

Data,single: Size:35.43GiB, Used:27.80GiB
   /dev/sda3      35.43GiB

Metadata,single: Size:1.50GiB, Used:1022.23MiB
   /dev/sda3       1.50GiB

Then:

du -sh * 2>/dev/null
2.2M    bin
132M    boot
16K     dev
23M     etc
273G    home
0       kdeinit5__0
1.1G    lib
13M     lib64
928G    mnt
784M    opt
0       proc
35M     root
1.6M    run
12M     sbin
0       selinux
1.4M    srv
0       sys
22M     tmp
8.4G    usr
4.6G    var


I still seem to be missing about 14GB on root that I can’t account for. Is this normal?

Hi
With btrfs, the real disk usage is what the btrfs usage output shows (du is of limited use on btrfs when snapshots are present). Did you run the cleanup jobs manually?
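One way to see where the "missing" space actually lives is to ask btrfs itself. The commands below are standard btrfs-progs tools; note that enabling quotas is optional and can be slow on large filesystems:

```shell
# Per-subvolume accounting (requires quota support; the initial scan
# can take a while on a large filesystem):
btrfs quota enable /
btrfs qgroup show -p /     # "excl" column = space held only by that snapshot

# Without quotas, summarize what the snapshot subvolumes reference:
btrfs filesystem du -s /.snapshots
```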

Yes I did.

On Mon 31 Jul 2017 09:16:01 PM CDT, mmontz wrote:

Yes I did.

Hi
Try running the btrfs-balance again and see if it gives some further
space…


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE Leap 42.2|GNOME 3.20.2|4.4.74-18.20-default
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!

/etc/cron.weekly/btrfs-balance
Before balance of /
Data, single: total=36.43GiB, used=28.22GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.50GiB, used=1.00GiB
GlobalReserve, single: total=103.14MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        43G   32G   12G  75% /
Done, had to relocate 0 out of 47 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=1
Done, had to relocate 0 out of 47 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=5
Done, had to relocate 0 out of 47 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=10
Done, had to relocate 0 out of 47 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=20
Done, had to relocate 0 out of 47 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=30
Done, had to relocate 1 out of 47 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=40
Done, had to relocate 0 out of 46 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=50
Done, had to relocate 0 out of 46 chunks
Done, had to relocate 0 out of 46 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=1
  SYSTEM (flags 0x2): balancing, usage=1
Done, had to relocate 1 out of 46 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=5
  SYSTEM (flags 0x2): balancing, usage=5
Done, had to relocate 1 out of 46 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=10
  SYSTEM (flags 0x2): balancing, usage=10
Done, had to relocate 1 out of 46 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=20
  SYSTEM (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 46 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=30
  SYSTEM (flags 0x2): balancing, usage=30
Done, had to relocate 1 out of 46 chunks
After balance of /
Data, single: total=35.43GiB, used=28.22GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.50GiB, used=1.00GiB
GlobalReserve, single: total=102.70MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        43G   32G   12G  75% /


Hi
Can you boot from a Tumbleweed rescue CD, then mount your device and run:


mount /dev/sdXn /mnt
btrfs balance start -v -dusage=0 /mnt

Where X is the device and n is the / partition number. I think the issue is the metadata…
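Since the suspicion is metadata, a metadata-targeted balance may be worth trying where the data-only filter did not help. This is a sketch using the standard -musage balance filter, with the same hypothetical sdXn placeholder as above:

```shell
# From the rescue system, with the root filesystem mounted at /mnt:
btrfs balance start -v -musage=0 /mnt    # drop completely empty metadata chunks
btrfs balance start -v -musage=20 /mnt   # then compact lightly-used ones
```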

That did nothing.

Hi
So what's the output now?


btrfs fi usage /

AFAIK it's the metadata; if that reduces, the space will be recovered.

btrfs fi usage /
Overall:
    Device size:                  40.00GiB
    Device allocated:             35.57GiB
    Device unallocated:            4.43GiB
    Device missing:                  0.00B
    Used:                         28.99GiB
    Free (estimated):             10.51GiB      (min: 10.51GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              102.47MiB      (used: 0.00B)

Data,single: Size:34.04GiB, Used:27.96GiB
   /dev/sda3      34.04GiB

Metadata,single: Size:1.50GiB, Used:1.02GiB
   /dev/sda3       1.50GiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/sda3      32.00MiB

Unallocated:
   /dev/sda3       4.43GiB