So this morning I couldn’t boot into my openSUSE Tumbleweed system at all. I booted into the Tumbleweed rescue CD and, with a mix of guides and AI, tried to restore a btrfs snapshot the hard way, since I had no access to snapper.
I think I messed up somewhere and made things even worse, because now the system boots into emergency mode, and there mount -a throws “file or directory not found” for the subvolume mounts in fstab.
Which kinda makes sense, I guess, since when I’m in the rescue CD terminal environment and run
sudo mount /dev/nvme0n1p3 /mnt
sudo mount -o subvol=@/.snapshots /dev/nvme0n1p3 /mnt/.snapshots
I don’t actually see any of the aforementioned subvolumes there, only snapshots, really (and also my desperate attempts at creating the subvolumes manually):
linux@localhost:~> sudo btrfs subvolume list /mnt
ID 256 gen 425041 top level 5 path @
ID 265 gen 425044 top level 256 path @/.snapshots
ID 765 gen 377467 top level 265 path @/.snapshots/494/snapshot
ID 790 gen 377467 top level 265 path @/.snapshots/519/snapshot
ID 791 gen 377467 top level 265 path @/.snapshots/520/snapshot
ID 806 gen 377467 top level 265 path @/.snapshots/new-backup
ID 816 gen 425074 top level 265 path @/.snapshots/544/snapshot
ID 823 gen 393147 top level 265 path @/.snapshots/551/snapshot
ID 824 gen 425011 top level 265 path @/.snapshots/552/snapshot
ID 837 gen 400402 top level 265 path @/.snapshots/565/snapshot
ID 838 gen 424992 top level 265 path @/.snapshots/566/snapshot
ID 852 gen 425057 top level 265 path @/.snapshots/580/snapshot
ID 853 gen 424971 top level 265 path @/.snapshots/581/snapshot
ID 859 gen 425057 top level 816 path @/.snapshots/544/snapshot/root.broken
ID 860 gen 425037 top level 859 path @/.snapshots/544/snapshot/root.broken/snapshot
ID 861 gen 425045 top level 859 path @/.snapshots/544/snapshot/root.broken/root
ID 864 gen 425053 top level 816 path @/.snapshots/544/snapshot/@/root
ID 865 gen 425053 top level 816 path @/.snapshots/544/snapshot/@/srv
ID 866 gen 425053 top level 816 path @/.snapshots/544/snapshot/@/opt
ID 867 gen 425053 top level 816 path @/.snapshots/544/snapshot/@/tmp
ID 868 gen 425054 top level 816 path @/.snapshots/544/snapshot/@/var
ID 869 gen 425055 top level 816 path @/.snapshots/544/snapshot/@/.snapshots
ID 870 gen 425056 top level 816 path @/.snapshots/544/snapshot/@/usr/local
ID 871 gen 425057 top level 816 path @/.snapshots/544/snapshot/@/snapshot
So, can I somehow make this right? I’ve been trying to fix this for about 10 hours now. The snapshots should be “healthy”, since everything worked fine yesterday; it could have been an update or something. Anyway, some things on this system have deep personal meaning to me, and I cannot afford to lose them by reinstalling everything from scratch.
I’m also very worried about the amount of available space reported by lsblk -f. I remember having only about 40–50 GB of free disk space left before this happened.
I forgot to reply to the question “When was this system installed originally?”. I installed it somewhere around last December, I believe, so I’ve been daily driving it for around a year.
It may be useful if you described what you did following “guides and AI”. Normally the default subvolume is the oldest snapshot; having the most recent snapshot as the default suggests that you tried to perform a rollback.
Anyway, your subvolumes appear to be gone. No idea how that is possible. You could try to mount the top level of the filesystem:
mount -r -o subvol=/ /dev/nvme0n1p3 /mnt
and look under it. Maybe the subvolumes became plain directories and are still there. You will need to drill down the directory tree.
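For example (all read-only, so nothing on the disk changes):

ls -la /mnt
ls -la /mnt/@
find /mnt/@ -maxdepth 2 -type d

If things like @/var or @/opt still show up there as plain directories with content in them, the data can be copied out from there.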
Otherwise this is something for the btrfs mailing list. Maybe some developer will have an idea how to recover the data, if that is still possible.
You most certainly should avoid changing this filesystem until you have recovered your data. “Changing” includes mounting it read-write.
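If you have another disk with enough free space, the safest first step is to image the whole partition and experiment only on the copy (the output path here is just an example):

dd if=/dev/nvme0n1p3 of=/path/to/external/tw-root.img bs=4M status=progress conv=fsync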
Essentially, I wanted to restore an earlier btrfs snapshot from when the system was working (the day before yesterday). I’d only ever done it with snapper before, when the system broke a long time ago, and that was extremely easy and a lifesaver. This time around I couldn’t get to snapper, so I had to boot into the recovery disk and use btrfs directly, following guides and AI, since it was all new to me.
I used ChatGPT:
First I was told that before restoring the snapshot I should delete the existing subvolumes.
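From memory, that step looked roughly like this; the exact paths and order are my best reconstruction, not a transcript:

sudo mount -o subvol=/ /dev/nvme0n1p3 /mnt
sudo btrfs subvolume delete /mnt/@/var
sudo btrfs subvolume delete /mnt/@/opt
sudo btrfs subvolume delete /mnt/@/srv

and so on for the other subvolumes under @.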
Then I was told to remove /root, but I was skeptical of that, so I just ran mv /mnt/root /mnt/root.broken in case I’d need it later. That suggestion seemed extreme.
I unmounted everything with sudo umount -R /mnt and ran reboot. Ended up at the emergency screen.
I then tried to recreate the subvolumes manually, and I suspect that’s what all these @/.snapshots/544/snapshot/@/var entries and so on most likely are. I believe I tried restoring earlier snapshots too, hoping it would make a difference, which is probably why their IDs don’t match the ones from the command output I showed above.
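Probably with something like (again reconstructing from memory):

sudo mount /dev/nvme0n1p3 /mnt
sudo mkdir -p /mnt/@
sudo btrfs subvolume create /mnt/@/var

which, if the default subvolume was already pointing at snapshot 544 after the rollback attempt, would put the new subvolume at @/.snapshots/544/snapshot/@/var, exactly like in the list above.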
Is there any hope, or am I royally screwed? The earlier snapshots should be fine, considering the system worked just fine the day before yesterday, so hopefully they’re good for something. Can they be used to recreate the subvolumes with the original files?
I have no idea what /mnt/root was at this point, but as the subvolumes are gone, they apparently were mounted below this directory when you were deleting them.
Looks like it. You yourself deleted the missing subvolumes. That’s what you get for blindly following whatever junk an AI presents.
No. They are snapshots of the root subvolume only; they do not include the content of the other subvolumes.
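You can check this yourself from the rescue system. Taking snapshot 552 from your listing as an example:

mount -r -o subvol=@/.snapshots/552/snapshot /dev/nvme0n1p3 /mnt
ls /mnt/home

The directories where other subvolumes were mounted (such as /home) will be empty stubs, because a btrfs snapshot does not descend into nested or mounted subvolumes.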
Where was the content that you want to recover located? Under /home, or in some other directory?
Reinstalling stuff is tedious, but I can live with that. The important files were in my user’s home directory; losing those would be much worse than reinstalling.
After reinstallation, simply restore /home from your last backup.
We back up our /home directory once a week. We don’t worry about backing up the system itself; if it crashes in a bad way, it’s easier to simply reinstall the system and then restore /home from the most recent backup.
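Nothing fancy is needed; a single weekly cron entry along these lines does it (paths are illustrative):

0 3 * * 0 rsync -aAX --delete /home/ /backup/home/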
Yeah, but unless Tumbleweed somehow does that automatically, which I don’t think it does (understandably, since that’s up to the user), many of the important things aren’t backed up.
Tbf I’ve been using Tumbleweed heavily since around 2021 on various devices, and I always did it the other way around. In the few instances where my system broke, I fixed it with snapper right away, and the system was otherwise very stable, so I was kinda counting on just rolling back with snapper on the few occasions when something breaks.
Ofc, as indicated above, this is my fault and could’ve been prevented.