btrfs subvolume create boot/grub2/x86_64-efi
ERROR: 'boot/grub2/x86_64-efi' exists
btrfs subvolume create @/boot/grub2/x86_64-efi
ERROR: can't access '@/boot/grub2'
???
On 10/04/2016 09:16 AM, SUSEtoad wrote:
>
> SUSEtoad;2794779 Wrote:
>> So, I was correct that it’s a BTRFS subvolume problem. OK. How do I fix
>> it?
>
> Does anyone know how to fix the BTRFS subvolume issue? There are no
> examples in any BTRFS guide I have read.
>
> What is the “@” for? Is that required for naming subvolumes?
>
> Why does the openSUSE fstab file have multiple lines pointing a real
> directory to a subvolume of the same path? Why not just use the
> directory … ??? I don’t understand what these subvolumes are supposed to
> be for or why this “@” is appearing in the GRUB error message.
The reason that fstab redundantly points to subvolumes is so that they
are still mounted when you choose a non-default subvolume as your root
mount, for example during rollback/recovery. Without those entries, the
subvolumes, which are subvolumes and not ordinary directories, will not show
up when you try to roll back, and as some of them are really important,
that’s bad. So it’s okay, and a good thing, even though most of the time
(unless you spend most of your time in recovery/rollback mode) it is redundant.
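For illustration, the relevant fstab entries look roughly like the following; the UUID is a placeholder and the exact subvolume list varies, so compare with your own /etc/fstab:

UUID=xxxx-xxxx  /                        btrfs  defaults                        0 0
UUID=xxxx-xxxx  /boot/grub2/x86_64-efi   btrfs  subvol=@/boot/grub2/x86_64-efi  0 0
UUID=xxxx-xxxx  /var/log                 btrfs  subvol=@/var/log                0 0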
–
Good luck.
I think that’s internal to how grub handles “btrfs”.
Either create the necessary subvolume structure (but you need to know it; you can guess it from /etc/fstab) or reinstall the bootloader so it picks up your current layout. I understand that all of these are rather new concepts, so I guess reinstalling the bootloader will be easier (although the result won’t be truly "openSUSE"ish and may cause issues with snapper).
If you are really interested in doing a manual recovery, we can try to go through it step by step together. I have not done it myself, but I know the theory and background, so I will likely come up with the correct commands.
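As a rough sketch of what the manual route looks like (the device name /dev/sda2 is hypothetical; use your own): the “@/…” path only exists relative to the top level of the btrfs filesystem, which is why creating it from the normally mounted “/” fails. Something like:

mount -o subvolid=5 /dev/sda2 /mnt        # mount the btrfs top level, where "@" lives
mkdir -p /mnt/@/boot/grub2                # the parent components are plain directories
btrfs subvolume create /mnt/@/boot/grub2/x86_64-efi
umount /mnt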
What is the “@” for?
Just part of the pathname, like “usr” in “/usr/bin/mv”. Nothing more.
Is that required for naming subvolumes?
Not from the btrfs point of view - it does not care. But the Leap installer creates these subvolumes by default.
I don’t understand what these subvolumes are supposed to be for or why this “@” is appearing in the GRUB error message.
Because this is the real full pathname on btrfs, starting from the top. We may ask “why such a name”, but it is no worse than “/usr/bin” and certainly better than “$^/78#/jingle/bells”, which is also a legitimate path name.
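You can check those full pathnames yourself with “btrfs subvolume list”; the lines below only illustrate the output format; the IDs and the exact subvolume set will differ on your system:

btrfs subvolume list /
# ID 257 gen 4711 top level 5 path @
# ID 260 gen 4711 top level 257 path @/boot/grub2/x86_64-efi
# ID 263 gen 4711 top level 257 path @/home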
Ultimately, the solution was to reinstall from scratch, and then spend several hours re-installing, re-fixing, re-configuring and re-adjusting. Fortunately, my /home directory is on a separate drive, so my data, documents and many account settings were unaffected.
This was a disappointing experience, as I thought my regular, daily backups would enable me to simply copy the old install to a new drive and be running again. However, it’s clear to me now that even a Linux OS is so welded to the drive it’s first installed on that moving or copying it to a new drive is almost impossible.
I will need to re-think my backup and recovery plans for the future. It seems that even with regular backups, a hard drive failure or even a drive upgrade, as was my case, inevitably results in hours of lost time looking at installer progress bars.
My backup strategy is to back up files, not file systems or complete devices. It has several advantages, like being able to restore a single file a user might have thrown away or broken by accident.
I do of course back up all files in /home (and /root), and other application data that may be outside /home (e.g. /srv on a web server). From the system I only back up /etc and /boot, which contain most of the system configuration data.
And I also keep data about the system, like a list of all RPMs installed (generated regularly by a script). Also, when installing a system, I make notes of what I clicked and chose, including a list of the packages I removed from and added to the default list of packages the installer offers to install.
As gogalthorpe suggested, a fresh installation is then often quick enough to keep downtime reasonable (for me).
Whatever backup strategy/policy you choose for whatever cases (broken file, broken disks, burnt-out house), always test a restore. And of course this testing should be done in such a way that it is always possible to go back to the original situation. If I understand correctly what you did, the original system disk should have been removed from the system (as being “broken”), and then you try to restore the situation with your cloned disk in its place. No success? Put the old disk back in its place.
My own disaster recovery strategy is to reinstall, and then restore “/home” from backup. It never seems worth the effort to try restoring the root file system.
Backups use “dar” which allows restoring of individual files.
I use rsync (via the luckyBackup GUI), which reduces the time needed for backups after the first image. Doing file-level backups is also nice in that there is no problem if I decide to change file systems.
If you want a full system backup, back up the whole drive, not just partitions. A backup program like Clonezilla is a good way, but you need to pay attention to what it is backing up.
And do not forget that when making “clones”, the partitions to be cloned must be unmounted, else a consistent result is not guaranteed.
Thus, when doing this on a root partition, your system must be down.
That is not needed with backups on the file level (well, databases might need to be stopped, but I assume database backup is outside the present topic).
And yes, using rsync for a backup on the file level, either by your own typing/script or through a tool that uses it, is very efficient in that it only copies changed items. You can also arrange to keep, say, 10 backups, where unchanged files are hard links and thus take no extra space, so you can go back more than one backup period for a restore.
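A minimal sketch of that hard-link scheme (the backup paths are made up; tools like luckyBackup wrap something similar):

rsync -a --delete --link-dest=/backup/home-2016-10-05 /home/ /backup/home-2016-10-06/
# unchanged files in the new tree are hard links to yesterday's copies,
# so every daily tree is complete but only changed files use extra space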
And one should consider whether a burnt-out system is one of the cases to be reckoned with, and thus whether a backup to removable mass-storage or a remote system is needed. A burnt-out house scenario should include out-of-the-house storage, again either remote or by physical transport to a friend’s house.
Etc, etc, …
I am interested in this. I noted that the YaST Software Manager can export an inventory of installed packages. Can this be automated / cron’ed? It would have been crazily helpful if such an export was created daily and picked up in my rsync cron runs. Otherwise, it’s only as good as the last time I remembered to do an export.
On Thu 06 Oct 2016 01:26:02 AM CDT, SUSEtoad wrote:
> hcvv;2794857 Wrote:
>> And I also keep data about the system, like a list of all RPMs
>> installed (generated regularly by a script).
>
> I am interested in this. I noted that the YaST Software Manager can
> export an inventory of installed packages. Can this be automated /
> cron’ed? It would have been crazily helpful if such an export was
> created daily and picked up in my rsync cron runs. Otherwise, it’s only
> as good as the last time I remembered to do an export.
Hi
Just use the rpm -qa command… there is also /var/log/zypp/history, which
has more info and is updated as soon as something is installed.
There is also SNMP to read the hrSWInstalledName.N OID.
Then you have the issue of where package X came from… zypper and its
switches should provide this info.
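To answer the cron question: a sketch of a daily job would look something like the script below (the file name and output path are just examples; drop it in /etc/cron.daily and make it executable so the list ends up somewhere your rsync runs already cover):

#!/bin/sh
# hypothetical /etc/cron.daily/package-list
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | sort > /root/installed-packages.txt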
–
Cheers Malcolm °¿° LFCS, SUSE Knowledge Partner (Linux Counter #276890)
openSUSE Leap 42.1|GNOME 3.16.2|4.1.31-30-default
A lot of info already posted by Malcolm.
This is the exact statement I use:
rpm -qa --qf '%{NAME}:%{VERSION}:%{RELEASE}:%{INSTALLTID}:%{SUMMARY}\n' | sort
I then pipe that further through a read to split the fields into separate parameters, which I then use to create an HTML table row. In other words, I create an HTML table from it that is shown on an HTML page that also contains another table with the repositories.
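A rough sketch of that read loop (the output file name and the exact HTML are my own illustration of the idea, not the actual script):

rpm -qa --qf '%{NAME}:%{VERSION}:%{RELEASE}:%{INSTALLTID}:%{SUMMARY}\n' | sort |
while IFS=: read -r name version release tid summary; do
    # summary is the last field, so read keeps any ':' inside it intact
    printf '<tr><td>%s</td><td>%s</td><td>%s</td><td>%s</td></tr>\n' \
        "$name" "$version" "$release" "$summary"
done > packages-table.html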
That page is part of a series of pages with information about several aspects of a system, like hardware found, disk usage, network (interfaces, routes, DNS, NTP, NFS server and client), booting, users and groups, …
That bunch of scripts can of course be run through cron.
The created HTML pages are gathered throughout the network onto one system, where I then have a website with data about all the systems I manage.
I assume this gives you enough to create something to your own liking and needs.