Need config advice re: install incl. mdadm, LUKS2, lvm, btrfs (to begin with)

I am an experiential learner; reading about the multiple conflicting ways I can achieve my objectives has only served to confuse me.

I hoped that if I stated my objectives, where I was with them, and what I thought I needed to do next, I could get some feedback to break the endless cycle of install after reinstall. Not only is it doing my head in, my focus on trying to finish it is leaving little time for anything else.

I want to set up a KVM server with a Plasma DE on a Tumbleweed backbone for my Ryzen 3600X, 64GB 3200MHz machine (I can upgrade to a 5600X if need be). Storage-wise, it has 3 block devices available to it: 2 x NVMe (1 x 500GB, 1 x 2TB) and 1 x 1TB SATA SSD. I also have an external 2TB SSD partitioned into two for backup purposes (1TB for this machine, 1TB for my unfinished Qubes machine). I am out of work right now so cannot buy more equipment.

I want to use the 500GB drive and part of the 2TB drive for RAID, and have split this capacity via mdadm into 3 RAID devices: a 2GB boot mirror, a 50GB high-priority docs mirror, and the remaining 414GB for the root directory. As things stand, this has all been set up in the installer, with the RAID arrays then encrypted with LUKS.
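For reference, the split described above might look roughly like this on the command line (partition and array names are hypothetical; adjust to your actual layout):

```shell
# Hypothetical partitions on the 500GB (nvme0n1) and 2TB (nvme1n1) drives.
# 2GB boot mirror
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
# 50GB high-priority docs mirror
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
# remaining capacity striped (RAID0) for root
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3

# LUKS2 on top of each array (whether the boot mirror can be encrypted
# depends on where the bootloader has to read it from)
cryptsetup luksFormat --type luks2 /dev/md1
cryptsetup luksFormat --type luks2 /dev/md2
```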

Because this is a work in progress, none of these are in use yet; I have put a temporary install of SUSE on the 1TB SSD (also encrypted), with my GRUB2 EFI partition on an unencrypted USB thumb drive. As things stand, whole-disk encryption is working (I only need to type my password once), but when the machine is finished I have two FIDO2 keys that I'd like to use to log in, via their HOTP functionality, in conjunction with systemd on the boot drive. However, I see this as something to do once the rest of the config is complete, given the PC isn't being used for anything else at present. Similarly, I'd like to have TPM2 and Secure Boot set up, but will sort that out at the end too.
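One note for when you get to that stage: systemd's LUKS2 token integration (systemd-cryptenroll) uses the FIDO2 hmac-secret extension rather than HOTP, so the keys would unlock the volumes directly instead of generating one-time codes. A minimal sketch, assuming the root array is /dev/md2:

```shell
# Enroll a FIDO2 token as an additional LUKS2 unlock method
systemd-cryptenroll --fido2-device=auto /dev/md2

# Later, TPM2 binding can be added the same way (PCR 7 tracks Secure Boot state)
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/md2
```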

As I see it, I will then need to split the remaining 1.5TB of the NVMe drive into several partitions: a BitLocker-encrypted 200GB partition for Windows 10 (if that can be achieved), and two physical volumes of 300GB and 1TB. I will also need to replace the temp SUSE install on the 1TB SSD with an empty physical volume. These two physical volumes will then need to be encrypted with the same password as the mdadm arrays so that I still only need to type my password once.
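On the "type the password once" point: systemd-cryptsetup caches a successfully entered passphrase in the kernel keyring and tries it against the remaining volumes, so giving every container the same passphrase should be enough. A sketch of what /etc/crypttab might hold (mapper names hypothetical, UUID placeholders left unfilled):

```
# /etc/crypttab:  name  device  keyfile  options
cr_md1      UUID=<docs-array-uuid>   none  luks
cr_md2      UUID=<root-array-uuid>   none  luks
cr_nvme300  UUID=<300gb-pv-uuid>     none  luks
cr_nvme1t   UUID=<1tb-pv-uuid>       none  luks
cr_sata1t   UUID=<sata-ssd-uuid>     none  luks
```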

Ideally at this point I would then save the configuration, but there does not appear to be a way to do that in the installer. Could I do so by using the LiveCD instead (possibly by chrooting)?
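For what it's worth, the usual LiveCD route is to unlock and mount everything by hand and then chroot into the installed system to finish configuring it. A minimal sketch, assuming the root container is on /dev/md2 and the root filesystem is btrfs (names hypothetical):

```shell
cryptsetup open /dev/md2 cr_root              # unlock the root container
vgchange -ay                                  # activate any LVM volumes on top
mount /dev/mapper/vg_root-lv_root /mnt        # add -o subvol=@ if using a btrfs subvolume
for d in dev proc sys run; do mount --rbind /$d /mnt/$d; done
chroot /mnt /bin/bash
```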

After that (regardless of whether I can save it or not) I would then need to create two volume groups, one for root and the other for home. The root VG would comprise the boot array, the root RAID0 array, and the 300GB physical volume. The home VG would comprise everything else except the Windows partition.
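Assuming the opened LUKS containers carry hypothetical mapper names, the two volume groups might be created like this (note the PVs go on the /dev/mapper devices, not the raw arrays or partitions):

```shell
pvcreate /dev/mapper/cr_md0 /dev/mapper/cr_md2 /dev/mapper/cr_nvme300
vgcreate vg_root /dev/mapper/cr_md0 /dev/mapper/cr_md2 /dev/mapper/cr_nvme300

pvcreate /dev/mapper/cr_md1 /dev/mapper/cr_nvme1t /dev/mapper/cr_sata1t
vgcreate vg_home /dev/mapper/cr_md1 /dev/mapper/cr_nvme1t /dev/mapper/cr_sata1t
```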

For the root volume group (and this is where I am really unsure about how to do it) I would then create a logical volume each for the boot and root arrays and link them (using device mapper?) to the encrypted physical volumes they represent. The 300GB of capacity that remains would then be split into three: 2 x 32GB swap LVs (for striping) [can they be assigned to just the root VG? If so, how?] and 1 thin pool (root directories that would take up a lot of space on the RAID0 for only marginal benefit would then be placed inside a thin volume). The mirrored boot would then be formatted with a FAT filesystem, with the RAID0 and thin volume formatted with BTRFS.
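On pinning LVs to particular devices: lvcreate accepts an explicit PV list after the VG name, which restricts allocation to those PVs, so no separate device-mapper step is needed. A sketch using the hypothetical mapper names above:

```shell
# LVs that sit 1:1 on the arrays (LVM's device mapper does the linking)
lvcreate -l 100%PVS -n lv_boot vg_root /dev/mapper/cr_md0
lvcreate -l 100%PVS -n lv_root vg_root /dev/mapper/cr_md2

# Swap and a thin pool confined to the 300GB PV
lvcreate -L 32G  -n lv_swap0 vg_root /dev/mapper/cr_nvme300
lvcreate -L 32G  -n lv_swap1 vg_root /dev/mapper/cr_nvme300
lvcreate -L 200G --thinpool tp_root vg_root /dev/mapper/cr_nvme300
lvcreate -V 150G --thin -n lv_var vg_root/tp_root

mkfs.vfat  /dev/vg_root/lv_boot
mkfs.btrfs /dev/vg_root/lv_root
mkswap /dev/vg_root/lv_swap0 && mkswap /dev/vg_root/lv_swap1
```

A thin volume is an ordinary block device as far as mkfs is concerned, so btrfs on a thin volume works (with the usual caveats about overprovisioned pools running out of space).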

NOTE: I am unsure whether thin volumes can be formatted with BTRFS; if not, ext4 I guess. In case you are thinking 700GB of raw space is overkill for the root volume, the rationale is that once the install is complete and I have a working snapshot of it that I can revert to, I would like to try making it a multiboot environment with other BTRFS-compatible distros, most particularly Arch but also possibly Jellyfin and maybe Nobara or Garuda as well - I like to use different distros for different purposes. If that can't be done I will have to virtualise them instead. Virtualisation is also the key reason for putting the /var directory on RAID0. Only SUSE would act as a VM host, though.

The home volume group would then comprise the remaining 1.8TB of space and be split into 5 logical volumes: 2 x 32GB swap, 1 x 50GB high-priority RAID1 volume (mapped similarly to the underlying physical RAID volume), 1 x 400GB thin pool (with a thin volume for each distro), and the rest as a shared media storage volume for ISOs and entertainment (I work on the road a lot and would like to be able to remote in to access shows etc.). Other desired usage (via a VM) would probably be Nextcloud and maybe Jellyfin too. Before someone suggests Proxmox: I plan to get an independent NAS for that, but want to be au fait with how the products I want to run on it work before setting that up too.
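That home-side split might then look like the following, reusing the same PV-pinning trick (sizes from the description above, mapper and LV names hypothetical):

```shell
lvcreate -L 32G -n lv_swap2 vg_home
lvcreate -L 32G -n lv_swap3 vg_home
lvcreate -l 100%PVS -n lv_docs vg_home /dev/mapper/cr_md1   # 50GB RAID1 mirror
lvcreate -L 400G --thinpool tp_home vg_home
lvcreate -V 200G --thin -n lv_home_suse vg_home/tp_home     # one thin volume per distro
lvcreate -l 100%FREE -n lv_media vg_home                    # shared media storage

mkfs.btrfs /dev/vg_home/lv_docs
mkfs.btrfs /dev/vg_home/lv_media
```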
