Suggestions on reinstalling Tumbleweed

Hi all,
I’m getting a new NVMe disk soon and am considering reinstalling Tumbleweed.
I was thinking about the following options.

Option 1

nvme0n1 of 1 TB:
	- 512 GB NTFS: Windows 11
	- 512 GB btrfs: Tumbleweed system (root and swap)

LVM of 4 TB, XFS, for /home:
	- nvme1n1 2 TB
	- nvme2n1 2 TB

What I like about this configuration is that keeping /home separate on the two disks gives me a single 4 TB space, with the operating systems on their own disk. In case of problems I can replace and/or reformat either one and recover from backups.
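As a sketch of what building that 4 TB /home across the two 2 TB drives would look like (device names follow the plan above; the volume group and LV names are my own placeholders):

```shell
#!/bin/sh
# Sketch: one 4 TB XFS /home spanning two 2 TB NVMe drives via LVM.
# Run as root; device names follow the plan above, adjust as needed.
vg=homevg
if [ -b /dev/nvme1n1 ] && [ -b /dev/nvme2n1 ]; then
    pvcreate /dev/nvme1n1 /dev/nvme2n1          # mark both disks as PVs
    vgcreate "$vg" /dev/nvme1n1 /dev/nvme2n1    # one VG spanning both
    lvcreate -n home -l 100%FREE "$vg"          # single LV using all space
    mkfs.xfs "/dev/$vg/home"                    # format, then mount as /home
else
    echo "expected NVMe devices not present; adjust the device names first"
fi
```

Worth noting that a linear LV like this doubles the failure surface: losing either disk loses the whole /home, so the backup strategy matters.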

Option 2

nvme0n1 of 1 TB:
	- 1 TB Tumbleweed system (root and swap)

LVM of 4 TB, XFS, for /home:
	- nvme1n1 2 TB
	- nvme2n1 2 TB

In this case the system would be all Linux; I could install Windows in a virtual machine.

The only reason to keep a Windows setup is to have a backup machine for my architect wife (ArchiCAD, AutoCAD, Fusion), but I don’t have much experience with VMs, especially when GPU passthrough is involved.
Also, the VM would eat into data space, since its disk image would live under /home, so it’s probably not a good option.

Just to give an idea, my current situation is:

➜  ~ lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT  
NAME            FSTYPE        SIZE MOUNTPOINT  
sda             xfs           3.6T /volumes/backup  
sdb             xfs           3.6T /volumes/restic  
nvme0n1                     931.5G    
├─nvme0n1p1     ntfs          529M    
├─nvme0n1p2     vfat           99M    
├─nvme0n1p3                    16M    
├─nvme0n1p4     ntfs        930.1G    
└─nvme0n1p5     ntfs          824M    
nvme1n1         LVM2_member   1.9T    
└─system-home   xfs             3T /home  
nvme2n1                       1.8T    
├─nvme2n1p1     vfat          512M /boot/efi  
└─nvme2n1p2     LVM2_member   1.8T    
 ├─system-root btrfs       620.4G /var  
 ├─system-swap swap         31.3G [SWAP]  
 └─system-home xfs             3T /home 

So basically the main 1 TB disk is only for Windows, as this was my wife’s CAD machine.
I installed Tumbleweed on the LVM (nvme1n1 + nvme2n1).
I like the LVM approach for getting a single volume, but the drawback is that if either disk has problems I lose both the OS and the data.
On the same machine I have a 1-1 backup and incremental backups with restic, as well as backups of everything to NAS.

I would like to take the opportunity of upgrading a disk to fix my setup for good. I am leaning toward option 1, but I’d also welcome other approaches I may not be considering.

Thank you for your suggestions!

@boredcollie Regarding VMs and GPU passthrough: as long as the GPU you want to use is in its own IOMMU group, you shouldn’t have any issues. To verify, you need to add the IOMMU kernel boot option for your CPU; in my case it’s Intel, so I have intel_iommu=on. It also depends on the GPU hardware.
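After rebooting with intel_iommu=on (or amd_iommu=on for AMD), a sketch like this will list each group and its devices; the GPU (and its HDMI audio function) should ideally sit in a group of its own:

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices inside it.
groups=/sys/kernel/iommu_groups
if [ -d "$groups" ] && [ -n "$(ls -A "$groups" 2>/dev/null)" ]; then
    for g in "$groups"/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "  ${d##*/}"   # PCI address; feed to lspci -nns for names
        done
    done
else
    echo "no IOMMU groups found - enable IOMMU in firmware and on the kernel command line"
fi
```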

Regarding VM storage, with libvirt/virt-manager you can create a storage pool anywhere for it to use…
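For example, a directory-backed pool can be defined with virsh; the pool name and path here are placeholders, not anything specific to this thread:

```shell
#!/bin/sh
# Sketch: define a directory-backed libvirt storage pool.
# "vmpool" and the target path are placeholder names.
pool=vmpool
target=/var/lib/libvirt/vmpool
if command -v virsh >/dev/null 2>&1; then
    virsh pool-define-as "$pool" dir --target "$target"
    virsh pool-build "$pool"        # creates the target directory
    virsh pool-start "$pool"
    virsh pool-autostart "$pool"    # start the pool on boot
else
    echo "virsh not found - install the libvirt client tools first"
fi
```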

I have a Quadro K620 allocated for VMs here using vfio-pci, easy to set up…

  Device-1: NVIDIA GM107GL [Quadro K620] vendor: Hewlett-Packard
    driver: vfio-pci v: N/A arch: Maxwell pcie: speed: Unknown lanes: 63
    bus-ID: 01:00.0 chip-ID: 10de:13bb

Thanks @malcolmlewis, I’m learning and experimenting, so your post is very much appreciated. Today I tried VirtualBox because it was the only one I knew, but going deeper down the rabbit hole I discovered KVM/QEMU and libvirt; I was too tired to continue, so I’ll try your suggestions. Out of curiosity, when you say “you can create a storage pool anywhere for it to use”: could that also be a remote folder on my NAS or, for example, an external USB-C disk (with some speed limitations, I guess)?

@boredcollie I would also add that when, for example, the Windows 11 Pro VM is running, I can connect from any device running the remote client (virt-viewer) with SPICE.

Here I’m on an HP Stream 11, dual-core with 2 GB of RAM, running MicroOS with Hyprland … connected over Wi-Fi to my host running the Windows VM, using the flatpak version org.virt_manager.virt-viewer.

If your NAS can be connected with iSCSI, I suspect it wouldn’t be too bad. Or just pop in some separate storage?
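If the NAS does export an iSCSI target, libvirt can use it directly as a pool; a rough sketch (the host name and IQN are made-up placeholders):

```shell
#!/bin/sh
# Sketch: an iSCSI-backed libvirt pool; host and IQN are placeholders.
host=nas.example.lan
iqn=iqn.2024-01.lan.example:vmstore
if command -v virsh >/dev/null 2>&1; then
    virsh pool-define-as nas-iscsi iscsi \
        --source-host "$host" --source-dev "$iqn" \
        --target /dev/disk/by-path
    virsh pool-start nas-iscsi   # LUNs on the target become pool volumes
else
    echo "virsh not found - install the libvirt client tools first"
fi
```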

I’ve also run Windows with QEMU using a PCIe x1 SATA card passed through with vfio-pci, so Windows was on its own separate SSD…

Edit: the Windows 11 Pro VM has 12 logical CPUs and 32 GB of RAM allocated.

wow wow wow!

Returning to the topic of partitions, starting with the assumptions that:

  • I don’t need Windows
  • I can always fiddle around with virtualization later, once I gain some knowledge
  • my wife doesn’t care about having a backup of the PC

If I opt for option 2, does it make sense to use 1 TB just for the Tumbleweed root and swap? Wouldn’t it remain almost empty?

For the time being my current root is using about 122 GiB:

➜  ~ sudo btrfs filesystem usage /
Overall:
    Device size:                 620.42GiB
    Device allocated:            122.07GiB
    Device unallocated:          498.35GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                        110.05GiB
    Free (estimated):            507.37GiB      (min: 258.19GiB)
    Free (statfs, df):           507.36GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              213.58MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:114.01GiB, Used:104.99GiB (92.09%)
   /dev/mapper/system-root       114.01GiB

Metadata,DUP: Size:4.00GiB, Used:2.53GiB (63.32%)
   /dev/mapper/system-root         8.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB (0.05%)
   /dev/mapper/system-root        64.00MiB

Unallocated:
   /dev/mapper/system-root       498.35GiB

@boredcollie I run my development box on a 1TB NVMe; I also have a couple of 500GB SSDs on hardware RAID 1, and an older SSD just used for building on.

I had no issues with btrfs, but I didn’t use snapper either… For this recent rebuild I went with ext4.

RAID:
  Hardware-1: Intel sSATA Controller [RAID Mode] driver: ahci v: 3.0
    port: 7040 bus-ID: 00:11.4 chip-ID: 8086:2827 rev: N/A class-ID: 0104
  Supported mdraid levels: raid1
  Device-1: md126 type: mdraid level: mirror status: active size: 453.09 GiB
  Info: report: 2/2 UU blocks: 475099136 chunk-size: N/A
  Components: Online: 0: sdb 1: sda
  Device-2: md127 type: mdraid level: N/A status: inactive size: N/A
  Info: report: N/A blocks: 10402 chunk-size: N/A
  Components: Online: N/A Spare: 0: sdb 1: sda
Drives:
  Local Storage: total: raw: 1.97 TiB usable: 564.87 GiB
    used: 275.08 GiB (48.7%)
  ID-1: /dev/nvme0n1 vendor: Silicon Power model: SPCC M.2 PCIe SSD
    size: 953.87 GiB speed: 63.2 Gb/s lanes: 4 tech: SSD serial: <filter>
    fw-rev: SN13683 temp: 37.9 C scheme: GPT
  ID-2: /dev/sda vendor: Silicon Power model: SPCC Solid State Disk
    size: 476.94 GiB speed: 6.0 Gb/s tech: SSD serial: <filter> fw-rev: 269
  ID-3: /dev/sdb vendor: Silicon Power model: SPCC Solid State Disk
    size: 476.94 GiB speed: 6.0 Gb/s tech: SSD serial: <filter> fw-rev: 269
  ID-4: /dev/sdc vendor: OCZ model: VERTEX460A size: 111.79 GiB
    speed: 6.0 Gb/s tech: SSD serial: <filter> fw-rev: 1.01 scheme: GPT
Partition:
  ID-1: / size: 933.89 GiB used: 115.66 GiB (12.4%) fs: ext4
    dev: /dev/nvme0n1p2
  ID-2: /boot/efi size: 3.99 GiB used: 164 KiB (0.0%) fs: vfat
    dev: /dev/nvme0n1p1
Swap:
  Alert: No swap data was found.

How much RAM is in your system? I don’t use swap, but I don’t hibernate/suspend either; if I think I need swap I just activate zram instead.
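One way to set that up on Tumbleweed is the zram-generator package; this sketch just prints the config it expects (ram / 2 and zstd are common defaults, not anything mandated, so tune to taste):

```shell
#!/bin/sh
# Sketch: contents for /etc/systemd/zram-generator.conf (package
# "zram-generator"); systemd then creates a compressed zram swap device.
conf=/etc/systemd/zram-generator.conf
cat <<'EOF'
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
echo "write the above to $conf, then run: systemctl daemon-reload && systemctl start systemd-zram-setup@zram0.service"
```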

My virtual machines reside on the NVMe.

On a single disk, I run Windows 10 as dual boot with openSUSE. I also have Windows 10 in a libvirt VM; this one is camouflaged to not look like a VM, as I let scammers log onto it. I have a backup clone.

I have both reinstalled openSUSE and done a fresh install of openSUSE over the last year or so. I have lost nothing from /home in either process. I have not needed to reinstall the Windows 10 partition, and I have had no issues with GRUB finding everything.
I like that openSUSE can mount the Windows NTFS partition and read it, so I can grab a file as needed.
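For reference, a read-only mount of such a partition with the in-kernel ntfs3 driver might look like this (the device name is borrowed from the lsblk output earlier in the thread; adjust to your layout):

```shell
#!/bin/sh
# Sketch: mount the Windows data partition read-only with the kernel
# ntfs3 driver (kernel >= 5.15). Device name is an example.
dev=/dev/nvme0n1p4
if [ -b "$dev" ]; then
    mkdir -p /mnt/windows
    mount -t ntfs3 -o ro "$dev" /mnt/windows
else
    echo "$dev not present - adjust the device name"
fi
```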

Hi everyone,

I wanted to share my recent experience with reinstalling openSUSE Tumbleweed, which went smoothly. I took this opportunity to complete my recovery guide, documenting every step I took to return to my previous state, including GPU driver installation.

Here are the highlights.

Backup and Recovery
I successfully restored my $HOME directory from a backup, allowing me to test my backup strategy. So far, everything is running smoothly, and I’m very satisfied with the results.
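For anyone curious, restoring $HOME from a restic repository like the one mounted at /volumes/restic earlier in the thread boils down to something like this (the repository path is from my setup; the restore target is an example):

```shell
#!/bin/sh
# Sketch: restore the latest $HOME snapshot from a restic repository.
# restic will prompt for the repository password interactively.
repo=/volumes/restic
if command -v restic >/dev/null 2>&1; then
    restic -r "$repo" snapshots                  # list available snapshots
    restic -r "$repo" restore latest --target /  # paths restored relative to /
else
    echo "restic not installed"
fi
```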

GPU Drivers
I installed the AMD GPU drivers by following the instructions from openSUSE’s official guide and managed to set up OpenCL on the first try. With the new drivers, my usual performance test in Darktable is now faster by 1 second! :rofl:

➜  performance ./test.sh
    >>>>>>>>>>PREVIOUS SETUP<<<<<<<<<<             
     3.4612 [dev_process_export] pixel pipeline processing took 1.895 secs (2.745 CPU)
     3.0868 [dev_process_export] pixel pipeline processing took 1.565 secs (1.991 CPU)
     3.3576 [dev_process_export] pixel pipeline processing took 1.823 secs (2.009 CPU)
     3.4581 [dev_process_export] pixel pipeline processing took 1.673 secs (2.956 CPU)
     3.0378 [dev_process_export] pixel pipeline processing took 1.535 secs (2.048 CPU)
     3.4570 [dev_process_export] pixel pipeline processing took 1.926 secs (2.138 CPU)
     >>>>>>>>>>CURRENT SETUP RUSTICL<<<<<<<<<<   
    11.5726 [dev_process_export] pixel pipeline processing took 9.827 secs (2.587 CPU)
     7.3652 [dev_process_export] pixel pipeline processing took 5.552 secs (2.534 CPU)
    12.7867 [dev_process_export] pixel pipeline processing took 10.982 secs (2.458 CPU)
    11.7370 [dev_process_export] pixel pipeline processing took 9.824 secs (2.403 CPU)
    11.7623 [dev_process_export] pixel pipeline processing took 9.780 secs (2.768 CPU)
     >>>>>>>>>>CURRENT SETUP ROCM OPENCL<<<<<<<<<<
     2.7589 [dev_process_export] pixel pipeline processing took 1.172 secs (2.456 CPU)
     2.7885 [dev_process_export] pixel pipeline processing took 1.177 secs (2.449 CPU)
     2.7494 [dev_process_export] pixel pipeline processing took 1.184 secs (2.408 CPU)
     2.8028 [dev_process_export] pixel pipeline processing took 1.217 secs (2.395 CPU)

This experience has reinforced my belief that Tumbleweed is the best Linux distribution out there.
The process was seamless, and I’m thrilled with the performance improvements.

My next adventure will be setting up a Windows VM, just for fun, and getting GPU passthrough working.

Thanks for reading, and I hope this encourages others to try out Tumbleweed!

