If it is a mirror, you need to lvreduce and then pvreduce it. If it is not, you will have to back it up, lvreduce all the mappings, vgreduce the volume, then recreate it and restore.
Use vgdisplay to get a map of the devices.
Use lvreduce to remove all /dev/sdb[0-9] mappings.
Use pvreduce to remove /dev/sdb.
Something like this if vg00 is the volume group and lvol[0-6] are logical volumes in it.
Huh? You use pvmove to move LVs off a specific PV, that’s all. If the LVs are too big to fit into the remaining space, you need to reduce the filesystem size and then reduce the LV size. No need to destroy anything; depending on the filesystem type this may even work online. Of course, having a backup is always highly recommended.
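To make the order of operations concrete, here is a dry-run sketch that only prints the command sequence instead of running it. The VG/LV/PV names and the 100G target are hypothetical placeholders, and it assumes an ext4 filesystem (hence resize2fs); the real run needs root and, as said, a backup.

```shell
# Dry-run sketch: print the shrink-then-remove sequence rather than run it.
# vg0, data, /dev/sdb1 and 100G are placeholders, not taken from this thread.
VG=vg0; LV=data; PV=/dev/sdb1
plan="e2fsck -f /dev/$VG/$LV
resize2fs /dev/$VG/$LV 100G
lvreduce -L 100G /dev/$VG/$LV
pvmove $PV
vgreduce $VG $PV"
# e2fsck: the filesystem must be clean before shrinking;
# resize2fs shrinks the filesystem FIRST, lvreduce the LV second,
# pvmove migrates the remaining extents off the PV,
# vgreduce finally drops the now-empty PV from the VG.
echo "$plan"
```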
Never had much luck with pvmove - I had not tried it since 1998, and not at all on Linux (I was using Veritas in the 1980s; Veritas OEM’ed the code to most Unix vendors, and they customized it to be different - LVM is what HP called it, Disk Array Plus is NCR’s). Ironically, Veritas was used to combine 64 MB disks into one larger disk for the OS. Now we use LVM to make smaller mirrors (which is what most HP-UX customers used LVM for).
What I remember was a corrupted file system - but we did not have journaling file systems back then. A post-mortem to find out why showed that Veritas kept adding to the physical drive we were trying to remove even when it thought it was done: it did move all the extents that were there when it started, but not those written while it was running. There was no bootable Unix for those machines back then - they were considered large if they had 4 MB of RAM. The largest could go to 64 MB - virtual memory was not the greatest back then. Unix still had PANIC code that showed up unannounced. Powering off without a shutdown caused 2-hour fscks.
Thank you for the reply. I’m not sure what you meant by “mirror”, though. Do I need to do these operations offline, in a live USB session?
I also found a post: How to Extend/Reduce LVM's (Logical Volume Management) in Linux - Part II, except that tutorial demonstrates how to reduce a logical volume, whereas I also need to remove a PV from a volume group. I find your commands easier to understand and execute.
Edit: I forgot to mention that my LVM has LUKS encryption on top too.
Not sure how LUKS encryption will affect the removal.
Steps I would take (I assume your VG name is SS and the LV names are RT, HM, and SWP):
Boot off openSUSE live media and make two mount points under /mnt:
mkdir /mnt/root
mkdir /mnt/home
Mount HM on /mnt/home and RT on /mnt/root, something like:
mount /dev/SS/RT /mnt/root
mount /dev/SS/HM /mnt/home
If you can see the files in /mnt/root and /mnt/home, LUKS encryption will not be an issue; go on to step 3. Swap is not a problem and can be remade with a mkswap command.
If you cannot see the files, LUKS is preventing shrinking from the live boot - boot your normal OS, make sure you have a good backup, and then do step 3.
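One possible reason the files are not visible from the live session is that the LUKS containers have to be opened before the VG becomes visible at all. A dry-run sketch of that unlock step (the device path /dev/sdb1 and mapper name cr_sdb1 are assumptions, not taken from the thread):

```shell
# Dry-run sketch: print the unlock steps rather than run them (the real
# commands need root, and the device/mapper names here are assumptions).
steps="cryptsetup open /dev/sdb1 cr_sdb1
vgscan
vgchange -ay SS"
# cryptsetup open unlocks the LUKS container (you will be asked for the
# passphrase); vgscan/vgchange then find and activate the VG so the
# /dev/SS/* LVs can be mounted as in step 2.
echo "$steps"
```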
Remove the volume with a pvmove; this will move it off /dev/sdb to the other NVMe drive - /dev/nvme0n1p4 is too small (I assume your VG name is SS).
pvmove /dev/sdb1 /dev/nvme0n1p3
vgreduce SS /dev/sdb
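Before the pvmove, it is worth confirming the target PV actually has enough free extents; `pvs --units g -o pv_name,pv_size,pv_free` shows this. A small parsing sketch over made-up sample output (the sizes below are invented for illustration; real numbers will differ):

```shell
# Sum the PFree column of pvs output. The sample lines are fabricated;
# on a real system you would pipe `pvs --noheadings --units g \
#   -o pv_name,pv_size,pv_free` straight into the awk.
sample='/dev/mapper/cr-auto-1 420.00g 0g
/dev/nvme0n1p3 500.00g 120.00g'
free_g=$(echo "$sample" | awk '{gsub(/g/,"",$3); total+=$3} END {print total}')
echo "$free_g"   # total free GiB across PVs in the sample
```

If the total free space is smaller than the extents you need to move, pvmove will refuse, and the filesystem/LV shrink has to happen first.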
Using ‘pvmove /dev/mapper/***’ just shows me I have no allowable space.
It’s strange that ‘pvs’ shows every PV is completely full, while the disk analyzer tells me I have less than 400 GB of data in total at the moment:
The command is run while the machine is online
# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/cr-auto-1
VG Name SS
PV Size 420.00 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 107519
Free PE 0
Allocated PE 107519
PV UUID ***
--- Physical volume ---
PV Name /dev/mapper/cr_ata-SanDisk_SDSSDA480G_***-part1
VG Name SS
PV Size 447.13 GiB / not usable 3.82 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 114464
Free PE 0
Allocated PE 114464
PV UUID ***
--- Physical volume ---
PV Name /dev/mapper/cr_nvme-WDC_WDS512G***-part4
VG Name SS
PV Size 56.30 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 14411
Free PE 0
Allocated PE 14411
PV UUID ***
I guess it has something to do with the luks encryption on top?
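For what it is worth, the pvdisplay output above already accounts for the “full”: Allocated PE × PE size equals the whole PV size, so every extent is allocated to some LV. pvs reports extent allocation, not how much data the filesystems inside the LVs hold. Checking the arithmetic with the numbers from the first PV:

```shell
# Numbers taken from the pvdisplay output above: 107519 extents of 4 MiB.
pe_size_mib=4
allocated_pe=107519
mib=$(( allocated_pe * pe_size_mib ))   # 430076 MiB allocated
gib=$(( mib / 1024 ))                   # 419 GiB (integer), i.e. the ~420 GiB PV size
echo "$gib"
```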
I suspect the LUKS encryption makes the drives “look” full as far as pvmove is concerned - we can see if LUKS is the issue by seeing how much disk is actually in use.
What does df -v show? I hope it shows total usage below 40% - if it’s more, there is not enough free space to use.
You should see how the /dev/mapper entries correspond, via their major/minor numbers, to the devices stacked on /dev/nvme* and /dev/sdb*.
df -v
ll /dev/mapper/*
ll /dev/nvme*
ll /dev/sdb*
Here is my non-LVM setup (I used to use LVM - I supported over 10,000 computers with LVM until I retired 13 years ago).
My largest real disk is only 17% full.
‘df -v’ shows that none of those logical volumes are full, but it doesn’t show physical volumes.
Anyway, I’m giving up on this task, as I searched online and found no info at all on removing a PV from a VG when LUKS encryption is on top.
Backup->reinstall->restore might be easier.
Edit: I’m also starting to doubt the benefit of setting up LVM at home. LVM never made life easier for me, as I hardly did any LVM resizing, nor did I use the LVM backup gimmicks.
This command shows none of the logical volumes at all. Are they mounted?
no info on removing a pv from a vg when luks encryption is on top at all.
I do not know which side is up from your point of view, but a PV is still a PV, whether the PV itself is encrypted or not. As already said several times: you need to reduce the filesystem size, reduce the LV size, move the LV off the PV you want to remove, and then remove the PV from the VG. Whether the PV itself is encrypted is irrelevant. Whether the first step can be done depends on the filesystem type, which was never shown.
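The filesystem type is easy to check, e.g. with `lsblk -f` or `df -T` (not all filesystems can shrink - ext4 can, offline; XFS cannot). A sketch that pulls the fstype column out of df -T style output (the sample line is invented for illustration):

```shell
# Extract the filesystem type (2nd column) of a df -T data line.
# The sample line is fabricated; on a real system you would use
# something like: df -T / | awk 'NR==2 {print $2}'  (NR==2 skips the header)
sample='/dev/mapper/SS-RT ext4 104857600 41943040 62914560 40% /'
fstype=$(echo "$sample" | awk '{print $2}')
echo "$fstype"
```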
Backup->reinstall->restore might be easier.
A backup is needed anyway, so yes, that is the most straightforward way.