This all started when I tried a dup from 15.2 to 15.3. My desktop froze and YaST, in an Xfce console, was sitting at a question; I couldn’t interact with the program, so I rebooted and tried again.
I forget the rest, but now the 41.3G btrfs LVM partition is COMPLETELY FULL. Even though there is 1.7G free somewhere, I can’t do an installation because it’s full.
I’ve tried deleting about 70,000 files (maybe 200-300MB) in the modules folder, and even with the 1.7G free, SUSE still says only 200-odd MB is available and still can’t install packages to the system.
No space left on device.
Even with the 51.3G LVM partition that I expanded it to, why does SUSE not see the new size? Also, I ran the following command:
rm -r /.snapshots/410
and now I have lost GRUB and can’t boot either.
At this point, LVM is a pile of elephant $hit that is way too complicated to work with; I’m very annoyed by the moronic complexity compared to just a root and a home partition.
Can I copy my LVM root volume to a regular btrfs partition without having to use LVM? I’d be done in the time it takes to copy the data to a new partition.
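Side note for anyone hitting the same wall: on btrfs, deleting files does not free space while snapshots still reference them, so it is worth looking at the snapshot list and the real allocation before deleting anything. A minimal diagnostic sketch, assuming the root file system is mounted at /:

# show how btrfs has actually allocated the device, not just what df reports
sudo btrfs filesystem usage /
# list snapper snapshots; old ones keep "deleted" data alive
sudo snapper list
# remove an unwanted snapshot the supported way, rather than rm -r /.snapshots/<n>
sudo snapper delete 410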
At least to me it is completely unclear what you did (you describe it only in very general terms, giving no idea of the commands you used).
I am also not sure, after reading the rest of the post, whether you understand what a partition is and what a logical volume is (and no, they are not the same) on the one side, and what a file system is on the other.
It may be that you mean you enlarged the partition that the Volume Group is on. But then you of course also have to enlarge the Physical Volume (and with it the Volume Group), then you have to enlarge the particular Logical Volume that contains the root file system, and after that you have to enlarge the root file system itself.
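As a concrete illustration of that chain (a sketch only; it assumes the VG is called system, the root LV is root, and the enlarged partition is /dev/sda2, so adjust to the actual layout), each layer is grown in turn:

# 1. tell LVM that the Physical Volume now fills the enlarged partition
sudo pvresize /dev/sda2
# 2. the Volume Group grows with its PV; check that it now shows free extents
sudo vgdisplay system
# 3. grow the root Logical Volume into the free space (here: all of it)
sudo lvextend -l +100%FREE /dev/system/root
# 4. finally grow the btrfs file system inside the LV (it must be mounted)
sudo btrfs filesystem resize max /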
It would be best if you provided information on what you have now, e.g.:
Volume Group:
VG Name system
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 107.74 GiB (was 97.* GB)
PE Size 4.00 MiB
Total PE 27582
Alloc PE / Size 27582 / 107.74 GiB
Free PE / Size 0 / 0
Logical Volumes:
LV Path /dev/system/swap
VG Name system
LV Path /dev/system/home
LV Name home
VG Name system
LV Path /dev/system/root
LV Name root
VG Name system
LV Write Access read/write
LV Size <51.22 GiB (this is the interesting one)
Current LE 13112
Segments 1
Allocation inherit
Read ahead sectors auto (256)
So why does the SUSE installer see the old 42.x GB LVM size instead of the new 52GB? If I could fix this, I’d be able to upgrade my installation without overwriting everything.
There is an important, but not easy to find feature on the forums.
Please, in the future, use CODE tags around copied/pasted computer text in a post. It is the # button in the toolbar of the post editor. When applicable, copy/paste the complete text, that is, including the prompt, the command, the output and the next prompt.
We really want to see the commands and the output together. Only then can we see where you were, who you were, what you did and what you got. People who want to help must be able to draw their own conclusions based on computer facts.
--- Volume group ---
VG Name system
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 107.74 GiB
PE Size 4.00 MiB
Total PE 27582
Alloc PE / Size 27582 / 107.74 GiB
Free PE / Size 0 / 0
--- Logical volume ---
LV Path /dev/system/swap
LV Name swap
VG Name system
LV UUID xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx
LV Write Access read/write
LV Creation host, time install, xxxx-xx-xx xx:xx:xx -xx00
LV Status available
# open 0
LV Size <3.76 GiB
Current LE 962
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/system/home
LV Name home
VG Name system
LV UUID xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx
LV Write Access read/write
LV Creation host, time install, 20xx-xx-xx xx:xx:xx -xx00
LV Status available
# open 0
LV Size <52.77 GiB
Current LE 13508
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/system/root
LV Name root
VG Name system
LV UUID xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx
LV Write Access read/write
LV Creation host, time install, xxxx-xx-xx xx:xx:xx -xx00
LV Status available
# open 0
LV Size <51.22 GiB
Current LE 13112
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
In another OS, on a separate drive, I can access the LV by clicking the 51.3GB disk icon in a file browser.
When I am at the / of that disk and look at its properties, it looks like that part of the system doesn’t know that I want that specific LV expanded into the free space available in the overall VG.
This is what I find annoying about this new system. As stated, there are four different pieces to this puzzle.
With plain partitions there are only two: resize one partition and expand (or add) another. Done.
I think I’m missing the fourth part mentioned and would like to understand it better.
Apologies for my very rude expression of frustration. Plain partitions are something I enjoy; I understand how to set them up and how easy they are. LVM makes such a setup aggravating and complex.
I think an overall bulk command that resizes all the needed components at once would make it much simpler and less prone to error.
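For what it’s worth, LVM has something close to that for the last two steps: lvextend can resize the file system in the same call. A sketch, assuming the same system/root names as in the output above; note that the built-in resize covers ext4/XFS, and depending on the lvm2 version btrfs may still need a separate btrfs filesystem resize afterwards:

# grow the LV and, where supported, the file system inside it in one step
sudo lvextend -r -l +100%FREE /dev/system/root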
I would like to keep using LVM in order to learn it; when I understand its abilities better, I’ll still use it.
Anyway, I had what is known as “slack space”, and I ended up having to mount the device to a particular folder; running the resize against / (root) wasn’t really working. The slack on my device was exactly the amount of missing space. So I did:
sudo btrfs filesystem resize max ~/Downloads/btrfs
Hope this helps someone else: I hadn’t mounted /dev/system/root to a folder like /mnt or a custom directory, and until I did, I couldn’t run the command properly.
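Spelled out, the workaround seems to amount to this (a sketch only, using /mnt/rootlv rather than a folder under ~/Downloads, and /dev/system/root as shown in the output above):

# mount the root LV somewhere the running system can reach it
sudo mkdir -p /mnt/rootlv
sudo mount /dev/system/root /mnt/rootlv
# grow the btrfs file system to fill the whole (now larger) LV
sudo btrfs filesystem resize max /mnt/rootlv
# confirm the new size, then clean up
df -h /mnt/rootlv
sudo umount /mnt/rootlv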
Now I have another issue
::::::::::::::::::::::::::::::::::::::::
I cannot install the 15.2 base system on the machine. It gives the error
“Couldn’t initialize the target directory”, which I believe is an install-medium issue that I’m currently working to solve.
I would not use LVM for the OS; perhaps it is better suited to XFS partitions where data resides, e.g. /home?
I spend most of my time in Tumbleweed and don’t run snapshots, but I have split out various directories that get written to a lot; maybe on the next install I would consider LVM for those. My / is btrfs with a size of 60GB, of which 18GB is allocated and 13GB used, even after recent big updates.
Configured in yet another setup tool, with snapper. I think the defaults are far too much for partitions of around 40GB. Once I get it running I’ll change the snapshot settings and then go through my several kernel upgrades.
Forty gigs isn’t enough with snapshots unless they are managed, and maybe only one is kept. The same goes for kernels.
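As an illustration of reining in those defaults (a sketch; the exact values are a matter of taste and “root” is just the usual snapper config name):

# keep only a handful of number-cleanup snapshots instead of the defaults
sudo snapper -c root set-config "NUMBER_LIMIT=2-5" "NUMBER_LIMIT_IMPORTANT=2-3"
# optionally stop hourly timeline snapshots altogether
sudo snapper -c root set-config "TIMELINE_CREATE=no"
# limit how many kernels zypper keeps, by editing /etc/zypp/zypp.conf:
#   multiversion.kernels = latest,latest-1,running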
In general, if you increase the container you also have to increase the file system and/or partition inside it. File systems and partitions do not get their size from the container; you have to tell them what size you want.
LVM is a type of container that allows spanning multiple disks and also applying encryption across multiple virtual partitions and file systems.
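One way to see whether the layers agree on their sizes is to ask each of them directly. A read-only sketch (nothing here changes anything):

# the block devices and partitions as the kernel sees them
lsblk
# the LVM layers: physical volumes, volume groups, logical volumes
sudo pvs
sudo vgs
sudo lvs
# the file system's own idea of its size (here the running root)
sudo btrfs filesystem usage /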
No need to split. Copy-on-write may get in the way, but turning off this feature for selected subtrees is straightforward, as is turning off snapshots for subvolumes such as /var, /usr/local, /srv, /root, /opt, /home and others (see the sketch below).
A single partition occupying all available space on a drive and seamless snapshots are the most significant features of btrfs.
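For example (a sketch; the paths are only illustrations, and the +C attribute affects only files created after it is set, so it is normally applied to a fresh or emptied directory):

# disable copy-on-write for new files in a directory holding databases or VM images
sudo chattr +C /var/lib/mysql
lsattr -d /var/lib/mysql
# data in its own subvolume is not included in snapshots of the parent subvolume
sudo btrfs subvolume create /srv/data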