
Thread: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

  1. #1

    Default STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    Hello,

This all started when I tried a dup (zypper dist-upgrade) from 15.2 to 15.3; my desktop froze and YaST in an Xfce console was waiting on a question. I couldn't interact with the program, so I rebooted and tried again.

I forget the rest, but now the 41.3G btrfs LVM partition is COMPLETELY FULL. Even though there is 1.7G free somewhere, I can't do an installation because it's full.

I've tried deleting about 70,000 files of maybe 200-300 MB in the modules folder, and even with the 1.7G free, SUSE still says only 200+ MB is available, and I still can't install packages on the system.

    No space left on device.

Even with the 51.3G LVM partition that I expanded it to, why does SUSE not see this new size? Also, I ran the following command:

rm -r /.snapshots/410

and now I've lost GRUB and can't boot either.

At this point, LVM is a pile of elephant $hit that is way too complicated to work with; I'm very annoyed by the moronic complexity versus just a root and a home partition.

Can I copy my LVM root partition to a regular btrfs partition without having to use LVM? I'd be done as fast as I could copy the data to a new partition.

  2. #2
    Join Date
    Jun 2008
    Location
    Netherlands
    Posts
    30,676

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    Quote Originally Posted by pc_btrfs View Post
    Even with the 51.3G lvm partition that I expanded it to, why does suse not see this new size. Also, I did the following command:
At least to me it is completely unclear what you did (you describe it only in very general terms, without giving any idea of the commands you used).

I am not sure, also from reading the rest of the post, whether you understand what a partition is, what a logical volume is (and no, they are not the same) on one side, and what a file system is on the other side.

It may be that you mean you enlarged the partition that the Volume Group is on. But then you of course also have to enlarge the Physical Volume (which grows the Volume Group), then the particular Logical Volume that contains the root file system, and after that the root file system itself.

    It would be best if you provided information on what you have now. E.g.
    Code:
    lsblk -f
    Code:
    vgdisplay
    Code:
    lvdisplay
    Code:
    mount
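The four resize steps described above can be sketched as a command sequence. This is only an illustration under assumptions: the device names (/dev/sdc, partition 3, VG "system", LV "root") are taken from later posts in this thread, and growpart (from cloud-utils) is just one way to do step 1.

```shell
# 1. Grow the partition holding the PV (growpart is from cloud-utils;
#    parted or fdisk work as well).
growpart /dev/sdc 3

# 2. Tell LVM the Physical Volume is bigger -- this grows the VG's free space.
pvresize /dev/sdc3

# 3. Grow the Logical Volume, here by all remaining free extents in the VG.
lvextend -l +100%FREE /dev/system/root

# 4. Finally grow the btrfs file system to fill the enlarged LV.
btrfs filesystem resize max /
```

Each step extends exactly one layer; skipping the last one produces precisely the symptom in this thread, where the LV grows but the file system (and thus the installer) still reports the old size.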
    Henk van Velden

  3. #3

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    Quote Originally Posted by hcvv View Post
At least to me it is completely unclear what you did (you describe it only in very general terms, without giving any idea of the commands you used).

I am not sure, also from reading the rest of the post, whether you understand what a partition is, what a logical volume is (and no, they are not the same) on one side, and what a file system is on the other side.

It may be that you mean you enlarged the partition that the Volume Group is on. But then you of course also have to enlarge the Physical Volume (which grows the Volume Group), then the particular Logical Volume that contains the root file system, and after that the root file system itself.

    It would be best if you provided information on what you have now. E.g.
    Code:
    lsblk -f
    Code:
    vgdisplay
    Code:
    lvdisplay
    Code:
    mount
From lsblk: I have three logical volumes on the drive, inside a volume group that doesn't use more than half the drive.

The VG sits on sdc3, the third partition of the drive, via LVM:

system-swap (swap)
system-home (xfs)
system-root (btrfs)

Volume Group:
name: system
Format: lvm2
Metadata Areas: 1
Metadata Sequence No: 7
Access: R/W
Status: resizable

Max LV: 0
Cur LV: 3
Open LV: 0
Max PV: 0
Cur PV: 1
Act PV: 1
VG Size: 107.74 GiB (from 97.* GB)
PE Size: 4.00 MiB
Total PE: 27582
Alloc PE / Size: 27582 / 107.74 GiB
Free PE / Size: 0 / 0

_______

Logical Volumes:

Path: /dev/system/swap
VG Name: system

Path: /dev/system/home
LV Name: home
VG Name: system

Path: /dev/system/root
VG Name: system
LV read/write access

The interesting one, the root LV:
LV Size: <51.22 GiB
Current LE: 13112
Segments: 1
Allocation: inherit
Read ahead sectors: auto (256)

So why does the SUSE installer see the old 42.x GB size instead of the new 52 GB? If I could fix this, I'd be able to upgrade my installation without overwriting everything.
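A quick way to see whether a mismatch like this sits at the LV layer or the file-system layer is to compare the two sizes directly. A sketch, assuming the VG/LV names from this thread and that the root file system is mounted at /:

```shell
# Size of the logical volume as LVM sees it.
lvs --units g system/root

# Size of the btrfs file system inside it (run against its mount point).
btrfs filesystem usage /

# If "Device size" reported by btrfs is smaller than the LV size above,
# the file system was never resized after the lvextend.
```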

  4. #4
    Join Date
    Jun 2008
    Location
    Netherlands
    Posts
    30,676

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    Please, please

There is an important, but not easy to find, feature on these forums.

Please, in the future, use CODE tags around copied/pasted computer text in a post. It is the # button in the toolbar of the post editor. When applicable, copy/paste the complete sequence: the prompt, the command, the output and the next prompt.

    An example is here: Using CODE tags Around your paste.

We really want to see the commands and the output together. Only then can we see where you were, who you were, what you did and what you got. Those helping need such computer facts to come to their own conclusions.
    Henk van Velden

  5. #5

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    [noneya@beezwaX ~]$ lsblk -f

    Code:
└─sdc3          LVM2_member LVM2 001  xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  ├─system-swap swap        1    b35c xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  ├─system-home xfs              e335 xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  └─system-root btrfs            a2db xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
zram0                                                                   [SWAP]
    $ vgdisplay

    Code:
      --- Volume group ---
      VG Name               system
      System ID             
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  7
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                3
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               107.74 GiB
      PE Size               4.00 MiB
      Total PE              27582
      Alloc PE / Size       27582 / 107.74 GiB
      Free  PE / Size       0 / 0
    $ lvdisplay

    Code:
      --- Logical volume ---
      LV Path                /dev/system/swap
      LV Name                swap
      VG Name                system
      LV UUID                xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx
      LV Write Access        read/write
      LV Creation host, time install, xxxx-xx-xx xx:xx:xx -xx00
      LV Status              available
      # open                 0
      LV Size                <3.76 GiB
      Current LE             962
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0
       
      --- Logical volume ---
      LV Path                /dev/system/home
      LV Name                home
      VG Name                system
      LV UUID                xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx
      LV Write Access        read/write
      LV Creation host, time install, 20xx-xx-xx xx:xx:xx -xx00
      LV Status              available
      # open                 0
      LV Size                <52.77 GiB
      Current LE             13508
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:1
       
      --- Logical volume ---
      LV Path                /dev/system/root
      LV Name                root
      VG Name                system
      LV UUID                xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx
      LV Write Access        read/write
      LV Creation host, time install, xxxx-xx-xx xx:xx:xx -xx00
      LV Status              available
      # open                 0
      LV Size                <51.22 GiB
      Current LE             13112
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:2
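As a sanity check on the figures above: the three LVs' logical extents add up exactly to the VG's Total PE, which is why Free PE is 0, and Total PE times PE size reproduces the reported VG size. A small illustration with the numbers copied from this output:

```python
# Figures copied from the vgdisplay/lvdisplay output above.
total_pe = 27582      # Total PE
pe_size_mib = 4.0     # PE Size: 4.00 MiB

# VG size = Total PE x PE size, converted MiB -> GiB.
vg_size_gib = total_pe * pe_size_mib / 1024
print(round(vg_size_gib, 2))  # 107.74, matching "VG Size 107.74 GiB"

# The three LVs (swap, home, root) use every extent in the VG.
le_swap, le_home, le_root = 962, 13508, 13112
print(le_swap + le_home + le_root == total_pe)  # True -> "Free PE / Size 0 / 0"
```

So the VG itself has no free space left; any further growth of the root LV has to come from growing the PV first.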

  6. #6

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    In another OS, on a separate drive, I can access the LV by clicking the 51.3GB disk icon in a file browser.

When I am at the / of that disk and look at its properties, it seems that part of the system doesn't know I want that specific LV expanded into the free space available in the overall VG.

This is what I find annoying about this new system. As stated, there are four different pieces to this puzzle.

Plain partitions have two: resize one part and expand (or add) another. Done.

I think I'm missing the fourth part mentioned and would like to understand it more.
Apologies for my very rude expression of frustration. Partitions are something I enjoy understanding, and they are easy to work with. LVM makes such a setup aggravating and complex.

I think an overall bulk command to resize all the needed components at once would make it much simpler, and less prone to error.
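A combined command along those lines does exist: lvextend's -r/--resizefs flag grows the file system in the same step via fsadm. A sketch using the LV name from this thread; note that fsadm gained btrfs support only in newer lvm2 releases, so on older systems the btrfs resize still has to be run separately.

```shell
# Grow the root LV by 10 GiB AND resize the file system inside it in one go.
# -r / --resizefs hands the file-system step to fsadm after the LV is extended.
lvextend -r -L +10G /dev/system/root
```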

  7. #7
    Join Date
    Jan 2014
    Location
    Erlangen
    Posts
    3,837

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    Quote Originally Posted by pc_btrfs View Post
    Hello,

This all started when I tried a dup (zypper dist-upgrade) from 15.2 to 15.3; my desktop froze and YaST in an Xfce console was waiting on a question. I couldn't interact with the program, so I rebooted and tried again.

I forget the rest, but now the 41.3G btrfs LVM partition is COMPLETELY FULL. Even though there is 1.7G free somewhere, I can't do an installation because it's full.

I've tried deleting about 70,000 files of maybe 200-300 MB in the modules folder, and even with the 1.7G free, SUSE still says only 200+ MB is available, and I still can't install packages on the system.

No space left on device.

Even with the 51.3G LVM partition that I expanded it to, why does SUSE not see this new size? Also, I ran the following command:

rm -r /.snapshots/410

and now I've lost GRUB and can't boot either.

At this point, LVM is a pile of elephant $hit that is way too complicated to work with; I'm very annoyed by the moronic complexity versus just a root and a home partition.

Can I copy my LVM root partition to a regular btrfs partition without having to use LVM? I'd be done as fast as I could copy the data to a new partition.
If you don't need LVM, don't use it. Two partitions suffice:
    Code:
    erlangen:~ # fdisk -l /dev/nvme0n1 
    Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: Samsung SSD 970 EVO Plus 2TB             
    Units: sectors of 1 * 512 = 512 bytes 
    Sector size (logical/physical): 512 bytes / 512 bytes 
    I/O size (minimum/optimal): 512 bytes / 512 bytes 
    Disklabel type: gpt 
    Disk identifier: F5B232D0-7A67-461D-8E7D-B86A5B4C6C10 
    
    Device            Start        End    Sectors  Size Type
    /dev/nvme0n1p1     2048     1050623    1048576  512M EFI System
    /dev/nvme0n1p2  1050624  3804628991 3803578368  1.8T Linux filesystem
    erlangen:~ #
    Btrfs disk space:
    Code:
    erlangen:~ # btrfs filesystem usage -T / 
    Overall: 
        Device size:                   1.77TiB 
        Device allocated:            435.07GiB 
        Device unallocated:            1.35TiB 
        Device missing:                  0.00B 
        Used:                        405.85GiB 
        Free (estimated):              1.37TiB      (min: 716.12GiB) 
        Free (statfs, df):             1.37TiB 
        Data ratio:                       1.00 
        Metadata ratio:                   2.00 
        Global reserve:              512.00MiB      (used: 0.00B) 
        Multiple profiles:                  no 
    
                      Data      Metadata System               
    Id Path           single    DUP      DUP      Unallocated 
    -- -------------- --------- -------- -------- ----------- 
     1 /dev/nvme0n1p2 429.01GiB  6.00GiB 64.00MiB     1.35TiB 
    -- -------------- --------- -------- -------- ----------- 
       Total          429.01GiB  3.00GiB 32.00MiB     1.35TiB 
       Used           402.20GiB  1.83GiB 64.00KiB             
    erlangen:~ #
    i7-6700K (2016), i5-8250U (2018), AMD Ryzen 5 3400G (2020), 5600X (2022) openSUSE Tumbleweed, KDE Plasma

  8. #8

    Default [Solved] Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

I would like to keep using LVM in order to learn it; once I understand its abilities more, I'll still use it.

Anyway, I had what is known as "slack space", and I ended up having to mount the device on a particular folder. I guess running the command against / wasn't really working (I was booted into another system), so when I

    Code:
    mount /dev/system/root ~/Downloads/btrfs
I was able to cd into that folder and run

    Code:
    sudo btrfs device usage ~/Downloads/btrfs
    Output:
    Code:
    /dev/sda1, ID: 1
       Device size:             7.71GiB
       Device slack:              0.00B
       Data,single:             5.68GiB
       Metadata,DUP:            2.00GiB
       System,DUP:             16.00MiB
       Unallocated:            13.00MiB
    
    [fedora@fedora ~]$
My device slack was exactly the amount of missing space. So I ran

    Code:
    sudo btrfs filesystem resize max ~/Downloads/btrfs
Hope this helps someone else. I hadn't mounted /dev/system/root on a folder like /mnt or a custom directory, so I couldn't run the command properly.

Now I have another issue:

I cannot install the 15.2 base system. It gives the error

"Couldn't initialize the target directory", which I believe is an install-medium issue that I'm currently working to solve.

  9. #9
    Join Date
    Jun 2008
    Location
    East of Podunk
    Posts
    33,085
    Blog Entries
    15

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

    Hi
    Did you read the upgrade SDB with respect to /var/cache? https://en.opensuse.org/SDB:System_upgrade

I would not use LVM for the OS; perhaps it is better suited to XFS partitions where data resides, e.g. /home?

I spend most of my time in Tumbleweed and don't run snapshots, but I have split out various directories that write a lot; maybe on the next install I would consider LVM for those. My / is btrfs with a size of 60GB; of that, 18GB is allocated and 13GB used, even with recent big updates.
    Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
    SUSE SLE, openSUSE Leap/Tumbleweed (x86_64) | GNOME DE
    If you find this post helpful and are logged into the web interface,
    please show your appreciation and click on the star below... Thanks!

  10. #10

    Default Re: STUMPED! Out of space 41.3GB LV, expanded to 51.x and SUSE still reading lower size

The snapshots take up most of the space in

/.snapshots

configured in yet another setup tool, snapper. I think the defaults are far too generous for partitions of around 40GB. Once I get the system running I'll change the snapshot settings and then go through my several kernel upgrades.

Forty gigs isn't enough with snapshots unless they are managed, maybe keeping only one. The same goes for old kernels.
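Trimming the snapper defaults can be done from the command line. A sketch for the root configuration; the limit values here are illustrative, not recommendations:

```shell
# Show the current limits for the root configuration.
snapper -c root get-config

# Keep fewer number-based (pre/post zypper) snapshots.
snapper -c root set-config NUMBER_LIMIT=2-4 NUMBER_LIMIT_IMPORTANT=2-3

# Disable hourly timeline snapshots entirely.
snapper -c root set-config TIMELINE_CREATE=no

# Apply the number-based cleanup algorithm to delete what is over the limit.
snapper -c root cleanup number
```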

