Subject: How to create thin pool lvm

I realized I was running out of space on my current partition and decided to create an LVM volume group with a single thin pool volume on it. Unfortunately, I can’t figure out how to do this, and the way I’m currently trying is unsupported by the kernel (interestingly enough). Searching with modprobe using the keywords “mapper” or “device” isn’t turning anything up either.
Note that the UUIDs were changed for anti-cracker reasons, but everything else is correct.


sudo lvm
lvm> pvcreate --pvmetadatacopies 2 /dev/sda4
lvm> vgcreate --clustered n --maxlogicalvolumes 7 --maxphysicalvolumes 255 --vgmetadatacopies 2 sda44 /dev/sda4
lvm> lvcreate -a ay --chunksize 512KiB -L 400G -V 773.92G --thinpool sda44
  Cannot read thin-pool target version.
  thin: Required device-mapper target(s) not detected in your kernel
lvm> lvdisplay
lvm> pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda4
  VG Name               sda44
  PV Size               773.93 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              198124
  Free PE               198124
  Allocated PE          0
  PV UUID               xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  
lvm> vgdisplay
  --- Volume group ---
  VG Name               sda44
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                7
  Cur LV                0
  Open LV               0
  Max PV                255
  Cur PV                1
  Act PV                1
  VG Size               773.92 GiB
  PE Size               4.00 MiB
  Total PE              198124
  Alloc PE / Size       0 / 0   
  Free  PE / Size       198124 / 773.92 GiB
  VG UUID               xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  
lvm>
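For anyone hitting the same error: the message suggests the kernel’s thin-provisioning device-mapper target isn’t available. A hedged sketch of how to check and load it, assuming the usual module name dm_thin_pool (on a self-built or vanilla kernel the module may simply not have been built at all):

```shell
# List the device-mapper targets the running kernel currently provides
sudo dmsetup targets

# On most distribution kernels the thin-pool target lives in dm_thin_pool
sudo modprobe dm_thin_pool

# Verify that "thin" and "thin-pool" now appear in the target list
sudo dmsetup targets | grep thin
```

If the modprobe fails with “module not found”, the kernel was likely built without CONFIG_DM_THIN_PROVISIONING, and lvcreate will keep reporting the same error until a kernel with that target is booted.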

Telling us which version of openSUSE you use is always appreciated.

And since you think your kernel is involved, telling us its version (especially when it differs from the one released with your as-yet-unknown openSUSE version) might also be of interest.

Are you trying to change an existing partition to LVM??

I’m not sure that is possible.

I’m afraid I miss the logical connection between the first and second parts of this sentence. If you run out of space, you need to add more space; having a thin provisioning pool does not magically increase the available space.

How clumsy of me (I’ve only made this mistake 20 times?:shame: Someday I’ll learn…)
I’m running openSUSE 12.3 and the kernel is kernel-vanilla 3.11.6-4.1 x86_64.

I have a hard disk with 200 GiB. I partitioned it so that I have 3 partitions and 100 GiB left over for a fourth, in case I one day see it necessary to have another partition or to expand one of the current ones. I tried to partition it and ran into problems, so I’m asking you guys.
I find your lack of faith in my intelligence disturbing…

choke…cough…wheeze

If you make the 4th partition an extended partition, you can then have multiple logical partitions inside the extended.

But in any case I don’t think it would be wise to try to convert an existing partition to LVM. That needs to be set up from the start. Even if it were possible, it would put the data at significant risk. Better to just back up (you would need to anyway) and start fresh, using LVM for your data partition. If you put root on LVM you will need a separate boot partition (200 MB minimum), since GRUB really does not understand LVM.

With LVM you can splice multiple partitions together, even across disks, into a single logical volume.
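A minimal sketch of that splicing, assuming two hypothetical spare partitions /dev/sdb1 and /dev/sdc1 on different disks (device names are made up; run against real devices only after double-checking them):

```shell
# Register both (hypothetical) partitions as LVM physical volumes
sudo pvcreate /dev/sdb1 /dev/sdc1

# Create one volume group that spans both disks
sudo vgcreate datavg /dev/sdb1 /dev/sdc1

# Carve a single logical volume out of all the free space across both disks
sudo lvcreate -l 100%FREE -n datalv datavg

# It then behaves like any other block device
sudo mkfs.xfs /dev/datavg/datalv
```

The volume group can later be grown with vgextend and the logical volume with lvextend, which is the main attraction over plain partitions.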

On the other hand, you could just partition the unused 100 GB and mount it at a convenient location for your data.

I also doubt the usefulness of using LVM to reach your goal. But it is only a feeling. My feeling is also that you ended up going for LVM just because you heard about it and think it is a sort of magical cure for all partition size problems. Sorry if my feelings are wrong.

But as gogalthorp says, there are many possibilities, and I assume that we (at least I) can only give more detailed advice when you start providing real information. Thus not only a story about “I have … disk with … partitions …”, but the output of commands like

fdisk -l
mount | grep /dev/sd

and then telling us what problem you have with which partition (called by the names the above commands showed us), etc.

Only then will the whole vague story become understandable, with hard facts. And we will probably then know what your real problem is, and not just your problems with one of the steps you assume will lead to a solution (LVM) (see: http://www.catb.org/~esr/faqs/smart-questions.html#goal ).

I tried to give a basic example, not something exactly correct. I don’t understand why you care so much about which partition is where; all I needed to know is why the above command failed.


% mount | grep /dev/sd
/dev/sda6 on / type ext4 (rw,relatime,data=ordered)
% fdisk -l

This is the layout at runlevel 5. If I mount my two extra storage partitions I now have:


% mount | grep /dev/sd
/dev/sda6 on / type ext4 (rw,relatime,data=ordered)
/dev/sda2 on /media/sda33 type xfs (rw,nosuid,nodev,noexec,relatime,attr2,inode64,noquota,user=ballsystemlord)
/dev/sda3 on /media/sda22 type xfs (rw,nosuid,nodev,noexec,relatime,attr2,inode64,noquota,user=ballsystemlord)

And as you can see, I’ve got a lack of space on my extra storage partitions. I’ve done a lot of mirroring of computer info to my hard disk from the intranet during the past four years (actually it’s not that large, since it’s only about 0.136986301369863 GiB per day).


% df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6       193G   88G  104G  46% /
devtmpfs        3.5G   44K  3.5G   1% /dev
tmpfs           3.5G  2.6M  3.5G   1% /dev/shm
tmpfs           3.5G  4.6M  3.5G   1% /run
tmpfs           3.5G     0  3.5G   0% /sys/fs/cgroup
tmpfs           3.5G  4.6M  3.5G   1% /var/lock
tmpfs           3.5G  4.6M  3.5G   1% /var/run
/dev/sda2       203G  196G  7.2G  97% /media/sda33
/dev/sda3       199G  193G  5.6G  98% /media/sda22

/dev/sda5 is another Linux distro. /dev/sda1 is the extended partition, which I use for testing, fooling around, and experimentation with Linux OSes.
I decided that I should start using LVM because it can span multiple disks and can hold multiple partitions. Because I never expected to have 400 GiB worth of data on my hard drive, I decided that the next time I needed more room I would use an LVM thin pool, so that I could have just one partition and it could grow to whatever size I needed for the mirrored data.
Now, could we find an answer?
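For what it’s worth, once the missing kernel target is sorted out, the two-step thin-pool syntax that lvcreate expects looks roughly like this. The pool and volume names (tpool, thindata) are made up for illustration; the sizes are taken from the session in the first post:

```shell
# Step 1: create the thin pool itself inside the existing volume group sda44
sudo lvcreate -L 400G --chunksize 512K --thinpool tpool sda44

# Step 2: create an over-provisioned thin volume backed by that pool
sudo lvcreate -V 773G --thin -n thindata sda44/tpool

# Put a filesystem on the thin volume as usual
sudo mkfs.xfs /dev/sda44/thindata
```

Note that LVM size suffixes are single letters (512K), so the original “512KiB” spelling may itself be rejected even once the kernel supports the thin-pool target.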

Why can’t you use YaST for this?
The YaST Partitioner has options for LVM creation.
All you need to do is make sure you pick the right drive and all.

On my new server build I am using Btrfs >:), since it combines the functions of LVM and MD RAID into the file system.
Multi-device RAID is not supported in YaST, but it works nicely from the CLI so far; I have been playing with it for the last six months.

I even recovered the system drive once (yes, my root and /home are on Btrfs partitions as well).
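A minimal sketch of the multi-device Btrfs setup described above, assuming two hypothetical blank disks /dev/sdb and /dev/sdc and a mount point /mnt/data (all names made up for illustration):

```shell
# Mirror both metadata (-m) and data (-d) across the two hypothetical devices
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

# Mounting either member device brings up the whole multi-device filesystem
sudo mount /dev/sdb /mnt/data

# Inspect which devices belong to the filesystem
sudo btrfs filesystem show /mnt/data
```

This is the sense in which Btrfs overlaps with LVM plus MD RAID: the pooling and redundancy live inside the filesystem rather than in a separate volume layer.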

YaST states that creating a thin volume is not supported, if I remember rightly.