boot problem after lvm2 update

Hi,

Yesterday I installed the lvm2 update (version lvm2-2.02.98-0.28.5.1.x86_64). After I rebooted the system I had an issue with several filesystems that wouldn't mount. On one of my systems I have several volume groups. All the logical volumes in those other volume groups weren't available. On a system that has only one volume group I get this message when I use an lvm command:
“WARNING: Failed to connect to lvmetad: No such file or directory. Falling back to internal scanning.”

The "fault", it seems, is the change that was made to lvm.conf. With the update, use_lvmetad was changed from 0 to 1, with the purpose of no longer using the lvm2 activation generator (bug 854413). I got my system working by booting with emergency.target and enabling the lvm2-lvmetad service. This helped both for the system that didn't want to boot and for the lvm command giving a warning.
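
For anyone hitting the same thing, enabling it boils down to something like this from the emergency shell (the unit names are the ones I'd expect from the lvm2 package; check with "systemctl list-unit-files | grep lvm2" if they differ on your system):

# start the lvmetad socket/daemon now and enable it for subsequent boots
systemctl start lvm2-lvmetad.socket lvm2-lvmetad.service
systemctl enable lvm2-lvmetad.socket lvm2-lvmetad.service

# activate the volume groups that were left inactive, then continue booting
vgchange -ay
systemctl default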

Now my question is: should lvmetad be started automatically by some other service or by the updated rpm, or did I do something unusual that makes my setup need lvmetad?

Regards,
Arjan de Jong

Please, please, please. ALWAYS tell us which version of openSUSE you use!

We are not clairvoyant.

Hello,

Same problem here. I run openSUSE 13.1 and did a package update yesterday. Today my computer didn't boot anymore because the home LV was not activated on boot (it is in a different VG than the system itself), so the boot process hung after a timeout.
Booted from an emergency stick, found this thread, deactivated use_lvmetad in lvm.conf, and everything is back to normal.
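
For completeness, "deactivated" just means setting use_lvmetad back to 0 in /etc/lvm/lvm.conf; a one-liner like the following should do it, though the exact whitespace in your lvm.conf may differ, so verify the result afterwards:

# revert the update's change so LVM falls back to internal scanning again
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf
grep use_lvmetad /etc/lvm/lvm.conf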

Hi,

Yes, you are absolutely right. I should have named the version I'm running. My apologies.

I’m running OpenSUSE 13.1 64bit.

Regards,
Arjan

I also had OpenSUSE 13.1, updated, and had the system unable to boot with the "failed to connect to lvmetad" message. The system also has a second volume group like yours (which is where it seemed to fail). I'm going to try to reproduce this later in VirtualBox, but it seems like everyone with a second LVM volume group may have been rendered unbootable. :( I'm going to try the lvm.conf trick you suggested.

LVM update was removed from mirrors.

Same here (openSUSE 13.1): after an update 2 days ago which included an lvm update, my 2nd VG failed to mount and that inhibited the boot. The 2nd VG contained the old SUSE 11.4 version, so I could boot into 11.4 and modify /etc/fstab to add the "nofail" option to all non-essential file systems (seems like a good precaution anyway). That restored the boot of 13.1, but all LVs of the failing VG still did not mount. This could be solved by "vgchange -ay VGname", after which all LVs were mounted. I have added the vgchange command to boot.local as a fudge to get around the lvm problems.
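
Concretely, the two workarounds look roughly like this (the device, VG and mount point names below are only placeholders, not my real ones, and the boot.local path is the usual openSUSE one, so adjust as needed):

# /etc/fstab: "nofail" keeps a missing filesystem from blocking the boot
/dev/data_vg/backup_lv  /backup  ext4  defaults,nofail  0  2

# /etc/init.d/boot.local: activate the second VG as a temporary fudge
/sbin/vgchange -ay data_vg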

In fact, this problem was earlier described in https://bugzilla.redhat.com/show_bug.cgi?id=989607

Regards,

Hi
This bug:
https://bugzilla.novell.com/show_bug.cgi?id=862076
and this ML thread:
http://lists.opensuse.org/opensuse/2014-02/msg00103.html

Hello community,

I have the same problem (OS 13.1 64bit). However, my root partition is on an encrypted LVM (not the one causing the problem), so I have some difficulties accessing it from the (12.2) live CD in order to change lvm.conf. I tried the following:

sudo modprobe dm-crypt
sudo cryptsetup luksOpen /dev/sdd cheer

but I get:

 cannot open device sdd

Any suggestions?

sudo modprobe dm-crypt
sudo cryptsetup luksOpen /dev/sda2 cheer

is OK. (Sorry for being too quick with asking, but the thing that it gives me errors about sdd is irritating; I only have three physical HDDs in the box.)
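
In case it helps others with root on an encrypted LVM, the rest of the recovery from the live CD should be something along these lines (the VG/LV names are placeholders; yours will differ):

# open the encrypted container (as above), then look for LVM inside it
sudo modprobe dm-crypt
sudo cryptsetup luksOpen /dev/sda2 cheer
sudo vgscan
sudo vgchange -ay

# mount the root LV and set use_lvmetad back to 0 in its lvm.conf
sudo mount /dev/<vgname>/<root_lv> /mnt
sudo vi /mnt/etc/lvm/lvm.conf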

Are you saying that after accessing the encrypted volume with luksOpen you were then able to edit lvm.conf as mentioned above? I had the same issue hit me last night and was going to try to fix when I get back home.

That is almost certainly wrong. Maybe try “/dev/sdd1”. You would only use “/dev/sdd” if disk is not partitioned at all, and the entire disk is an LVM. That’s unlikely.

It is OK to ask of course (one wants to be sure), but I would say it is not that unlikely (well, everything is a matter of degree). When I use disks through LVM, I see no need to partition them. It looks to me like doing things twice.

hcvv wrote:
> nrickert;2622325 Wrote:
>> That is almost certainly wrong. Maybe try “/dev/sdd1”. You would only
>> use “/dev/sdd” if disk is not partitioned at all, and the entire disk is
>> an LVM. That’s unlikely.
> It is OK to ask of course (one wants to be sure), but I would say it is
> not that unlikely (well, everything is a matter of degree). When I use
> disks through LVM, I see no need to partition them. It looks to me like
> doing things twice.

I’ve been bitten a couple of times by doing that. It makes the LVM
metadata be very vulnerable to overwriting by any low-level software
that is accidentally used (boot s/w, raid s/w, partitioning s/w etc)

IIRC the LVM docs recommend not doing it. I certainly don’t any longer.

On 2014-02-06 11:30, Dave Howorth wrote:

> I’ve been bitten a couple of times by doing that. It makes the LVM
> metadata be very vulnerable to overwriting by any low-level software
> that is accidentally used (boot s/w, raid s/w, partitioning s/w etc)
>
> IIRC the LVM docs recommend not doing it. I certainly don’t any longer.

Sorry, I’m a bit thick today. Recommends using or not using a standard
partition table? I think you mean “using”. :-?

On the mailing list someone was against using a partition table because,
he said, searching for the metadata in case of disaster is easier, as it
is a plain-text string. I have my doubts about this.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

Carlos E. R. wrote:
> On 2014-02-06 11:30, Dave Howorth wrote:
>
>> I’ve been bitten a couple of times by doing that. It makes the LVM
>> metadata be very vulnerable to overwriting by any low-level software
>> that is accidentally used (boot s/w, raid s/w, partitioning s/w etc)
>>
>> IIRC the LVM docs recommend not doing it. I certainly don’t any longer.
>
> Sorry, I’m a bit thick today. Recommends using or not using a standard
> partition table? I think you mean “using”. :-?

I think the LVM docs recommend using a partition table.

> On the mailing list someone was against using a partition table because,
> he said, searching for the metadata in case of disaster is easier, as it
> is a plain-text string. I have my doubts about this.

The pvcreate man page (where I would expect such a recommendation) says:

pvcreate initializes PhysicalVolume for later use by the Logical Volume Manager (LVM). Each PhysicalVolume can be a disk partition, whole disk, meta device, or loopback file. For DOS disk partitions, the partition id should be set to 0x8e using fdisk(8), cfdisk(8), or a equivalent. For whole disk devices only the partition table must be erased, which will effectively destroy all data on that disk. This can be done by zeroing the first sector with:

dd if=/dev/zero of=PhysicalVolume bs=512 count=1

It only explains that a whole disk is one of the possibilities and that there shouldn't be an old partition table on it. IMHO that is a safety measure against accidentally using a still-valid partition table ("valid" and "old" in this case expressing whether the partition table is a leftover from earlier usage or not).

But of course, everybody may value his/her own precautions against typing wrong commands. And a discussion of the pros and cons of doing it might well fill a many-post thread in Soapbox. The only thing I tried to point out is that using the whole disk in this case is perfectly valid and thus not as unlikely as it may seem at first glance.
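
To make the two options concrete, they look roughly like this (sdd and the VG name are only examples):

# whole-disk PV: no partition table, the disk itself is the physical volume
pvcreate /dev/sdd
vgcreate myvg /dev/sdd

# partition-based PV: first create a single partition of type 0x8e (Linux LVM)
# with fdisk or parted, then initialize that partition instead
pvcreate /dev/sdd1
vgcreate myvg /dev/sdd1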

hcvv wrote:
> The pvcreate man page (where I would expect such a recommendation)

OK so you forced me to go check the facts:

http://www.tldp.org/HOWTO/LVM-HOWTO/initdisks.html

"Not Recommended

Using the whole disk as a PV (as opposed to a partition spanning the
whole disk) is not recommended because of the management issues it can
create. Any other OS that looks at the disk will not recognize the LVM
metadata and display the disk as being free, so it is likely it will be
overwritten. LVM itself will work fine with whole disk PVs."

BTW, there’s also currently a longstanding bug that can cause kernel
lockup or data corruption if you do use an entire very big disk for LVM.

Do you have a link to this bug?

arvidjaar wrote:
> Do you have link to this bug?

There’s a thread discussing the issue at

http://lkml.org/lkml/2014/1/30/399