LVM sharing through snapshot

Hi,

I have an LVM logical volume mounted on one server, A. I want to take snapshots of this volume for backups, but I need to mount the snapshot and access it read-only to do the backup on another server, B. The LV is on a SAN accessible by both servers, and both servers run SUSE.
Note that the logical volume is never accessed concurrently (it is only mounted by server A), but, due to the copy-on-write behavior of LVM snapshots, I suspect I need cLVM instead of plain LVM. Am I correct?

If so, is cLVM sufficient, or do I need to install/configure other things? I do not understand how cLVM works or what its dependencies are. In particular, do I need to install crm/pacemaker/drm, or does everything required come along with the lvm2-clvm package?
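
To make it concrete, the workflow I have in mind is roughly this (the volume group and LV names are invented):

    # on server A, where the LV is in use:
    lvcreate -s -L 10G -n backup_snap /dev/vgdata/lvdata

    # on server B, which sees the same SAN LUN:
    mount -o ro /dev/vgdata/backup_snap /mnt/backup
    # ... run the backup from /mnt/backup ...
    umount /mnt/backup

    # back on server A:
    lvremove /dev/vgdata/backup_snap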

Thank you very much in advance for your answers and advice.

Regards,
Olivier

olobry wrote:
> Hi,
>
> I have an LVM logical volume mounted on one server, A. I want to take
> snapshots of this volume for backups, but I need to mount the snapshot
> and access it read-only to do the backup on another server, B. The LV
> is on a SAN accessible by both servers, and both servers run SUSE.
> Note that the logical volume is never accessed concurrently (it is
> only mounted by server A), but, due to the copy-on-write behavior of
> LVM snapshots, I suspect I need cLVM instead of plain LVM. Am I
> correct?
>
> If so, is cLVM sufficient, or do I need to install/configure other
> things? I do not understand how cLVM works or what its dependencies
> are. In particular, do I need to install crm/pacemaker/drm, or does
> everything required come along with the lvm2-clvm package?

I don’t know - I’d never heard of cLVM so thanks for the heads up. After
a quick read, I suspect you may be right. Have you seen:

http://doc.opensuse.org/products/draft/SLE-HA/SLE-ha-guide_sd_draft/cha.ha.clvm.html

> Thank you very much in advance for your answers and advice.
>
> Regards,
> Olivier

Telling us which version of openSUSE you use is basic information when you ask for help :wink:

Hi,

Thanks for the link, I had already glanced at that documentation. The problem is that it is really cluster-oriented, whereas my setup is not. My concern is more about just sharing the metadata, without race conditions on the data (at least from the user’s point of view). Although I’m fairly confident that LVM alone is not sufficient (I just can’t figure out how it could keep the copy-on-write consistent without some kind of distributed mechanism), using tools that ensure consistency across a whole cluster looks like overkill…

Thanks anyway for your help!

Regards
Olivier

Hi,

Well, actually my question was more about concepts and functionality, which is why I didn’t mention them; I’m not sure the versions will help a lot.
Anyway: I have SLES 11 SP1 installed on server A (the one that accesses the data LV) and SLES 11 SP2 on server B (the one that does the backup through the snapshot).

Regards,

Olivier

I’m sorry to reopen such an old thread, but I’d like to know whether the OP ever found an answer to his question. I have not found any solid documentation, only warnings without any details.

In the meantime I built a PoC, and lvm2 seems to share knowledge about snapshots between nodes without any external help.
In detail, I managed to write continuously to a journaled file system on an LV, without any interruption or corruption, while creating a snapshot, mounting it, copying the frozen data elsewhere, unmounting it and removing it, all on a second node sharing the same iSCSI target. I just had to activate the LV on both nodes.
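
For reference, the cycle on the second node looked roughly like this (the volume group and LV names are just examples from my test setup):

    # the shared iSCSI PV is visible on both nodes; the LV is active on both
    vgchange -ay vgtest                              # run on node A and on node B

    # node A keeps writing to its mounted filesystem the whole time;
    # meanwhile, on node B:
    lvcreate -s -L 2G -n lvtest_snap vgtest/lvtest   # take the snapshot
    mount -o ro /dev/vgtest/lvtest_snap /mnt/snap    # mount it read-only
    rsync -a /mnt/snap/ /backup/lvtest/              # copy the frozen data
    umount /mnt/snap
    lvremove -f vgtest/lvtest_snap                   # drop the snapshot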

The next step would be to do this with some real-world LVs with virtual machines on top, with rsnapshot on a second node, periodically… is this a bad idea? If it is, how can I achieve the same thing without building a complete cLVM setup?
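
Concretely, what I have in mind on the backup node is something along these lines (an untested sketch; rsnapshot.conf fields are tab-separated, and the two helper scripts, which would do the lvcreate/mount and umount/lvremove from the test above, are invented names):

    # excerpt of /etc/rsnapshot.conf on the backup node
    snapshot_root   /backup/snapshots/
    retain          daily   7
    cmd_preexec     /usr/local/sbin/vm-snap-mount.sh
    cmd_postexec    /usr/local/sbin/vm-snap-umount.sh
    backup          /mnt/vm-snap/   localhost/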

Well, I may have found it.

The problem is not the snapshots’ inner workings or concurrent access to the data blocks: it’s the LVM metadata. Source: http://community.opennebula.org/shared_lvm
Their solution at OpenNebula was to modify the metadata only on the front-end node.
In my case, that will be the backup machine.

Following their approach, on the other node (the one with all the virtual machines) I’ve blocked metadata changes with this setting in /etc/lvm/lvm.conf:


    # Type of locking to use. Defaults to local file-based locking (1).
    # Turn locking off by setting to 0 (dangerous: risks metadata corruption
    # if LVM2 commands get run concurrently).
    # Type 2 uses the external shared library locking_library.
    # Type 3 uses built-in clustered locking.
    # Type 4 uses read-only locking which forbids any operations that might 
    # change metadata.
    locking_type = 4
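
If I read that comment correctly, this node should still be able to activate and read the LVs, but anything that would rewrite the LVM metadata gets refused here, so all metadata writes stay on the backup machine. Roughly (names invented):

    # on the VM node (locking_type = 4): metadata-changing commands
    # such as lvcreate/lvremove/lvextend should now be rejected
    lvcreate -s -L 2G -n some_snap vgvm/lv_vm1   # expected to fail on this node

    # on the backup node (default locking_type = 1): snapshots are still
    # created, mounted, backed up and removed as before, so only that node
    # ever rewrites the LVM metadata on the shared PV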

So far this has worked; if that changes, I’ll come back and rectify this post.