PM2.0 + HA iSCSI Storage. Interesting situation

eugenyh

Guest
So, I have a PM 2.0 cluster (4 nodes) and an HA iSCSI storage (2 nodes) to keep the VMs on.

The HA iSCSI storage is based on Ubuntu + DRBD (primary/secondary) + iSCSI Enterprise Target (IET) + Heartbeat.
On the storage I have /dev/sdc (about 6T) for the data.

On the storage I created:
PV: on the whole /dev/sdc
VG: "PVELUNSVG0" on this PV
LV: about 128G in size, "lv.lun1", to use as the backing block device for DRBD
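Roughly, the commands on the storage node were something like this (a sketch; only /dev/sdc and the names above are from my setup, the rest is standard LVM usage):

pvcreate /dev/sdc                             # PV on the whole disk
vgcreate PVELUNSVG0 /dev/sdc                  # VG on that PV
lvcreate --name lv.lun1 -L 128G PVELUNSVG0    # LV used as DRBD backing device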

So, in the DRBD config, the block device /dev/drbd1 points to the disk /dev/PVELUNSVG0/lv.lun0
(drbd1, because drbd0 is used for another LV, named "iscsi-metadata", which keeps the IET config).
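The DRBD resource definition looks roughly like this (a sketch, assuming the usual /etc/drbd.d/ layout and internal metadata; the resource name, hostnames and addresses are placeholders, not my real ones):

resource lun1 {
    device     /dev/drbd1;
    disk       /dev/PVELUNSVG0/lv.lun0;
    meta-disk  internal;
    on nas-node-a {
        address 10.0.0.1:7789;
    }
    on nas-node-b {
        address 10.0.0.2:7789;
    }
}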

And finally, iSCSI LUN0 points to /dev/drbd1 as blockio.
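In IET terms that means roughly this in ietd.conf (a sketch; the target IQN is just a placeholder):

Target iqn.2012-01.local.nas01:lun0
    Lun 0 Path=/dev/drbd1,Type=blockio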

After that, on the PM cluster I successfully added the iSCSI target,
then added an LVM group (on my LUN0, 128G) named "NAS01LUN0VG0",
and then added a VM with a disk (vm-100-disk-1) in this group.
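On the PM side this ends up as two entries in /etc/pve/storage.cfg, roughly like the sketch below (portal, IQN and the base volume ID are placeholders, and I am not sure of the exact 2.0 syntax, so check yours):

iscsi: nas01
        portal 10.0.0.10
        target iqn.2012-01.local.nas01:lun0
        content none

lvm: NAS01LUN0VG0
        vgname NAS01LUN0VG0
        base nas01:0.0.0.0.scsi-<lun-id>
        shared 1
        content images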


After all this, on the storage I saw:
pvscan:
Found volume group "PVELUNSVG0" using metadata type lvm2
Found volume group "NAS01LUN0VG0" using metadata type lvm2

Is this normal?
I cannot understand why I can see the "NAS01LUN0VG0" volume group.
Moreover, in lvscan I can see "/dev/NAS01LUN0VG0/vm-100-disk-1"...
 
PS. It seems I got something like "nested LVM"...
Hmmm...
But what to do next?
 
Hehe! Answering my own question. Maybe it will be useful to someone.

What happened here is exactly "nested LVM".

I created a PV, then a local VG, then an LV, then encapsulated it in a DRBD resource and published it as an iSCSI block device.
At the next stage, PM 2.0 used this iSCSI disk as a PV, then made its own VG, and an LV as the VM image.

So, for the local storage, this LV is not just raw data (as I thought before)! The system sees every manipulation made by PM as if I had done it locally, i.e. something like:

pvcreate /dev/drbd1
vgcreate NAS01LUN0VG0 /dev/drbd1
lvcreate --name vm-100-disk-1 -L 128G NAS01LUN0VG0

The local LVM scans the DRBD backing LV, finds the nested PV inside it, and can activate and lock it! That's why, after rebooting the node, my DRBD could not start: its backing device was already held by the nested VG.

So! To avoid this, I corrected my lvm.conf:

filter = ["a|sd.*|", "r|.*|"] # accept only /dev/sdX and reject others

write_cache_state = 0

and

deleted /etc/lvm/cache/.cache. See details at: http://www.drbd.org/users-guide/s-nested-lvm.html
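To check the result on the storage nodes (a quick sketch, assuming the default cache location):

rm /etc/lvm/cache/.cache
pvscan    # must now only find PVELUNSVG0, not the nested NAS01LUN0VG0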

For now everything is working fine:
DRBD is working on both nodes.

Test:
After rebooting the primary node, the secondary starts working as primary, and the VM keeps working.
After the first node is back online, you need to wait for DRBD to sync, and then reboot the second node. Hehe! The VM keeps working anyway :D
The first node returns to work as primary, as it was at the beginning.
 
Hi,
if you don't use the complete disk in the VM for LVM, this doesn't happen.
Simply use parted inside the VM and use the partition instead of the whole disk - then the LVM on the host doesn't see the VM's LVs.
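For example, inside the guest it could look roughly like this (a sketch; /dev/vdb stands for whatever the data disk is called in the VM, and the VG name is just an example):

parted -s /dev/vdb mklabel msdos
parted -s /dev/vdb mkpart primary 1MiB 100%
pvcreate /dev/vdb1          # PV on the partition, not on the whole disk
vgcreate data_vg /dev/vdb1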

Udo
 
