SOLVED: Using clvmd in KVMs disrupts Proxmox frontend Storage view

WaltervdSchee

First off, let's talk about what I'm doing.
Running two Dell M610 blades as a Proxmox VE 2.1 cluster with iSCSI shared storage + a quorum disk for HA.

I've created two KVMs with CentOS, and DRBD-replicated storage with GFS2 + a Red Hat/CentOS cluster inside those KVMs.
This is to replicate in KVM what we are going to deploy on bare-metal later on.

However, since I've created the clvmd-based clustered GFS2 logical volume, this interferes with viewing the 'Contents' tabs in the storage views on the Proxmox frontends. It gives a grayed-out display with a message like 'exec /sbin/lvs --noheaders ............' exit status 500'.

It doesn't interfere with the working of the Proxmox cluster or the VMs, though.
Running the complete command at the command prompt gives a message about 'skipping Clustered LVM storage'. I'm guessing this is what trips up the parsing code of the Contents view.
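For reference, running a plain lvs by hand on the host shows something like this (the exact message wording depends on the LVM version, and the VG name here is just an example):

# on the Proxmox host, not inside the guest
lvs
#   Skipping clustered volume group vg_gfs
# the clustered VG created inside the guest disk is detected but skipped,
# which appears to be what the Contents view parser chokes on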

Is there a way that I can still use clvmd/RHCS, but also have full view capabilities in the Proxmox VE frontend?

BTW: The platform absolutely rocks and is as stable as anything I've seen thus far!
 
Re: Using clvmd in KVMs disrupts Proxmox frontend Storage view

Hi Walter,
I guess you use the VM HDD inside the VM without partitioning, so that the host sees the LVM information inside the LV?
If you create a partition inside the VM and use e.g. sdb1 instead of sdb, the host should not see the content.
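Roughly like this inside the guest, assuming the extra disk shows up as /dev/vdb (the device name is just an example):

# inside the CentOS guest: create one partition spanning the disk
fdisk /dev/vdb    # n (new), p (primary), 1, accept defaults, w (write)
# then build DRBD/cLVM on /dev/vdb1 instead of the whole /dev/vdb,
# so the host only sees a partition table, not the LVM metadata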

Udo
 
Re: Using clvmd in KVMs disrupts Proxmox frontend Storage view

Udo,

Thanks for your reply. Indeed, I'm using cLVM directly on the raw (DRBD-replicated) (v)hdd.

So, basically, if I create a partition /dev/vdb1, use that as my DRBD0 backing device inside my VMs, and create the cLVM on top of the DRBD0 device,
the host should then not see the VG/LV?
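In drbd.conf terms I mean something roughly like this (resource name, hostnames and addresses are made up for illustration):

resource r0 {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vdb1;     # the partition, not the whole /dev/vdb
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}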
 
Re: Using clvmd in KVMs disrupts Proxmox frontend Storage view

Udo,

I've now created a partition in my KVM (v)hdd as /dev/vdb1, but unfortunately it didn't help.
Then I thought, wait a second, I didn't remove the vdisks, so the old LVM metadata is still on there....

I removed and recreated the disks, and now it works!
A clustered GFS2 filesystem over two KVMs with DRBD sync, and a working 'Contents' tab on the Proxmox VE frontend.

Thanks Udo !
(And of course, now that I think about why it happened in the first place, it's actually very logical....)
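For anyone finding this later, the clustered LVM/GFS2 layer on top of DRBD goes roughly like this (VG/LV names, cluster name and journal count are placeholders; clvmd and the cluster stack must already be running, with DRBD in dual-primary):

# on one node, with /dev/drbd0 up
pvcreate /dev/drbd0
vgcreate -cy vg_gfs /dev/drbd0            # -cy marks the VG as clustered
lvcreate -l 100%FREE -n lv_gfs vg_gfs
mkfs.gfs2 -p lock_dlm -t mycluster:gfsvol -j 2 /dev/vg_gfs/lv_gfs
# then mount on both nodes
mount -t gfs2 /dev/vg_gfs/lv_gfs /mnt/gfs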
 
Re: Using clvmd in KVMs disrupts Proxmox frontend Storage view

Udo,

I've now created a partition in my KVM (v)hdd as /dev/vdb1, but unfortunately it didn't help.
Then I thought, wait a second, I didn't remove the vdisks, so the old LVM metadata is still on there....
...
Hi Walter,
with "pvremove" it's possible to blank the lvm-data from a existing disks - so you don't have to recreate the disks... (for the next one :p ).

Udo