[SOLVED] VG-PV from VM visible on PVE / Volume Communication Failure

Romain B

Active Member
Jan 7, 2019
Hello,
After creating two VMs (263 and 264) and adding a disk to each, these two disks now show up as a PV and a VG on the nodes:
(screenshot attachment: Cap1.png)

This causes problems when loading volumes in PVE and when migrating disks:
(screenshot attachment: Cap2.png)

The VMs themselves keep working, though.

Here are the commands launched on the VM to integrate the disks:
Code:
  pvdisplay
  df -h
  vgdisplay
  pvs
  vgdisplay
  pvdisplay
  fdisk -l
  pvcreate /dev/sdb
  pvdisplay
  vgdisplay
  vgcreate backup /dev/sdb
  vgdisplay
  lvdisplay
  lvcreate -n backup -L 20G backup
  lvdisplay
  df -h
  mkfs.ext4 /dev/backup/backup
  resize2fs /dev/backup/backup
  fdisk -l
  df -h
  vi /etc/fstab
  mount -a
  df -h

I don't understand why I see the VG and PV on the nodes; the commands were run inside the VM and should be independent of PVE. Do you have an explanation?
Thank you

Romain B.
Proxmox Virtual Environment 5.1-41
 
You have a duplicate VG 'backup'; this is bad and should not happen.

I guess what happened is that you have multiple VMs with the same VG name, and LVM on the host detects this.

You can add the disks/volumes to the LVM blacklist so that the host does not scan the guest VGs -> /etc/lvm/lvm.conf
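
For example, a minimal sketch of such a filter (the reject patterns are only examples and need to be adapted to wherever your VM disks actually live; "r|...|" rejects matching devices so the host won't scan them):

Code:
  # /etc/lvm/lvm.conf -- sketch only, patterns are examples
  devices {
      # don't scan ZFS zvols or LVs of the local 'pve' VG that back VM disks
      global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" ]
  }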
 
I am in the same situation, my output:

Code:
  root@hyper2:/etc/lvm# pvs
    PV                        VG      Fmt  Attr PSize   PFree
    /dev/DLOW1/vm-1009-disk-2 cloud   lvm2 a--  800.00g     0
    /dev/DLOW1/vm-1009-disk-3 cloud   lvm2 a--  420.00g     0
    /dev/DLOW1/vm-1011-disk-0 hamail  lvm2 a--  780.00g     0
    /dev/DLOW1/vm-1012-disk-1 hamail  lvm2 a--  770.28g     0
    /dev/mapper/DFAST1        vDFAST1 lvm2 a--    1.50t 1.40t
    /dev/mapper/DLOW1         DLOW1   lvm2 a--    6.64t 3.24t

my lvm.conf:

Code:
  global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/hamail-hamail--.*|", "r|/dev/mapper/cloud-cloud--.*|" ]

I still constantly get the communication failure (0) error.
 
try adding '/dev/DLOW1/.*' to your global_filter?
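
Based on the filter you posted above, the resulting line would look something like this (a sketch; keep your existing entries and just append the new reject pattern):

Code:
  global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/DLOW1/.*|", "r|/dev/mapper/hamail-hamail--.*|", "r|/dev/mapper/cloud-cloud--.*|" ]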
 
I only have one doubt: when making the change in the lvm.conf file, is it necessary to restart the server? And also: DLOW1 and DFAST1 are shared storage over iSCSI, added with LVM; could this change affect my storage?
 
I only have one doubt: when making the change in the lvm.conf file, is it necessary to restart the server?
AFAIR the config change should be picked up by each invocation of lvs, so at least you'll see if it helps. Rebooting during a quiet moment is still recommended, so that you know your changes don't have negative side-effects.

DLOW1 and DFAST1 are shared storage over iSCSI, added with LVM; could this change affect my storage?
Without knowing your exact setup that's hard to say with certainty... but:
If you just added 2 LUNs to your host, configured them as LVM on the host (pvcreate, vgcreate) and added them as LVM storages to your `storage.cfg`, then it should make no difference that they are not local disks. If this is shared storage for a PVE cluster, make sure to change the global_filter on all cluster nodes. Please also read the comments in '/etc/lvm/lvm.conf' - they explain the effects of global_filter quite well.
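
For reference, a hypothetical entry for such a shared LVM-over-iSCSI storage in `storage.cfg` could look like this (storage ID and VG name are assumptions based on your output; `shared 1` marks it as shared across the cluster):

Code:
  # /etc/pve/storage.cfg -- hypothetical sketch, names assumed
  lvm: DLOW1
          vgname DLOW1
          content images
          shared 1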

hope this helps!
 

Let me explain my setup a bit: I have a FreeNAS box serving iSCSI and I see two volumes (DLOW1 and DFAST1), with multipath configured for both. My doubt is: what would the syntax be for these two volumes in lvm.conf?

Code:
  root@node-srv3:~# lvm
  lvm> vgs
    VG      #PV #LV #SN Attr   VSize   VFree
    DLOW1     1  18   0 wz--n-   6.64t 3.24t
    cloud     2   1   0 wz--n-   1.19t     0
    hamail    1   1   0 wz--n- 780.00g     0
    hamail    1   1   0 wz--n- 770.28g     0
    vDFAST1   1   4   0 wz--n-   1.50t 1.40t
  lvm> pvs
    PV                        VG      Fmt  Attr PSize   PFree
    /dev/DLOW1/vm-1009-disk-2 cloud   lvm2 a--  800.00g     0
    /dev/DLOW1/vm-1009-disk-3 cloud   lvm2 a--  420.00g     0
    /dev/DLOW1/vm-1011-disk-0 hamail  lvm2 a--  780.00g     0
    /dev/DLOW1/vm-1012-disk-1 hamail  lvm2 a--  770.28g     0
    /dev/mapper/DFAST1        vDFAST1 lvm2 a--    1.50t 1.40t
    /dev/mapper/DLOW1         DLOW1   lvm2 a--    6.64t 3.24t
 
as said:
try adding '/dev/DLOW1/.*' to your global_filter

The idea here is that you don't want lvm/pvs/vgs to scan disk-devices which are LVs on your existing VGs - otherwise all VGs configured inside guests would show up in the hypervisor.

AFAIK there should be no specifics w.r.t. iSCSI or multipath with that regular expression
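
To make the multipath aspect explicit: the reject pattern only matches the LV device nodes under /dev/DLOW1/ (i.e. the guest disks), not the multipath devices /dev/mapper/DLOW1 and /dev/mapper/DFAST1 that carry your host VGs, so those stay visible. A quick sanity check after editing the file (not a definitive recipe) could be:

Code:
  pvs   # the /dev/DLOW1/vm-*-disk-* entries should disappear; the /dev/mapper/* PVs remain
  vgs   # the guest VGs (cloud, the two hamail) should no longer be listed on the host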
 
