LVM Volume Groups with the same name

Bryn Ellis

New Member
Jun 17, 2018
I am creating multiple VMs on my single-node PVE host and I've hit a problem today: everything was hanging in the web console, and 'vgdisplay' hung for more than 3 minutes before completing.

Looking into it, it seems that having LVM volume groups with the same name on each VM is the issue. Each VM has a VG called 'swap' and another called 'docker'.

I've renamed them now so that they are unique, and that has made everything in the web console work fine again. However, that is a bit of an issue for me: to make my installation of OpenShift easier, I have a script that runs on all the VMs to set up the Docker storage and configure swap, and it relies on the volume groups all having the same names.

Is this a bug or a 'feature', and is there some way to make it work properly so that every VM can keep its volume groups named 'swap' and 'docker'?

MORE DETAIL:
Below you can see where I've renamed the swap and docker VGs to be unique (in the VG column), but I need them to be the same:

Code:
root@pvenode1:~# pvs
  PV                           VG        Fmt  Attr PSize   PFree
  /dev/sda3                    pve       lvm2 a--   68.08g  8.50g
  /dev/sdb                     vg_vmdata lvm2 a--  820.02g 19.82g
  /dev/vg_vmdata/vm-101-disk-2 swap      lvm2 a--    4.00g      0
  /dev/vg_vmdata/vm-101-disk-3 docker    lvm2 a--   32.00g  1.53g
  /dev/vg_vmdata/vm-102-disk-2 swap10    lvm2 a--   10.00g      0
  /dev/vg_vmdata/vm-102-disk-3 docker64  lvm2 a--   64.00g  3.07g


Code:
root@pvenode1:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-3
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9

 
you can edit the file '/etc/lvm/lvm.conf' on your host and customize the 'global_filter'
this is a list of patterns for devices that lvm should not scan

by default this excludes the default 'pve-.*' volume groups, but you can add yours there
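for reference, the stock filter on a default PVE install looks roughly like this (the exact entries may vary between versions; 'r|...|' patterns reject matching devices):

Code:
# /etc/lvm/lvm.conf -- 'global_filter' lives in the devices { } section
# reject ZFS zvols and the local pve-* logical volumes from scanning
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" ]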
 
Thanks Dominik! That could be a huge timesaver for me. I'll hopefully get a chance to try it out today.
I've not worked with LVM much in the past, other than basic creating and extending, so could you tell me what issues I might run into if my VGs aren't scanned? What does the scanning do?
 
the scanning is only relevant if you want to use the vg on the host, but since these actually get used inside the vms, scanning them on the host is useless
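you can see this on your host: the guest-owned vgs only show up because lvm reads the pv labels inside the guest disk images, e.g. something like:

Code:
root@pvenode1:~# pvs -o pv_name,vg_name | grep 'vm-'
  /dev/vg_vmdata/vm-101-disk-2 swap
  /dev/vg_vmdata/vm-101-disk-3 docker
  /dev/vg_vmdata/vm-102-disk-2 swap10
  /dev/vg_vmdata/vm-102-disk-3 docker64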
 
OK, perfect! Thanks very much Dominik, and thanks for the really quick response. Have a great day.
 
I tried this today and don't seem to be able to get it to work.

Code:
root@pvenode1:~# ls /dev/mapper
control                    pve-data        pve-root             vg_vmdata-lv_vmdata_tdata   vg_vmdata-vm--101--disk--1  vg_vmdata-vm--102--disk--2
docker-docker--pool        pve-data_tdata  pve-swap             vg_vmdata-lv_vmdata_tmeta   vg_vmdata-vm--101--disk--2  vg_vmdata-vm--102--disk--3
docker-docker--pool_tdata  pve-data_tmeta  swap-swap            vg_vmdata-lv_vmdata-tpool   vg_vmdata-vm--101--disk--3  vg_vmdata-vm--103--disk--1
docker-docker--pool_tmeta  pve-data-tpool  vg_vmdata-lv_vmdata  vg_vmdata-vm--100--disk--1  vg_vmdata-vm--102--disk--1

The following command took almost 5 minutes to return:
Code:
root@pvenode1:~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  docker      1   1   0 wz--n-  64.00g  3.07g
  docker      1   1   0 wz--n-  32.00g  1.53g
  pve         1   3   0 wz--n-  68.08g  8.50g
  swap        1   1   0 wz--n-  10.00g     0
  swap        1   1   0 wz--n-   4.00g     0
  vg_vmdata   1   9   0 wz--n- 820.02g 19.82g

My global_filter in lvm.conf looks like this:

global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/docker-docker--.*|", "r|/dev/mapper/swap-.*|" ]

I've rebooted the server but it still doesn't work. All the LVM commands like vgs and vgdisplay hang for ages before returning.

Have I done something wrong?
 
you should not exclude the vgs from inside the guests, but the one on the host containing the guest disks (that is the one lvm is scanning)
so i guess you should add:
Code:
"r|/dev/mapper/vg_vmdata-.*|"

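putting that together with the stock entries, the whole line in '/etc/lvm/lvm.conf' would presumably become (dropping the guest-vg patterns, which are no longer needed once the host vg is filtered):

Code:
# reject zvols, the local pve-* LVs, and all LVs under vg_vmdata,
# so lvm never scans the guest disks and never sees the duplicate VGs
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/vg_vmdata-.*|" ]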
 
My exclusion previously posted was on the PVE host.
 
yes, but you blacklisted the vgs from 'inside the guests'

you want to blacklist the vg containing the guest disks, so that lvm does not detect the duplicate vgs, see my previous post
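once that filter is in place, a quick check should confirm that only the host's own vgs are left, e.g. (illustrative, values taken from the earlier output):

Code:
root@pvenode1:~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  pve         1   3   0 wz--n-  68.08g  8.50g
  vg_vmdata   1   9   0 wz--n- 820.02g 19.82g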
 
