Duplicate VG names

swildig

Apr 27, 2022
Hi,

I expanded a partition on one of our VM storage arrays after a RAID rebuild, and since doing so we're getting this warning:

Code:
  WARNING: VG name vgubuntu is used by VGs CMKuvB-gshG-Tpz4-I6xu-U8wb-6ZJ8-BipGjK and aUDujs-z23s-SEOU-KVLE-ShgZ-r0DM-XgcNmf.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.

It looks like the volume groups from inside the VMs are now showing up at the Proxmox level, which was not happening before. How would we go about stopping these volume groups from appearing and triggering the warning?

Any help is greatly appreciated!

Here's the disk layout; /dev/sdc3 is the partition that has been expanded.
[screenshot: disk layout]

Here's the Proxmox view of the LVM volumes:
[screenshot: Proxmox LVM view]
 
I am not sure why you only now started seeing this warning, but it is expected. The disks that you pass through to VMs are still seen by the hypervisor itself, so when it scans for volume groups it naturally detects the default-name collisions across the various VMs.

While not critical, you can suppress the messages by carefully adding an LVM filter line to /etc/lvm/lvm.conf.
It seems some work was done in that area, but it did not make it into a release for some reason: https://lists.proxmox.com/pipermail/pve-devel/2016-August/022422.html
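
The default Proxmox setup already rejects ZFS zvols this way; as a rough sketch, the relevant piece of /etc/lvm/lvm.conf looks like this (adjust the reject patterns to whichever devices carry your guest PVs):

Code:
# /etc/lvm/lvm.conf -- sketch only; adjust the reject patterns to your environment
devices {
    # "r|...|" rejects matching devices, so the host never scans the guest PVs on them
    global_filter = [ "r|/dev/zd.*|" ]
}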


 
we actually set global_filter to filter out zvols and set scan_lvs to 0 to skip scanning all nested LVM ;) Which version are you on?
 
We're on version 7.0-8.

I ran a partprobe and a device rescan via /sys/class/block/sdc/device/rescan to update the device size after the RAID rebuild, so I wonder if that picked up the nested LVMs somehow?
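
Roughly, this is what I ran (from memory, so the exact invocation may have differed slightly):

Code:
# tell the SCSI layer to re-read the device size after the RAID rebuild
echo 1 > /sys/class/block/sdc/device/rescan
# re-read the partition table so the kernel sees the grown partition
partprobe /dev/sdc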
 
you can check what's in lvm.conf - e.g.:

Code:
$ lvmconfig --typeconfig full devices/scan_lvs devices/global_filter
scan_lvs=0
global_filter=["r|/dev/zd.*|"]
 
It's got:

Code:
$ lvmconfig --typeconfig full devices/scan_lvs devices/global_filter
scan_lvs=0
global_filter="r|/dev/zd.*|"
 
so then something must have triggered VG activation, bypassing the regular filters/settings... you can check the logs around the time you did the partprobe - maybe something interesting shows up there?
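
For example (hypothetical time window - adjust it to when the partprobe was run):

Code:
# show journal entries from around the rescan and grep for LVM activity
journalctl --since "2022-04-27 15:20" --until "2022-04-27 15:30" | grep -iE 'lvm|pvscan'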
 
Just found the below in syslog.

Code:
Apr 27 15:23:24 vigilante systemd[1]: Starting LVM event activation on device 253:52...
Apr 27 15:23:24 vigilante lvm[1282804]:   pvscan[1282804] PV /dev/mapper/pve--ssd-vm--107--disk--0p2 online, VG centos is complete.
Apr 27 15:23:24 vigilante lvm[1282804]:   pvscan[1282804] VG centos run autoactivation.
Apr 27 15:23:24 vigilante lvm[1282804]:   PVID kUGkD4-VFpC-FbTl-JDAm-opt9-nqrc-gJAw5O read from /dev/mapper/pve--ssd-vm--107--disk--0p2 last written to /dev/sda2.
Apr 27 15:23:24 vigilante lvm[1282804]:   pvscan[1282804] VG centos not using quick activation.
Apr 27 15:23:24 vigilante lvm[1282804]:   3 logical volume(s) in volume group "centos" now active
Apr 27 15:23:24 vigilante systemd[1]: Finished LVM event activation on device 253:52.
Apr 27 15:23:24 vigilante systemd[1]: Starting LVM event activation on device 253:58...
Apr 27 15:23:24 vigilante lvm[1282845]:   pvscan[1282845] PV /dev/mapper/pve--hdd-vm--106--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:24 vigilante lvm[1282845]:   pvscan[1282845] VG vgubuntu run autoactivation.
Apr 27 15:23:24 vigilante lvm[1282845]:   PVID 8RC9xv-1xgc-yQh4-3o5b-PDZI-uAns-mids13 read from /dev/mapper/pve--hdd-vm--106--disk--0p5 last written to /dev/sda5.
Apr 27 15:23:24 vigilante lvm[1282845]:   pvscan[1282845] VG vgubuntu not using quick activation.
Apr 27 15:23:24 vigilante lvm[1282845]:   2 logical volume(s) in volume group "vgubuntu" now active
Apr 27 15:23:25 vigilante systemd[1]: Finished LVM event activation on device 253:58.
Apr 27 15:23:25 vigilante systemd[1]: Starting LVM event activation on device 253:63...
Apr 27 15:23:25 vigilante lvm[1282879]:   pvscan[1282879] PV /dev/mapper/pve--hdd-vm--105--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:25 vigilante lvm[1282879]:   pvscan[1282879] VG vgubuntu skip autoactivation.
Apr 27 15:23:25 vigilante systemd[1]: Finished LVM event activation on device 253:63.
Apr 27 15:23:25 vigilante systemd[1]: Starting LVM event activation on device 253:73...
Apr 27 15:23:25 vigilante lvm[1282955]:   pvscan[1282955] PV /dev/mapper/pve--hdd-vm--119--disk--0p2 online, VG SangomaVG is complete.
Apr 27 15:23:25 vigilante lvm[1282955]:   pvscan[1282955] VG SangomaVG run autoactivation.
Apr 27 15:23:25 vigilante lvm[1282955]:   PVID h5YHiQ-HOwF-eD2r-nCxb-Vt2V-XYY7-Jf7Dif read from /dev/mapper/pve--hdd-vm--119--disk--0p2 last written to /dev/sda2.
Apr 27 15:23:25 vigilante lvm[1282955]:   pvscan[1282955] VG SangomaVG not using quick activation.
Apr 27 15:23:25 vigilante lvm[1282955]:   WARNING: VG name vgubuntu is used by VGs TlkN9H-kn0j-sQD8-E6Sh-2d5n-GXmq-f0fQnU and aUDujs-z23s-SEOU-KVLE-ShgZ-r0DM-XgcNmf.
Apr 27 15:23:25 vigilante lvm[1282955]:   Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Apr 27 15:23:25 vigilante lvm[1282955]:   2 logical volume(s) in volume group "SangomaVG" now active
Apr 27 15:23:25 vigilante systemd[1]: Finished LVM event activation on device 253:73.
Apr 27 15:23:26 vigilante systemd[1]: Starting LVM event activation on device 253:80...
Apr 27 15:23:26 vigilante lvm[1283011]:   pvscan[1283011] PV /dev/mapper/pve--hdd-vm--111--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:26 vigilante lvm[1283011]:   pvscan[1283011] VG vgubuntu skip autoactivation.
Apr 27 15:23:26 vigilante systemd[1]: Finished LVM event activation on device 253:80.
Apr 27 15:23:26 vigilante systemd[1]: Starting LVM event activation on device 253:89...
Apr 27 15:23:26 vigilante lvm[1283079]:   pvscan[1283079] PV /dev/mapper/pve--ssd-vm--122--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:26 vigilante lvm[1283079]:   pvscan[1283079] VG vgubuntu skip autoactivation.
Apr 27 15:23:26 vigilante systemd[1]: Starting LVM event activation on device 253:92...
Apr 27 15:23:26 vigilante lvm[1283107]:   pvscan[1283107] PV /dev/mapper/pve--ssd-vm--120--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:26 vigilante lvm[1283107]:   pvscan[1283107] VG vgubuntu skip autoactivation.
Apr 27 15:23:26 vigilante systemd[1]: Finished LVM event activation on device 253:89.
Apr 27 15:23:26 vigilante systemd[1]: Finished LVM event activation on device 253:92.
Apr 27 15:23:26 vigilante systemd[1]: Starting LVM event activation on device 253:95...
Apr 27 15:23:26 vigilante lvm[1283141]:   pvscan[1283141] PV /dev/mapper/pve--ssd-vm--118--disk--0p5 online, VG ubuntu-vg is complete.
Apr 27 15:23:26 vigilante lvm[1283141]:   pvscan[1283141] VG ubuntu-vg run autoactivation.
Apr 27 15:23:26 vigilante lvm[1283141]:   PVID 857Rte-lsed-G5Fd-QMDP-zDqc-LT7e-KIZOIZ read from /dev/mapper/pve--ssd-vm--118--disk--0p5 last written to /dev/sda5.
Apr 27 15:23:26 vigilante lvm[1283141]:   pvscan[1283141] VG ubuntu-vg not using quick activation.
Apr 27 15:23:26 vigilante lvm[1283141]:   WARNING: VG name vgubuntu is used by VGs TlkN9H-kn0j-sQD8-E6Sh-2d5n-GXmq-f0fQnU and aUDujs-z23s-SEOU-KVLE-ShgZ-r0DM-XgcNmf.
Apr 27 15:23:26 vigilante lvm[1283141]:   Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Apr 27 15:23:26 vigilante lvm[1283141]:   WARNING: VG name vgubuntu is used by VGs N06OOE-fme8-JeM8-PnoY-WosT-Dm8s-N6fymE and aUDujs-z23s-SEOU-KVLE-ShgZ-r0DM-XgcNmf.
Apr 27 15:23:26 vigilante lvm[1283141]:   Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Apr 27 15:23:26 vigilante lvm[1283141]:   WARNING: VG name vgubuntu is used by VGs hi0IrQ-RLci-Y5Ej-QJzP-Jl20-UHY0-MsBek3 and aUDujs-z23s-SEOU-KVLE-ShgZ-r0DM-XgcNmf.
Apr 27 15:23:26 vigilante lvm[1283141]:   Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Apr 27 15:23:26 vigilante lvm[1283141]:   WARNING: VG name vgubuntu is used by VGs wQuavO-8iBy-7VC2-mEKU-3kWN-IS0x-bzD9Mt and aUDujs-z23s-SEOU-KVLE-ShgZ-r0DM-XgcNmf.
Apr 27 15:23:26 vigilante lvm[1283141]:   Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Apr 27 15:23:26 vigilante lvm[1283141]:   WARNING: PV /dev/mapper/pve--ssd-vm--118--disk--0p5 in VG ubuntu-vg is using an old PV header, modify the VG to update.
Apr 27 15:23:26 vigilante lvm[1283141]:   2 logical volume(s) in volume group "ubuntu-vg" now active
Apr 27 15:23:26 vigilante systemd[1]: Finished LVM event activation on device 253:95.
Apr 27 15:23:26 vigilante systemd[1]: Starting LVM event activation on device 253:103...
Apr 27 15:23:26 vigilante lvm[1283205]:   pvscan[1283205] PV /dev/mapper/pve--ssd-vm--116--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:26 vigilante lvm[1283205]:   pvscan[1283205] VG vgubuntu skip autoactivation.
Apr 27 15:23:27 vigilante systemd[1]: Starting LVM event activation on device 253:110...
Apr 27 15:23:27 vigilante systemd[1]: Finished LVM event activation on device 253:103.
Apr 27 15:23:27 vigilante lvm[1283260]:   pvscan[1283260] PV /dev/mapper/pve--ssd-vm--114--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:27 vigilante lvm[1283260]:   pvscan[1283260] VG vgubuntu skip autoactivation.
Apr 27 15:23:27 vigilante systemd[1]: Finished LVM event activation on device 253:110.
Apr 27 15:23:27 vigilante systemd[1]: Starting LVM event activation on device 253:138...
Apr 27 15:23:27 vigilante lvm[1283456]:   pvscan[1283456] PV /dev/mapper/pve--ssd-vm--101--disk--0p5 online, VG vgubuntu is complete.
Apr 27 15:23:27 vigilante lvm[1283456]:   pvscan[1283456] VG vgubuntu skip autoactivation.
Apr 27 15:23:27 vigilante systemd[1]: Finished LVM event activation on device 253:138.
 
that is strange and unexpected! I'll try to reproduce this...
 
Just checked the LVM volumes again and the VM-level ones have all gone!

Not sure what changed to make them appear/disappear as I didn't change anything.

Thanks for your help!
 
Same issue here:
Code:
root@pve223:~# lvs
  WARNING: VG name SangomaVG is used by VGs lgBrPv-szSW-YzKD-Z4zc-IRjR-jwOD-cwqoYW and aRJ1Iy-dvZz-1tN3-yUlU-XaL3-45tj-wwB2ZI.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: Not using device /dev/rbd23p2 for PV 0MTrCq-QvZ4-3mK2-ayIS-EBU3-TEOt-mcLf4U.
  WARNING: Not using device /dev/rbd24p2 for PV 150HMk-pOEh-XtQV-3Rce-ozKx-easd-vMfcvz.
  WARNING: Not using device /dev/rbd25p2 for PV 150HMk-pOEh-XtQV-3Rce-ozKx-easd-vMfcvz.
  WARNING: PV 0MTrCq-QvZ4-3mK2-ayIS-EBU3-TEOt-mcLf4U prefers device /dev/rbd22p2 because device was seen first.
  WARNING: PV 150HMk-pOEh-XtQV-3Rce-ozKx-easd-vMfcvz prefers device /dev/rbd21p2 because device was seen first.
  WARNING: PV 150HMk-pOEh-XtQV-3Rce-ozKx-easd-vMfcvz prefers device /dev/rbd21p2 because device was seen first.
  WARNING: PV /dev/rbd11p2 in VG vg_ideafix is using an old PV header, modify the VG to update.
  LV                                             VG                                        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                                           SangomaVG                                 -wi------- <25.50g
  root                                           SangomaVG                                 -wi------- <25.50g
  swaplv1                                        SangomaVG                                 -wi------- <3.20g
  swaplv1                                        SangomaVG                                 -wi------- <3.20g
  osd-block-db697f8c-8b96-4883-a28a-fc5b7a8aadb9 ceph-1b8d1861-db6c-4656-b213-860acfcc1875 -wi-ao---- 558.91g
  osd-block-66c04bd4-3b1f-4f0f-af20-193892d0518c ceph-443616cb-d26e-4b09-9e4d-6a50ac2222e2 -wi-ao---- 558.91g
  osd-block-3eed86ca-08ec-484f-b919-540f81ae915e ceph-5ee1677a-a81a-4d56-924c-c1a4055209b0 -wi-ao---- 558.91g
  osd-block-04e0b0ae-014a-4516-b029-2b33bc4f7807 ceph-61ae2d98-3f26-4066-8262-f881045b7288 -wi-ao---- <558.79g
  osd-block-467a389d-3848-412b-9625-8c3ab20c861f ceph-678274f7-2b24-41b7-b77c-fe9163e8392d -wi-ao---- 558.91g
  osd-block-6b17ad80-4c7f-483c-b720-0658bdea8a7a ceph-686706f2-a5b4-4f28-8eb7-b3addca917b7 -wi-ao---- <558.79g
  osd-block-9ebc182a-a0a9-4c14-a625-cd8112c72cba ceph-8246166e-016b-49e2-a282-e0d0255cdead -wi-ao---- 558.91g
  osd-block-db8cec0b-ce8d-4604-9c40-ca725aa4cd04 ceph-9f5dde4c-0a02-4299-82f0-1d7a31672479 -wi-ao---- 558.91g
  osd-block-9354b8e6-1fbd-4149-8239-5c4a4c3a4278 ceph-a684ac58-2992-4227-afc7-44738d6e79bc -wi-ao---- 2.91t
  osd-block-d07c4ba6-6eff-41b7-a31e-de2dd16a6a6c ceph-a861a755-e5b1-4e76-adea-8ddde3d2ee3e -wi-ao---- 558.91g
  osd-block-5f236e18-428f-4de3-8dcb-67cff91fc4d8 ceph-be26e116-78a8-4c5e-87da-510be6edaeb7 -wi-ao---- 558.91g
  osd-block-fc0e46e6-93a7-4f39-8fb5-a222cd21303f ceph-ee067b07-cec8-487a-b6b4-e0d9192a2915 -wi-ao---- 558.91g
  root                                           pve                                       -wi-ao---- <29.50g
  lv_home                                        vg_ideafix                                -wi------- 25.63g
  lv_root                                        vg_ideafix                                -wi------- 50.00g
  lv_swap                                        vg_ideafix                                -wi------- <3.88g

Code:
root@pve223:~# pveversion -s
Unknown option: s
USAGE: pveversion [--verbose]
root@pve223:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-6-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
proxmox-kernel-6.2: 6.2.16-8
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
pve-kernel-6.2.16-4-bpo11-pve: 6.2.16-4~bpo11+1
pve-kernel-6.2.11-2-pve: 6.2.11-2
ceph: 17.2.6-pve1+3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.4
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.7
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
 
you need to exclude guest volumes that contain PVs from being used on the host - by default, zvols and (nested) LVM volume groups are excluded; in your case you likely also want to exclude RBD devices (see the devices.global_filter line in /etc/lvm/lvm.conf)
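
A minimal sketch of such a filter, keeping the stock zvol rule and additionally rejecting RBD devices (adjust the patterns to your environment):

Code:
# /etc/lvm/lvm.conf -- sketch; reject zvols and RBD-backed guest disks
devices {
    global_filter = [ "r|/dev/zd.*|", "r|/dev/rbd.*|" ]
}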
 
Thanks!

This worked for me:
Code:
devices {
    # added by pve-manager to avoid scanning ZFS zvols
    global_filter=["r|/dev/zd.*|", "r|/dev/rbd.*|"]
}
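
The change can be double-checked with the lvmconfig command mentioned earlier in the thread:

Code:
# verify the running LVM configuration includes the rbd rule
lvmconfig --typeconfig full devices/global_filter
# re-run lvs - the duplicate-VG and rbd warnings should be gone
lvs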
 
