WARNING: Duplicate VG name pve

degrootm

Renowned Member
Jun 27, 2012
I installed 2.1-1 about two weeks ago and all seemed well until I scheduled backups and moved things around.
Well, I haven't set up HA yet, but backup is a bit more important.

To give some additional background, I've been running Proxmox for at least 3 years on version 1.X. I got new servers and figured backup-and-migrate would be the easy way to go, so I started with some test VMs. One moved with no issue, and one has some sort of configuration issue with Tomcat that I am still trying to solve. This backup problem, however, has ground the migration to a halt; it's a major show-stopper if backups can't be done.

When I had just one VM, backups (to NFS) were working as expected. Then I migrated the first test VM (VM-t1) from the first cluster node to the second node, and the backup ran with no issue. When I moved it back to the first node, I got this error:

INFO: WARNING: Duplicate VG name pve: Existing Brs7Uk-3SZt-0dHH-mHvR-UcZl-iBR0-LarbiI (created here) takes precedence over kSMBV6-RbO2-bndK-Bz0b-Orh5-UbFs-Xjrf3H
INFO: Logical volume "vzsnap-vps-0" already exists in volume group "pve"
ERROR: Backup of VM 101 failed - command 'lvcreate --size 1024M --snapshot --name vzsnap-vps-0 /dev/pve/data' failed: exit code 5

Adding a second VM to the same node seems to have exacerbated the issue.

Like Pavlov's dog, I Googled to see what the issue might be. As best I can tell there is a conflict in the logical volume manager, but I'm not a pro at LVM, so I'm trying to get educated on what exactly has gone off the rails. The reason I'm posting this is that I haven't really done anything fancy yet. My plan is to use HA and to add my iSCSI storage (ReadyNAS) to the setup; right now, however, we are talking local disks and an NFS mount for backups.

I'm wondering if someone who's run into this, or who is at least smarter than I am about Linux LVM, can shed some light on how best to make things work in a predictable manner. From the error message, it looks like there is a problem creating the snapshot volume; I'm not sure how to control that and make sure those snapshots are unique.

Here is some of the output from the few commands I figured were relevant:

#pvscan -u
WARNING: Duplicate VG name pve: Existing Brs7Uk-3SZt-0dHH-mHvR-UcZl-iBR0-LarbiI (created here) takes precedence over kSMBV6-RbO2-bndK-Bz0b-Orh5-UbFs-Xjrf3H
PV /dev/sdb2 with UUID hNeJGx-6r7D-WWTc-h27x-Hv9l-K4Xh-1ctPjR VG pve lvm2 [232.38 GiB / 16.00 GiB free]
PV /dev/sda2 with UUID cendxC-G7f8-E0U6-yQin-VbRg-QcQ8-WYdEJ8 VG pve lvm2 [232.38 GiB / 15.00 GiB free]
Total: 2 [464.77 GiB] / in use: 2 [464.77 GiB] / in no VG: 0 [0 ]

#lvscan -a -v
Finding all logical volumes
WARNING: Duplicate VG name pve: Existing Brs7Uk-3SZt-0dHH-mHvR-UcZl-iBR0-LarbiI (created here) takes precedence over kSMBV6-RbO2-bndK-Bz0b-Orh5-UbFs-Xjrf3H
WARNING: Duplicate VG name pve: Brs7Uk-3SZt-0dHH-mHvR-UcZl-iBR0-LarbiI (created here) takes precedence over kSMBV6-RbO2-bndK-Bz0b-Orh5-UbFs-Xjrf3H
inactive '/dev/pve/swap' [7.00 GiB] inherit
ACTIVE '/dev/pve/root' [58.00 GiB] inherit
inactive Original '/dev/pve/data' [151.39 GiB] inherit
inactive Snapshot '/dev/pve/vzsnap-vps-0' [1.00 GiB] inherit
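
From that lvscan it looks like the old vzsnap-vps-0 snapshot is still hanging around as an inactive LV. What I'm tempted to try, just a guess on my part, and only once I'm sure no backup is running and the snapshot isn't needed, is something like:

Code:
# list all LVs (including snapshots) with their origin and size
lvs -a -o lv_name,vg_name,origin,lv_size
# remove the leftover vzdump snapshot so the next backup can recreate it
lvremove /dev/pve/vzsnap-vps-0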

In my searching I've seen a lot about renaming the VG, and there are also plenty of notes about this being an LVM bug, but before just going into "monkey do" mode, I want to make sure I didn't miss something when configuring the environment.
I have at least another six VMs to migrate, so I would rather go back to scratch and fix things now, before I start using this in earnest.
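
Before touching anything, I figure a read-only look at which partition carries which VG, and at the VG UUIDs behind the duplicate-name warning, can't hurt. Something along these lines (standard LVM reporting commands, nothing destructive):

Code:
# show each physical volume with the VG name and VG UUID it belongs to
pvs -o pv_name,pv_uuid,vg_name,vg_uuid
# show the volume groups themselves with their UUIDs and PV counts
vgs -o vg_name,vg_uuid,pv_count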


Any help at all would be greatly appreciated.
 
...
#pvscan -u
WARNING: Duplicate VG name pve: Existing Brs7Uk-3SZt-0dHH-mHvR-UcZl-iBR0-LarbiI (created here) takes precedence over kSMBV6-RbO2-bndK-Bz0b-Orh5-UbFs-Xjrf3H
PV /dev/sdb2 with UUID hNeJGx-6r7D-WWTc-h27x-Hv9l-K4Xh-1ctPjR VG pve lvm2 [232.38 GiB / 16.00 GiB free]
PV /dev/sda2 with UUID cendxC-G7f8-E0U6-yQin-VbRg-QcQ8-WYdEJ8 VG pve lvm2 [232.38 GiB / 15.00 GiB free]
Total: 2 [464.77 GiB] / in use: 2 [464.77 GiB] / in no VG: 0 [0 ]

...
Hi,
it looks like you put an hdd from one pve installation into another pve server. This can't work because both disks carry the same VG name.

To solve the problem, put sdb back into the old server, boot from a live CD (like grml) and rename the VG (e.g. pve to pve_old).
Then you can put the hdd back in the new server and mount the old filesystem as /dev/mapper/pve_old-data.
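
If both pve VGs end up visible at the same time, the VG can be addressed by its UUID instead of by name. A rough sketch, assuming the kSMBV6-... UUID from the warning (the one not "created here") is the VG to rename, and using the pve_old name suggested above:

Code:
# list the volume groups with their UUIDs
vgs -o vg_name,vg_uuid
# rename the duplicate VG by UUID so the names no longer collide
vgrename kSMBV6-RbO2-bndK-Bz0b-Orh5-UbFs-Xjrf3H pve_old
vgscan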

Udo
 
Udo, thanks for the response.

Ok, here is an update. What I did yesterday was migrate one of the VMs to one server and the other to the alternate cluster node.
Both servers ran backups successfully, so I think this is more of a configuration or procedural issue. It almost seems like some snapshot cleanup didn't run beforehand (my guess), which caused the error.

These are new servers, never used before, with new drives. My configuration is all new; the only old things are the VMs themselves. Once again, backup was working before I started migrating things around and adding an additional VM, and there have been no hardware configuration changes since the initial install of 2.1-1 from DVD. The only way an hdd could be "from another server" is if the other cluster node is somehow in conflict, which might be possible if the snapshot LV is being moved from one cluster node to another (totally guessing). In that case, I would need a way to make sure that snapshot volumes are unique, or are removed after backups complete.
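
To catch this earlier next time, my rough plan is a quick read-only check on each node after a migration or backup, just to see whether a vzsnap volume got left behind; something like:

Code:
# look for leftover vzdump snapshot LVs on this node
lvs -a | grep vzsnap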

The end goal for the environment is that it will be an HA environment and the VMs will move between physical servers sharing common iSCSI storage.

My next test will be to move the VMs all back to a single node and see if the error reoccurs.
 

Hi,
but your pvscan shows two partitions on different disks belonging to VGs with the same name!
Why can this happen on a fresh install?

Do you use software RAID? (but then the UUIDs should be the same?!)
To see the disks, run the following:
Code:
fdisk -l
blkid
dmesg | grep sd
Udo
 
Hi Udo,

Thanks again for the response.

There is no software RAID, but these servers are Sun servers (X2250), which have a Windows-based RAID component in the BIOS. I'm going to verify that it's turned off at the BIOS level, so I can rule it out as a possible cause.
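
While I'm at it, I'll also try to check from the Linux side whether that BIOS RAID left any metadata on the disks. If the dmraid tool is available (an assumption on my part, it may not be installed by default), something like this should list any fake-RAID signatures it finds:

Code:
# report any BIOS/fake-RAID metadata discovered on the block devices
dmraid -r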

Also, as a side note, I found that the fix for my Tomcat issue was just to add a second CPU resource. So that's one step forward today; let's shoot for two.
 

And what about the output of the commands (fdisk -l, blkid ...)?
 
Sorry, it's been a while since I've been able to get back to this.

I did go in and turn off the BIOS setting for RAID, so that's no longer a possible source of issues.

Backups have been running without issue for the past several days, without any changes; however, I still see the same duplicate VG name error.

One additional observation: the issue appears on only one of the two nodes, even though they are physically identical and were built at the same time.
So it would be odd for only one node to show this if it were an OS-level issue.

Nonetheless, here is the output you asked about:

# blkid
/dev/sda1: UUID="8528d91d-4fd6-4d5c-a34a-3acc7bb4de30" TYPE="ext3"
/dev/sda2: UUID="cendxC-G7f8-E0U6-yQin-VbRg-QcQ8-WYdEJ8" TYPE="LVM2_member"
/dev/sdb1: UUID="862adcbe-9b09-4e18-b6c0-b9d380e5614b" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb2: UUID="hNeJGx-6r7D-WWTc-h27x-Hv9l-K4Xh-1ctPjR" TYPE="LVM2_member"
/dev/mapper/pve-root: UUID="0343ddb2-e8f4-427c-921b-9227f5df2bd3" TYPE="ext3"
/dev/mapper/pve-swap: UUID="3cbbc83e-d539-4420-bde6-cbf80a7dc22f" TYPE="swap"
/dev/mapper/pve-data: UUID="7482f095-6977-4c61-88b7-e0e776506d71" TYPE="ext3"
/dev/sdc1: UUID_SUB="4d818b47-03eea94b-6e1d-00144f451edd" UUID="4d818b2f-f5ae4e8b-9e30-00144f451edd" TYPE="VMFS_volume_member"
/dev/sdd1: UUID_SUB="4d818d3f-d5ec8f02-5640-00144f451edd" UUID="4d818d26-262ce2a2-0112-00144f451edd" TYPE="VMFS_volume_member"
/dev/sde1: UUID_SUB="4f2aefb8-0d6388d2-b6c9-002219346f76" UUID="4f2aefb5-cc992756-eda0-002219346f76" TYPE="VMFS_volume_member"
/dev/sdf1: UUID_SUB="4d818c91-0c739399-be54-00144f451edd" UUID="4d818c8a-fdc46815-5a9c-00144f451edd" TYPE="VMFS_volume_member"
/dev/mapper/pve-data-real: UUID="48abb6f6-b765-43bb-a62c-8fe390613bda" SEC_TYPE="ext2" TYPE="ext3"

#fdisk -l


Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b6c24


Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 30402 243674112 8e Linux LVM


Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000796fe


Device Boot Start End Blocks Id System
/dev/sdb1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2 66 30402 243674112 8e Linux LVM


Disk /dev/dm-0: 62.3 GB, 62277025792 bytes
255 heads, 63 sectors/track, 7571 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-0 doesn't contain a valid partition table


Disk /dev/dm-1: 7516 MB, 7516192768 bytes
255 heads, 63 sectors/track, 913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-1 doesn't contain a valid partition table


Disk /dev/dm-2: 162.6 GB, 162550251520 bytes
255 heads, 63 sectors/track, 19762 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-2 doesn't contain a valid partition table


Disk /dev/dm-3: 162.6 GB, 162550251520 bytes
255 heads, 63 sectors/track, 19762 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-3 doesn't contain a valid partition table


Disk /dev/sdc: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ccaa9


Device Boot Start End Blocks Id System
/dev/sdc1 1 13054 104856191 fb VMware VMFS


Disk /dev/sdd: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00083c23


Device Boot Start End Blocks Id System
/dev/sdd1 1 13054 104856191 fb VMware VMFS


WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.




Disk /dev/sde: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Device Boot Start End Blocks Id System
/dev/sde1 1 26109 209715199+ ee GPT


Disk /dev/sdf: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005d4cf


Device Boot Start End Blocks Id System
/dev/sdf1 1 13054 104856191 fb VMware VMFS
 
Hi,
the big question is: why do you have two different pve VGs, one on sda2 and one on sdb2? It's not from fake RAID or anything like that, because the UUIDs are different...

You'll have to answer this yourself, because only you know what you installed on which hdd.

Udo
 
I once had the same problem after installing Proxmox to 2 different disks in the same host.
The tool vgimportclone is able to fix this by renaming one of the pve volume groups to another name.
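
Roughly like this, assuming /dev/sdb2 is the PV carrying the duplicate pve VG (check with pvs first); pve_old is just an example name:

Code:
# give the VG on /dev/sdb2 a new name and new UUIDs so it no longer collides with "pve"
vgimportclone --basevgname pve_old /dev/sdb2
vgscan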
 
I didn't get a chance to try vgimportclone because I re-installed both nodes in the new cluster to verify consistent configurations. So far it works fine. I've got software RAID 1 running on both nodes. I'm thinking this was an issue with the hardware RAID in combination with the backup process; once the HW RAID was taken out of the picture, everything has been running fine.
The irony is that there is no Linux support for the version of the BIOS I'm running. Thanks for all the help. Now I'm on to setting up Shorewall. We really need a security API for Proxmox.
 
I once had the same problem after installing Proxmox to 2 different disks in the same host.
The tool vgimportclone is able to fix this by renaming one of the pve volume groups to another name.

Has anyone here tried the above to fix the problem? I'm experiencing the exact same issue on 4.4.

If so, how? And is there any risk of data loss?
 
