Restoring a VM broke my Local-LVM

fmnamado

New Member
Jan 13, 2023
Hello,

So I am having some troubles ATM.

I installed Proxmox on a 240 GB SSD, triple-booting alongside Windows and Ubuntu.
No problems there; all three OSes boot and run fine.

I was configuring Proxmox, installing VMs etc.
There is one VM on another PVE node that took a lot of work to install and configure.
I would like to copy it to this new node.

The best way I found, using Google and this support forum, was to back up the VM and restore it on the new node.

So I mounted an SMB share on my old Proxmox Virtual Environment 7.4-19 node and backed up the VM.
I mounted the same share on my new Proxmox Virtual Environment 8.3.1 node and browsed it.
Via the web GUI, in the storage section, there is my Samba share.
In its Backups tab I found the VM I wanted, the only one there, and clicked restore.
I just changed the VM ID from 100 (the default) to a number I liked and pressed Restore.

I got this error message:
restore vma archive: zstd -q -d -c /mnt/pve/NAS720pBackup/dump/vzdump-qemu-707-2024_12_05-09_51_12.vma.zst | vma extract -v -r /var/tmp/vzdumptmp2737.fifo - /var/tmp/vzdumptmp2737
CFG: size: 808 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 85899345920 devname: drive-sata0
CTIME: Thu Dec 5 09:51:13 2024
WARNING: VG name pve is used by VGs 1x08S3-BfeD-uQk2-eo3Y-aFRQ-LEZv-HL7qr1 and pc0vDP-m3Q1-XQxf-vqq6-gNPJ-SL4l-aJ7W1r.
Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
WARNING: VG name pve is used by VGs 1x08S3-BfeD-uQk2-eo3Y-aFRQ-LEZv-HL7qr1 and pc0vDP-m3Q1-XQxf-vqq6-gNPJ-SL4l-aJ7W1r.
Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
WARNING: VG name pve is used by VGs 1x08S3-BfeD-uQk2-eo3Y-aFRQ-LEZv-HL7qr1 and pc0vDP-m3Q1-XQxf-vqq6-gNPJ-SL4l-aJ7W1r.
Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Multiple VGs found with the same name: skipping pve
no lock found trying to remove 'create' lock
error before or during data restore, some or all disks were not completely restored. VM 707 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/NAS720pBackup/dump/vzdump-qemu-707-2024_12_05-09_51_12.vma.zst | vma extract -v -r /var/tmp/vzdumptmp2737.fifo - /var/tmp/vzdumptmp2737' failed: activating LV 'pve/data' failed: Use --select vg_uuid=<uuid> in place of the VG name.

And now my local-lvm storage shows the gray question mark.
None of my other VMs work, as they were stored on this local-lvm.

  1. What have I done wrong?
  2. What does the message mean? I already googled it but came to no conclusion.
  3. Is this recoverable? Tell me it is, because I already reinstalled Proxmox once because of this. I thought it was because I had changed the hostname, but it seems it happened during the restore. In that other attempt I managed to run the restored VM, so I don't know what I did differently.

Please help :mad:

Thank you very much!
 
Well, in the meantime, trying and thinking about other things led me to wonder: why are there duplicates?
Is there another filesystem?

I have another Proxmox installation on a second SSD, for trying things out.
I disabled that disk in the BIOS and the problem was solved.
That Proxmox installation was conflicting with this one, even though it was never booted.

Makes sense?
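For anyone who hits the same thing: the duplicate VGs can be confirmed, and the suggestion in the LVM warnings followed, with commands like these. The UUIDs below are the ones from my error log above; which UUID belongs to which install is an assumption you must verify yourself, e.g. by checking which physical volume each VG sits on:

```shell
# List every volume group with its UUID and underlying disk(s).
# With two installs present, "pve" appears twice with different UUIDs.
vgs -o vg_name,vg_uuid,pv_name

# Since the name "pve" is ambiguous, select the wanted VG by UUID,
# as the task error itself suggests, e.g. to activate its LVs:
vgchange -ay --select vg_uuid=1x08S3-BfeD-uQk2-eo3Y-aFRQ-LEZv-HL7qr1

# Or permanently rename the VG of the spare install (vgrename
# accepts a VG UUID as the source) so the names no longer clash:
vgrename pc0vDP-m3Q1-XQxf-vqq6-gNPJ-SL4l-aJ7W1r pve-spare
```

Disconnecting the other disk, as I did, avoids all of this, of course.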
 
Just so you know, Proxmox reeeeeeaally isn't designed for multiboot. If you ever have to reinstall it, it will wipe the destination disk, and the other two OSes will not be able to boot anymore; they will be gone.

Give it a dedicated disk and you shouldn't have to worry, but still keep backups.
 
Hello,
I know that, thank you for your concerns.
That is why I installed this in a very specific order:

  1. Proxmox, leaving free space for other OS
  2. Windows 11
  3. Ubuntu
I can multi-boot them.
I am aware that reinstalling any OS will probably break the others.
But I am not planning on reinstalling anything.

These other two OSes are not there to be used; they are there for a just-in-case scenario.

It is my DIY NAS, and it may be helpful to access the data via a GUI in an emergency (via Ubuntu; my HDDs are passed through to the VM inside Proxmox) or to run a specific tool in Windows.

But feel free to advise me in any way.
Thank you!

EDIT:
But as you can see, the problem wasn't the multi-boot part, but the existence of another LVM volume group with the same name on ANOTHER disk.
At least it seems to me that was it.
I wasn't expecting that.

But that second boot drive will be removed; it just has an OS installed to trial different options, so it won't stay around for installing other OSes.
 
LVM does not support having two VGs with the same name, which is exactly your issue here. You may try adding filters in lvm.conf on each of the PVE installations, excluding the other one's disk so it won't be scanned on boot; that way only one "pve" VG is visible and active on each system.
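A minimal sketch of such a filter, assuming the other installation lives on /dev/sdb (an example path; adjust it to your actual device, and note that a stable /dev/disk/by-id/... path is safer than a letter-based one):

```
# /etc/lvm/lvm.conf on this PVE install -- stop LVM from scanning
# the disk that carries the other "pve" VG.
# "r|...|" rejects a matching device, "a|.*|" accepts everything else.
devices {
    global_filter = [ "r|^/dev/sdb.*|", "a|.*|" ]
}
```

After editing, run update-initramfs -u so the filter is also honored during early boot, then reboot and check with vgs that only one "pve" VG is listed.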
 
I didn't know that. Now I know.
It could have saved me a clean reinstall of all 3 OSes...
I wasn't expecting that a non-booted Proxmox install on another physical drive would impact the one that is running.

As I said, this other drive is only for test-driving changes, so I can disconnect one or the other in the BIOS so they don't conflict, now that I know they can conflict.

So I don't need to mess with the filters.

But thank you for the information, may help others in the future :)
 
