[SOLVED] Proxmox 7.1-4- migration aborted: no such logical volume pve/data

mrE

Greetings,

I broke my own rule and jumped on the recent update, and it hit me in the face:
I have a cluster with 3 Proxmox servers (pmx1 & pmx3; pmx2 used just for quorum)

The sequence I used to upgrade was:
  1. Upgraded pmx2 (the quorum-only node, it doesn't run any VMs) and rebooted.
  2. Migrated all VMs from pmx1 -> pmx3, upgraded pmx1 and rebooted.
  3. Migrated everything from pmx3 -> pmx1 without any issue, then upgraded pmx3 and rebooted.
(I have attached 2 files with the logs of pmx1, pmx3)
Now I have this in the cluster
[Screenshot: pmx1-pmx3-vms.png]


I use a Synology NAS as network storage with NFS shared folders
[Screenshot: 2nasstorage.png]
This is the cluster storage
[Screenshot: storage-cluster.png]


Now when I try to migrate some VMs/CTs from pmx1 -> pmx3, this error occurs every time:
Code:
2021-11-20 01:32:38 starting migration of CT 206 to node 'pmx1' (192.168.16.61)
2021-11-20 01:32:38 ERROR: no such logical volume pve/data at /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm line 219.
2021-11-20 01:32:38 aborting phase 1 - cleanup resources
2021-11-20 01:32:38 start final cleanup
2021-11-20 01:32:38 ERROR: migration aborted (duration 00:00:00): no such logical volume pve/data at /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm line 219.
TASK ERROR: migration aborted

The same occurs if I try to migrate a CT from pmx1 -> pmx2:

Code:
2021-11-20 01:38:13 shutdown CT 201
2021-11-20 01:38:16 starting migration of CT 201 to node 'pmx2' (192.168.16.62)
2021-11-20 01:38:16 ERROR: no such logical volume pve/data at /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm line 219.
2021-11-20 01:38:16 aborting phase 1 - cleanup resources
2021-11-20 01:38:16 start final cleanup
2021-11-20 01:38:16 start container on source node
2021-11-20 01:38:18 ERROR: migration aborted (duration 00:00:05): no such logical volume pve/data at /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm line 219.
TASK ERROR: migration aborted

If I try to migrate an offline CT from pmx3 -> pmx1, the error persists:
Code:
2021-11-20 01:44:51 starting migration of CT 206 to node 'pmx1' (192.168.16.61)
2021-11-20 01:44:51 ERROR: no such logical volume pve/data at /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm line 219.
2021-11-20 01:44:51 aborting phase 1 - cleanup resources
2021-11-20 01:44:51 start final cleanup
2021-11-20 01:44:51 ERROR: migration aborted (duration 00:00:00): no such logical volume pve/data at /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm line 219.
TASK ERROR: migration aborted

This is what df shows on pmx1 and pmx3 (IP 192.168.16.11 is another NAS that was used until a few weeks ago; the new and final NAS is 192.168.16.10):

[Screenshot: df-pmx1.png]

[Screenshot: df-pmx3.png]

At this moment (11 am), staff are working on the VMs and I can't restart pmx1.
Thank you for your help.

Regards
 

Attachments: migration logs of pmx1 and pmx3

Looks like I won't be spared from reinstalling my 3 servers and recreating my cluster.

Luckily, since I have a PBS and recent backups, this should be "simple".

Even luckier: I took a chance and restarted my server running the VM/CTs... and they all started.
o_O,:rolleyes:,:oops:.
 
With no solution, the only workaround was to reinstall my cluster.

This time I was careful to reinstall each Proxmox node with an 8 GB swap partition.

I recommend having a Proxmox Backup Server installed alongside your cluster; PBS is a lifesaver, and restoring backups from it is quick.
+1 for PBS.:)

----
Edit / 2021.12.13:
A possible solution to my problem would have been @Fabian_E 's suggestion, which @mlanner has confirmed. But having already reinstalled, I can't corroborate it on my cluster.

Thanks for the responses!
 
@mrE

It doesn't make sense to me that you marked this as "SOLVED". It appears that you were able to migrate VMs/CTs between nodes perfectly fine before the upgrade, but after the upgrade the functionality is no longer there. Having backups of everything is obviously good, but it's far from a solution to the problem you encountered. It's a disaster recovery tactic. I ended up finding your post as I ran into the same issue and was hopeful that "SOLVED" actually would solve my problem. Reinstalling my cluster might get me to a point where things are working again, but without truly understanding what's going on here and why it suddenly stopped working after an upgrade, I'd be inclined to consider this a bug.

Perhaps some PVE developer can chime in here with some input?

@t.lamprecht You seem to have been busy looking into various issues reported as part of the 7.1 release announcement. Do you happen to have any comments or insights on this problem?
 
Also, for what it's worth, it appears the servers in the screenshot above are using ZFS for the boot disks. I have the same in a mirror-0 config.
 
Hi,
the issue comes from an enabled local-lvm storage, which doesn't actually exist, i.e. no logical volume pve/data. Simply disable/remove that storage (or restrict it to the nodes that actually have it).
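For example, assuming the phantom entry is the default local-lvm storage and that only pmx1 and pmx3 actually have a pve/data thin pool (an assumption for illustration, not confirmed in the thread), the fix could look roughly like this:

Code:
# restrict the storage to the nodes that really have the thin pool
pvesm set local-lvm --nodes pmx1,pmx3

# ...or disable the storage entirely
pvesm set local-lvm --disable 1

# ...or remove its definition from the cluster storage config
pvesm remove local-lvm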
 
Thanks @Fabian_E , that makes sense. I did disable it on all my nodes, and migration does indeed work now. What I'm curious about is why and how it used to work in previous versions of PVE. Maybe it was a "bug" in earlier releases? It just came as a surprise that it suddenly stopped working.
 
Thanks @Fabian_E , that makes sense. I did disable it on all my nodes, and migration does indeed work now. What I'm curious about is why and how it used to work in previous versions of PVE. Maybe it was a "bug" in earlier releases? It just came as a surprise that it suddenly stopped working.
The check for LVM-thin storages is new in version 7.1. Previously, it was just assumed that they are always active, but they might not be.
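To see which nodes actually have the pool, something like the following on each node should do (just a quick sanity check, not part of the original reply):

Code:
# errors out on nodes where the thin pool is missing
lvs pve/data

# list all volume groups and logical volumes that really exist on the node
vgs
lvs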
 
Hi,
the issue comes from an enabled local-lvm storage, which doesn't actually exist, i.e. no logical volume pve/data. Simply disable/remove that storage (or restrict it to the nodes that actually have it).
I encountered the same issue, but it appeared when I was trying to add the SATA ports to the VM as a PCI device. And in my case there is no configured local-lvm storage that doesn't actually exist. Any idea what could solve it?
 
Hi,
I encountered the same issue, but it appeared when I was trying to add the SATA ports to the VM as a PCI device. And in my case there is no configured local-lvm storage that doesn't actually exist. Any idea what could solve it?
what exactly is the error message you get and what is the command/action that triggers it? If it's migration (or another task), please provide the full task log. Please provide the output of pveversion -v.
 
Hi,

what exactly is the error message you get and what is the command/action that triggers it? If it's migration (or another task), please provide the full task log. Please provide the output of pveversion -v.
I use the command qm start, and with the SATA ports assigned to the VM as a PCI device it displays this error message: no such logical volume pve/data at /usr/share/perl5/PVE/Storage/LvmThinPlugin.pm line 219.
[Screenshots: VM configuration and error message]
 
What is the output of lvs? Your VM references two volumes on local-lvm and those should be part of the default thin pool pve/data.
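For reference, an invocation with explicit columns (an illustrative example, not part of the original reply) makes the pool membership easy to see:

Code:
# show each logical volume with its volume group, thin pool and size
lvs -o lv_name,vg_name,pool_lv,lv_size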
 
What is the output of lvs? Your VM references two volumes on local-lvm and those should be part of the default thin pool pve/data.
One is the EFI disk and the other is a partition of the storage where the system of the VM is installed. The disk is partitioned in three, and Proxmox is installed on it.
 

Attachments

  • Opera Instantâneo_2021-12-16_110502_10.0.0.108.png
I don't know if it's related, but the SATA ports aren't alone in their IOMMU group. I've already used pcie_acs_override=downstream,multifunction, but it didn't separate them. Sorry, I'm new at this; my intent was to pass through the SATA ports so that I could freely swap my disks between physical machines when necessary, in a plug-and-play way. Did I do something wrong, or is this not supported?
 

Attachments

  • IOMMU.png
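For reference, the pcie_acs_override parameter mentioned above is normally applied via the kernel command line, roughly like this on a GRUB-booted system (intel_iommu=on is shown for an Intel host; a ZFS/systemd-boot setup uses /etc/kernel/cmdline and proxmox-boot-tool refresh instead):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

# apply the change and reboot
update-grub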
Did you already try to follow this article? Or as an easier alternative, you could also attach the disk by adding, e.g. sata0: /dev/sda (or using the more stable /dev/disk/by-id/XYZ path) to your VM config.
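A rough sketch of that alternative (VM ID 100 and the disk ID are placeholders, not taken from this thread):

Code:
# attach the whole physical disk to the VM as a SATA drive
qm set 100 -sata0 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL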
 
If I attach the disk and make it a storage on PVE by creating an LVM thin pool, will the data still be accessible from other devices, or will it only be usable by VMs on Proxmox?
 
I ran into the same problem. In my case, it's an LVM-thin storage I created called hd8thin. I commented it out of /etc/pve/storage.cfg and was able to migrate VMs back to the node.

#lvmthin: hd8thin
# thinpool hd8thin
# vgname hd8t
# content images,rootdir

# lvs
  LV      VG   Attr       LSize    Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  hd8thin hd8t twi-aotz--   <7.28t                7.45   13.92
  hd8tvz  hd8t Vwi-aotz--    7.27t hd8thin        7.45
  data    pve  twi-aotz-- <680.61g                0.00   0.27
  root    pve  -wi-ao----  200.00g
  swap    pve  -wi-ao----   16.00g

I am new to LVM thin pools. I need to read more documentation to fully understand them. You are welcome to lecture me :)
 
Hi,
I ran into the same problem. In my case, it's an LVM-thin storage I created called hd8thin. I commented it out of /etc/pve/storage.cfg and was able to migrate VMs back to the node.

#lvmthin: hd8thin
# thinpool hd8thin
# vgname hd8t
# content images,rootdir

# lvs
  LV      VG   Attr       LSize    Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  hd8thin hd8t twi-aotz--   <7.28t                7.45   13.92
  hd8tvz  hd8t Vwi-aotz--    7.27t hd8thin        7.45
  data    pve  twi-aotz-- <680.61g                0.00   0.27
  root    pve  -wi-ao----  200.00g
  swap    pve  -wi-ao----   16.00g

I am new to LVM thin pools. I need to read more documentation to fully understand them. You are welcome to lecture me :)
is the storage available on all nodes? Otherwise, you should reflect this in your storage configuration. Can be done in the UI or with:
Code:
pvesm set hd8thin --nodes <list of nodes where it's actually available>
 
Hi,

is the storage available on all nodes? Otherwise, you should reflect this in your storage configuration. Can be done in the UI or with:
Code:
pvesm set hd8thin --nodes <list of nodes where it's actually available>
Hi Fabian,

Yes, hd8thin is local on the target node. I ran the command and now I was able to migrate VMs to the nodes with the storage available.

Thank you!

James
 
