Proxmox High Availability Migrate Issue

joshua11

New Member
Sep 12, 2023
I have encountered an issue with Proxmox High Availability (HA). When one of our servers went down, HA moved the VM to another server as expected. But when we tried to migrate it back to its original host using Proxmox, the migration failed with error messages. Screenshot 2024-01-10 154331.png
The VM's storage on its original host is local-lvm.
 
How did you implement HA with local-lvm storage? This normally only works with shared storage (shared LVM, ZFS with replication, Ceph, NFS/CIFS), and the error you see is expected in a local setup.

Maybe explain your two environments a little bit more and how you migrated the VM from the local storage to the unknown-to-us HA storage.
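To see whether a given storage is actually cluster-wide, you can check the storage list and its configuration. A minimal sketch (assuming standard PVE tooling; the replication job ID, VM ID, and target node below are placeholders):

```shell
# List configured storages; shared storage is marked in
# /etc/pve/storage.cfg with "shared 1". local-lvm is node-local.
pvesm status

# Inspect the storage definitions directly:
cat /etc/pve/storage.cfg

# With local ZFS, HA can instead rely on scheduled replication,
# e.g. replicate VM 100 to node 'b2' every 15 minutes:
# pvesr create-local-job 100-0 b2 --schedule '*/15'
```

With plain local-lvm there is no replication support, which is why HA recovery and migration back cannot find the disk on the other node.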
 
How did you implement HA with local-lvm storage? This normally only works with shared storage (shared LVM, ZFS with replication, Ceph, NFS/CIFS), and the error you see is expected in a local setup.
I just set it up in HA, even though the VM is using local-lvm. It works with HA, with no errors. I'm using Proxmox version 7.1-7.

Maybe explain your two environments a little bit more and how you migrated the VM from the local storage to the unknown-to-us HA storage.
My servers are separate physical machines, but they are in the same cluster.
 
I just set it up in HA, even though the VM is using local-lvm. It works with HA, with no errors. I'm using Proxmox version 7.1-7.
How did you set it up? Your error message implies that it's not working "with no errors". That's why I ask. HA is much more complicated than "check HA" in the options.
 
How did you set it up? Your error message implies that it's not working "with no errors". That's why I ask. HA is much more complicated than "check HA" in the options.
The error in my screenshot occurs when the VM migrates back to its original host.
Whether I migrate it manually or let High Availability (HA) do it automatically, it doesn't succeed.
I've also tried removing the VM from HA and disabling it, but then the error messages 'no storage ID' and 'no such storage' appear.
 
Hi,
@LnxBil is absolutely right. You need shared or replicated storage to use HA: https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#_requirements

I just set it up in HA, even though the VM is using local-lvm. It works with HA, with no errors. I'm using Proxmox version 7.1-7.
IIRC the particular error was already fixed a while ago (won't help with the general issue that HA with local storage won't work), please upgrade to the latest 7.x version to get security updates and fixes:
https://pve.proxmox.com/wiki/Package_Repositories#_proxmox_ve_7_x_repositories
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
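For reference, the usual in-place update on a 7.x node (assuming the repositories from the links above are already configured correctly) looks like this:

```shell
# Refresh package lists and apply all pending updates,
# including kernel and PVE packages:
apt update
apt dist-upgrade

# Confirm the installed version afterwards:
pveversion
```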
 
Same error here: Proxmox 9.1.2 on the target node (where the VM currently is) and 9.1.4 on the source node (where the VM came from and where it must return).

Anyone fixed this?

Exactly the same case: I enabled HA (with a linked clone on ZFS) and shut down the machine. HA worked, but it's unable to move the machine back to its original Proxmox node.

EDIT:

This is the shell log:

Code:
Proxmox Virtual Environment 9.1.2
Virtual Machine 800 (TestHALinked) on node 'b1'
task started by HA resource agent
2026-01-17 00:06:25 starting migration of VM 800 to node 'b2' (X.X.X.X)
2026-01-17 00:06:25 found local, replicated disk 'local-zfs:base-900-disk-0/vm-800-disk-0' (attached)
2026-01-17 00:06:25 ERROR: Problem found while scanning volumes - can't migrate 'local-zfs:base-900-disk-0/vm-800-disk-0' as it's a clone of 'base-900-disk-0' at /usr/share/perl5/PVE/QemuMigrate.pm line 571.
2026-01-17 00:06:25 aborting phase 1 - cleanup resources
2026-01-17 00:06:25 ERROR: migration aborted (duration 00:00:00): Problem found while scanning volumes - can't migrate 'local-zfs:base-900-disk-0/vm-800-disk-0' as it's a clone of 'base-900-disk-0' at /usr/share/perl5/PVE/QemuMigrate.pm line 571.
TASK ERROR: migration aborted
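The log shows why it fails: the VM's disk is a ZFS linked clone, so it depends on a base volume (base-900-disk-0) that lives on the source node, and the migration code refuses to move it on its own. One common workaround (a sketch, not confirmed in this thread; the disk slot `scsi0` is an assumption, check your VM's hardware tab) is to break the dependency by writing out a full, independent copy of the disk:

```shell
# Move the disk; this writes a full copy with no base-image
# dependency. On older releases the command is `qm move-disk`.
qm disk move 800 scsi0 local-zfs

# Alternatively, make a full (not linked) clone and use that:
# qm clone 800 801 --full --name TestHAFull
```

After the disk is no longer a clone of the template's base volume, HA recovery and `qm migrate` should be able to move the VM between nodes again, subject to replication being set up for local ZFS.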
 