Proxmox HA Issue - Urgent Help

Proxmox HA issue: my primary node was restarted due to an issue, and HA tried to move the VM to the secondary node. However, the secondary node did not have a replica of the VM's disk, so the HA resource went into an error state because no disk was present.

When my primary node came back online, I tried to move the VM back to it, since it has the disk and everything available on it, but got the error below.

I urgently need to get this VM back online without any data loss.



task started by HA resource agent
2022-06-21 23:07:41 starting migration of VM 110 to node 'WNVMHOST04' (100.127.1.3)
2022-06-21 23:07:42 found local disk 'local-zfs:vm-110-disk-0' (in current VM config)
2022-06-21 23:07:42 found local disk 'local-zfs:vm-110-state-Before_Upgrade' (referenced by snapshot(s))
2022-06-21 23:07:42 copying local disk images
cannot open 'rpool/data/vm-110-disk-0': dataset does not exist
usage:
snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
For the property list, run: zfs set|get
2022-06-21 23:07:43 ERROR: Failed to sync data - storage migration for 'local-zfs:vm-110-disk-0' to storage 'local-zfs' failed - zfs error: For the delegated permission list, run: zfs allow|unallow
2022-06-21 23:07:43 aborting phase 1 - cleanup resources
2022-06-21 23:07:43 ERROR: found stale volume copy 'local-zfs:vm-110-disk-0' on node 'WNVMHOST04'
2022-06-21 23:07:43 ERROR: migration aborted (duration 00:00:02): Failed to sync data - storage migration for 'local-zfs:vm-110-disk-0' to storage 'local-zfs' failed - zfs error: For the delegated permission list, run: zfs allow|unallow
TASK ERROR: migration aborted
 
Remove the guest from HA, then move the config file manually in /etc/pve so that it's back on the original node. Then you should be able to start the VM (and re-enable HA once the replication has had its first successful run).
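A minimal sketch of those steps, assuming the config currently sits on the secondary node (its name isn't shown in the thread, so <secondary-node> below is a placeholder) and the disk lives on WNVMHOST04:

# Run on any cluster node; /etc/pve is cluster-wide (pmxcfs).
# 1. Take the guest out of HA management:
ha-manager remove vm:110

# 2. Move the config back to the node that actually holds the disk:
mv /etc/pve/nodes/<secondary-node>/qemu-server/110.conf /etc/pve/nodes/WNVMHOST04/qemu-server/110.conf

# 3. On WNVMHOST04, confirm the disk and snapshot datasets exist, then start:
zfs list -t all | grep vm-110
qm start 110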
 
agent: 1
bootdisk: scsi0
cores: 20
ide2: none,media=cdrom
memory: 40960
name: MyHUBServer
net0: virtio=FA:62:CD:AF:F4:FD,bridge=vmbr1,tag=102
net1: virtio=76:85:6D:4A:C2:77,bridge=vmbr1,tag=102
numa: 0
onboot: 1
ostype: l26
parent: Before_Upgrade
scsi0: local-zfs:vm-110-disk-0,cache=unsafe,iothread=1,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=fa80117e-fc4f-4ce9-aa39-9ae5171dc9dc
sockets: 1
vmgenid: 8ed7cbe9-a26d-4c38-873e-8958b2d82f9f

[Before_Upgrade]
agent: 1
bootdisk: scsi0
cores: 20
ide2: none,media=cdrom
memory: 40960
name: MyHUBServer
net0: virtio=FA:62:CD:AF:F4:FD,bridge=vmbr1,tag=102
net1: virtio=76:85:6D:4A:C2:77,bridge=vmbr1,tag=102
numa: 0
onboot: 1
runningcpu: kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep
runningmachine: pc-i440fx-5.1+pve0
scsi0: local-zfs:vm-110-disk-0,cache=unsafe,iothread=1,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=fa80117e-fc4f-4ce9-aa39-9ae5171dc9dc
snaptime: 1655738148
sockets: 1
vmgenid: 11018e0b-d440-436f-9ece-74fe943dfbd0
vmstate: local-zfs:vm-110-state-Before_Upgrade
 
If you want to get back to the state of the snapshot "Before_Upgrade", you need to do a rollback to that snapshot - otherwise the guest will of course continue with whatever state it currently has.
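A rollback can be done from the GUI or with qm; a sketch, assuming the VM is back on the node that holds the disk:

# Roll VM 110 back to the "Before_Upgrade" snapshot:
qm rollback 110 Before_Upgrade
# Since the snapshot includes vmstate, the guest should resume from the saved RAM state.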
 
