Hi,
I'm running a three-node hyperconverged cluster. I'd been running Proxmox 7 for some time and decided to take the leap to Proxmox 8. Since then, I've noticed disk-related errors during migrations. Below is an example of one of the errors I got when trying to migrate. The migration itself succeeds and the VM ends up on the target node, but clearly something goes a little wrong along the way.
Code:
task started by HA resource agent
2023-09-07 16:52:01 use dedicated network address for sending migration traffic (10.0.0.111)
2023-09-07 16:52:02 starting migration of VM 103 to node 'pve01' (10.0.0.111)
rbd: sysfs write failed
can't unmap rbd device /dev/rbd-pve/24f246db-267a-4a95-9346-2142944edec8/ceph_data/vm-103-disk-0: rbd: sysfs write failed
2023-09-07 16:52:02 ERROR: volume deactivation failed: ceph_data_krbd:vm-103-disk-0 at /usr/share/perl5/PVE/Storage.pm line 1234.
2023-09-07 16:52:03 ERROR: migration finished with problems (duration 00:00:02)
TASK ERROR: migration problems
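From what I can tell, the failing step is the source node unmapping the krbd device after the move. I believe this is roughly equivalent to running the following by hand on the source node (device path copied from the log above; I haven't verified this is exactly what PVE does internally):

Code:
# list the krbd devices currently mapped on this node
rbd showmapped

# attempt the unmap manually to surface the underlying error
rbd unmap /dev/rbd-pve/24f246db-267a-4a95-9346-2142944edec8/ceph_data/vm-103-disk-0

# check whether something still holds the device open; a busy device
# is a common cause of "rbd: sysfs write failed" on unmap
fuser -v /dev/rbd-pve/24f246db-267a-4a95-9346-2142944edec8/ceph_data/vm-103-disk-0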
Any ideas?
Thanks,
Chris.