Proxmox 8 Migration Issues

chrispage1
Hi,

I'm running a three-node hyperconverged cluster. I've been running Proxmox 7 for some time and decided to take the leap to Proxmox 8. Since then, I've noticed some migration issues relating to disks. Below is an example of one of the errors I got when trying to migrate. The migration actually succeeds and the VM is moved, but clearly something goes a little wrong.

Code:
task started by HA resource agent
2023-09-07 16:52:01 use dedicated network address for sending migration traffic (10.0.0.111)
2023-09-07 16:52:02 starting migration of VM 103 to node 'pve01' (10.0.0.111)
rbd: sysfs write failed
can't unmap rbd device /dev/rbd-pve/24f246db-267a-4a95-9346-2142944edec8/ceph_data/vm-103-disk-0: rbd: sysfs write failed
2023-09-07 16:52:02 ERROR: volume deactivation failed: ceph_data_krbd:vm-103-disk-0 at /usr/share/perl5/PVE/Storage.pm line 1234.
2023-09-07 16:52:03 ERROR: migration finished with problems (duration 00:00:02)
TASK ERROR: migration problems
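
(For reference, a rough way to check what is still holding the mapped device when the unmap fails - the /dev/rbd0 device name below is an assumption, so substitute whatever rbd showmapped reports for vm-103-disk-0.)

Code:
# which /dev/rbdX is the image that failed to unmap?
rbd showmapped
# anything listed under holders/ (e.g. device-mapper / LVM volumes activated
# on the host) keeps the device busy and makes "rbd unmap" fail
ls /sys/class/block/rbd0/holders/
# map those holder names back to VG/LV names
dmsetup ls --tree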

Any ideas?

Thanks,
Chris.
 
Interesting, I have seen those same errors in some other migrations I was running.

So in your case did this cause any actual problems, or was it just throwing up some errors?

Are you running/have you recently upgraded to Proxmox 8?

Chris.
I repurposed 3 of my nodes into a separate Proxmox cluster and did a fresh install of Proxmox 8 on all 3 nodes. I think that backups and migrations make use of:

Code:
rbd -p <POOLNAME> map <IMAGE>
# do work here
rbd unmap /dev/<RBD-DEVICE>
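
For illustration, with the pool and image names from the migration log above that would look roughly like this (the /dev/rbd0 device name is an assumption - rbd map prints the actual device node):

Code:
# map the image; prints the device node, e.g. /dev/rbd0
rbd -p ceph_data map vm-103-disk-0
# ... backup / migration work happens here ...
# unmapping is the step that fails with "rbd: sysfs write failed"
# if anything on the host (such as LVM) still holds the device open
rbd unmap /dev/rbd0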

I likely also migrated my VM to test my new Proxmox 8 cluster.

The error is harmless - although I did destroy the VM disk while trying to fix it on the Proxmox host.

I don't think I saw this on Proxmox 7 (although I might be wrong), but it does suggest that LVM on the host is picking up the entire RBD device as a physical volume of its own (the block device inside the VM was not partitioned, and the whole disk image was used for LVM within the VM).
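
If that is what is happening, one common mitigation (an assumption on my part - check what your PVE version already ships in /etc/lvm/lvm.conf and merge rather than overwrite any existing entries) is to tell the host's LVM to ignore mapped RBD devices entirely, so it never activates the guest's VG:

Code:
# /etc/lvm/lvm.conf on the Proxmox host - reject /dev/rbd* so host LVM
# never scans or activates volume groups that live inside guest images
devices {
    global_filter = [ "r|/dev/rbd.*|" ]
}

Afterwards pvs / vgs on the host should no longer list the guest's ubuntu-vg.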
 
Similar situation then. We've run a lot of migrations on Proxmox 6 & 7, and the period since the 8 upgrade is the first time I've ever seen migrations complete with warnings.

So I think there might be something introduced with the release of 8.

Chris.
 
After 'failed' migrations I get this when running updates that regenerate the GRUB configuration -

Code:
Generating grub configuration file ...
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Found linux image: /boot/vmlinuz-6.2.16-10-pve
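
A rough way to work out which copy of ubuntu-vg is the guest's (a hedged sketch - the UUID is just the first one from the warnings above, so double-check which copy actually sits on a /dev/rbd* device before deactivating anything):

Code:
# show each VG with its UUID and the physical device(s) backing it
vgs -o vg_name,vg_uuid,pv_name
# the copy whose PV is a /dev/rbd* device is the guest's VG leaking onto the host;
# deactivate it by UUID, since the name is ambiguous
vgchange -an --select 'vg_uuid=ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk'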
 
I ran into the same problem before, and it made me think a little.
I fixed it by just turning on replication, which made the migration work better and faster.
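
For anyone wanting to try the same, a replication job can be created from the CLI roughly like this (the job ID, target node and schedule are made up for illustration, and note that PVE storage replication works on ZFS-backed disks, so it may not apply to a Ceph RBD setup like the one above):

Code:
# create a replication job for VM 103 to node pve01, running every 15 minutes
pvesr create-local-job 103-0 pve01 --schedule '*/15'
# check the state of all replication jobs
pvesr status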
 
