Unable to start or migrate VMs on DRBD disk after upgrade from PVE 8 to 9

kshesq

After upgrading one of my cluster machines (pve2) from PVE 8 to PVE 9, I cannot start or migrate any VM there.

I upgraded per https://pve.proxmox.com/wiki/Upgrade_from_8_to_9 and also changed the LINBIT repo from

deb http://packages.linbit.com/proxmox/ proxmox-8 drbd-9

to

deb http://packages.linbit.com/proxmox/ proxmox-9 drbd-9
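In case the package source matters: I believe apt policy can confirm the DRBD packages now come from the proxmox-9 repo (I am assuming the usual LINBIT package names drbd-utils and drbd-dkms here):

apt policy drbd-utils drbd-dkms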



After the upgrade and reboot of pve2, everything seems to be running fine:


root@pve2:~# drbdadm status r0
r0 role:Primary
disk:UpToDate
peer role:Primary
replication:Established peer-disk:UpToDate


root@pve2:~# cat /proc/drbd
version: 8.4.11 (api:1/proto:86-101)
srcversion: DC0D1C23A61F23BE0ABB8A2
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:3435091 dw:4315897 dr:189952 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
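One thing I am unsure about: /proc/drbd reports module version 8.4.11 even though the repo line says drbd-9. If it is relevant, the loaded kernel module and the userspace tools could be compared with something like:

modinfo drbd | grep -iw version
drbdadm --version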



But when starting a VM I get:



WARNING: Not using device /dev/sdc1 for PV kE5dwK-hNfv-Ydaf-KlbJ-YrVT-DWav-dMtRzy.
WARNING: PV kE5dwK-hNfv-Ydaf-KlbJ-YrVT-DWav-dMtRzy prefers device /dev/drbd0 because device size is correct.
WARNING: Not using device /dev/sdc1 for PV kE5dwK-hNfv-Ydaf-KlbJ-YrVT-DWav-dMtRzy.
WARNING: PV kE5dwK-hNfv-Ydaf-KlbJ-YrVT-DWav-dMtRzy prefers device /dev/drbd0 because device size is correct.



TASK ERROR: can't activate LV '/dev/datavg/vm-101-disk-1': Cannot activate LVs in VG datavg while PVs appear on duplicate devices.
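If it helps with diagnosis: my understanding is that LVM now sees the same PV on both the DRBD backing device (/dev/sdc1) and on /dev/drbd0, and so refuses to activate any LV in datavg. The device LVM actually picks and the currently active filter should be visible with the following (a sketch, not output from my node):

pvs -o +pv_uuid
lvmconfig devices/filter devices/global_filter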



and on an online migration I get:



2026-02-15 13:55:23 starting migration of VM 101 to node 'pve2' (192.168.208.82)
2026-02-15 13:55:23 starting VM 101 on remote node 'pve2'
2026-02-15 13:55:24 [pve2] can't activate LV '/dev/datavg/vm-101-disk-1': Cannot activate LVs in VG datavg while PVs appear on duplicate devices.
2026-02-15 13:55:24 ERROR: online migrate failure - remote command failed with exit code 255
2026-02-15 13:55:24 aborting phase 2 - cleanup resources
2026-02-15 13:55:24 migrate_cancel
2026-02-15 13:55:24 ERROR: migration finished with problems (duration 00:00:01)
TASK ERROR: migration problems



Offline migration works, but then I get the start error above.
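From searching around, I suspect the upgrade installed a new /etc/lvm/lvm.conf and a filter that previously hid the backing device from LVM got lost, so both /dev/sdc1 and /dev/drbd0 are scanned now. Would a global_filter along these lines be the right fix? (Just a sketch; it assumes /dev/sdc1 is the only DRBD backing device on this node.)

# /etc/lvm/lvm.conf, devices section - sketch, assuming /dev/sdc1 is the DRBD backing device
global_filter = [ "r|^/dev/sdc1$|", "a|.*|" ]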



So the situation now is: one PVE 8 node running fine and able to start and stop VMs, and one PVE 9 node that can't run anything.
How can I fix this?