Feature Request: Migrate VM even with PCI Device entries

dooferorg

Member
Apr 12, 2024
I can understand why there might be a reason not to allow migration of a VM when the target host might not support the same PCI device pass-through entries, but there should at least be an option you can click to say you understand the implications, or a way to indicate that the target node is in the same logical group and that its capabilities are exactly the same.

I have 5 identical nodes in the cluster, all with GPUs installed in the same slot, so they have exactly the same PCI Express entries and mediated devices (mdevs).

It's annoying to have to remove the PCI device entry, migrate the VM, only to then re-add the exact same entry on the new host.
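
The manual dance looks roughly like this on the CLI (the VMID 100, the hostpci0 slot and the mdev type are just made-up examples):

# on the source node: drop the pass-through entry so migration is allowed
qm set 100 --delete hostpci0
# offline-migrate the VM to another node
qm migrate 100 pve-node2
# on the target node: re-add the exact same entry
qm set 100 --hostpci0 01:00.0,mdev=nvidia-63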

Surely I can't be the only one with a relatively homogeneous cluster who wants to do this?
 
Hi,

Are you talking about offline or live migration? Live migration with PCIe devices is a whole different topic and is only possible under special circumstances.

Offline migration should be possible with proper Resource mappings.
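
In the GUI you find them under Datacenter -> Resource Mappings. On the CLI, creating a cluster-wide PCI mapping should look roughly like the sketch below; the mapping name, node names, PCI paths and vendor/device IDs are only placeholders, and it's best to double-check the exact parameter names with 'pvesh usage /cluster/mapping/pci':

# create a PCI mapping with one map entry per node that carries the identical card
pvesh create /cluster/mapping/pci --id my-gpu \
    --map node=node1,path=0000:01:00.0,id=10de:2230 \
    --map node=node2,path=0000:01:00.0,id=10de:2230 \
    --mdev 1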
 
True, I didn't specify. I'm talking about offline migration.

When trying to migrate a Windows VM with a vGPU specified, I get an error message at the bottom of the migration dialog: 'Can't migrate VM with local resources hostpci0'.

Thanks for pointing out the 'Resource mappings' feature. That certainly was not obvious. Maybe some simple way to 'promote' a local config entry to a cluster-wide resource mapping would be helpful?
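
What I picture by 'promote' is basically turning the first form below into the second in /etc/pve/qemu-server/<vmid>.conf, going by my reading of the docs that hostpciN accepts a mapping= instead of a raw address (the mapping name and mdev type here are placeholders):

# node-local entry, tied to one host's PCI address:
hostpci0: 0000:01:00.0,mdev=nvidia-63
# the same device expressed through a cluster-wide resource mapping:
hostpci0: mapping=my-gpu,mdev=nvidia-63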

When trying to set up that mapping, it seems to let me make one against just a single node in the cluster, but I can't add other nodes to that mapping; at least via the GUI it is not intuitive how to actually get that to work. I also noticed that the mediated devices (i.e. the same list as 'mdevctl types') did not show up when I ticked 'Use with mediated devices'.
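
My guess is that the CLI equivalent of adding more nodes would be to re-submit the full map list on the existing mapping, something along these lines, but I haven't verified it and all names, paths and IDs below are placeholders:

# update the existing mapping with one map entry per node
pvesh set /cluster/mapping/pci/my-gpu \
    --map node=node1,path=0000:01:00.0,id=10de:2230 \
    --map node=node2,path=0000:01:00.0,id=10de:2230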

If there's a way to set it up, cool, but can you explain how to do that?

[screenshot attached]