Hey guys!
I am currently having trouble distributing multiple VirGL VMs across two GPUs, mostly because I couldn't find any documentation for the tool.
From my tests, VirGL looks for a render node in /dev/dri/ and uses it for the VM. After installing two cards (both visible in lspci and nvidia-smi), I've run into an issue: no matter how many VMs I launch (currently 7), all of them end up allocated to the second card while the first one sits at 0% load.
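In case it helps others hitting the same thing: QEMU's display options accept a rendernode parameter, which should let each VM be pinned to a specific DRM render node instead of whatever default virglrenderer picks. This is just a sketch of what I've been trying; the device paths and the exact device/display flags are assumptions for my setup, so check what your QEMU version supports:

```shell
# List the available DRM render nodes; with two GPUs there should be
# two entries, e.g. renderD128 and renderD129 (ordering is driver-dependent).
ls -l /dev/dri/renderD*

# Sketch: pin one VM's virgl renderer to a specific node via QEMU's
# rendernode option (paths and flags below are assumptions for my setup).
qemu-system-x86_64 \
  -enable-kvm -m 4G \
  -device virtio-vga-gl \
  -display egl-headless,rendernode=/dev/dri/renderD128

# A second VM could then be pointed at the other card:
#   -display egl-headless,rendernode=/dev/dri/renderD129
# SPICE has a similar knob: -spice gl=on,rendernode=/dev/dri/renderD129
```

If you're going through libvirt instead of raw QEMU, the equivalent appears to be the rendernode attribute on the <gl> element of the graphics device, though I haven't verified that on this setup.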
I'd like to know where I can get a better understanding of VirGL internals, and I'd appreciate any advice or pointers on what to look into.
Cheers!