I've searched the forum and Google but can't find an answer to my odd use case.
I use an HP Z800 for gaming (currently dual X5570s fitted, with a pair of X5670s to go in). It's not going to be the fastest machine around, but I can live with the CPU limitations as I don't really play the latest and greatest games. It'll take quite a hefty graphics card before these Xeons become a severe bottleneck: I currently use a GTX 1050 and it's constantly at 95%+ utilization, and that's a fairly reasonable mid-range card. The CPUs sit at 15-20% with no single core anywhere near 100%.
I plan on starting a game streaming channel and have used Proxmox to virtualize separate gaming and streaming machines, since a pair of decade-old 6-core/12-thread CPUs gives me plenty of spare CPU resources but no good way to use them all from a single install.
I have found most games are fine with Hyper-Threading enabled and all 16 logical cores assigned to the VM, except one. The Unity engine (or rather this specific version of it) has an issue with Hyper-Threading that kills performance. If I disable HT I get a slight drop in performance in some games, and it also hurts other workloads (compiling code, for example).
My question is: does KVM present CPUs in the same physical layout as a bare-metal OS would see them? For example, if cores 0, 2, 4 and 8 are the 'real' cores, does that translate to the same in a guest?
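For reference, this is how I understand the host side can be inspected; what I can't tell is whether the guest's vCPU numbering maps onto it. On any Linux host the HT sibling layout is readable straight from sysfs (standard kernel paths, nothing Proxmox-specific):

```shell
# For each logical CPU, print its physical core ID and the logical
# CPUs that share that core (i.e. its Hyper-Threading siblings).
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    printf '%s: core %s, siblings %s\n' \
        "${cpu##*/}" \
        "$(cat "$cpu/topology/core_id")" \
        "$(cat "$cpu/topology/thread_siblings_list")"
done
```

Two logical CPUs listed with the same core ID (and in each other's siblings list) are HT pairs on the host; the question is whether the guest's `cpu0`/`cpu1` etc. line up with that.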
I have also found that recording gameplay doesn't work properly. I'm using a CPU encoder, and sometimes it's silky smooth, then part way through it just drops frames and develops massive artifacts with no increase in CPU usage, then goes back to normal. My assumption is that the encoder process is being scheduled onto a thread that shares a physical core with the game engine or another intensive process. Normally I'd just set CPU affinity in Windows, but I'm unsure whether the virtual CPUs align with the physical topology, or are even on the same NUMA node.
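For context on what I think the host-side options are: my understanding (not tested yet, and VM ID 100 below is just an example) is that Proxmox can constrain a VM's vCPU threads via the VM config, which would at least keep the guest on one socket/NUMA node even if guest-side affinity turns out not to map cleanly:

```
# /etc/pve/qemu-server/100.conf (excerpt) - VM ID 100 is hypothetical
cores: 8
cpu: host
numa: 1
# Newer Proxmox releases accept an affinity line restricting the VM's
# vCPU threads to a set of host CPUs (here, the first eight):
affinity: 0-7
```

If that's right, pinning the VM to one socket and then using Windows affinity inside the guest might be a workable combination, but I'd like to confirm how the pieces line up first.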
The technical bits behind virtualization are a bit beyond my outdated skill set, and finding relevant information for such old CPUs isn't going well.
I did read about an issue with older HT implementations where cache latency suffers if a process is moved from one logical ('fake') core to another within the same CPU.
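On the cache side, the sharing layout is at least visible on the host; each logical CPU's sysfs entry lists which CPUs share each cache level (again standard Linux paths, so this is only a host-side view):

```shell
# Show which logical CPUs share each of cpu0's cache levels.
for idx in /sys/devices/system/cpu/cpu0/cache/index*; do
    printf 'L%s %s: shared with CPUs %s\n' \
        "$(cat "$idx/level")" \
        "$(cat "$idx/type")" \
        "$(cat "$idx/shared_cpu_list")"
done
```

What I don't know is whether the guest sees an equivalent picture, or a flattened one.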
I'd just boot straight from the Windows SSD to work it out without KVM being in the way; however, the machine has a legacy BIOS and the VM uses UEFI, so I can't simply boot on bare metal and see if the issues go away.
Thanks for any advice you may have