2 gamers 1 CPU setup

papajo

New Member
Nov 18, 2020
So this is the PC:

CPU: i9 9900k (8c 16t)
RAM: 32GB (2*16 GB) 3200 cl16
Mobo: z370 taichi
GPU for Proxmox: iGPU from the CPU
GPU1: Sapphire rx vega 64 nitro+
GPU2: Sapphire rx vega 64 nitro+
Storage for the VM1 OS (Windows 10): 512 GB NVMe SSD*
Storage for the VM2 OS (Windows 10): 512 GB NVMe SSD*
Storage for Proxmox: Intel Optane 16 GB
Shared storage for both VMs: 6TB WD blue HDD

*I already posted a topic about this. Actually it is a 1 TB NVMe partitioned into two 512 GB partitions that I want to pass through; in case that doesn't work I will buy a second 1 TB NVMe.

Target:

I want to have two Windows 10 VMs (VM1, VM2) running simultaneously; it's for my nephews. The usage will be mainly gaming on Steam and other platforms, plus other typical usage (browsing, video calls, streaming, Word, Excel, etc.).

I would like to give 50% of the system resources (or as close to that as possible) to each VM, and especially in gaming I would like each VM to have near bare-metal performance (considering half the CPU will be used).

No other functionality (Docker, NAS, Plex, etc.) will be required.

Question:

What are the minimum requirements in terms of CPU cores and ram to dedicate to proxmox in order to ensure maximum performance and stability for the two VMs running simultaneously while also trying to give them as many of the resources as possible?

Note: I managed to OC the CPU via the BIOS to 5.2 GHz on all cores and it is stable.
 
*I already posted a topic about this. Actually it is a 1 TB NVMe partitioned into two 512 GB partitions that I want to pass through; in case that doesn't work I will buy a second 1 TB NVMe.
Possible, though not via the GUI. On the CLI, you can simply do qm set <vmid> -scsi0 /dev/nvmeXnYpZ, with X and Y replaced to match your disk's device path and Z with the partition number, i.e. 1 for one VM and 2 for the other. If you create the VM via the GUI, just give it some dummy disk in the wizard, e.g. 1 GB on 'local', then remove it later.
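As a rough sketch (the VMIDs 101/102 and the device path /dev/nvme0n1 are just placeholders, adjust them to your system):

# Attach the first partition to VM 101 and the second to VM 102
qm set 101 -scsi0 /dev/nvme0n1p1
qm set 102 -scsi0 /dev/nvme0n1p2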

What are the minimum requirements in terms of CPU cores and ram to dedicate to proxmox in order to ensure maximum performance and stability for the two VMs running simultaneously while also trying to give them as many of the resources as possible?
You have 8 cores, so assigning 3 cores (6 vCPUs/threads) to each one would be a good call. You could probably get away with giving one VM 4/8, but that's up to you if you have a preferred nephew ;) RAM wise you don't need a lot for the host, especially if you do direct disk passthrough (meaning the storage cache will mostly be in the guest). Leave 2-4 GB for the host I'd say, if you're certain nothing else will run on it.
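As a rough CLI sketch of such a split (the VMIDs and the exact RAM figures are placeholders; the same can of course be set in the GUI):

# 6 vCPUs and ~14 GB RAM per VM, leaving the rest for the host
qm set 101 -cores 6 -memory 14336
qm set 102 -cores 6 -memory 14336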

Note: I managed to OC the CPU via the BIOS to 5.2 GHz on all cores and it is stable.
Quick note back: I've seen the case where a CPU was fully stable on an overclock but failed as soon as VT-x/VT-d (virtualization extensions) kicked in, so just be aware of that if you run into stability trouble in the future.

GPU1: Sapphire rx vega 64 nitro+
GPU2: Sapphire rx vega 64 nitro+
Vega cards used to be a bit tricky, especially on VM restarts; not sure how the current support is...

Other than that, make sure to read our wiki. Good luck, and feel free to ask if anything else comes up (after searching the forum to see if anyone else has asked already, please :) )
 
Possible, though not via the GUI. On the CLI, you can simply do qm set <vmid> -scsi0 /dev/nvmeXnYpZ, with X and Y replaced to match your disk's device path and Z with the partition number, i.e. 1 for one VM and 2 for the other. If you create the VM via the GUI, just give it some dummy disk in the wizard, e.g. 1 GB on 'local', then remove it later.
Thanks for the reply ! :)

I see that -scsi flag; is there anything to worry about with that? (Like not having TRIM or other things that won't work but need to work for the NVMe SSD to stay healthy and perform close to its potential?)


Also, if I pass through the NVMe, is it possible to boot from it directly without Proxmox? I ask that in case one nephew stays home while the other is visiting some friends or whatnot, and nephew number 1 wants to leverage the full hardware power for his Windows installation :p

Or do I need to create separate Proxmox VMs for that?

You have 8 cores, so assigning 3 cores (6 vCPUs/threads) to each one would be a good call. You could probably get away with giving one VM 4/8, but that's up to you if you have a preferred nephew ;) RAM wise you don't need a lot for the host, especially if you do direct disk passthrough (meaning the storage cache will mostly be in the guest). Leave 2-4 GB for the host I'd say, if you're certain nothing else will run on it.

Should I use the first 2 threads (so essentially core 0) for Proxmox, or does it not matter?


Quick note back: I've seen the case where a CPU was fully stable on an overclock but failed as soon as VT-x/VT-d (virtualization extensions) kicked in, so just be aware of that if you run into stability trouble in the future.
Well, I am not sure if that is the same thing, but I tested running Windows and VirtualBox VMs on top of the Windows 10 host and everything ran smoothly. Anyway, that doesn't matter as much; if things keep crashing I'll simply lower the OC.
 
I see that -scsi flag; is there anything to worry about with that? (Like not having TRIM or other things that won't work but need to work for the NVMe SSD to stay healthy and perform close to its potential?)
TRIM/Discard is always a tricky question, you can certainly try to enable it (check "Discard" on the GUI after adding the drive), but no guarantee that it will work with partition-based passthrough... Shouldn't be that much of a problem though, modern consumer NVMe drives are pretty good at managing that themselves too in my experience.
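If you attach the partition on the CLI as above, you can also try setting the discard (and SSD emulation) flags directly on the drive; a sketch, again with a placeholder VMID and device path:

# Enable Discard/TRIM and SSD emulation on the passed-through partition
qm set 101 -scsi0 /dev/nvme0n1p1,discard=on,ssd=1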

Also, if I pass through the NVMe, is it possible to boot from it directly without Proxmox? I ask that in case one nephew stays home while the other is visiting some friends or whatnot, and nephew number 1 wants to leverage the full hardware power for his Windows installation
No, that is only possible if you use an entire disk, preferably with PCIe passthrough then (for NVMe). I wouldn't really recommend doing that anyway though; Windows is often dumb and removes or misconfigures certain drivers if you reboot the same installation on "new" hardware, but YMMV.

Should I use the first 2 threads (so essentially core 0) for Proxmox, or does it not matter?
Are you using a script for vCPU pinning? PVE by default doesn't support that, we rely on the host's scheduler to take care of assigning vCPUs to physical ones, so just set "6 cores" for each VM and the rest should be automatic. If you do use a script, then yes, it's usually good practice to assign to the higher cores and leave core0 free.
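If you do end up experimenting with pinning, the usual approach looks roughly like this (purely a sketch; the VMID, the chosen host CPUs and the thread-name matching are assumptions you would need to adapt to your system):

#!/bin/bash
# Pin the vCPU threads of VM 101 to host CPUs 2-7, leaving 0/1 for the host
VMID=101
CPUS=(2 3 4 5 6 7)

PID=$(cat /var/run/qemu-server/${VMID}.pid)
i=0
# QEMU names its vCPU threads "CPU n/KVM"
for TID in $(ps -T -p "$PID" -o tid=,comm= | awk '/CPU.*KVM/ {print $1}'); do
    taskset -cp "${CPUS[$i]}" "$TID"
    i=$((i + 1))
done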
 
@Stefan_R Hi, with the new driverctl utility binding and unbinding is much easier, and it handles identical GPUs. Could somebody update the wiki?
 
@Stefan_R Hi, with the new driverctl utility binding and unbinding is much easier, and it handles identical GPUs. Could somebody update the wiki?
You can always update the wiki yourself, that's what it is there for :) I'm not entirely sure what the specific use case in this scenario would be though, since they intend to pass through both cards anyway? The usual difficulties of identical GPU passthrough only apply if you want one to stay active on the host, and even then it is usually a simple bash script away from working in my experience.
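For reference, driverctl usage would look roughly like this (the PCI addresses are placeholders, check yours with lspci first):

# Persistently bind one card's GPU and HDMI audio function to vfio-pci
driverctl set-override 0000:01:00.0 vfio-pci
driverctl set-override 0000:01:00.1 vfio-pci
# List or revert overrides
driverctl list-overrides
driverctl unset-override 0000:01:00.0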

IOMMU grouping depends on both the CPU and the chipset. In that configuration there is only one GPU; if you have 2 GPUs I believe they will be in the same group.
No, @papajo is right, the separation only depends on the motherboard and its PCIe layout (which is not determined by the chipset, since that only controls the 4 (or more, for TR etc.) lanes assigned to it). The CPU's only role is to provide the IOMMU (marketed as "VT-d" by Intel), which is necessary by itself, but does not play a role in assigning the groups.
 
No, @papajo is right, the separation only depends on the motherboard and its PCIe layout (which is not determined by the chipset, since that only controls the 4 (or more, for TR etc.) lanes assigned to it). The CPU's only role is to provide the IOMMU (marketed as "VT-d" by Intel), which is necessary by itself, but does not play a role in assigning the groups.
Intel consumer CPUs don't support ACS; both GPUs will end up in the same IOMMU group.

https://forum.proxmox.com/threads/is-acs_override-needed-for-my-pcie-passthrough-setup.35060/
 
Intel consumer CPUs don't support ACS; both GPUs will end up in the same IOMMU group.
As far as I'm aware they do, otherwise everything would show up in one group? Sadly, ark.intel.com doesn't show that...

Could honestly be wrong here, but it doesn't matter that much in this situation; worst case, you can always override the ACS grouping by adding 'pcie_acs_override=downstream' to the kernel command line (since I doubt anything necessitating high security is going to happen on the VMs in this thread). Our pve-kernel comes with the necessary patch pre-applied.

I do know that I've had a similar system (different GPUs though, not the same model) running on an 8700k previously, so it's certainly possible if you fiddle with the config a bit.
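To see how the groups actually turn out on this board, and to apply the override if needed, something along these lines should do (a sketch; the existing kernel parameters in /etc/default/grub may differ on your install):

# List all devices per IOMMU group, to see whether the two GPUs share one
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done

# If they do share a group, add the override to the kernel command line, e.g.
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"
# in /etc/default/grub, then apply it with:
update-grub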
 
As far as I'm aware they do, otherwise everything would show up in one group? Sadly, ark.intel.com doesn't show that...
Sadly it's not the case. Even the Xeon E3 family's PCIe root complex doesn't support ACS. Only the HEDT platforms, Xeon E5 and E7, are supported.

However, starting from Gen7, ACS is supported in the chipset's PCIe root complex, so network cards and NVMe drives attached there can be passed through safely.

This is a main reason why I switched to AMD; they have full ACS and ARI (for SR-IOV) support.

but it doesn't matter that much in this situation

Yes, in most cases it will work with the ACS override.
 
