Often, not all memory is local to each socket, and you might also want to give each VM two sockets and enable NUMA for each VM.
That is a good point.
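For anyone else reading along: before splitting a VM across two virtual sockets, it's worth checking how the host's memory is actually split across NUMA nodes. Here's a minimal sketch that only reads the standard Linux sysfs entries (nothing Proxmox-specific assumed):

```python
#!/usr/bin/env python3
# Sketch: print each NUMA node's CPU list and total memory from sysfs.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    name = os.path.basename(node)
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    # First line of meminfo looks like: "Node 0 MemTotal:  131072000 kB"
    with open(os.path.join(node, "meminfo")) as f:
        mem = " ".join(f.readline().split()[-2:])
    print(f"{name}: cpus={cpus}, MemTotal={mem}")
```

If each node shows roughly half the RAM, mirroring that layout in the guest (two sockets plus the NUMA flag in the VM config) lets the guest's scheduler make the same locality decisions the host does.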
Hyperthreads don't improve performance by 2x (more likely by 1.3x).
Thank you.
My experience in HPC/CAE/CFD/FEA workloads: HyperThreading (or SMT) is, AT BEST, only about a 3-7% performance improvement, and it can otherwise result in a performance degradation due to oversubscription of the FPU on the CPU cores.
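(If anyone wants to test that on their own hardware, below is a rough, hypothetical micro-benchmark along the lines of what I mean: it runs an FPU-heavy task on half the logical CPUs versus all of them and compares throughput. It assumes 2-way SMT, i.e. physical cores = logical CPUs / 2, and it leaves worker placement to the scheduler rather than pinning anything.)

```python
#!/usr/bin/env python3
# Rough sketch: compare FPU-heavy throughput using physical cores only
# vs. all hardware threads. Assumes 2-way SMT (logical = 2 * physical).
import math
import os
import time
from multiprocessing import Pool

def fpu_work(_):
    # Floating-point heavy loop to keep the core's FPU busy.
    acc = 0.0
    for i in range(2_000_000):
        acc += math.sin(i) * math.cos(i)
    return acc

def run(workers, tasks):
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(fpu_work, range(tasks))
    return tasks / (time.perf_counter() - start)  # tasks per second

if __name__ == "__main__":
    logical = os.cpu_count()
    physical = logical // 2   # assumption: 2-way SMT
    tasks = logical * 4       # enough work to keep every worker busy
    t_phys = run(physical, tasks)
    t_smt = run(logical, tasks)
    print(f"physical cores only:  {t_phys:.2f} tasks/s")
    print(f"all hardware threads: {t_smt:.2f} tasks/s")
    print(f"SMT speedup: {t_smt / t_phys:.2f}x")
```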
Try and measure for your workloads yourself.
My workload is predominantly just web browsing/watching YouTube videos, so it's generally not very taxing at all.
(The server consolidation project was to migrate from 5 NAS servers down to a single system, and then virtualise any other outstanding systems/towers where and whenever I can, driven predominantly by my desire to cut my overall power consumption down from 1200-ish W to ~600 W.)
The cost of electricity here isn't very high, but if I can save a buck, why not?
Trust the Linux process scheduler and don't handicap it.
I ask this only because sometimes, when I am rebooting my Linux VMs, the "screen" that shows the progress of the system services shutting down will also show that there were times when threads were stuck and/or waiting for the CPU.
That suggests to me that there are some hardware contention issues happening, despite the generally low system load/CPU usage overall.
Not sure if that is storage related though, as sometimes, on the Proxmox dashboard, my IO delay can be as high as 45% (not very often, but it sometimes spikes up that high). My storage consists of three 8-wide raidz2 vdevs: one vdev consists of eight 10 TB 7200 rpm SAS 12 Gbps drives, another consists of eight 10 TB 7200 rpm SATA 6 Gbps drives, and the third vdev is eight 6 TB SATA 6 Gbps drives.
(I am using existing hardware vs. buying new stuff when the old stuff works perfectly fine.)
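To figure out whether those stalled-thread messages and the IO delay spikes actually line up, one thing I've been meaning to try is sampling the kernel's pressure stall information (PSI). A minimal sketch, assuming a kernel new enough to expose /proc/pressure (recent Proxmox kernels do):

```python
#!/usr/bin/env python3
# Sketch: sample Linux PSI (pressure stall information) for CPU and IO.
# Lines in /proc/pressure/{cpu,io} look like:
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
# Ctrl-C to stop.
import time

def read_pressure(resource):
    with open(f"/proc/pressure/{resource}") as f:
        for line in f:
            kind, *fields = line.split()
            stats = dict(field.split("=") for field in fields)
            yield kind, float(stats["avg10"])

while True:
    for res in ("cpu", "io"):
        for kind, avg10 in read_pressure(res):
            if avg10 > 0.0:
                print(f"{res}/{kind}: {avg10:.1f}% of the last 10s stalled")
    time.sleep(10)
```

If the io numbers climb while cpu stays near zero, that would point at the spinning-rust vdevs rather than scheduler contention.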
Also, by not pushing everything to 100%, you make the system as a whole more responsive and you experience less latency (though you won't achieve maximum throughput).
Yeah, I am having mixed results with this.
Audio, when playing YouTube videos from a Windows 11 VM (using the SPICE audio driver, with the Windows virtio drivers 0.1.229 installed), sometimes stutters or gets a little garbled when the host reports higher IO delay and/or higher load/CPU usage.
Not the worst thing in the world, but it is certainly annoying when you're watching a YouTube video and have to rewind every so often as a result of said audio issues.
On this forum, you'll find lots of people running Proxmox with only a few VMs obsessing over performance, GPU passthrough and pinning cores, etc.
I do have GPU passthrough (because I have virtualised my gaming system now).
And I am also using the virtio-fs capabilities of Proxmox because, rather than building out and expanding my network, I am actually contracting/shrinking my homelab (again, to cut power consumption). I don't need 10 GbE NICs/switches/cables if I can just use virtio-fs.
(Which is an AWESOME feature for Proxmox to support, BTW, as neither TrueNAS nor xcp-ng/XOA supports virtio-fs.)
That being said, I seem to be running into some hardware contention issues (with the host scheduler), so I've set the CPU affinities but kept HyperThreading enabled to see if that might help with some of that.
(Keeping 2 cores/4 threads "clear" for the host.)
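(For anyone trying the same thing: to keep whole cores clear rather than stranding one hyperthread, you need to know which logical CPUs are siblings of each other. A quick sketch that reads the sibling pairs from sysfs; the IDs you want to reserve then get excluded from each VM's affinity setting:

```python
#!/usr/bin/env python3
# Sketch: list which logical CPUs are SMT siblings, so whole cores
# (both hardware threads) can be reserved for the host.
import glob

pairs = set()
for path in glob.glob(
    "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
):
    with open(path) as f:
        # e.g. "0,16" or "0-1", depending on how the threads are enumerated
        pairs.add(f.read().strip())

for siblings in sorted(pairs):
    print(siblings)
```

On many systems the siblings enumerate as cpuN / cpuN+number-of-cores, so "keeping 2 cores/4 threads clear" means excluding all four of those logical CPU IDs from every VM.)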
Thank you.