5800X Horrible Performance

NikoC

New Member
Feb 22, 2021
Hey, so I have a 5800X and I set up two KVM VMs for two clients to host their game servers.

I followed all the guides here, installed all the drivers, and the guest OS is Windows Server 2019 with Desktop Experience.

Their current KVM settings are:

[Screenshot: wave7156 - Proxmox Virtual Environment (2021-02-22)]

Cache is set to writeback, ballooning is enabled, the QEMU guest agent is enabled, etc.

I want each of my two clients to have their own 4 cores / 8 threads. That's why I set the CPU limit to 8.

But when my clients use these VMs, they get the same performance as someone on a dedicated 6700K, which is not normal.

Both single-core and multi-core performance are bad overall.

Could anyone please give me a few ideas?
 
These CPUs are quite new and I don't know how well they are supported by the current kernel (5.4). You could try to disable any power saving features in the BIOS and see if that helps.
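To verify the power-saving side on the host, the standard Linux cpufreq sysfs interface shows the active governor and the current clocks (a quick sketch):

Code:
# show the active frequency scaling governor per core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# show the current clock of each core
grep MHz /proc/cpuinfo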
 
It could be that the CPU Limit includes the additional I/O threads of the VM. Setting the CPU Limit to the same value as the Cores might cause waits or stalls.
Also, although the CPU has 16 threads, if both VMs use all their virtual CPUs, there is no room for overhead, I/O, and host threads, which can cause waits or stalls.
If you want to reduce latency and increase responsiveness, try using 4 Cores for each VM (without CPU Limit). Then the 8 virtual Cores are backed by the 8 actual cores on the 5800X and you have the "hyper-threads" (which have less performance) for I/O and host work.
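For example, applying that on the host could look like this (a rough sketch; the VM IDs 101 and 102 are placeholders for your actual VMs):

Code:
# 4 cores per VM; cpulimit 0 means no limit
qm set 101 --cores 4 --cpulimit 0
qm set 102 --cores 4 --cpulimit 0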
 
These CPUs are quite new and I don't know how well they are supported by the current kernel (5.4). You could try to disable any power saving features in the BIOS and see if that helps.
Have done that.
 
It could be that the CPU Limit includes the additional I/O threads of the VM. Setting the CPU Limit to the same value as the Cores might cause waits or stalls.
Also, although the CPU has 16 threads, if both VMs use all their virtual CPUs, there is no room for overhead, I/O, and host threads, which can cause waits or stalls.
If you want to reduce latency and increase responsiveness, try using 4 Cores for each VM (without CPU Limit). Then the 8 virtual Cores are backed by the 8 actual cores on the 5800X and you have the "hyper-threads" (which have less performance) for I/O and host work.
Both VMs are only using about 20% of their CPU.

They never reach 100%.
 
Both VMs are only using about 20% of their CPU.

They never reach 100%.
But each VM requires 8 threads to be available at the same time to be scheduled. This will probably cause them to never be scheduled at the same time, halving performance. They will be much more responsive when you reduce the number of virtual cores.
IMO: Expecting two VMs, each with half of the host resources, to run at (near) native performance is just unrealistic, regardless of CPU brand or type. Maybe I'm wrong and someone else can explain how to improve your performance (which would be the best outcome for both of us).
 
The kernel should be able to support that model just fine, besides some missing updates to the k10temp driver (used for CPU temperature reporting).

For starters: You have 8 real cores; the hyper-threaded duplication of the cores will never give you the performance gain an extra real CPU core would. Still, two VMs with 8 virtual cores each, doing CPU stressing on all of them, should give you close to 100% (or 1600%, if you calculate with 100% == one CPU, like some tools do) CPU load on the host.

Some things that would be good to know:
* what actual test/benchmark/workload you are evaluating
* did you try to evaluate that on the Proxmox VE host directly, if possible, to see if the virtualization layer has such an impact or if there's another issue at play?
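For example, a quick CPU comparison on the host could look like this (a sketch; assumes sysbench is installed, e.g. via apt install sysbench):

Code:
# single-thread performance
sysbench cpu --threads=1 run
# all 16 threads
sysbench cpu --threads=16 run

Running the same tool inside a guest would then show how much the virtualization layer costs.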
 
But each VM requires 8 threads to be available at the same time to be scheduled. This will probably cause them to never be scheduled at the same time, halving performance. They will be much more responsive when you reduce the number of virtual cores.
IMO: Expecting two VMs, each with half of the host resources, to run at (near) native performance is just unrealistic, regardless of CPU brand or type. Maybe I'm wrong and someone else can explain how to improve your performance (which would be the best outcome for both of us).
I will reduce both VMs to 7 cores each, meaning 2 cores will be left over.

Anyway, I was expecting that a 5800X VM would at least be better than a 6700K, because it should be.

[Screenshot: wave7156 - Proxmox Virtual Environment (2021-02-23)]
 
The kernel should be able to support that model just fine, besides some missing updates to the k10temp driver (used for CPU temperature reporting).

For starters: You have 8 real cores; the hyper-threaded duplication of the cores will never give you the performance gain an extra real CPU core would. Still, two VMs with 8 virtual cores each, doing CPU stressing on all of them, should give you close to 100% (or 1600%, if you calculate with 100% == one CPU, like some tools do) CPU load on the host.

Some things that would be good to know:
* what actual test/benchmark/workload you are evaluating
* did you try to evaluate that on the Proxmox VE host directly, if possible, to see if the virtualization layer has such an impact or if there's another issue at play?
Understood. So it would be better to assign real cores only?

Some things that would be good to know:
* what actual test/benchmark/workload you are evaluating
* did you try to evaluate that on the Proxmox VE host directly, if possible, to see if the virtualization layer has such an impact or if there's another issue at play?
I ran single-thread/single-core benchmarks, and the game servers are also running badly on it. They get the same performance as someone on a dedicated 6700K.

I did not try anything else. I'm new to Proxmox :(
 
Understood. So it would be better to assign real cores only?

I do not think that over-assignment causes actual problems in most cases, and especially not in yours, where the total count assigned to VMs is actually quite reasonable (some people assign far more virtual cores to VMs than they have available, which can be fine if the VMs do not need all the CPU time).

I ran single-thread/single-core benchmarks, and the game servers are also running badly on it. They get the same performance as someone on a dedicated 6700K.

I mean, the Intel 6700K has a 4.0 GHz base clock (4.2 GHz turbo) and the Ryzen 5800X has a 3.8 GHz base clock (4.7 GHz turbo), and turbo is not really deterministic and may only get used for short periods of time (depending on lots of factors, but cooling and the BIOS and its settings may have some say here). So the Intel runs at an actually higher base clock.

IMO, the Ryzen will only really start to show off once you hit it with multiple VMs and parallel loads.
So, if those game server applications are not really multithreaded and thus mostly run on a single core, that could explain this lack of improvement to a certain degree. You can check the Windows Task Manager to see how many cores are normally used at higher load. I'd then reduce the core count per VM to that.

The Ryzen can still be a very good choice if you want to run many of such game server VMs, as the Intel 6700K will start to choke at about half that number.
Also, keeping in mind that virtualization has some non-negligible cost, it's not too bad if those VMs run as well as a high-quality bare metal host.

Also, CPU is not everything. RAM technology and clock (plus latencies) and storage (spinner vs. SSD vs. NVMe) can also have an impact on overall performance.
 
I have a 5800X too.
I just made another thread about some bugs; it's waiting to be approved, since it's my first post.

But what I can tell you is that the performance of the 5800X itself is blazing fast.

At the beginning I ran into the same issue: Windows was extremely slow.
But for me it was the HDDs.
I had a RAIDZ1 without cache/log devices, consisting of 3x 6 TB HDDs.
That was extremely horrible; Windows was almost unusable.

Now I have a ZFS mirror consisting of two 1 TB SSDs, where Windows and some other VMs are stored.
And my HDD RAIDZ1 now has an SSD for cache & log; I only store backups and a Samba share on it.

In short: since Windows moved to the SSD ZFS mirror, it got blazing fast. The difference is just insane, from almost unusable to super fast.
Just install Windows with SCSI or VirtIO Block plus the VirtIO drivers, if you haven't already, and set cache=writeback and discard (see the sketch below).
And you should be fine.
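On the host that could look roughly like this (a sketch; the VM ID 101 and the storage/disk names are placeholders for your setup):

Code:
# paravirtualized SCSI controller, writeback cache and discard on the disk
qm set 101 --scsihw virtio-scsi-pci
qm set 101 --scsi0 local-zfs:vm-101-disk-0,cache=writeback,discard=on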
For a bit more speed, you should change the GPU/display to SPICE and give it 128 MB of RAM. Install the newer qxl-dod driver that is available in the SPICE repo.
The new one isn't in the VirtIO ISO, but you can get it from here: https://www.spice-space.org/download/windows/qxl-wddm-dod/qxl-wddm-dod-0.21/
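Via the CLI that might look like this (again a sketch; the VM ID is a placeholder):

Code:
# SPICE-capable qxl display with 128 MB of video memory
qm set 101 --vga qxl,memory=128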

And connect through Remote Desktop; that's how I use it. But you can use the SPICE client too.

Cheers
 
Like t.lamprecht said, it could also be slow RAM. In the last few weeks I've seen some people complaining about horrible RAM speeds inside Windows guests. So you might benchmark the RAM inside the guest too.
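For a rough first check inside the Windows guest, the built-in assessment tool can report memory throughput (a sketch, not a precise benchmark):

Code:
winsat mem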

By the way, what game servers are you running? Minecraft, for example, is super slow because it can only use one core.
 
Like t.lamprecht said, it could also be slow RAM. In the last few weeks I've seen some people complaining about horrible RAM speeds inside Windows guests. So you might benchmark the RAM inside the guest too.

By the way, what game servers are you running? Minecraft, for example, is super slow because it can only use one core.
DayZ only uses like 2 cores maximum.
 
I do not think that over-assignment causes actual problems in most cases, and especially not in yours, where the total count assigned to VMs is actually quite reasonable (some people assign far more virtual cores to VMs than they have available, which can be fine if the VMs do not need all the CPU time).



I mean, the Intel 6700K has a 4.0 GHz base clock (4.2 GHz turbo) and the Ryzen 5800X has a 3.8 GHz base clock (4.7 GHz turbo), and turbo is not really deterministic and may only get used for short periods of time (depending on lots of factors, but cooling and the BIOS and its settings may have some say here). So the Intel runs at an actually higher base clock.

IMO, the Ryzen will only really start to show off once you hit it with multiple VMs and parallel loads.
So, if those game server applications are not really multithreaded and thus mostly run on a single core, that could explain this lack of improvement to a certain degree. You can check the Windows Task Manager to see how many cores are normally used at higher load. I'd then reduce the core count per VM to that.

The Ryzen can still be a very good choice if you want to run many of such game server VMs, as the Intel 6700K will start to choke at about half that number.
Also, keeping in mind that virtualization has some non-negligible cost, it's not too bad if those VMs run as well as a high-quality bare metal host.

Also, CPU is not everything. RAM technology and clock (plus latencies) and storage (spinner vs. SSD vs. NVMe) can also have an impact on overall performance.
I saw that a full DayZ server only uses 2 cores maximum, so I set each guest VM to use only 4 cores.

[Screenshot: wave7156 - Proxmox Virtual Environment (2021-02-23)]

The RAM, NVMe, and motherboard are high quality. This was a gaming PC.
 
The RAM, NVMe, and motherboard are high quality. This was a gaming PC.
Gaming hardware isn't necessarily good at virtualization. It may lack a lot of enterprise features that virtualization can benefit from.
And even though your physical RAM and NVMe SSD are fast, they can be super slow inside the guest if something isn't configured the right way.
You really should check how these perform inside the guests.

VirtIO is one of these things. If you didn't install your Windows using the VirtIO drivers, your NVMe SSD will be super slow, because it is then fully emulated and not paravirtualized like with VirtIO. If you didn't need to download and insert a driver CD while installing Windows, you are not using the fast VirtIO. So it's still possible that any slow HDD is faster than your NVMe SSD.
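To verify, a quick random-read test inside the guest could look something like this (a sketch; assumes fio is installed, and the file name and size are placeholders):

Code:
fio --name=randread --filename=test.dat --size=1G --rw=randread --bs=4k --iodepth=32 --direct=1 --runtime=30 --time_based

Comparing that against the same run on the host shows how much the storage stack loses in the VM.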
 
Gaming hardware isn't necessarily good at virtualization. It may lack a lot of enterprise features that virtualization can benefit from.
And even though your physical RAM and NVMe SSD are fast, they can be super slow inside the guest if something isn't configured the right way.
You really should check how these perform inside the guests.

VirtIO is one of these things. If you didn't install your Windows using the VirtIO drivers, your NVMe SSD will be super slow, because it is then fully emulated and not paravirtualized like with VirtIO. If you didn't need to download and insert a driver CD while installing Windows, you are not using the fast VirtIO. So it's still possible that any slow HDD is faster than your NVMe SSD.
I have all the drivers :)
 
I made a small test:

Code:
Ryzen 5 5600X (native W10)
- Dual channel 3200 MHz, 64 GB
- 6 cores / 12 threads
- Samsung 980 Pro
Versus:
Code:
Ryzen 7 5800X (Proxmox KVM)
- Dual channel 2666 MHz ECC, 8 GB (host has 64 GB)
- 4 cores / 4 threads
- 162 GB on a ZFS pool (super cheap mirrored no-name SSDs)

RAM comparison:
[Screenshots: ram-5600x-nativeW10.png, ram-5800x-proxmoxW10.png]

SSD comparison:
[Screenshots: nvme-5600x-nativeW10.png, cheapSSD_zfs-5800x-proxmoxW10.png]

CPU comparison:
[Screenshots: cpu-5600x-nativeW10.png, cpu-5800x-4c-proxmoxW10.png]

All I can say is that I'm impressed by what KVM gets out of that crap xD
If you do it all right, the virtualized Windows is blazing fast.
And please don't forget: it's 4 cores on the VM vs. 6c/12t on the native W10.

Hope this helps someone :-)
Cheers
 
After setting 4 Proxmox cores for both (2) customers, the performance became amazing for both (5600X system).

Now I have a 5950X system with 4 customers; all of them have this setting:

[Screenshot: proxmox3 - Proxmox Virtual Environment (2021-03-02)]

Their game server performance is trashy. Any tips on what I should do here? Basically, I assigned all of them 4 cores, just as I did on the 5600X.
 

