Hello all,
I am curious about the behavior I am seeing while using
amd_pstate=active
in my grub configuration. I have EPP set to "balance_performance" and the scaling governor set to "powersave". So far this seems to be the best method for reducing power without totally killing performance. My system idles with most cores sitting at 400MHz, which is what I want, and it saves about 50 watts at idle and a lot more than that under load.

The clocks I am seeing don't really make sense to me, though. I am using an Epyc 7F72, which has a base clock of 3.2GHz and should be able to boost single cores up to 3.7GHz. I have a few VMs running that would benefit from hitting both of those clocks when required. With the above config, it doesn't look like any single core is boosting above 2.7GHz. I tested this by running a few games inside of an EndeavourOS VM and Cinebench in a Windows VM. CPU frequencies were monitored with
watch -n1 "grep Hz /proc/cpuinfo"
in the Proxmox shell. Multi-core performance doesn't suffer too much from this (5-7%), but single-core performance does. I didn't run Cinebench in single-core mode, but games lost about 5-10 FPS. When I'm already struggling to hit 4K60 in a game on a server CPU, that FPS makes a difference.

Now, if I change EPP to "performance", cores will regularly hit 3.5GHz, but power usage at idle and under load increases greatly. I see similar results when changing the scaling governor to "performance" and leaving EPP at "balance_performance". This seems like expected behavior.
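For anyone wanting to reproduce or sanity-check this setup, here is roughly how I check and set these values through the standard cpufreq sysfs interface (cpu0 shown; the energy_performance_preference file only exists when amd_pstate is running in active mode):

# confirm the driver, governor, and EPP currently in effect
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference

# list the EPP values the driver accepts, then apply one to every core
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences
echo balance_performance | tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference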
Is it normal for my original configuration to not even hit base clocks while using "balance_performance" and "powersave"? Is there a combination that will help me get a bit better performance under load, while keeping the idle wattage down?
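One experiment I haven't properly tested yet (so take it as a sketch, not a recommendation): keep "powersave" + "balance_performance" for the idle savings, but raise the frequency floor so loaded cores can't get stuck near 400MHz. The value below is just an example:

# check the allowed range first (values are in kHz)
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

# raise the minimum frequency on every core; 1500000 kHz = 1.5GHz is an example value
echo 1500000 | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq

This obviously trades away some of the idle savings, since the floor applies at idle too, so it may not be the right answer either.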
One last thing to mention is that with either EPP or the scaling governor set to "performance", my CPU voltage sits at 1.35v, which absolutely spams the IPMI event log with high voltage events. I've experienced this on two different ROME8D-2T motherboards.
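For reference, the events and the sensor thresholds can be inspected with ipmitool. The sensor name and threshold value below are placeholders, since these vary by board; check the sensor list for what yours actually reports:

# recent SEL entries and the current voltage sensor readings/thresholds
ipmitool sel list | tail
ipmitool sensor | grep -i -e vcore -e vcc

# raising the upper non-critical threshold is one way to quiet the log
# ("CPU_VCORE" and 1.40 are placeholders; use your board's sensor name and a sane value)
ipmitool sensor thresh "CPU_VCORE" unc 1.40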
My Hardware:
Proxmox 8.1.3
Motherboard - ASRock Rack ROME8D-2T
CPU - Epyc 7F72
RAM - 256GB 3200MHz ECC
GPU - RTX 4070 Ti Super
OS Drive - Samsung 980 Pro 500GB
VM OS Drives - ZFS Mirror 2x960GB Samsung P9A3
VM Storage - ZFS Striped Mirror 4x1TB WD SN850
TrueNAS Drives - 6x6TB WD Ironwolf
PCIe NIC - 82599ES 10GbE - passed through to TrueNAS