I never tried passive/guided tbh, so I don't know what the output should look like.
I can just say that with active (EPP) I do see all frequencies, except the "current CPU frequency"; I get the "unable to call hardware" message too.
But for passive I have no clue.
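For reference, this is roughly how I check what the EPP driver reports on my side (plain sysfs, nothing Proxmox-specific; cpu0 is just an example core):
cat /sys/devices/system/cpu/amd_pstate/status
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq   # roughly where the "current CPU frequency" readout comes from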
Did you try to set the scheduler to...
About the new BIOS:
Don't use Resizable BAR or Above 4G Decoding with Proxmox.
No matter if you pass the GPU through or use it for encoding, you will usually run into the reset bug.
And you don't gain anything from it on Proxmox anyway.
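If you want to double-check whether the toggle actually took effect on the card, something like this should show it (01:00.0 is a placeholder for your GPU's PCI address):
lspci -vvs 01:00.0 | grep -i -A4 "resizable bar"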
Cheers
amd_pstate: failed to register with return -19
With amd_pstate=active? But passive/guided (which is the same thing) works?
That's a completely new issue I have never seen before.
This patch is not part of our PVE 6.8.4 Kernel:
https://bugzilla.kernel.org/show_bug.cgi?id=218171
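Return -19 is -ENODEV, so as a first sanity check I would look at how the driver was requested and what the firmware actually exposes (just a sketch, nothing special assumed):
grep -o "amd_pstate=[a-z]*" /proc/cmdline
dmesg | grep -i -e amd_pstate -e cppc
ls /sys/devices/system/cpu/cpu0/acpi_cppc/ 2>/dev/null   # missing when no _CPC object is exposed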
I have 2 Systems, one Ryzen...
I'm late to the party, but a few things:
No patch will work if you enabled CPPC in the BIOS and still get this in dmesg:
amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
And:
lscpu | grep -i cppc outputs nothing
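A quick way to double-check that state (sketch; the CPU flag and the acpi_cppc directory should both be there when CPPC works):
grep -o cppc /proc/cpuinfo | head -n1
ls /sys/devices/system/cpu/cpu0/acpi_cppc/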
1. You can disable x2apic (set xAPIC mode in the BIOS), that will usually fix...
Whatever you said, it doesn't make sense.
Host = enables all CPU features that the CPU of the hypervisor the VM is running on supports.
If you set host in the VM settings and migrate the VM to another host with a different CPU, your VM will freeze.
If you reboot the VM, it will work again, but...
You should simply select a CPU profile that all CPUs in your cluster support for the VMs you want to be able to migrate.
If they all have the same CPU, or only slightly different ones, like variants with just more/fewer cores, you can use "host" as the CPU profile in your VM settings, which will simply pass through the...
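For the migratable VMs, as a sketch (VMID 100 is made up; x86-64-v2-AES is one of the generic models newer PVE versions ship):
qm set 100 --cpu x86-64-v2-AES   # baseline model every node can provide
qm set 100 --cpu host            # only if all nodes have the same (or near identical) CPU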
What I've read on Google is that if you get:
amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
then the mainboard BIOS (if CPPC is enabled) doesn't support CPPC, or has corrupted ACPI tables for CPPC, or something like that.
Otherwise I didn't find anything relevant.
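If you want to verify that yourself, dumping and decompiling the ACPI tables should show whether a _CPC object is there at all (assumes acpica-tools or equivalent is installed; just a sketch):
mkdir /tmp/acpi && cd /tmp/acpi
acpidump -b        # one .dat file per table
iasl -d *.dat      # decompile to .dsl
grep -l _CPC *.dsl # lists the tables that define _CPC, if any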
PS...
Did you try to check with "dmesg | grep -i cpc", or cppc, or something like that?
I don't have any cpc or cppc lines in dmesg, but I guess that's because CPPC is working here; you may have some if it's not.
I don't have anything in dmesg, no pstate, nothing.
Check at least with dmesg --level err or warn if...
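Something along these lines (dmesg takes a comma-separated level list):
dmesg --level=err,warn | grep -i -e cpc -e cppc -e pstate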
https://helper-scripts.com/scripts?id=Proxmox+VE+Processor+Microcode
Try that at least.
Maybe a newer microcode helps. I'm not sure though.
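The manual route would be roughly this, assuming an AMD box and the non-free-firmware component enabled in your APT sources:
apt update && apt install amd64-microcode
dmesg | grep -i microcode   # after a reboot, check which revision got loaded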
On my Genoa servers I run performance+performance and pstate EPP behaves exactly as expected. The funny thing is that even if I don't care about power...
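Performance+performance here means governor plus EPP; roughly what I set, as a sketch (with the EPP driver, the performance governor should pin EPP to performance as well):
for c in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
    echo performance > "$c/scaling_governor"
done
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference   # should read "performance" afterwards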
If it's one drive that causes such issues, maybe you get lucky and ZFS suspends the pool with a warning, or you get a read error or something. The thing is, it makes absolute sense that it's dead slow, but it shouldn't lock up for hours; that's not normal.
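To hunt down the one sick drive I would compare per-disk latency; rough sketch, the pool name and device are placeholders:
zpool status -v tank
zpool iostat -v tank 5   # a sick disk usually stands out against its siblings
iostat -x 5              # from sysstat; look for one disk with huge await/%util
smartctl -a /dev/sdX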
On another note, I read that...
Yeah, I've seen that in another thread already; I think it's exactly as you say, someone forgot to update the meta package.
But if you install proxmox-headers-6.8 before dkms, will it still try to install pve-headers 6.1?
However, for the issue itself, you can remove the 6.1 headers afterwards, I think, or...
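Roughly what I would try, heavily hedged since I haven't verified the package names on 8.2 myself:
apt install proxmox-headers-$(uname -r)   # matching headers first, so dkms builds against 6.8
apt install <your-dkms-package>           # placeholder for whatever dkms module you need
apt remove 'pve-headers-6.1*'             # only once the 6.8 module builds fine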
From earlier research, most drives either die early or don't die at all, because they were replaced before that point anyway.
There is no rule that says the drives will die after 5 or 10 years of usage, just indicators that out of 5k disks of the same model, most die in the first 6 months, or start dying...
The 990 Pro's firmware updates don't help; they only help Samsung for marketing reasons...
The 970 and 980 are somewhat okay, but the 990 Pro is pure crap, and no firmware will fix that xD
And yeah, I will replace them at some point; it's just that the HDD Z2 pool is only a movie storage pool that is anyway not...
I can only say it would be insanely amazing to get 6.9.
I myself would take advantage of the preferred core support in the amd-pstate driver. That amd-pstate driver, set to performance+performance, brought my Genoa servers down to 140 W consumption for 90% of the time and 700 W during load scenarios...
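Once a 6.9 based kernel lands, I'd expect the preferred-core state to show up somewhere like this (assumption on my part, I'm still on 6.8 here):
cat /sys/devices/system/cpu/amd_pstate/prefcore
dmesg | grep -i prefcore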
Actually, that's not completely true regarding the smallest disk in the pool.
On RAID 10, ZFS can grow to the size of one mirror; here is an example:
capacity operations bandwidth
pool alloc...
6 pairs of mirrors will give you a huge increase in IOPS and throughput, plus no parity calculation.
But at a huge cost in total available size.
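As a sketch with made-up device names, that layout would look like this:
zpool create tank \
  mirror sda sdb mirror sdc sdd mirror sde sdf \
  mirror sdg sdh mirror sdi sdj mirror sdk sdl
zpool list -v tank   # usable size is the sum of the mirrors, i.e. half the raw capacity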
I googled "rebalance zfs" and there was a script called inplace rebalance something on github.
Maybe you could do that.
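The principle behind those scripts is just to rewrite every file in place so its blocks get spread over all vdevs again; a very rough sketch, /tank/movies is a placeholder and it ignores hardlinks, snapshots and the like:
find /tank/movies -type f | while read -r f; do
    cp -a "$f" "$f.rebalance" && mv "$f.rebalance" "$f"   # rewrite the file's blocks across the pool
done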
With my "you will run...
I edited it for the ZFS defaults, 64k + 128k, which should be more appropriate for everyone who uses the default recordsize of ZFS.
But you can compare both scripts to see what I changed etc. and modify it yourself...
Cheers
Yeah, your pool is hugely unbalanced.
The first Z2 stripe is full, 92.3% usage.
The other one is almost empty compared to that xD
In my opinion that comes down to only one thing: your drives.
It looks to me like the hard drives in the first stripe (the one that is full) are a lot faster.
That...
Sorry, I was initially annoyed that no one tried to help, so I didn't want to post a solution. But that makes no sense, being an a.... just because no one has a clue.
However, the solution is both somewhat simple and not so simple.
The main issue is how the CPU cores are accessing the cache...
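To see which cores share which L3 cache (e.g. per CCD), something like this helps; lscpu's CACHE column groups the cores that share a cache (sketch, nothing exotic assumed):
lscpu -C
lscpu -e=CPU,CORE,SOCKET,NODE,CACHE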