My PVE rpool has ghost data that I cannot remove.
I misconfigured VirtIOFS storage and somehow ended up with my /mnt/vmdata data on rpool instead of on the intended storage pool, a 12 TB RAIDZ pool (16 TB raw). The storage was shared to...
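A rough sketch of how one could track down where the space on rpool actually went (dataset and path names are taken from the post, adjust as needed; a common cause is data written to /mnt/vmdata before the intended pool/dataset was mounted, which then sits on rpool underneath the mountpoint):

zfs list -r -o name,used,refer,mountpoint rpool   # per-dataset usage on rpool
zpool list -v rpool                                # pool-level view
du -xsh /mnt/vmdata                                # what is actually sitting at that path right now (-x stays on one filesystem)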
I had a big brain moment and I think I figured out how to set things up now. Like I was saying before, I don't want to have any internet access on my OOB management network. So I'll create two management networks, one OOB with no internet access...
See also: https://forum.proxmox.com/threads/fabc-why-is-proxmoxve-using-all-my-ram-and-why-cant-i-see-the-real-ram-usage-of-the-vms-in-the-dashboard.165943/
That's the whole point of this construct, isn't it?
If that behavior is a problem you might want to add an additional and completely independent corosync ring "outside" of those already existing networks. It must have separate wires/fibre...
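A minimal sketch of what such an additional, independent corosync link could look like in /etc/pve/corosync.conf (node names and addresses here are made up, and config_version must be increased before saving):

nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 10.0.0.1      # existing cluster network
    ring1_addr: 10.99.0.1     # new dedicated corosync link on its own wiring/switch
  }
  node {
    name: pve2
    nodeid: 2
    ring0_addr: 10.0.0.2
    ring1_addr: 10.99.0.2
  }
}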
I am unclear about your question. You assigned 108 GB of RAM to VMs, and they are only using 95 GB; 95/125 is about 76%, which is less than 80%. If you want them to use less RAM, you should allocate less RAM.
I would clearly disagree with this recommendation: network storage is already not optimal on a local network (see https://forum.proxmox.com/threads/datastore-performance-tester-for-pbs.148694/ ), and over a comparatively slow WAN connection...
Thank you for coming back and sharing your findings - they will assist others who run into a similar situation.
You can mark the thread as Solved by editing the first post and selecting the appropriate subject prefix. This assists with keeping the...
@Browbeat That systeminfo detects a hypervisor is normal under KVM; it is always shown as long as the VM runs under a hypervisor. But that is not the problem. What matters is that Windows is not running its own hypervisor (Hyper-V/VBS)...
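For illustration, roughly how you could check and switch off the Windows-internal hypervisor launch from an elevated prompt inside the guest (this assumes VBS/Credential Guard is not required; verify the current state in msinfo32 under "Virtualization-based security" first):

rem stop Windows from launching its own hypervisor on the next boot
bcdedit /set hypervisorlaunchtype off
rem revert later if Hyper-V/VBS is needed again
bcdedit /set hypervisorlaunchtype auto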
Yes, Docker belongs in its own VM, not on the PVE host. The host should stay lean and run only Proxmox. That way you have a clean separation, and if something goes wrong in a container, the hypervisor is not affected.
With OPNsense in front...
Well, with the 6.18 kernel being absolutely unusable due to SATA controller dropouts that crashed containers using NFS and occasionally the entire Proxmox host, I expected a repeat with the 7.0 one. Happy to say that it passed every test that would...
That would surprise me; the answers they gave were satisfactory to me. We ended up going forward with other solutions. But it would be nice if you posted an update when there is a solution - if you're staying.
The UI for nested pools was introduced not that long ago, in pve-manager 9.1.5:
pve-manager (9.1.5) trixie; urgency=medium
[..]
* ui: resource tree: don't show empty grouping nodes as expandable.
* ui: resource tree: show nested pools...
That solved the mystery, thank you very much!
I changed the CPU to x86-64-v3 (there is also a v4, is that relevant?) and it worked right away! Database access times dropped from about 30 ms to 0.3 ms.
@cwt : You saved my week! Thank you...
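For reference, a hedged sketch of the CLI equivalent of that change (VM ID 100 is just an example; the same setting lives under the VM's Hardware -> Processors in the GUI):

qm set 100 --cpu x86-64-v3   # requires the host CPU(s) to support the v3 feature level (AVX2 etc.)
# x86-64-v4 additionally needs AVX-512, so it is only relevant on hosts that have it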
Sounds like a problem with the initramfs build. Try it with verbose output, then you can see where it hangs:
update-initramfs -u -k 6.8.12-20-pve -v
And show us what df -h /boot says; if /boot is full, it won't work either.
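Roughly the sequence I would run, assuming the kernel version from the post and that proxmox-boot-tool manages the boot partitions:

df -h /boot                               # the rebuild fails if /boot (or the ESP) has no free space
update-initramfs -u -k 6.8.12-20-pve -v   # verbose rebuild to see where it hangs
proxmox-boot-tool refresh                 # copy the refreshed initramfs to the boot partition(s)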