Not usable/exposed in Proxmox VE yet, but upstream has integrated support for this recently (with QEMU 10.0).
Relevant feature request: https://bugzilla.proxmox.com/show_bug.cgi?id=3303
Upstream docs...
Thanks for your quick answer!
Yes, KSM is active on this server, shaving off 60.22 GB of the 399.32 GiB used / 503 GiB total.
Since we have a community subscription I guess we'll have to wait a bit longer for QEMU 9.1.4; I'll report back here and in the Bugzilla...
All VMs on this cluster use Proxmox-created/managed Ceph RBD storage (56 OSDs across 8 nodes, all enterprise SSDs, 3000 TB raw, 60% used); no issues for several years now.
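(For anyone wanting to check the same on their node: the kernel exposes KSM counters under /sys/kernel/mm/ksm/, and a rough savings estimate can be derived from pages_sharing, assuming 4 KiB pages:

cat /sys/kernel/mm/ksm/pages_sharing
# rough estimate of deduplicated memory, assuming 4 KiB pages
echo "$(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4096 / 1024 / 1024 / 1024 )) GiB shared"
)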
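(For reference, the standard Ceph commands give that kind of raw/used overview on any node:

ceph df          # cluster-wide raw and per-pool capacity/usage
ceph osd df tree # per-OSD utilisation, grouped by host
)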
Hi,
I have a few nodes in my PVE 9 (up to date) cluster with very high cumulative CPU time on the pvestatd process (one node is continually using nearly a full CPU core).
In the journalctl -u pvestatd output I get "pvestatd[4044694]: status update time (20.938 seconds)"...
Note: we're running the PVE 9 kernel 6.14.11-2-pve and Ceph 19.2.3-pve2 with Intel E810-C 4x25G NICs in 802.3ad bonding on 4 HPE DL385 servers (1 TB RAM each), default MTU 1500. We don't see a memory usage issue, or it's not growing fast enough to be visible...
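(For anyone wanting to check the same on their nodes, standard tools are enough, e.g.:

ps -C pvestatd -o pid,etime,time,%cpu                                   # cumulative CPU time of the daemon
journalctl -u pvestatd --since "1 hour ago" | grep "status update time" # slow update warnings
)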