I had a 3-node cluster with Ceph installed (several OSDs on each node), using network 10.63.210.0/24 for PVE (1 Gb) and 10.10.10.0/24 for Ceph (10 GbE).
It was fine until I added a 4th node to the PVE cluster only, from another network 10.63.200.0/24 (no Ceph/OSD on that node). The PVE cluster is happy, Ceph...
@jens-maus have you tried disabling KSM on top of "mitigations=off" before starting the VM?
In my setup there is no memory pressure within the VM, so KSM is in an inactive state.
In our environment we saw exactly the same: CPU spikes, ICMP ping losses, and noVNC console freezes with an increasing number of RDP users logging into W2019. However, setting mitigations=off helped (>50 users work fine on each W2019 VM).
One more interesting thing: we could not reproduce the issue...
For the record:
I had to do the following:
1. nano /etc/kernel/cmdline -> root=ZFS=rpool/ROOT/pve-1 boot=zfs mitigations=off
2. proxmox-boot-tool refresh
3. reboot
4. Check with lscpu
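The verification step above can be sketched as a small shell helper; the `check_mitigations` function name and the sample cmdline string are my own illustration, not from the thread:

```shell
# Hypothetical helper: check whether a kernel command line
# contains the mitigations=off flag.
check_mitigations() {
  case " $1 " in
    *" mitigations=off "*) echo "mitigations disabled" ;;
    *)                     echo "mitigations active"  ;;
  esac
}

# After editing /etc/kernel/cmdline, running proxmox-boot-tool
# refresh, and rebooting, the live command line can be checked with:
#   check_mitigations "$(cat /proc/cmdline)"
check_mitigations "root=ZFS=rpool/ROOT/pve-1 boot=zfs mitigations=off"
```

If the flag did not make it into the booted command line, the edit was likely made on a host not managed by proxmox-boot-tool, or the refresh/reboot step was skipped.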
Are you sure it is correctly applied?
Check with lscpu. Should be like this:
root@063-pve-04347:~# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Address sizes:       46 bits physical, 48 bits virtual
Byte Order:          Little Endian
CPU(s)...
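Besides the header fields, lscpu also prints per-vulnerability lines, and the same state can be read from /sys/devices/system/cpu/vulnerabilities/. A small sketch of filtering such output; the sample text below is illustrative, not from the poster's host:

```shell
# Illustrative lscpu-style fragment: with mitigations=off, the
# vulnerability lines report "Vulnerable" instead of "Mitigation: ...".
sample='Architecture:             x86_64
Vulnerability Meltdown:   Vulnerable
Vulnerability Spectre v2: Vulnerable'

# Count how many vulnerability classes are left unmitigated.
# On a live host the same idea works with:
#   lscpu | grep -c 'Vulnerable'
# or:  grep . /sys/devices/system/cpu/vulnerabilities/*
printf '%s\n' "$sample" | grep -c 'Vulnerable'
```

Seeing "Vulnerable" across these lines confirms the flag took effect, which is exactly why this should be treated as a workaround, not a permanent setting.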
We also noticed that the problem appears more often when >64 GB RAM is allocated (especially 128 GB and more) and more than 24 vCPUs.
P.S. As I mentioned in another thread, setting "mitigations=off" helped (as a workaround only!)