For a non-root, limited user I would like to restrict the view to a pool only in a PVE cluster. Is there any way to disable the datacenter view and/or set another view (pool view) as the default for a given user?
The idea behind this is to hide the number of cluster nodes and their names.
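For context, the pool-only permissions themselves would be set up roughly like this (just a sketch; the pool name "tenant1", the user "limited@pve" and the role are placeholders for my actual setup):

# create a pool and a limited user (names are placeholders)
pvesh create /pools --poolid tenant1
pveum useradd limited@pve
# grant the user a role only on the pool path, not on /
pveum aclmod /pool/tenant1 -user limited@pve -role PVEVMUser

What I am missing is a way to also hide the datacenter/node tree from such a user, or to make the pool view their default.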
Thanks in advance
I can confirm that with 6.2.16-14 the problem still exists.
Setting "mitigations=off" is the only solution.
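On a node that boots via GRUB this means roughly the following (a sketch only; your existing GRUB_CMDLINE_LINUX_DEFAULT contents may differ, so append rather than replace):

# edit /etc/default/grub and append the flag, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
nano /etc/default/grub
update-grub
reboot

ZFS-on-root nodes using proxmox-boot-tool need the /etc/kernel/cmdline route instead (see the steps posted later in this thread).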
@ProxmoxTeam
Any progress on this issue?
P.S. Taking into consideration that KSM with kernel 6.x is broken on dual-socket boards, +1 vote for releasing a 5.15 kernel.
I had a 3-node cluster with Ceph installed (several OSDs on each node), using network 10.63.210.0/24 for PVE (1 GbE) and 10.10.10.0/24 for Ceph (10 GbE).
It was OK until I added a 4th node to the PVE cluster only, from another network, 10.63.200.0/24 (no Ceph/OSDs on that node). The PVE cluster is happy, Ceph...
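For reference, the network split described above corresponds to settings like these in /etc/pve/ceph.conf (only an illustration of the layout; the exact file contents are assumed):

[global]
    # keep all Ceph traffic on the 10 GbE network
    public_network  = 10.10.10.0/24
    cluster_network = 10.10.10.0/24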
@jens-maus have you tried disabling KSM on top of "mitigations=off" before starting the VM?
In my setup there is no memory pressure within the VM, so KSM is in an inactive state.
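In case it helps, this is roughly what I mean by disabling KSM on a PVE node (run on the host, not inside the guest):

# stop the KSM tuning service so it is not re-enabled
systemctl disable --now ksmtuned
# stop KSM and unmerge already shared pages
echo 2 > /sys/kernel/mm/ksm/run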
In our environment we saw exactly the same: CPU spikes and ICMP ping losses plus noVNC console freezes with an increasing number of RDP users logging into W2019. However, setting mitigations=off helped (>50 users work fine on each W2019 VM).
One more interesting thing: we could not reproduce the issue...
For the record:
I had to do the following:
1. nano /etc/kernel/cmdline -> root=ZFS=rpool/ROOT/pve-1 boot=zfs mitigations=off
2. proxmox-boot-tool refresh
3. reboot
4. Check with lscpu
Are you sure it is correctly applied?
Check with lscpu. Should be like this:
root@063-pve-04347:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s)...
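The truncated output above does not yet show the relevant part; the mitigation state appears further down in the Vulnerabilities lines. A quick way to check just that (and that the flag really reached the kernel):

cat /proc/cmdline
lscpu | grep -i vulnerab
# or read the sysfs entries directly
grep . /sys/devices/system/cpu/vulnerabilities/*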
We also noticed that the problem appears more often when >64 GB of RAM is allocated (especially 128 GB and more) and with more than 24 vCPUs.
P.S. As I mentioned in another thread, setting "mitigations=off" helped (as a workaround only!)