I see that AnonHugePages increased when starting the VM, but the default is set to 2M. Apparently this works if you don't need IOMMU. If I needed to pass hardware/GPUs through, then you'd need those to be static, like in your config, AFAIK. That being...
Please check
cat /proc/meminfo | grep Huge
In my setup I had to define:
hugepagesz=1G hugepages=N default_hugepagesz=1024M
where N is the number of 1G hugepages needed to cover the VM's memory size
P.S. As far as I know, 1G and 2M hugepages cannot be combined...
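As a minimal sketch of how N could be derived (the 32 GiB VM size below is an assumption — take VM_MEM_MB from the memory: line of your own VM config):

```shell
#!/bin/sh
# Derive N (the number of 1G hugepages) from the VM memory size.
# VM_MEM_MB is a placeholder: substitute the value from the "memory:"
# line of your VM config (in MiB).
VM_MEM_MB=32768
HUGEPAGE_MB=1024
# Round up so the hugepage pool always covers the full VM memory.
N=$(( (VM_MEM_MB + HUGEPAGE_MB - 1) / HUGEPAGE_MB ))
echo "hugepagesz=1G hugepages=$N default_hugepagesz=1G"
```

For a 32 GiB VM this prints hugepages=32; note that default_hugepagesz=1G and default_hugepagesz=1024M are equivalent.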
To use giant pages (1 GiB hugepages) you need to:
- explicitly set a fixed number of such pages in the boot loader config (/etc/default/grub or /etc/kernel/cmdline)
- set hugepages: 1024 in the VM conf file (manually)
I would also recommend setting up a NUMA topology...
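To make the two steps above concrete, a sketch of the relevant snippets (the page count, VM ID, and memory size are illustrative — compute the hugepage count from your own VM's memory):

```
# /etc/default/grub -- then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=32 default_hugepagesz=1G"

# /etc/pve/qemu-server/100.conf -- VM ID 100 and 32 GiB are examples
memory: 32768
hugepages: 1024
```

After rebooting, verify the pool with cat /proc/meminfo | grep Huge before starting the VM.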
This can be hard because there's no clear initial condition. We have another two VMs with WS2025 and SQL2022, but traffic/load is low, so these errors/issues are quite rare and no service has hung so far.
But I'm quite sure that majority of...
In our environment, we run dozens of Windows Server 2019/2022 VMs, and we don't see any issues with the 1.285 drivers. However, there is only one server with the MSSQL database engine (2019, if I'm not mistaken), and I'm not entirely sure if it has...
Hi @Whatever, not yet. But I've moved my focus to the thread I've been active in since last year: the Red Hat VirtIO developers would like to coordinate
In my view, the problem with the virtio GitHub is the low interest of the core (RHEL) devs...
Hi all, I'm the author of this ass-kicking post. After more than a year, I'm back in service, as bug hunting never ends.
I'm sorry that I've missed many of the questions and messages, but let's move forward; we have another urgent problem (or several).
In...
I'm happy to report that using the latest version of Squid (19.2.3) the command ceph daemon {monId} config set mon_cluster_log_level info now does reduce the logging output. You have to execute this on every server hosting a monitor.
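Since the command has to run against each monitor's local admin socket, a small sketch of applying it across all monitor hosts (the monitor IDs a/b/c and hostnames mon-a etc. are hypothetical; this only prints the ssh commands so you can review them before running):

```shell
#!/bin/sh
# Print one command per monitor; IDs and hostnames below are placeholders.
# "ceph daemon" talks to the local admin socket, so each command must run
# on the host that carries that monitor -- hence the ssh wrapper.
for mon in a b c; do
  printf 'ssh mon-%s ceph daemon mon.%s config set mon_cluster_log_level info\n' "$mon" "$mon"
done
```

Pipe the output to sh once you've confirmed the host/monitor names match your cluster.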
Hi, in numa0/numa1, the cpus= list refers to guest vCPU indexes (0…vCPUs-1), not host CPU IDs.
Affinity is the host cpuset for the whole VM process (all vCPU threads), not per-vCPU.
Proxmox VE (QEMU) doesn’t expose per-vCPU pinning in the VM...
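To illustrate the distinction, a sketch of the relevant lines in a VM config (the VM size, node split, and host CPU ranges are all illustrative, assuming a 2-socket host):

```
# /etc/pve/qemu-server/<vmid>.conf -- sizes and CPU ranges are examples
cores: 8
sockets: 2
memory: 32768
# cpus= lists GUEST vCPU indexes (0..15 here); hostnodes= picks the host NUMA node
numa0: cpus=0-7,hostnodes=0,memory=16384,policy=bind
numa1: cpus=8-15,hostnodes=1,memory=16384,policy=bind
# affinity is a HOST cpuset applied to the whole QEMU process, not per vCPU
affinity: 0-7,16-23
```

So numa0/numa1 shape the guest-visible topology and bind guest memory to host nodes, while affinity only constrains which host CPUs the VM's threads may run on.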
Good day
Help me figure out and implement the correct virtual machine configuration for a dual-socket motherboard (PVE 8.4, 6.8.12 kernel)
Given:
root@pve-node-04840:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes...