[CPU PINNING CONF] How to correctly configure CPU pinning?

ayoubbergaoui

New Member
Jan 4, 2024
Hello,

I have a question about CPU pinning in Proxmox: we want to make sure we get the best configuration and performance for our tests, as some tests are currently timing out because the VMs respond slowly.

For context, our environment relies heavily on nested virtualization: our tests launch VMs inside a VM, so there are two layers of virtualization.
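For what it's worth, here is how we sanity-check that nested virtualization is enabled on the host (on AMD the setting lives in the kvm_amd module):

# should print 1 (or Y, depending on the kernel) when nested virtualization is on
cat /sys/module/kvm_amd/parameters/nested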

On one of our Proxmox servers, I set up two VMs sized according to the server's characteristics, as follows (swindkvm4 is the Proxmox hypervisor):

==============================================================================
itsystem@swindkvm4:~$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7443 24-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU scaling MHz: 70%
CPU max MHz: 4035,6440
CPU min MHz: 1500,0000
BogoMIPS: 5700,21
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmx
ext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulq
dq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic
cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfct
r_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bm
i2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cq
m_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc
_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke
vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization features:
Virtualization: AMD-V
Caches (sum of all):
L1d: 1,5 MiB (48 instances)
L1i: 1,5 MiB (48 instances)
L2: 24 MiB (48 instances)
L3: 256 MiB (8 instances)
NUMA:
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Mitigation; safe RET, no microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Not affected
itsystem@swindkvm4:~$
==================================================================================

Our goal is to keep each VM on a single NUMA node, so that it never pulls CPU or memory from the server's other node. Since the lscpu output shows two NUMA nodes per server, we are considering the following configuration:

sudo qm set 104 --numa0 cpus="0-23;48-67",memory=235520
sudo qm set 106 --numa1 cpus="24-47;72-91",memory=235520

With these settings, each VM should stay on its own NUMA node, using the listed CPUs: 44 of the 48 logical CPUs (hardware threads) of each node, so not the whole node.
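As an extra cross-check of the topology, we also looked at the layout with numactl (assuming the numactl package is installed on the host):

# prints each NUMA node with its CPU list and the size of its local memory
numactl --hardware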

Do you find this configuration correct?
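Related to that: we also noticed the affinity option of qm (added in Proxmox VE 7.3, if we understand correctly), which seems to do the actual host-side pinning. Should we combine it with the numaN settings, e.g. something like:

# taskset-style host CPU pinning, on top of the guest NUMA layout
sudo qm set 104 --affinity 0-23,48-67
sudo qm set 106 --affinity 24-47,72-91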

Most importantly, how can I verify that a VM is actually using the memory (RAM) and CPUs of the NUMA node I assigned to it? How can I confirm this?
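One idea we had (assuming the standard Proxmox pidfile location and that numactl is installed) is to inspect the QEMU process backing the VM, e.g. for VM 106:

# PID of the QEMU process for VM 106
pid=$(cat /var/run/qemu-server/106.pid)
# PSR column = the host CPU each vCPU thread last ran on
ps -o pid,tid,psr,comm -Lp "$pid"
# per-NUMA-node memory allocation of the process
numastat -p "$pid"

Would that be the right way to confirm the placement?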

FYI, our two VMs currently have the following configuration:
==============================================================================
itsystem@swindkvm4:~$ sudo qm config 106
agent: 1
boot: order=scsi0
cipassword: ****
ciuser: itsystem
cores: 44
cpu: host
ide2: swdata4:106/vm-106-cloudinit.qcow2,media=cdrom,size=4M
ipconfig0: ip=10.98.4.1/16,gw=10.98.0.9
memory: 235520
meta: creation-qemu=7.1.0,ctime=1675692878
name: vostochny
net0: virtio=BC:24:11:5A:7F:5F,bridge=vmbr0
numa: 0
numa0: cpus=26-47;74-95,memory=235520
scsi0: swdata4:106/vm-106-disk-0.qcow2,size=440G
scsihw: virtio-scsi-pci
smbios1: uuid=adb1384b-30ad-48ef-a4ce-6069af597fbd
sockets: 1
sshkeys: ecdsa-sha2-nistp256%20AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBSuAWqKTKodQGpWG%2FNQC5ig6o5wxe%2BbUlFi6nTQ44Il5qjugSTvQocmsV4uO4ebr70eq6kE29bJSuEUT%2B1fRuA%3D%20
tags: pmx81
vmgenid: 12a92e62-9fca-4777-8650-adc30737fc0f
==============================================================================
itsystem@swindkvm4:~$ sudo qm config 104
agent: 1
boot: order=scsi0
cipassword: ****
ciuser: itsystem
cores: 22
cpu: host
ide2: swdata4:104/vm-104-cloudinit.qcow2,media=cdrom,size=4M
ipconfig0: ip=10.98.4.2/16,gw=10.98.0.9
memory: 235520
meta: creation-qemu=7.1.0,ctime=1675692878
name: baikonour
net0: virtio=BC:24:11:3C:8D:5D,bridge=vmbr0
numa: 0
numa0: cpus=2-23;50-71,memory=235520
scsi0: swdata4:104/vm-104-disk-0.qcow2,size=440G
scsihw: virtio-scsi-pci
smbios1: uuid=743552d8-3797-470d-8356-dd6d95b98a77
sockets: 2
sshkeys: ecdsa-sha2-nistp256%20AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBSuAWqKTKodQGpWG%2FNQC5ig6o5wxe%2BbUlFi6nTQ44Il5qjugSTvQocmsV4uO4ebr70eq6kE29bJSuEUT%2B1fRuA%3D%20
tags: pmx81
vmgenid: 59026433-1bbb-4d59-82f2-71e8d8df133c
itsystem@swindkvm4:~$
======================================================================================

Thank you in advance,
Ayoub BERGAOUI
 
