Hi,
I have a newly installed Proxmox 4 cluster, fully updated from the enterprise repo.
Nodes are HP DL580 Gen7 servers with 4 x E7-4850 v1 CPUs.
I am trying to boot a multi-core guest with the following VM configuration:
Code:
acpi: 0
bootdisk: virtio0
cores: 64
cpu: host
hotplug: disk,network,usb
ide2: none,media=cdrom
memory: 491520
name: test
net0: virtio=32:30:34:37:30:36,bridge=vmbr0
numa: 0
ostype: l24
smbios1: uuid=e11d09ae-e68d-4484-9ed4-197301060ac0
sockets: 1
virtio0: ssd:vm-100-disk-1,cache=writeback,size=50G
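As far as I can tell (this is my reading of the generated command line, not the actual qemu-server Perl code), the `-smp` argument is derived from `sockets` and `cores` in the config like this:

```shell
# Sketch of how the -smp value appears to follow from the VM config above:
# vcpus = sockets * cores, with maxcpus set to the same total.
sockets=1
cores=64
vcpus=$((sockets * cores))
echo "-smp ${vcpus},sockets=${sockets},cores=${cores},maxcpus=${vcpus}"
# prints: -smp 64,sockets=1,cores=64,maxcpus=64
```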
The result is that the guest only sees one CPU:
Code:
Nov 23 18:32:33 test kernel: Initializing cgroup subsys blkio
Nov 23 18:32:33 test kernel: Initializing cgroup subsys perf_event
Nov 23 18:32:33 test kernel: Initializing cgroup subsys net_prio
Nov 23 18:32:33 test kernel: CPU: Unsupported number of siblings 64
Nov 23 18:32:33 test kernel: mce: CPU supports 10 MCE banks
Nov 23 18:32:33 test kernel: alternatives: switching to unfair spinlock
Nov 23 18:32:33 test kernel: SMP alternatives: switching to UP code
Nov 23 18:32:33 test kernel: Freeing SMP alternatives: 37k freed
Nov 23 18:32:33 test kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Nov 23 18:32:33 test kernel: ftrace: allocating 22128 entries in 87 pages
Code:
[root@test ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 47
model name : Intel(R) Xeon(R) CPU E7- 4850 @ 2.00GHz
stepping : 2
microcode : 1
cpu MHz : 1997.833
cache size : 24576 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc up arch_perfmon rep_good unfair_spinlock pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes hypervisor lahf_lm arat
bogomips : 3995.66
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
I have the same problem even with only 2 cores:
Code:
Nov 23 18:38:54 test kernel: Initializing cgroup subsys perf_event
Nov 23 18:38:54 test kernel: Initializing cgroup subsys net_prio
Nov 23 18:38:54 test kernel: CPU: Unsupported number of siblings 2
Nov 23 18:38:54 test kernel: mce: CPU supports 10 MCE banks
Nov 23 18:38:54 test kernel: alternatives: switching to unfair spinlock
Nov 23 18:38:54 test kernel: SMP alternatives: switching to UP code
Nov 23 18:38:54 test kernel: Freeing SMP alternatives: 37k freed
Nov 23 18:38:54 test kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Nov 23 18:38:54 test kernel: ftrace: allocating 22128 entries in 87 pages
After a lot of trial and error I found that if I run the kvm command by hand with a changed -smp option, I get the result I want: the guest can see all the CPUs.
Proxmox starts the VM with the following command:
/usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=e11d09ae-e68d-4484-9ed4-197301060ac0 -name test -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 491520 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:c395b1511a67 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=rbd:ssd/vm-100-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300
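(For anyone reproducing this: the full command line should also be printable with `qm showcmd 100` on the node; I believe that subcommand exists in PVE 4, but check your version. The snippet below just pulls the `-smp` part out of a saved command line for easy comparison:)

```shell
# A saved qemu command line, shortened to the relevant part from above;
# on the node itself, 'qm showcmd 100' should print the full line.
cmd='-name test -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults'
# Extract just the -smp argument for comparison with the working command:
echo "$cmd" | grep -o -- '-smp [^ ]*'
# prints: -smp 2,sockets=1,cores=2,maxcpus=2
```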
I changed it to this:
/usr/bin/systemd-run --scope --slice qemu --unit 100 -p 'CPUShares=1000' /usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=e11d09ae-e68d-4484-9ed4-197301060ac0' -name test -smp 64 -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 49152 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:c395b1511a67' -drive 'file=rbd:ssd/vm-100-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'
and it works (the output of cat /proc/cpuinfo is too big to post).
Code:
Nov 23 18:51:29 test kernel: Initializing cgroup subsys perf_event
Nov 23 18:51:29 test kernel: Initializing cgroup subsys net_prio
Nov 23 18:51:29 test kernel: mce: CPU supports 10 MCE banks
Nov 23 18:51:29 test kernel: alternatives: switching to unfair spinlock
Nov 23 18:51:29 test kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Nov 23 18:51:29 test kernel: ftrace: allocating 22128 entries in 87 pages
Nov 23 18:51:29 test kernel: Enabling x2apic
Nov 23 18:51:29 test kernel: Enabled x2apic
Nov 23 18:51:29 test kernel: APIC routing finalized to physical x2apic.
Nov 23 18:51:29 test kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 23 18:51:29 test kernel: CPU0: Intel(R) Xeon(R) CPU E7- 4850 @ 2.00GHz stepping 02
Nov 23 18:51:29 test kernel: Performance Events: 16-deep LBR, Westmere events, Intel PMU driver.
Nov 23 18:51:29 test kernel: CPUID marked event: 'bus cycles' unavailable
Nov 23 18:51:29 test kernel: ... version: 2
Nov 23 18:51:29 test kernel: ... bit width: 48
Nov 23 18:51:29 test kernel: ... generic registers: 4
Nov 23 18:51:29 test kernel: ... value mask: 0000ffffffffffff
Nov 23 18:51:29 test kernel: ... max period: 000000007fffffff
Nov 23 18:51:29 test kernel: ... fixed-purpose events: 3
Nov 23 18:51:29 test kernel: ... event mask: 000000070000000f
Nov 23 18:51:29 test kernel: NMI watchdog disabled (cpu0): hardware events not enabled
Nov 23 18:51:29 test kernel: Booting Node 0, Processors #1
Nov 23 18:51:29 test kernel: kvm-clock: cpu 1, msr 0:53a35941, secondary cpu clock
Nov 23 18:51:29 test kernel: #2
Nov 23 18:51:29 test kernel: kvm-clock: cpu 2, msr 0:53a55941, secondary cpu clock
...
Nov 23 18:51:29 test kernel: kvm-clock: cpu 62, msr 0:541d5941, secondary cpu clock
Nov 23 18:51:29 test kernel: #63 Ok.
Nov 23 18:51:29 test kernel: kvm-clock: cpu 63, msr 0:541f5941, secondary cpu clock
Nov 23 18:51:29 test kernel: Brought up 64 CPUs
Nov 23 18:51:29 test kernel: Total of 64 processors activated (255722.62 BogoMIPS).
Nov 23 18:51:29 test kernel: devtmpfs: initialized
Nov 23 18:51:29 test kernel: regulator: core version 0.5
Nov 23 18:51:29 test kernel: NET: Registered protocol family 16
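Instead of dumping the whole /proc/cpuinfo, the vCPU count inside the guest can be checked with a one-liner:

```shell
# Count the processor entries in /proc/cpuinfo; in the working guest
# this matches the 64 CPUs the kernel reports bringing up.
grep -c '^processor' /proc/cpuinfo
```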
Can somebody help?
Code:
root@1211-ps01:~# pveversion -v
proxmox-ve: 4.0-21 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-21
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie