New PVE 4 - Guest KVM can't see more than one core

Sakis

Hi,

I have a newly installed, fully updated (from the enterprise repo) Proxmox 4 cluster.
The nodes are HP DL580 Gen7 machines with 4 x E7-4850 v1 CPUs.

I am trying to boot a multi-core guest with the following VM configuration:

Code:
acpi: 0
bootdisk: virtio0
cores: 64
cpu: host
hotplug: disk,network,usb
ide2: none,media=cdrom
memory: 491520
name: test
net0: virtio=32:30:34:37:30:36,bridge=vmbr0
numa: 0
ostype: l24
smbios1: uuid=e11d09ae-e68d-4484-9ed4-197301060ac0
sockets: 1
virtio0: ssd:vm-100-disk-1,cache=writeback,size=50G


The result is that the guest only sees one CPU:

Code:
Nov 23 18:32:33 test kernel: Initializing cgroup subsys blkio
Nov 23 18:32:33 test kernel: Initializing cgroup subsys perf_event
Nov 23 18:32:33 test kernel: Initializing cgroup subsys net_prio
Nov 23 18:32:33 test kernel: CPU: Unsupported number of siblings 64
Nov 23 18:32:33 test kernel: mce: CPU supports 10 MCE banks
Nov 23 18:32:33 test kernel: alternatives: switching to unfair spinlock
Nov 23 18:32:33 test kernel: SMP alternatives: switching to UP code
Nov 23 18:32:33 test kernel: Freeing SMP alternatives: 37k freed
Nov 23 18:32:33 test kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Nov 23 18:32:33 test kernel: ftrace: allocating 22128 entries in 87 pages

Code:
[root@test ~]# cat /proc/cpuinfo
processor    : 0
vendor_id    : GenuineIntel
cpu family    : 6
model        : 47
model name    : Intel(R) Xeon(R) CPU E7- 4850  @ 2.00GHz
stepping    : 2
microcode    : 1
cpu MHz        : 1997.833
cache size    : 24576 KB
physical id    : 0
siblings    : 1
core id        : 0
cpu cores    : 1
apicid        : 0
initial apicid    : 0
fpu        : yes
fpu_exception    : yes
cpuid level    : 11
wp        : yes
flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc up arch_perfmon rep_good unfair_spinlock pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes hypervisor lahf_lm arat
bogomips    : 3995.66
clflush size    : 64
cache_alignment    : 64
address sizes    : 40 bits physical, 48 bits virtual
power management:

I have the same problem even with 2 cores:

Code:
Nov 23 18:38:54 test kernel: Initializing cgroup subsys perf_event
Nov 23 18:38:54 test kernel: Initializing cgroup subsys net_prio
Nov 23 18:38:54 test kernel: CPU: Unsupported number of siblings 2
Nov 23 18:38:54 test kernel: mce: CPU supports 10 MCE banks
Nov 23 18:38:54 test kernel: alternatives: switching to unfair spinlock
Nov 23 18:38:54 test kernel: SMP alternatives: switching to UP code
Nov 23 18:38:54 test kernel: Freeing SMP alternatives: 37k freed
Nov 23 18:38:54 test kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Nov 23 18:38:54 test kernel: ftrace: allocating 22128 entries in 87 pages

After a lot of trial and error I found that if I run the kvm command by hand with a changed -smp option, I get the result I want: the guest is able to see all the CPUs.
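The by-hand change boils down to rewriting the -smp argument. A small sketch of that rewrite (the sample line is a shortened stand-in for a real `qm showcmd` output, not an actual VM command):

```shell
# Sketch: collapse the full -smp topology into a bare CPU count,
# as in the hand-started invocation that works.
cmd='/usr/bin/kvm -id 100 -smp 2,sockets=1,cores=2,maxcpus=2 -m 49152'
fixed=$(printf '%s\n' "$cmd" | sed 's/-smp [^ ]*/-smp 64/')
printf '%s\n' "$fixed"   # prints: /usr/bin/kvm -id 100 -smp 64 -m 49152
```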

Proxmox starts the VM with the following command:
/usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=e11d09ae-e68d-4484-9ed4-197301060ac0 -name test -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 491520 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:c395b1511a67 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=rbd:ssd/vm-100-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

I changed it to this:
/usr/bin/systemd-run --scope --slice qemu --unit 100 -p 'CPUShares=1000' /usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=e11d09ae-e68d-4484-9ed4-197301060ac0' -name test -smp 64 -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 49152 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:c395b1511a67' -drive 'file=rbd:ssd/vm-100-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'

and it works. (The output of cat /proc/cpuinfo is too long to post.)
Code:
Nov 23 18:51:29 test kernel: Initializing cgroup subsys perf_event
Nov 23 18:51:29 test kernel: Initializing cgroup subsys net_prio
Nov 23 18:51:29 test kernel: mce: CPU supports 10 MCE banks
Nov 23 18:51:29 test kernel: alternatives: switching to unfair spinlock
Nov 23 18:51:29 test kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Nov 23 18:51:29 test kernel: ftrace: allocating 22128 entries in 87 pages
Nov 23 18:51:29 test kernel: Enabling x2apic
Nov 23 18:51:29 test kernel: Enabled x2apic
Nov 23 18:51:29 test kernel: APIC routing finalized to physical x2apic.
Nov 23 18:51:29 test kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 23 18:51:29 test kernel: CPU0: Intel(R) Xeon(R) CPU E7- 4850  @ 2.00GHz stepping 02
Nov 23 18:51:29 test kernel: Performance Events: 16-deep LBR, Westmere events, Intel PMU driver.
Nov 23 18:51:29 test kernel: CPUID marked event: 'bus cycles' unavailable
Nov 23 18:51:29 test kernel: ... version:                2
Nov 23 18:51:29 test kernel: ... bit width:              48
Nov 23 18:51:29 test kernel: ... generic registers:      4
Nov 23 18:51:29 test kernel: ... value mask:             0000ffffffffffff
Nov 23 18:51:29 test kernel: ... max period:             000000007fffffff
Nov 23 18:51:29 test kernel: ... fixed-purpose events:   3
Nov 23 18:51:29 test kernel: ... event mask:             000000070000000f
Nov 23 18:51:29 test kernel: NMI watchdog disabled (cpu0): hardware events not enabled
Nov 23 18:51:29 test kernel: Booting Node   0, Processors  #1
Nov 23 18:51:29 test kernel: kvm-clock: cpu 1, msr 0:53a35941, secondary cpu clock
Nov 23 18:51:29 test kernel: #2
Nov 23 18:51:29 test kernel: kvm-clock: cpu 2, msr 0:53a55941, secondary cpu clock
...
Nov 23 18:51:29 test kernel: kvm-clock: cpu 62, msr 0:541d5941, secondary cpu clock
Nov 23 18:51:29 test kernel: #63 Ok.
Nov 23 18:51:29 test kernel: kvm-clock: cpu 63, msr 0:541f5941, secondary cpu clock
Nov 23 18:51:29 test kernel: Brought up 64 CPUs
Nov 23 18:51:29 test kernel: Total of 64 processors activated (255722.62 BogoMIPS).
Nov 23 18:51:29 test kernel: devtmpfs: initialized
Nov 23 18:51:29 test kernel: regulator: core version 0.5
Nov 23 18:51:29 test kernel: NET: Registered protocol family 16
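Once the guest is up, the SMP result can be verified from inside it without scrolling through the whole boot log; both commands below are standard Linux tools:

```shell
# Count the CPUs the guest kernel actually brought online.
grep -c '^processor' /proc/cpuinfo
# nproc reports the same count on any recent distro.
nproc
```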

Can somebody help?

Code:
root@1211-ps01:~# pveversion -v
proxmox-ve: 4.0-21 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-21
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie
 
Hmm, it won't help you much, but that's strange!
The kvm command from your machine is illogical, as we add the -smp option with:
Code:
push @$cmd, '-smp', "$vcpus,sockets=$sockets,cores=$cores,maxcpus=$maxcpus";

So it seems that the actual config has sockets: 1 and cores: 2; otherwise I cannot explain that!

You can also set the vcpus in the config by adding:
vcpus: 64
but that should not be needed.
Is the command you posted generated by qm showcmd 100?

I tried to reproduce that but I can't:
Code:
root@XXXX:~# qm config 100
args: -enable-kvm
bootdisk: virtio0
cores: 64
cpu: host
ide2: iso:iso/virtio-win-0.1.110.iso,media=cdrom,size=55260K
memory: 6144
name: uno-57
net0: virtio=96:55:AB:72:C3:6B,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=cb849905-503a-48be-8d11-6614d232702b
sockets: 1
virtio0: local:100/vm-100-disk-1.qcow2,cache=writeback,size=80G
virtio1: local:100/vm-100-disk-2.qcow2,cache=writeback,size=32G

generates the following command:

Code:
root@XXXX:~# qm showcmd 100 | grep --color smp
/usr/bin/systemd-run --scope --slice qemu --unit 100 -p CPUShares=1000 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=cb849905-503a-48be-8d11-6614d232702b -name uno-57 -smp 64,sockets=1,cores=64,maxcpus=64 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 6144 -k de -enable-kvm -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:2c6ce8e66d32 -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,cache=writeback,format=qcow2,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive file=/var/lib/vz/images/100/vm-100-disk-2.qcow2,if=none,id=drive-virtio1,cache=writeback,format=qcow2,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -drive file=/mnt/pve/iso/template/iso/virtio-win-0.1.110.iso,if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=96:55:AB:72:C3:6B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

Which looks completely correct.

Maybe a bit of a stupid question but are you really sure the config is from that machine? :)
 
I am also puzzled by the command issue. I didn't expect it.

In the previous post I copied 100.conf, but I believe it is the same. Here is the output of the command you pointed to.
Code:
root@xxx:~# qm config 100
acpi: 0
bootdisk: virtio0
cores: 2
cpu: host
hotplug: disk,network,usb
ide2: none,media=cdrom
memory: 49152
name: test
net0: virtio=32:30:34:37:30:36,bridge=vmbr0
numa: 0
ostype: l24
smbios1: uuid=e11d09ae-e68d-4484-9ed4-197301060ac0
sockets: 1
virtio0: ssd:vm-100-disk-1,cache=writeback,size=50G

Code:
root@1211-ps01:~# qm showcmd 100
/usr/bin/systemd-run --scope --slice qemu --unit 100 -p CPUShares=1000 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=e11d09ae-e68d-4484-9ed4-197301060ac0 -name test -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 49152 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:c395b1511a67 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=rbd:ssd/vm-100-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

At the moment I run it like this (started by hand from the CLI):

Code:
root@xxx:~# ps auxwww | grep smp
root     32970  144  0.5 52471376 2997400 ?    Sl   18:49 148:26 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=e11d09ae-e68d-4484-9ed4-197301060ac0 -name sull-1 -smp 64 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 49152 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:c395b1511a67 -drive file=rbd:ssd/vm-100-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

I also tried adding "vcpus: 64" by editing the 100.conf file, but the result is the same:

Code:
root@xxxx:~# qm config 100
acpi: 0
bootdisk: virtio0
cores: 64
cpu: host
ide2: none,media=cdrom
memory: 49152
name: sull-2
net0: virtio=32:30:34:37:30:36,bridge=vmbr0
numa: 0
ostype: l24
smbios1: uuid=cd2a8f3d-f135-44fa-943b-96832223a8a5
sockets: 1
vcpus: 64
virtio0: ssd:vm-100-disk-1,cache=writeback,size=50G

Code:
root@1211-ps02:~# qm showcmd 101
/usr/bin/systemd-run --scope --slice qemu --unit 101 -p CPUShares=1000 /usr/bin/kvm -id 101 -chardev socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -smbios type=1,uuid=cd2a8f3d-f135-44fa-943b-96832223a8a5 -name sull-2 -smp 64,sockets=1,cores=64,maxcpus=64 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 49152 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:499b19b03869 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=rbd:ssd/vm-101-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

Proxmox generates the command correctly with -smp 64,sockets=1,cores=64,maxcpus=64, but my guest (CentOS 6.7) can't boot correctly with all the CPUs.
Only when I start kvm from the CLI with the altered SMP setting -smp 64 does the guest see the correct number of CPUs.
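To pin down which part of the topology string the old guest kernel objects to, the two -smp forms can be compared field by field. A small helper (hypothetical, for illustration only, not a Proxmox tool):

```shell
# Hypothetical helper: print one field (sockets/cores/maxcpus) of a
# QEMU -smp argument string like "64,sockets=1,cores=64,maxcpus=64".
smp_field() {
    printf '%s\n' "$1" | tr ',' '\n' | awk -F= -v k="$2" '$1 == k {print $2}'
}

smp_field '64,sockets=1,cores=64,maxcpus=64' cores    # prints 64
smp_field '64,sockets=1,cores=64,maxcpus=64' sockets  # prints 1
```

The bare `-smp 64` form that works carries none of these explicit fields.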
 
Code:
root@xxx:~# qm config 100
acpi: 0
bootdisk: virtio0
cores: 2
cpu: host
hotplug: disk,network,usb
ide2: none,media=cdrom
memory: 49152
name: test
net0: virtio=32:30:34:37:30:36,bridge=vmbr0
numa: 0
ostype: l24
smbios1: uuid=e11d09ae-e68d-4484-9ed4-197301060ac0
sockets: 1
virtio0: ssd:vm-100-disk-1,cache=writeback,size=50G

Code:
root@1211-ps01:~# qm showcmd 100
/usr/bin/systemd-run --scope --slice qemu --unit 100 -p CPUShares=1000 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=e11d09ae-e68d-4484-9ed4-197301060ac0 -name test -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -no-acpi -vga cirrus -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time -m 49152 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:c395b1511a67 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=rbd:ssd/vm-100-disk-1:mon_host=10.0.0.1;10.0.0.2;10.0.0.3:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ssd.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=32:30:34:37:30:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

The config you posted also has only 2 cores in it, so the command was generated correctly. So it's a different config?


But nonetheless, if the "-smp 64,sockets=1,cores=64,maxcpus=64" command is not working for the machine, that's a separate problem.
Can you try to start the VM with less memory as a test, about 2 GB, to see if it's related to a bug from 4.2?
 
