Starting VMs on a Dell R730 gives a machine type error (PVE 6.3)

totalimpact

Active Member
I just installed the latest 6.3-1 ISO on a Dell R730 and ran dist-upgrade to the very latest packages. When I try to start a new Windows Server 2019 VM created with the bare basic defaults, it fails to start with this error:

Code:
kvm: no-hpet: unsupported machine type

It seems the default machine type, pc-i440fx-5.2.0, is the issue. The VM config:

Code:
boot: order=virtio0;ide2;net0
cores: 4
ide2: local:iso/Server2019Eval.iso,media=cdrom
machine: pc-i440fx-5.2.0
memory: 4096
name: Server2019
net0: e1000=CA:5E:FB:76:B9:0B,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=17c84413-50bb-4dc6-ba26-163b13edded6
sockets: 1
virtio0: VMdata:vm-102-disk-0,size=80G
vmgenid: 5f122570-08af-4d62-b932-0c56e0dd6583

After changing to machine: pc-i440fx-5.1, it can start.
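For anyone hitting this, the pin can also be applied from the CLI rather than the GUI. A sketch: on a real node this would be `qm set <vmid> --machine pc-i440fx-5.1`, which just rewrites the `machine:` line in `/etc/pve/qemu-server/<vmid>.conf`; the equivalent edit is demonstrated below against a scratch copy of the config so nothing real is touched.

```shell
# On a real node: qm set 102 --machine pc-i440fx-5.1
# Here the same edit is shown on a scratch copy of the VM config.
conf=$(mktemp)
printf 'machine: pc-i440fx-5.2.0\ncores: 4\n' > "$conf"

# Rewrite the machine line to pin the 5.1 machine version
sed -i 's/^machine: .*/machine: pc-i440fx-5.1/' "$conf"

pinned=$(grep '^machine:' "$conf")
echo "$pinned"    # machine: pc-i440fx-5.1
rm -f "$conf"
```

Note that pinning only changes which QEMU machine version the guest sees; the VM has to be fully stopped and started (not just rebooted from inside the guest) for the new value to take effect.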
 

Stefan_R

Proxmox Retired Staff
Did you set the machine version to 5.2 yourself? Did you have a machine version set before the upgrade? By default, 5.1 should be pinned for Windows-type machines on upgrade.

Could you post 'pveversion -v' and 'qm showcmd --pretty <vmid>', once with 5.1 set (working) and once with how you get that error?
 

totalimpact

Active Member
This is a brand new installation. I did not change the machine type, and I created 3 VMs that all had the same problem.

Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.9-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-6
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-3
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-7
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2

That command seems to be missing something:
Code:
root@pve1:~# qm showcmd --pretty 100
400 not enough arguments
qm showcmd <vmid> [OPTIONS]
 

Stefan_R

Proxmox Retired Staff
Sorry, it should be qm showcmd <vmid> --pretty; the arguments go the other way around.
 

totalimpact

Active Member
Strangely, this time it booted without error with 5.2 set.

Working 5.0:
Code:
/usr/bin/kvm \
  -id 100 \
  -name PLserver \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/100.pid \
  -daemonize \
  -smbios 'type=1,uuid=39a8397c-a869-469e-b439-3bbea9a47c5c' \
  -smp '1,sockets=1,cores=20,maxcpus=20' \
  -device 'kvm64-x86_64-cpu,id=cpu2,socket-id=0,core-id=1,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu3,socket-id=0,core-id=2,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu4,socket-id=0,core-id=3,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu5,socket-id=0,core-id=4,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu6,socket-id=0,core-id=5,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu7,socket-id=0,core-id=6,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu8,socket-id=0,core-id=7,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu9,socket-id=0,core-id=8,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu10,socket-id=0,core-id=9,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu11,socket-id=0,core-id=10,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu12,socket-id=0,core-id=11,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu13,socket-id=0,core-id=12,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu14,socket-id=0,core-id=13,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu15,socket-id=0,core-id=14,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu16,socket-id=0,core-id=15,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu17,socket-id=0,core-id=16,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu18,socket-id=0,core-id=17,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu19,socket-id=0,core-id=18,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu20,socket-id=0,core-id=19,thread-id=0' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc unix:/var/run/qemu-server/100.vnc,password \
  -no-hpet \
  -cpu 'kvm64,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep' \
  -m 'size=1024,slots=255,maxmem=4194304M' \
  -object 'memory-backend-ram,id=ram-node0,size=1024M' \
  -numa 'node,nodeid=0,cpus=0-19,memdev=ram-node0' \
  -object 'memory-backend-ram,id=mem-dimm0,size=512M' \
  -device 'pc-dimm,id=dimm0,memdev=mem-dimm0,node=0' \
  -object 'memory-backend-ram,id=mem-dimm1,size=512M' \
  -device 'pc-dimm,id=dimm1,memdev=mem-dimm1,node=0' \
  -object 'memory-backend-ram,id=mem-dimm2,size=512M' \
  -device 'pc-dimm,id=dimm2,memdev=mem-dimm2,node=0' \
  -object 'memory-backend-ram,id=mem-dimm3,size=512M' \
  -device 'pc-dimm,id=dimm3,memdev=mem-dimm3,node=0' \
  -object 'memory-backend-ram,id=mem-dimm4,size=512M' \
  -device 'pc-dimm,id=dimm4,memdev=mem-dimm4,node=0' \
  -object 'memory-backend-ram,id=mem-dimm5,size=512M' \
  -device 'pc-dimm,id=dimm5,memdev=mem-dimm5,node=0' \
  -object 'memory-backend-ram,id=mem-dimm6,size=512M' \
  -device 'pc-dimm,id=dimm6,memdev=mem-dimm6,node=0' \
  -object 'memory-backend-ram,id=mem-dimm7,size=512M' \
  -device 'pc-dimm,id=dimm7,memdev=mem-dimm7,node=0' \
  -object 'memory-backend-ram,id=mem-dimm8,size=512M' \
  -device 'pc-dimm,id=dimm8,memdev=mem-dimm8,node=0' \
  -object 'memory-backend-ram,id=mem-dimm9,size=512M' \
  -device 'pc-dimm,id=dimm9,memdev=mem-dimm9,node=0' \
  -object 'memory-backend-ram,id=mem-dimm10,size=512M' \
  -device 'pc-dimm,id=dimm10,memdev=mem-dimm10,node=0' \
  -object 'memory-backend-ram,id=mem-dimm11,size=512M' \
  -device 'pc-dimm,id=dimm11,memdev=mem-dimm11,node=0' \
  -object 'memory-backend-ram,id=mem-dimm12,size=512M' \
  -device 'pc-dimm,id=dimm12,memdev=mem-dimm12,node=0' \
  -object 'memory-backend-ram,id=mem-dimm13,size=512M' \
  -device 'pc-dimm,id=dimm13,memdev=mem-dimm13,node=0' \
  -object 'memory-backend-ram,id=mem-dimm14,size=512M' \
  -device 'pc-dimm,id=dimm14,memdev=mem-dimm14,node=0' \
  -object 'memory-backend-ram,id=mem-dimm15,size=512M' \
  -device 'pc-dimm,id=dimm15,memdev=mem-dimm15,node=0' \
  -object 'memory-backend-ram,id=mem-dimm16,size=512M' \
  -device 'pc-dimm,id=dimm16,memdev=mem-dimm16,node=0' \
  -object 'memory-backend-ram,id=mem-dimm17,size=512M' \
  -device 'pc-dimm,id=dimm17,memdev=mem-dimm17,node=0' \
  -object 'memory-backend-ram,id=mem-dimm18,size=512M' \
  -device 'pc-dimm,id=dimm18,memdev=mem-dimm18,node=0' \
  -object 'memory-backend-ram,id=mem-dimm19,size=512M' \
  -device 'pc-dimm,id=dimm19,memdev=mem-dimm19,node=0' \
  -object 'memory-backend-ram,id=mem-dimm20,size=512M' \
  -device 'pc-dimm,id=dimm20,memdev=mem-dimm20,node=0' \
  -object 'memory-backend-ram,id=mem-dimm21,size=512M' \
  -device 'pc-dimm,id=dimm21,memdev=mem-dimm21,node=0' \
  -object 'memory-backend-ram,id=mem-dimm22,size=512M' \
  -device 'pc-dimm,id=dimm22,memdev=mem-dimm22,node=0' \
  -object 'memory-backend-ram,id=mem-dimm23,size=512M' \
  -device 'pc-dimm,id=dimm23,memdev=mem-dimm23,node=0' \
  -object 'memory-backend-ram,id=mem-dimm24,size=512M' \
  -device 'pc-dimm,id=dimm24,memdev=mem-dimm24,node=0' \
  -object 'memory-backend-ram,id=mem-dimm25,size=512M' \
  -device 'pc-dimm,id=dimm25,memdev=mem-dimm25,node=0' \
  -object 'memory-backend-ram,id=mem-dimm26,size=512M' \
  -device 'pc-dimm,id=dimm26,memdev=mem-dimm26,node=0' \
  -object 'memory-backend-ram,id=mem-dimm27,size=512M' \
  -device 'pc-dimm,id=dimm27,memdev=mem-dimm27,node=0' \
  -object 'memory-backend-ram,id=mem-dimm28,size=512M' \
  -device 'pc-dimm,id=dimm28,memdev=mem-dimm28,node=0' \
  -object 'memory-backend-ram,id=mem-dimm29,size=512M' \
  -device 'pc-dimm,id=dimm29,memdev=mem-dimm29,node=0' \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'vmgenid,guid=90a1f419-c0c7-4fdd-a1d9-af11d47bb32f' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2,edid=off' \
  -chardev 'socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0' \
  -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
  -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:c93b4da01da5' \
  -drive 'file=/var/lib/vz/template/iso/virtio-drivers.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' \
  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' \
  -drive 'file=/dev/zvol/VMdata/vm-100-disk-0,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' \
  -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=A2:7F:7A:8D:6F:55,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' \
  -rtc 'driftfix=slew,base=localtime' \
  -machine 'type=pc-i440fx-5.0+pve0' \
  -global 'kvm-pit.lost_tick_policy=discard'

5.2:
Code:
/usr/bin/kvm \
  -id 100 \
  -name PLserver \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/100.pid \
  -daemonize \
  -smbios 'type=1,uuid=39a8397c-a869-469e-b439-3bbea9a47c5c' \
  -smp '1,sockets=1,cores=20,maxcpus=20' \
  -device 'kvm64-x86_64-cpu,id=cpu2,socket-id=0,core-id=1,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu3,socket-id=0,core-id=2,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu4,socket-id=0,core-id=3,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu5,socket-id=0,core-id=4,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu6,socket-id=0,core-id=5,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu7,socket-id=0,core-id=6,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu8,socket-id=0,core-id=7,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu9,socket-id=0,core-id=8,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu10,socket-id=0,core-id=9,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu11,socket-id=0,core-id=10,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu12,socket-id=0,core-id=11,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu13,socket-id=0,core-id=12,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu14,socket-id=0,core-id=13,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu15,socket-id=0,core-id=14,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu16,socket-id=0,core-id=15,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu17,socket-id=0,core-id=16,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu18,socket-id=0,core-id=17,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu19,socket-id=0,core-id=18,thread-id=0' \
  -device 'kvm64-x86_64-cpu,id=cpu20,socket-id=0,core-id=19,thread-id=0' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc unix:/var/run/qemu-server/100.vnc,password \
  -no-hpet \
  -cpu 'kvm64,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep' \
  -m 'size=1024,slots=255,maxmem=4194304M' \
  -object 'memory-backend-ram,id=ram-node0,size=1024M' \
  -numa 'node,nodeid=0,cpus=0-19,memdev=ram-node0' \
  -object 'memory-backend-ram,id=mem-dimm0,size=512M' \
  -device 'pc-dimm,id=dimm0,memdev=mem-dimm0,node=0' \
  -object 'memory-backend-ram,id=mem-dimm1,size=512M' \
  -device 'pc-dimm,id=dimm1,memdev=mem-dimm1,node=0' \
  -object 'memory-backend-ram,id=mem-dimm2,size=512M' \
  -device 'pc-dimm,id=dimm2,memdev=mem-dimm2,node=0' \
  -object 'memory-backend-ram,id=mem-dimm3,size=512M' \
  -device 'pc-dimm,id=dimm3,memdev=mem-dimm3,node=0' \
  -object 'memory-backend-ram,id=mem-dimm4,size=512M' \
  -device 'pc-dimm,id=dimm4,memdev=mem-dimm4,node=0' \
  -object 'memory-backend-ram,id=mem-dimm5,size=512M' \
  -device 'pc-dimm,id=dimm5,memdev=mem-dimm5,node=0' \
  -object 'memory-backend-ram,id=mem-dimm6,size=512M' \
  -device 'pc-dimm,id=dimm6,memdev=mem-dimm6,node=0' \
  -object 'memory-backend-ram,id=mem-dimm7,size=512M' \
  -device 'pc-dimm,id=dimm7,memdev=mem-dimm7,node=0' \
  -object 'memory-backend-ram,id=mem-dimm8,size=512M' \
  -device 'pc-dimm,id=dimm8,memdev=mem-dimm8,node=0' \
  -object 'memory-backend-ram,id=mem-dimm9,size=512M' \
  -device 'pc-dimm,id=dimm9,memdev=mem-dimm9,node=0' \
  -object 'memory-backend-ram,id=mem-dimm10,size=512M' \
  -device 'pc-dimm,id=dimm10,memdev=mem-dimm10,node=0' \
  -object 'memory-backend-ram,id=mem-dimm11,size=512M' \
  -device 'pc-dimm,id=dimm11,memdev=mem-dimm11,node=0' \
  -object 'memory-backend-ram,id=mem-dimm12,size=512M' \
  -device 'pc-dimm,id=dimm12,memdev=mem-dimm12,node=0' \
  -object 'memory-backend-ram,id=mem-dimm13,size=512M' \
  -device 'pc-dimm,id=dimm13,memdev=mem-dimm13,node=0' \
  -object 'memory-backend-ram,id=mem-dimm14,size=512M' \
  -device 'pc-dimm,id=dimm14,memdev=mem-dimm14,node=0' \
  -object 'memory-backend-ram,id=mem-dimm15,size=512M' \
  -device 'pc-dimm,id=dimm15,memdev=mem-dimm15,node=0' \
  -object 'memory-backend-ram,id=mem-dimm16,size=512M' \
  -device 'pc-dimm,id=dimm16,memdev=mem-dimm16,node=0' \
  -object 'memory-backend-ram,id=mem-dimm17,size=512M' \
  -device 'pc-dimm,id=dimm17,memdev=mem-dimm17,node=0' \
  -object 'memory-backend-ram,id=mem-dimm18,size=512M' \
  -device 'pc-dimm,id=dimm18,memdev=mem-dimm18,node=0' \
  -object 'memory-backend-ram,id=mem-dimm19,size=512M' \
  -device 'pc-dimm,id=dimm19,memdev=mem-dimm19,node=0' \
  -object 'memory-backend-ram,id=mem-dimm20,size=512M' \
  -device 'pc-dimm,id=dimm20,memdev=mem-dimm20,node=0' \
  -object 'memory-backend-ram,id=mem-dimm21,size=512M' \
  -device 'pc-dimm,id=dimm21,memdev=mem-dimm21,node=0' \
  -object 'memory-backend-ram,id=mem-dimm22,size=512M' \
  -device 'pc-dimm,id=dimm22,memdev=mem-dimm22,node=0' \
  -object 'memory-backend-ram,id=mem-dimm23,size=512M' \
  -device 'pc-dimm,id=dimm23,memdev=mem-dimm23,node=0' \
  -object 'memory-backend-ram,id=mem-dimm24,size=512M' \
  -device 'pc-dimm,id=dimm24,memdev=mem-dimm24,node=0' \
  -object 'memory-backend-ram,id=mem-dimm25,size=512M' \
  -device 'pc-dimm,id=dimm25,memdev=mem-dimm25,node=0' \
  -object 'memory-backend-ram,id=mem-dimm26,size=512M' \
  -device 'pc-dimm,id=dimm26,memdev=mem-dimm26,node=0' \
  -object 'memory-backend-ram,id=mem-dimm27,size=512M' \
  -device 'pc-dimm,id=dimm27,memdev=mem-dimm27,node=0' \
  -object 'memory-backend-ram,id=mem-dimm28,size=512M' \
  -device 'pc-dimm,id=dimm28,memdev=mem-dimm28,node=0' \
  -object 'memory-backend-ram,id=mem-dimm29,size=512M' \
  -device 'pc-dimm,id=dimm29,memdev=mem-dimm29,node=0' \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'vmgenid,guid=90a1f419-c0c7-4fdd-a1d9-af11d47bb32f' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2,edid=off' \
  -chardev 'socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0' \
  -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
  -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:c93b4da01da5' \
  -drive 'file=/var/lib/vz/template/iso/virtio-drivers.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' \
  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' \
  -drive 'file=/dev/zvol/VMdata/vm-100-disk-0,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' \
  -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=A2:7F:7A:8D:6F:55,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' \
  -rtc 'driftfix=slew,base=localtime' \
  -machine 'type=pc-i440fx-5.2+pve0' \
  -global 'kvm-pit.lost_tick_policy=discard'
 

starlight

New Member
I had the same error today (it seems to have appeared after an apt-get upgrade; the installation was from the ISO on the website).

I don't know why, but I changed the machine type to q35, started the machine, killed it, changed back to i440fx, and then it starts up.

What's even stranger:

If I create a new machine (with the default i440fx; I don't get a "choose version" option in advanced mode), qm showcmd <vmid> --pretty shows:

Code:
-machine 'type=pc-i440fx-5.2.0+pve0' \


After changing to q35, starting, killing, and switching back to i440fx, it shows:

Code:
-machine 'type=pc-i440fx-5.1+pve0' \
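A quick way to confirm which machine version a VM will actually start with is to pull the -machine argument out of the showcmd output. A sketch, assuming the dump was saved to showcmd.txt (the filename and the printf line standing in for the real dump are illustrative):

```shell
# On a real node: qm showcmd 100 --pretty > showcmd.txt
# The printf below stands in for one line of that dump.
printf -- "  -machine 'type=pc-i440fx-5.1+pve0' \\\\\n" > showcmd.txt

# Extract just the machine type, dropping the +pveN suffix
machine_type=$(grep -o "type=pc-i440fx-[^+']*" showcmd.txt)
echo "$machine_type"    # type=pc-i440fx-5.1
rm -f showcmd.txt
```

Comparing this against the `machine:` line in the VM config (or its absence) makes it easy to spot when the default pin differs from what you expect.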
 