Proxmox VE 6.1 released!

martin

Proxmox Staff Member
We are very excited to announce the general availability of Proxmox VE 6.1.

It is built on Debian Buster 10.2 with a specially modified Linux kernel 5.3, and ships QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies.

This release brings new configuration options to the GUI, making work with Proxmox VE even more comfortable and secure. Cluster-wide bandwidth limits for traffic types such as migration, backup-restore, and clone can now be edited via the GUI. If the optional ifupdown2 network interface manager package is installed, the network configuration can be changed and reloaded from the Proxmox web interface without a reboot. Two-factor authentication has also been improved, with support for TOTP and U2F.
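For those who prefer the CLI, these bandwidth limits are stored cluster-wide in /etc/pve/datacenter.cfg. A minimal sketch with placeholder values (check `man datacenter.cfg` for the exact keys and units):

Code:
# /etc/pve/datacenter.cfg -- example values only
bwlimit: migration=100,restore=200,clone=150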

The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown.
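The policy can be selected via the datacenter options in the GUI, or directly in /etc/pve/datacenter.cfg; a sketch (the documented values are conditional, failover, freeze, and migrate):

Code:
# /etc/pve/datacenter.cfg
ha: shutdown_policy=migrate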

In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1.

There are also some notable bug fixes, among them a fix for the QEMU monitor timeout issue and stability improvements for Corosync. Countless other bug fixes and smaller improvements are listed in the release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1

Video intro
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-1

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Documentation
https://pve.proxmox.com/pve-docs/

Community Forum
https://forum.proxmox.com

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I dist-upgrade Proxmox VE 6.0 to 6.1 with apt?
A: Yes, either via the GUI or on the CLI with apt update && apt dist-upgrade
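For example, a typical CLI run (assuming a correctly configured package repository):

Code:
apt update
apt dist-upgrade
pveversion   # should now report pve-manager 6.1-x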

Q: Can I install Proxmox VE 6.1 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster

Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, so please follow the upgrade documentation exactly.
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
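Before and during that upgrade, the pve5to6 checklist script shipped with recent Proxmox VE 5.4 versions helps catch problems early; a typical invocation:

Code:
# run on each node and repeat after every major step
pve5to6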

Q: Where can I get more information about future feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting!

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Great, sounds good :)
I want to upgrade from 6.0 to 6.1 but don't get any updates.
apt update && apt dist-upgrade results in:
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Can you help me?
Thank you
 
Will the new HA migrate on shutdown feature also apply to node reboots?
My use case being: I want to patch/upgrade a node without manually migrating VMs away from the node and back again after the reboot.
 
I don't have a subscription; I updated sources.list but still don't get any packages:
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Can you post the output of apt update?
 
Can you post the output of apt update?

Same problem here, I do not use a subscription atm.

Code:
root@viking-t:~# apt update
Hit:1 http://ftp.bg.debian.org/debian buster InRelease
Hit:2 http://ftp.bg.debian.org/debian buster-updates InRelease
Hit:3 http://security.debian.org buster/updates InRelease
Hit:4 http://download.zerotier.com/debian/buster buster InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
root@viking-t:~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@viking-t:~# apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@viking-t:~#

I disabled the subscription repo to avoid the error message during apt update.

...so is this update only for servers with the subscription?

EDIT: No, it's not!
https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo

I will go and grab a subscription soon anyway. I just started with Proxmox yesterday, so I need to do some testing before a deeper dive.
 
apt update result:
root@node1:~# apt update
Hit:1 http://deb.debian.org/debian buster InRelease
Hit:2 http://deb.debian.org/debian buster-updates InRelease
Hit:3 http://download.proxmox.com/debian/ceph-nautilus buster InRelease
Err:4 https://enterprise.proxmox.com/debian/pve buster InRelease
401 Unauthorized [IP: 212.224.123.70 443]
Hit:5 http://security.debian.org buster/updates InRelease
Reading package lists... Done
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 212.224.123.70 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
 
apt update result:
root@node1:~# apt update
[...]
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 212.224.123.70 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.

If you're not using the enterprise repo, you need to add the Proxmox VE No-Subscription Repository from the link below to upgrade to 6.1:
https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
 
apt update result:
root@node1:~# apt update
[...]
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 212.224.123.70 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.

Buy a subscription :) Or delete this: /etc/apt/sources.list.d/pve-enterprise.list

And add this:

Code:
 deb http://download.proxmox.com/debian/pve buster pve-no-subscription

to /etc/apt/sources.list
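For reference, a complete /etc/apt/sources.list for a no-subscription setup could look roughly like this (mirror choice is yours; see the Package Repositories wiki page linked above):

Code:
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib

# Proxmox VE pve-no-subscription repository, not recommended for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

# Debian security updates
deb http://security.debian.org buster/updates main contrib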
 
Timing couldn't be better with this release. Just spent way too many hours trying to get GPU passthrough working and this update has fixed my problems without any additional configuration required. Thank you!
 
On two freshly installed servers with Proxmox 6.1 I cannot start LXC containers on local or LVM storage. I've attached the output of `strace lxc-start -n 100 -F`.

Code:
root@pve1:~# pct start 100
Job for pve-container@100.service failed because the control process exited with error code.
See "systemctl status pve-container@100.service" and "journalctl -xe" for details.
command 'systemctl start pve-container@100' failed: exit code 1
root@pve1:~# lxc-start -n 100 -F
lxc-start: 100: conf.c: run_buffer: 352 Script exited with status 2
lxc-start: 100: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "100"
lxc-start: 100: start.c: __lxc_start: 2032 Failed to initialize container "100"
Segmentation fault
root@pve1:~# dmesg | tail
[   23.077662] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[   24.889958] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
[  169.129070] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[  169.144425] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[  533.341624] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[  533.357210] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[  552.452058] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[  552.470831] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[  552.480625] lxc-start[3695]: segfault at 50 ip 00007f40ebdecf8b sp 00007ffd01d84e40 error 4 in liblxc.so.1.6.0[7f40ebd93000+8a000]
[  552.480659] Code: 9b c0 ff ff 4d 85 ff 0f 85 82 02 00 00 66 90 48 8b 73 50 48 8b bb f8 00 00 00 e8 80 78 fa ff 4c 8b 74 24 10 48 89 de 4c 89 f7 <41> ff 56 50 4c 89 f7 48 89 de 41 ff 56 58 48 8b 83 f8 00 00 00 8b


Code:
root@pve1:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 

Attachments

  • strace-lxc-start.txt
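(For anyone hitting similar container start failures: a full LXC debug log often shows which pre-start hook fails. A sketch, with an arbitrary log path:)

Code:
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log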
Thanks, so this means only on node shutdown (poweroff) will auto migration (and auto migrate-back) take place. Bummer!

No, on every shutdown request, be it a reboot or a poweroff.
 
After updating I get a kernel oops when I try to start Windows guests.

Dec 5 09:17:07 node3 kernel: [3236224.934697] PGD 0 P4D 0
Dec 5 09:17:07 node3 kernel: [3236224.935413] Oops: 0010 [#3] SMP PTI
Dec 5 09:17:07 node3 kernel: [3236224.935997] CPU: 26 PID: 3030113 Comm: kvm Tainted: P D O 5.0.21-3-pve #1
Dec 5 09:17:07 node3 kernel: [3236224.936646] Hardware name: Cisco Systems Inc UCSC-C220-M5SX/UCSC-C220-M5SX, BIOS C220M5.4.0.4c.0.0506190754 05/06/2019
Dec 5 09:17:07 node3 kernel: [3236224.937328] RIP: 0010: (null)
Dec 5 09:17:07 node3 kernel: [3236224.937929] Code: Bad RIP value.
Dec 5 09:17:07 node3 kernel: [3236224.938538] RSP: 0018:ffffb5eb0128bb88 EFLAGS: 00010246
Dec 5 09:17:07 node3 kernel: [3236224.939152] RAX: 0000000000000000 RBX: 00007f9323e00008 RCX: 0000000000000000
Dec 5 09:17:07 node3 kernel: [3236224.939756] RDX: 00007f9323e00008 RSI: ffffb5eb0128bd08 RDI: ffff9999da398000
Dec 5 09:17:07 node3 kernel: [3236224.940344] RBP: ffffb5eb0128bcc8 R08: 0000000000000007 R09: 0000000000000024
Dec 5 09:17:07 node3 kernel: [3236224.940919] R10: ffff99ab78a66200 R11: 0000000000000000 R12: ffffb5eb0128bb90
Dec 5 09:17:07 node3 kernel: [3236224.941476] R13: ffffb5eb0128bd08 R14: ffff9999da398058 R15: 0000000000000000
Dec 5 09:17:07 node3 kernel: [3236224.942018] FS: 00007f9324bfd700(0000) GS:ffff99b7ffa80000(0000) knlGS:0000000000000000
Dec 5 09:17:07 node3 kernel: [3236224.942570] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec 5 09:17:07 node3 kernel: [3236224.943122] CR2: ffffffffffffffd6 CR3: 0000002776208005 CR4: 00000000007626e0
Dec 5 09:17:07 node3 kernel: [3236224.943684] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec 5 09:17:07 node3 kernel: [3236224.944249] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Dec 5 09:17:07 node3 kernel: [3236224.944863] PKRU: 55555554
Dec 5 09:17:07 node3 kernel: [3236224.945437] Call Trace:
Dec 5 09:17:07 node3 kernel: [3236224.946020] kvm_vcpu_ioctl_get_hv_cpuid+0x44/0x220 [kvm]
Dec 5 09:17:07 node3 kernel: [3236224.946572] ? get_page_from_freelist+0xf55/0x1440
Dec 5 09:17:07 node3 kernel: [3236224.947120] ? get_page_from_freelist+0xf55/0x1440
Dec 5 09:17:07 node3 kernel: [3236224.947679] ? kvm_arch_vcpu_load+0x94/0x290 [kvm]
Dec 5 09:17:07 node3 kernel: [3236224.948227] kvm_arch_vcpu_ioctl+0x14b/0x11f0 [kvm]
Dec 5 09:17:07 node3 kernel: [3236224.948750] ? __alloc_pages_nodemask+0x13f/0x2e0
Dec 5 09:17:07 node3 kernel: [3236224.949267] ? mem_cgroup_commit_charge+0x82/0x4d0
Dec 5 09:17:07 node3 kernel: [3236224.949781] ? mem_cgroup_try_charge+0x8b/0x190
Dec 5 09:17:07 node3 kernel: [3236224.950223] ? mem_cgroup_throttle_swaprate+0x2c/0x154
Dec 5 09:17:07 node3 kernel: [3236224.950668] kvm_vcpu_ioctl+0xe5/0x610 [kvm]
Dec 5 09:17:07 node3 kernel: [3236224.951102] do_vfs_ioctl+0xa9/0x640
Dec 5 09:17:07 node3 kernel: [3236224.951596] ? handle_mm_fault+0xdd/0x210
Dec 5 09:17:07 node3 kernel: [3236224.952101] ksys_ioctl+0x67/0x90
Dec 5 09:17:07 node3 kernel: [3236224.952599] __x64_sys_ioctl+0x1a/0x20
Dec 5 09:17:07 node3 kernel: [3236224.953103] do_syscall_64+0x5a/0x110
Dec 5 09:17:07 node3 kernel: [3236224.953617] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Dec 5 09:17:07 node3 kernel: [3236224.954116] RIP: 0033:0x7f933cd63427
Dec 5 09:17:07 node3 kernel: [3236224.954593] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
Dec 5 09:17:07 node3 kernel: [3236224.955638] RSP: 002b:00007f9324bf6ce8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Dec 5 09:17:07 node3 kernel: [3236224.956189] RAX: ffffffffffffffda RBX: 00000000c008aec1 RCX: 00007f933cd63427
Dec 5 09:17:07 node3 kernel: [3236224.956752] RDX: 00007f9323e00000 RSI: ffffffffc008aec1 RDI: 0000000000000024
Dec 5 09:17:07 node3 kernel: [3236224.957331] RBP: 00007f9323e00000 R08: 0000000000000000 R09: 00007f9324bf9d18
Dec 5 09:17:07 node3 kernel: [3236224.957871] R10: 0000000000005000 R11: 0000000000000246 R12: 00007f93300e5e80
Dec 5 09:17:07 node3 kernel: [3236224.958351] R13: 00007f93300e5e80 R14: 00007f9323e00000 R15: 00007f93300e5e80
Dec 5 09:17:07 node3 kernel: [3236224.958893] Modules linked in: ip6table_raw iptable_raw nfnetlink_queue bluetooth ecdh_generic rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_mark xt_set xt_physdev xt_addrtype xt_comment xt_multiport xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_tcpudp ip_set_hash_net veth ceph libceph fscache ebtable_filter ebtables ip_set ip6table_filter ip6_tables sctp iptable_filter bpfilter xfs softdog nfnetlink_log nfnetlink intel_rapl skx_edac nfit x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass ipmi_ssif nls_iso8859_1 crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 crypto_simd zfs(PO) cryptd glue_helper zunicode(PO) zlua(PO) intel_cstate mgag200 ttm drm_kms_helper snd_pcm snd_timer drm snd soundcore i2c_algo_bit intel_rapl_perf fb_sys_fops joydev pcspkr syscopyarea input_leds sysfillrect sysimgblt mei_me ioatdma mei ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter acpi_pad
Dec 5 09:17:07 node3 kernel: [3236224.958929] mac_hid zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi sunrpc scsi_transport_iscsi ip_tables x_tables autofs4 btrfs xor zstd_compress raid6_pq dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c hid_generic usbkbd usbmouse usbhid hid ixgbe xfrm_algo megaraid_sas dca mdio lpc_ich ahci libahci wmi
Dec 5 09:17:07 node3 kernel: [3236224.966768] CR2: 0000000000000000
Dec 5 09:17:07 node3 kernel: [3236224.967490] ---[ end trace 0aa8662337049962 ]---
Dec 5 09:17:07 node3 kernel: [3236225.026187] RIP: 0010: (null)
Dec 5 09:17:07 node3 kernel: [3236225.026994] Code: Bad RIP value.
Dec 5 09:17:07 node3 kernel: [3236225.027703] RSP: 0018:ffffb5eb010efb88 EFLAGS: 00010246
Dec 5 09:17:07 node3 kernel: [3236225.028409] RAX: 0000000000000000 RBX: 00007f8b97200008 RCX: 0000000000000000
Dec 5 09:17:07 node3 kernel: [3236225.029118] RDX: 00007f8b97200008 RSI: ffffb5eb010efd08 RDI: ffff99a2ba3a8000
Dec 5 09:17:07 node3 kernel: [3236225.029827] RBP: ffffb5eb010efcc8 R08: 0000000000000007 R09: 0000000000000040
Dec 5 09:17:07 node3 kernel: [3236225.030545] R10: ffff99b5af424200 R11: 0000000000000000 R12: ffffb5eb010efb90
Dec 5 09:17:07 node3 kernel: [3236225.031258] R13: ffffb5eb010efd08 R14: ffff99a2ba3a8058 R15: 0000000000000000
Dec 5 09:17:07 node3 kernel: [3236225.032012] FS: 00007f9324bfd700(0000) GS:ffff99b7ffa80000(0000) knlGS:0000000000000000
Dec 5 09:17:07 node3 kernel: [3236225.032746] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec 5 09:17:07 node3 kernel: [3236225.033474] CR2: ffffffffffffffd6 CR3: 0000002776208005 CR4: 00000000007626e0
Dec 5 09:17:07 node3 kernel: [3236225.034220] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec 5 09:17:07 node3 kernel: [3236225.034967] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Dec 5 09:17:07 node3 kernel: [3236225.035709] PKRU: 55555554


This also affects migrations from/to other nodes. Linux VMs seem not to be affected.

I did not reboot after the update to apply the new kernel, which may be the reason.

I'll later shut down the remaining two Windows VMs on one node and reboot to see if the problem persists.

When I change the OS type of a newly created VM from Windows to Linux, the VM starts fine and boots from the installation ISO.
 
I did not reboot after the update to apply the new kernel, which may be the reason.

I'll later shut down the remaining two Windows VMs on one node and reboot to see if the problem persists.
Please do so, as Proxmox VE 6.1 uses the 5.3 kernel. But as the old kernel is still running, where it worked previously (I assume), it could be an issue with the host's hardware + the 5.0 kernel + the newer QEMU 4.1.1.
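A quick, generic way to compare the running kernel against what is installed (nothing assumed beyond the usual pve-kernel package naming):

Code:
uname -r                           # kernel currently running
dpkg -l 'pve-kernel-*' | grep ^ii  # pve kernels installed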

When I change the OS type of a newly created VM from Windows to Linux, the VM starts fine and boots from the installation ISO.

Huh, strange. Can you post the VM config?
 
Huh, strange. Can you post the VM config?
Yes, of course. But it is nothing special:


Configured as a Windows VM - this produces the kernel oops; the VM start action times out and no console is available, but the VM is displayed as running and the kvm process can be stopped through the web UI.

Code:
agent: 1
bootdisk: scsi0
cores: 4
ide0: cephfs:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: cephfs:iso/de_windows_server_2019_updated_nov_2019_x64_dvd_da26c983.iso,media=cdrom,size=3246988K
memory: 10000
name: testwork
net0: virtio=3E:47:53:5E:A1:C1,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
smbios1: uuid=bc513c2e-37b2-4548-90f5-4fa898c4c380
sockets: 2
virtio1: cephstor_vm:vm-142-disk-1,size=32G
vmgenid: d0548447-e665-4cb4-8496-cf4036cb0c53


Code:
TASK ERROR: start failed: command '/usr/bin/kvm -id 142 -name testwork -chardev 'socket,id=qmp,path=/var/run/qemu-server/142.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/142.pid -daemonize -smbios 'type=1,uuid=bc513c2e-37b2-4548-90f5-4fa898c4c380' -smp '8,sockets=2,cores=4,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/142.vnc,password -no-hpet -cpu 'kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,enforce' -m 10000 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=d0548447-e665-4cb4-8496-cf4036cb0c53' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/142.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:7422606656c9' -drive 'file=/mnt/pve/cephfs/template/iso/virtio-win-0.1.171.iso,if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' -drive 'file=/mnt/pve/cephfs/template/iso/de_windows_server_2019_updated_nov_2019_x64_dvd_da26c983.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=201' -drive 'file=rbd:cephstor/vm-142-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/cephstor_vm.keyring,if=none,id=drive-virtio1,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb' -netdev 'type=tap,id=net0,ifname=tap142i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=3E:47:53:5E:A1:C1,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=pc+pve1' -global 'kvm-pit.lost_tick_policy=discard'' failed: got timeout

The kvm process is running on the node where I try to start it:

Code:
3684951 ?        Sl     0:00 /usr/bin/kvm -id 142 -name testwork -chardev socket,id=qmp,path=/var/run/qemu-server/142.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/142.pid -daemonize -smbios type=1,uuid=bc513c2e-37b2-4548-90f5-4fa898c4c380 -smp 8,sockets=2,cores=4,maxcpus=8 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/142.vnc,password -no-hpet -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,enforce -m 10000 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device vmgenid,guid=d0548447-e665-4cb4-8496-cf4036cb0c53 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device VGA,id=vga,bus=pci.0,addr=0x2 -chardev socket,path=/var/run/qemu-server/142.qga,server,nowait,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:7422606656c9 -drive file=/mnt/pve/cephfs/template/iso/virtio-win-0.1.171.iso,if=none,id=drive-ide0,media=cdrom,aio=threads -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200 -drive file=/mnt/pve/cephfs/template/iso/de_windows_server_2019_updated_nov_2019_x64_dvd_da26c983.iso,if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=201 -drive file=rbd:cephstor/vm-142-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/cephstor_vm.keyring,if=none,id=drive-virtio1,format=raw,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -netdev type=tap,id=net0,ifname=tap142i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=3E:47:53:5E:A1:C1,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -rtc driftfix=slew,base=localtime -machine type=pc+pve1 -global kvm-pit.lost_tick_policy=discard

After changing the type to Linux, the VM boots fine.

Code:
agent: 1
bootdisk: scsi0
cores: 4
ide0: cephfs:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: cephfs:iso/de_windows_server_2019_updated_nov_2019_x64_dvd_da26c983.iso,media=cdrom,size=3246988K
memory: 10000
name: testwork
net0: virtio=3E:47:53:5E:A1:C1,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
smbios1: uuid=bc513c2e-37b2-4548-90f5-4fa898c4c380
sockets: 2
virtio1: cephstor_vm:vm-142-disk-1,size=32G
vmgenid: d0548447-e665-4cb4-8496-cf4036cb0c53


Addendum @11:08 cweber:
When I disable KVM hardware virtualization, the VM starts with type "windows".

I also disabled the QEMU agent and changed from SCSI to VirtIO, but this does not help; only after deactivating KVM hardware virtualization does the VM boot.

Code:
agent: 0
boot: cdn
bootdisk: virtio1
cores: 4
ide0: cephfs:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: cephfs:iso/de_windows_server_2019_updated_nov_2019_x64_dvd_da26c983.iso,media=cdrom,size=3246988K
kvm: 0
memory: 10000
name: testwork
net0: virtio=3E:47:53:5E:A1:C1,bridge=vmbr0,firewall=1,link_down=1
numa: 0
ostype: win10
smbios1: uuid=bc513c2e-37b2-4548-90f5-4fa898c4c380
sockets: 2
virtio1: cephstor_vm:vm-142-disk-1,size=32G
vmgenid: d0548447-e665-4cb4-8496-cf4036cb0c53
 
