I just upgraded the Proxmox host from 5.2 to 5.3-5. It's a mostly standard configuration on a Dell T20 machine, ZFS with mirrored hard disks. The update worked fine, except that the only Windows 10 VM is now incredibly slow as soon as it reaches the login screen. There's no high CPU usage or I/O; it just feels like KVM is not working or everything is being emulated.
qm config 113
Code:
boot: cdn
bootdisk: scsi0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: win10.lan
net0: virtio=C2:43:58:60:E7:AF,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
parent: UPDATE060618
scsi0: local-zfs:vm-113-disk-1,size=128G
scsihw: virtio-scsi-pci
smbios1: uuid=3f5916e4-34ff-4156-8d25-6ef9034b22b6
sockets: 1
pveversion -v
Code:
proxmox-ve: 5.3-1 (running kernel: 4.15.18-8-pve)
pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
pve-kernel-4.15: 5.2-12
pve-kernel-4.13: 5.2-2
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-3-pve: 4.13.16-50
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.4-1-pve: 4.13.4-26
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-33
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-5
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-31
pve-container: 2.0-31
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-16
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-43
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1
pveversion shows that kernel 4.15.18-8 is currently running, because I wanted to check whether it's a kernel-related problem; with kernel 4.15.18-9 I have the same issue. There are also 2x LXC containers and 3x KVM VMs on the same host which are working absolutely fine (but no Windows, just Debian / Ubuntu machines). With Proxmox 5.2, all VMs worked fine. Any idea how to solve this?
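To back up the "feels like KVM is not working" impression, this is roughly what I would check on the host. This is just a sketch, assuming an Intel CPU in the T20 (so the kvm_intel module) and VMID 113 from the config above:

Code:
# Are the KVM modules loaded and is /dev/kvm present?
lsmod | grep kvm
ls -l /dev/kvm

# Any KVM-related warnings at boot (e.g. virtualization disabled in the BIOS)?
dmesg | grep -i kvm

# Does the QEMU command line Proxmox generates for VM 113 actually enable KVM acceleration?
qm showcmd 113 | tr ' ' '\n' | grep -i kvm

If /dev/kvm is missing or the generated command line has no KVM/accel entry, the guest would indeed be running on pure emulation, which would match the symptoms.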