What is "kvm: Desc next is 3" indicative of?

Comparing performance isn't an option for me; in my case, VirtIO SCSI is completely unstable.
 
When you switched the disks from VirtIO SCSI to VirtIO Block and changed the SCSI controller, did you need to re-initialize the disks in the Windows guest (assign the drive letters again), or did Windows figure it out on its own?
@Max2048
When I did this on a test VM, the C: drive was fine; the D: drive required initialisation.
 
Has anyone already updated to PVE 8.2 and checked whether the problem still exists there?

According to the changelog, the bug may have been fixed, but it isn't entirely clear.
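For anyone who wants to check for themselves, the package changelogs can be read directly on the host (assuming the Proxmox repositories are configured; the QEMU and kernel packages seem like the relevant candidates for these virtio errors, since the messages come from the QEMU process):
Code:
# Print the packaged changelog for Proxmox's QEMU build and the 6.8 kernel metapackage.
apt changelog pve-qemu-kvm
apt changelog proxmox-kernel-6.8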

I installed kernel 6.8 on April 14 and later upgraded to PVE 8.2.
I haven't had an occurrence since upgrading the kernel; however, not enough time has passed to declare it resolved for me.
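In case it helps anyone else watching for a recurrence, here is a minimal way to follow the host journal for the two error signatures seen in this thread (the patterns are taken from the logs posted here; run it on the PVE host as root):
Code:
# Follow the journal and print only the known virtio error signatures.
journalctl -f | grep -E --line-buffered 'Desc next is|zero sized buffers are not allowed'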
 
Still with us.

May 04 07:34:08.429285 QEMU[1940]: kvm: virtio: zero sized buffers are not allowed
 
W2022 + MSSQL 2022 + SCSI
May 12 07:57:42 pve9 QEMU[2663212]: kvm: virtio: zero sized buffers are not allowed
++

pveversion -v
Code:
proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.2
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

qm config 201
Code:
agent: 1
bios: ovmf
boot: order=virtio0;net0
cores: 16
cpu: x86-64-v4
machine: pc-q35-8.1
memory: 122880
meta: creation-qemu=8.1.2,ctime=1712315022
name: virt100
net0: virtio=BC:24:11:22:78:7A,bridge=vmbr999,firewall=1,tag=100
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=XXXZZXXXX
sockets: 1
virtio0: local-btrfs:201/vm-201-disk-0.raw,iothread=1,size=100G
virtio1: local-btrfs:201/vm-201-disk-1.raw,iothread=1,size=1000G
virtio2: local-btrfs:201/vm-201-disk-2.raw,iothread=1,size=200G
vmgenid: 7821b093-59dd-4d6a-b2f9-5ffd14040570

Syslog (UTC)
Code:
May 31 03:01:08 pve9 QEMU[4274]: kvm: Desc next is 5
May 31 03:01:11 pve9 QEMU[4274]: kvm: Desc next is 3
May 31 03:01:13 pve9 pve-ha-crm[1649]: loop take too long (92 seconds)
May 31 03:01:16 pve9 pve-ha-lrm[2264]: loop take too long (101 seconds)

Second occurrence, syslog (UTC)
Code:
May 31 11:52:12 pve9 QEMU[838445]: kvm: Desc next is 3
May 31 11:52:17 pve9 pve-ha-crm[1649]: loop take too long (105 seconds)
May 31 11:52:20 pve9 pve-ha-lrm[2264]: loop take too long (111 seconds)
May 31 12:20:13 pve9 pve-ha-lrm[2264]: loop take too long (36 seconds)
May 31 12:26:16 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:26:51 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:27:10 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:28:08 pve9 pveproxy[1032371]: proxy detected vanished client connection
May 31 12:28:09 pve9 pveproxy[970810]: proxy detected vanished client connection
May 31 12:28:12 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:28:12 pve9 pveproxy[1015813]: proxy detected vanished client connection
May 31 12:28:12 pve9 pvedaemon[1643]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:28:13 pve9 pve-ha-crm[1649]: loop take too long (55 seconds)
May 31 12:28:14 pve9 pve-ha-lrm[2264]: loop take too long (60 seconds)
May 31 12:28:36 pve9 pvedaemon[971019]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got>
May 31 12:32:10 pve9 pvestatd[1628]: auth key pair too old, rotating..
 
Move all disks to VirtIO Block with IO thread: ON, and set the SCSI Controller to Default (if it is set to VirtIO SCSI Single, even with no disks attached, system disks will randomly hang). It looks like a weird bug in the VirtIO SCSI drivers/emulation in Proxmox 8.
Is this configuration still working fine for you? I'm going to move my VMs to it as well; this issue is not going away otherwise.
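For reference, here is a rough sketch of how that switch can be done from the host CLI. This is my reading of the steps, not an official procedure: the VMID and volume ID below are only examples modelled on the config posted earlier, so check your own `qm config <vmid>` output first, and note that the boot order has to follow the renamed disk:
Code:
# Shut the guest down before changing the disk bus.
qm shutdown 201

# Detach the disk from the SCSI bus; the volume stays in the config as "unused0".
qm set 201 --delete scsi0

# Re-attach the same volume as a VirtIO Block device with an IO thread.
# (Volume ID is an example; take yours from "qm config 201".)
qm set 201 --virtio0 local-btrfs:201/vm-201-disk-0.raw,iothread=1

# The boot order must point at the new device name.
qm set 201 --boot 'order=virtio0;net0'

# Switch the controller away from VirtIO SCSI ("Default (LSI 53C895A)" in the GUI).
qm set 201 --scsihw lsi

qm start 201
As noted earlier in the thread, after the bus change Windows may need data disks to be brought back online or re-initialised in Disk Management, while the system disk usually comes up fine.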
 
