What is "kvm: Desc next is 3" indicative of?

There is no way for me to compare performance; in my case VirtIO SCSI is completely unstable.
 
When you switched the disks from VirtIO SCSI to VirtIO Block and changed the SCSI Controller, did you need to re-initialize the disks in the Windows guest (assign the drive letters again), or did Windows figure it out on its own?
@Max2048
When I did this on a test VM, the C: drive was fine; the D: drive required initialisation.
 
Has anyone already updated to PVE 8.2 and checked whether the problem still exists there?

According to the changelog, the bug may have been fixed, but it is not entirely clear.

I installed kernel 6.8 on April 14 and later upgraded to PVE 8.2.
I haven't had an occurrence since upgrading the kernel; however, not enough time has passed to declare it resolved for me.
 
Still with us.

May 04 07:34:08.429285 QEMU[1940]: kvm: virtio: zero sized buffers are not allowed
 
W2022 + MSSQL 2022 + SCSI
May 12 07:57:42 pve9 QEMU[2663212]: kvm: virtio: zero sized buffers are not allowed
++

pveversion -v
Code:
proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.2
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

qm config 201
Code:
agent: 1
bios: ovmf
boot: order=virtio0;net0
cores: 16
cpu: x86-64-v4
machine: pc-q35-8.1
memory: 122880
meta: creation-qemu=8.1.2,ctime=1712315022
name: virt100
net0: virtio=BC:24:11:22:78:7A,bridge=vmbr999,firewall=1,tag=100
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=XXXZZXXXX
sockets: 1
virtio0: local-btrfs:201/vm-201-disk-0.raw,iothread=1,size=100G
virtio1: local-btrfs:201/vm-201-disk-1.raw,iothread=1,size=1000G
virtio2: local-btrfs:201/vm-201-disk-2.raw,iothread=1,size=200G
vmgenid: 7821b093-59dd-4d6a-b2f9-5ffd14040570

Syslog (UTC)
Code:
May 31 03:01:08 pve9 QEMU[4274]: kvm: Desc next is 5
May 31 03:01:11 pve9 QEMU[4274]: kvm: Desc next is 3
May 31 03:01:13 pve9 pve-ha-crm[1649]: loop take too long (92 seconds)
May 31 03:01:16 pve9 pve-ha-lrm[2264]: loop take too long (101 seconds)

Second occurrence, syslog (UTC)
Code:
May 31 11:52:12 pve9 QEMU[838445]: kvm: Desc next is 3
May 31 11:52:17 pve9 pve-ha-crm[1649]: loop take too long (105 seconds)
May 31 11:52:20 pve9 pve-ha-lrm[2264]: loop take too long (111 seconds)
May 31 12:20:13 pve9 pve-ha-lrm[2264]: loop take too long (36 seconds)
May 31 12:26:16 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:26:51 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:27:10 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:28:08 pve9 pveproxy[1032371]: proxy detected vanished client connection
May 31 12:28:09 pve9 pveproxy[970810]: proxy detected vanished client connection
May 31 12:28:12 pve9 pvedaemon[1644]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:28:12 pve9 pveproxy[1015813]: proxy detected vanished client connection
May 31 12:28:12 pve9 pvedaemon[1643]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got t>
May 31 12:28:13 pve9 pve-ha-crm[1649]: loop take too long (55 seconds)
May 31 12:28:14 pve9 pve-ha-lrm[2264]: loop take too long (60 seconds)
May 31 12:28:36 pve9 pvedaemon[971019]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got>
May 31 12:32:10 pve9 pvestatd[1628]: auth key pair too old, rotating..
 
Move all disks to VirtIO Block with IO Thread: ON and set the SCSI Controller to Default (if it is set to VirtIO SCSI Single, even without any SCSI disks attached, the system disk will randomly hang). It looks like a weird bug with the VirtIO SCSI drivers/emulation in Proxmox 8.
Still working fine with this configuration? I'm going to move my VMs to this configuration too, since this issue is not going away otherwise.
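For anyone making the same move from the CLI, a rough sketch of the equivalent commands (VM ID 100, the storage name local-lvm and the volume name are only placeholders, so adjust them to your setup; Windows may also need the VirtIO Block driver loaded before the boot disk is switched, see the posts further down):
Code:
# detach the SCSI disk (the volume shows up as "unused") and reattach it as VirtIO Block
qm set 100 --delete scsi0
qm set 100 --virtio0 local-lvm:vm-100-disk-0,iothread=1
# boot from the new bus and switch the controller to Default (LSI 53C895A)
qm set 100 --boot 'order=virtio0;net0'
qm set 100 --scsihw lsi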
 
New to the forum; I was facing the same issue on the latest version of PVE with an external Ceph cluster.
I could easily reproduce the issue on a Windows 2022 VM running DiskSpd when using VirtIO SCSI with IO Thread enabled.

I only disabled IO Thread, and the issue hasn't occurred again yet.

PVE Version
Code:
pve-manager/8.2.4/faa83925c9641325 (running kernel: 6.8.8-1-pve)

VM Config without IOThread
Code:
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: Skylake-Server-v5
efidisk0: <redacted>:vm-105-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: none,media=cdrom
machine: pc-i440fx-8.1
memory: 4096
meta: creation-qemu=8.1.5,ctime=1719332365
name: <redacted>
net0: virtio=BC:24:11:59:9F:D3,bridge=vmbr1,tag=103
numa: 0
ostype: win11
scsi0: <redacted>:vm-105-disk-1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=134dc02b-85dc-43da-b0c8-8c00fe468ce4
sockets: 1
vmgenid: bc925ff5-c095-4c7d-9ae1-ebd0dd47cde6

Diskspd Command
Code:
DiskSpd.exe -c10G -d360 -r -w40 -t16 -o64 -b4K -Sh -L SpeedTest.dat
 
There is currently only one solution to this and it's mentioned here: https://forum.proxmox.com/goto/post?id=651323

IOThread has no effect on this. People have tested it several times and it will occur again, randomly.
Seems I jinxed it. Overnight the VM crashed, as predicted by you.

I have tried changing the VM settings to VirtIO Block with IO Thread: ON and the SCSI Controller to Default, but I am now getting a BSOD on boot-up (boot device not accessible). Not sure if any of you have encountered this issue.


Code:
bios: ovmf
boot: order=virtio0;ide2;net0
cores: 4
cpu: Skylake-Server-v5
efidisk0: <redacted>:vm-105-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: none,media=cdrom
machine: pc-i440fx-8.1
memory: 4096
meta: creation-qemu=8.1.5,ctime=1719332365
name: <redacted>
net0: virtio=BC:24:11:59:9F:D3,bridge=vmbr1,tag=103
numa: 0
ostype: win11
smbios1: uuid=134dc02b-85dc-43da-b0c8-8c00fe468ce4
sockets: 1
virtio0: <redacted>:vm-105-disk-1,iothread=1,size=32G
vmgenid: bc925ff5-c095-4c7d-9ae1-ebd0dd47cde6
 
Yes, so what I did was:
1. Add a new VirtIO Block disk with IO Thread enabled; the size doesn't matter (see the sketch after this list).
2. Boot as usual and check in Device Manager that the drivers for the added disk are loaded correctly.
3. Now you can change the boot disk to VirtIO Block, and all other disks attached to the VM as well.
4. It should boot normally now.
5. Change the SCSI Controller to Default and reboot again. It should also work flawlessly.
6. You can remove the temporarily added disk from step 1.
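For steps 1 and 6, a small CLI sketch (VM ID 105, the storage local-lvm and the virtio5 slot are placeholders):
Code:
# step 1: add a tiny temporary VirtIO Block disk (1 GiB) so Windows loads the viostor driver
qm set 105 --virtio5 local-lvm:1,iothread=1
# step 6, after the reboots: detach it again; the leftover volume appears as an
# unused disk and can then be removed from the hardware tab
qm set 105 --delete virtio5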
 
Thanks Max, that worked.

For those asking about performance, I ran the following DiskSpd tests and there are noticeable differences.
The storage was always the same external Ceph, with no load apart from this VM.
Command: DiskSpd.exe -c1G -d60 -r -w40 -t4 -o64 -b4K -Sh -L SpeedTest.dat

Test #1> Controller: VirtIO SCSI, Disk: SCSI
Code:
Stats |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev | 
-----------------------------------------------------------------------------------------------------
total:        5163270144 |      1260564 |      82.06 |   21006.33 |   12.026 |    14.255
read:         3100942336 |       757066 |      49.28 |   12615.92 |    3.351 |     3.613
write:        2062327808 |       503498 |      32.78 |    8390.41 |   25.069 |    14.348

Test #2> Controller: VirtIO SCSI, Disk: SATA
Code:
Stats |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev |
-----------------------------------------------------------------------------------------------------
total:        2409717760 |       588310 |      38.30 |    9805.42 |   25.188 |    17.099
read:         1447251968 |       353333 |      23.00 |    5889.03 |   24.379 |    16.146
write:         962465792 |       234977 |      15.30 |    3916.38 |   26.404 |    18.372

Test #3> Controller: Default, Disk: VirtIO Block, IO Thread: Off
Code:
Stats |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev | 
-----------------------------------------------------------------------------------------------------
total:        1641836544 |       400839 |      26.10 |    6680.64 |   38.281 |    16.920
read:          986255360 |       240785 |      15.68 |    4013.08 |   35.885 |    15.853
write:         655581184 |       160054 |      10.42 |    2667.56 |   41.886 |    17.813

Test #4> Controller: Default, Disk: VirtIO Block, IO Thread: On
Code:
Stats |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | LatStdDev |
-----------------------------------------------------------------------------------------------------
total:        1835835392 |       448202 |      29.17 |    7468.10 |   34.244 |    14.438
read:         1103183872 |       269332 |      17.53 |    4487.71 |   32.176 |    13.691
write:         732651520 |       178870 |      11.64 |    2980.40 |   37.358 |    14.966

So using VirtIO Block does give stability, but there is quite a performance trade-off, especially compared to VirtIO SCSI.
Just for reference, I ran the same test on a similar VM on ESXi, and performance-wise it was close to test #1.

Hopefully this gets fixed soon.
 
Hello.
Same problem for me: PVE 8.2.4 (3 nodes), WS2019 with SQL Server, Ceph all-SSD storage, VirtIO SCSI with IO Thread ON and discard on.
PVE: Aug 06 06:59:46 pve1 QEMU[7554]: kvm: virtio: zero sized buffers are not allowed
VM: Reset to device, \Device\RaidPort1, was issued
And the VM froze completely.
Has anyone tested with or without discard?
 

There is no work around for this bug, its not even understood what component is causing it. My advice is to migrate away from Proxmox if you have SQL workloads.
 
1.) Try updating your microcode:

Add "non-free-firmware" to your /etc/apt/sources.list (see https://wiki.debian.org/Microcode):
Code:
cp /etc/apt/sources.list /etc/apt/sources.list.bak
sed -i 's/contrib/contrib non-free-firmware/g' /etc/apt/sources.list
apt update

Then install the package for your CPU:
Code:
apt install intel-microcode
or
Code:
apt install amd64-microcode

(A restart of the node is required.)
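After the reboot you can check whether the new microcode was actually loaded, for example:
Code:
journalctl -k | grep -i microcode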

2.) Try running your SQL servers with IO Thread OFF.
I run my SQL servers with IO Thread OFF because of freezes (a CLI sketch follows at the end of this post).

3.) Check that you have the newest Windows VirtIO drivers installed.
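For point 2, a sketch of disabling IO Thread on an existing disk from the CLI (VM ID 105 and the storage/volume names are placeholders; qm set wants the full drive string again, so copy it from qm config first, then stop and start the VM):
Code:
# look up the current drive definition
qm config 105 | grep ^scsi0
# e.g. scsi0: local-lvm:vm-105-disk-1,iothread=1,size=32G
# re-set the same volume with iothread=0
qm set 105 --scsi0 local-lvm:vm-105-disk-1,iothread=0,size=32G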
 
Just to report, we had this issue recently on a Windows Server 2022 VM. The solution was to use VirtIO driver 0.1.204. In this link there are reports that version 0.1.229 also solves the problem: https://github.com/virtio-win/kvm-guest-drivers-windows/issues/756
I have the latest VirtIO drivers; the most stable configuration for me is VirtIO SCSI (not VirtIO SCSI single), hard disk as SCSI, aio=threads, no IO Thread. I am testing with WS2016.
With WS2022, I get a reboot with: "The computer has rebooted from a bugcheck. The bugcheck was: 0x000000ef (0xffff9c0a368ad080, 0x0000000000000000, 0x0000000000000000, 0x0000000000000000). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: 15b7553f-9ca9-46f5-8721-1a7482c021f6."
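For clarity, that combination corresponds roughly to entries like these in the VM config (the storage and volume names are placeholders):
Code:
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-100-disk-0,aio=threads,size=100G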
 
