Problem with Windows Server 2019 and viostor

marco011ET

New Member
Mar 5, 2021
Good morning,

I have a big problem with a Windows VM.

Roughly every hour, viostor resets the device and all shares crash (the console via the Proxmox GUI as well); in the Event Viewer I see the viostor error, event ID 129.

I searched the net, but most sources say it is an HDD problem. I checked all the disks and see no errors, and all the other VMs work perfectly; only this one does not.

Has anyone run into this error and managed to resolve it?

Thanks in advance for the support.
 
Please provide the output of pveversion -v and qm config <VMID> for the VM in question.
 
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-5 (running version: 7.1-5/6fe299a0)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-7
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.4.143-1-pve: 5.4.143-1
ceph: 16.2.6-pve2
ceph-fuse: 16.2.6-pve2
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

qm config

Code:
boot: order=virtio0;net0;ide0
cores: 8
ide0: local:iso/virtio-win-0.1.185.iso,media=cdrom,size=402812K
machine: pc-i440fx-6.0
memory: 65536
name: CAAR-DATA-STORAGE
net0: virtio=9A:84:25:6E:83:14,bridge=vmbr0,firewall=1
net1: virtio=F2:22:C1:60:BB:72,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
parent: POINT_0
smbios1: uuid=95110096-3228-48e0-b666-c92a452df525
sockets: 1
startup: up=300
virtio0: VM:vm-101-disk-0,size=100G
virtio1: VM:vm-101-disk-1,size=3T
virtio10: VM:vm-101-disk-10,size=512G
virtio11: VM:vm-101-disk-11,size=3T
virtio12: VM:vm-101-disk-12,size=512G
virtio13: VM:vm-101-disk-13,size=100G
virtio2: VM:vm-101-disk-2,size=500G
virtio3: VM:vm-101-disk-3,size=4000G
virtio4: VM:vm-101-disk-4,size=1T
virtio5: VM:vm-101-disk-5,size=1536G
virtio6: VM:vm-101-disk-6,size=5T
virtio7: VM:vm-101-disk-7,size=512G
virtio8: VM:vm-101-disk-8,size=3T
virtio9: VM:vm-101-disk-9,size=4T
vmgenid: 050888d9-33fa-47e1-8f9e-270859a6fc8a
 
This is most likely the same issue as reported in multiple threads.
The issue is with kernel 5.13 and VirtIO Block/SATA. If possible, either change the disks to VirtIO SCSI or IDE for the time being, or downgrade the kernel to 5.11.
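If you take the kernel downgrade route, here is a minimal sketch, assuming the host boots via GRUB and the 5.11 kernel is still installed (it appears in the pveversion output above); the exact GRUB menu entry title below is an assumption, verify it in /boot/grub/grub.cfg:
Code:
# List the installed kernels; 5.11.22-7-pve shows up in pveversion above.
ls /boot/vmlinuz-*
# Either select the 5.11 kernel under "Advanced options" in the boot menu,
# or make it the default in /etc/default/grub (entry title is an assumption):
#   GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.11.22-7-pve"
# Then apply the change and reboot:
update-grub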
 
Thanks for the reply. How could I change a disk from VirtIO to IDE? Will there be a problem with the SCSI controller?
 
When the VM is offline, detach one disk and then edit the unused disk to attach it again with IDE or SCSI.
It is recommended to use SCSI, but Windows VMs require additional drivers to be installed.
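As a concrete sketch of that on the CLI for VM 101 from the config above (storage name "VM" and the disk picked here are assumptions, adjust them to your setup; the VM must be powered off):
Code:
# Detach one VirtIO Block disk; its image then shows up as unused in the config.
qm set 101 --delete virtio1
# Re-attach the same image on the VirtIO SCSI bus (recommended):
qm set 101 --scsihw virtio-scsi-pci
qm set 101 --scsi1 VM:vm-101-disk-1
# Alternatively, attach it as IDE instead:
#   qm set 101 --ide1 VM:vm-101-disk-1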
 
I changed the boot disk to SCSI, but at startup Windows goes to a blue screen. I suppose it is a driver problem; can I find the driver on the VirtIO drivers ISO?
 
Yes, that's because the SCSI driver is missing.
It is available on the VirtIO drivers ISO, but you might want to switch your boot disk to IDE and one of the other disks to SCSI, then boot once so you can install the driver. Afterwards you can switch all disks to SCSI.
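A sketch of that sequence for VM 101, again with the storage name, disk numbering, and boot order taken from the config above as assumptions:
Code:
# With the VM powered off: boot disk temporarily on IDE (needs no extra
# driver) and one data disk on SCSI so Windows asks for the SCSI driver.
qm set 101 --delete virtio0,virtio1
qm set 101 --ide1 VM:vm-101-disk-0
qm set 101 --scsihw virtio-scsi-pci
qm set 101 --scsi1 VM:vm-101-disk-1
qm set 101 --boot 'order=ide1;net0;ide0'
# Boot once, install the vioscsi driver from the VirtIO ISO on ide0,
# shut down, then re-attach the boot disk (and the rest) as SCSI and
# restore the boot order, e.g.:
#   qm set 101 --boot 'order=scsi0;net0;ide0'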
 
Hello, I have the same issue, but my virtual machine has been configured with SCSI since creation, and the kernel running on the Proxmox host is version 5.15.53-1-pve.

pveversion
proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-12
pve-kernel-5.15: 7.2-10
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-3-pve: 5.15.39-3
ceph: 16.2.9-pve1
ceph-fuse: 16.2.9-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.2.6-1
proxmox-backup-file-restore: 2.2.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1

vm config
agent: 1
bios: ovmf
boot: order=scsi0;sata0
cores: 20
cpu: host,flags=+pdpe1gb;+hv-tlbflush
cpulimit: 6
efidisk0: SSD:vm-251050-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K
hotplug: disk,network,usb,memory,cpu
machine: pc-q35-6.1
memory: 20480
meta: creation-qemu=6.1.0,ctime=1644838934
name: XXXXXXXXXX
net0: virtio=0A:41:AB:88:72:92,bridge=vmbr1,tag=251
numa: 1
ostype: win11
sata0: none,media=cdrom
scsi0: SSD:vm-251050-disk-1,discard=on,iothread=1,size=70G,ssd=1
scsi1: NVME:vm-251050-disk-0,discard=on,iops_rd=5000,iops_rd_max=10000,iops_wr=5000,iops_wr_max=10000,iothread=1,size=10G,ssd=1
scsi2: SATA:vm-251050-disk-0,backup=0,discard=on,iothread=1,size=20T,ssd=1
scsi3: SSD:vm-251050-disk-3,discard=on,iops_rd=2000,iops_rd_max=5000,iops_wr=2000,iops_wr_max=5000,iothread=1,size=3584G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=66664c74-0005-41f3-9b89-c0cd0f1881e7
sockets: 2
tpmstate0: SSD:vm-251050-disk-2,size=4M,version=v2.0
vcpus: 6
vmgenid: 3905d6b1-69d9-4ff5-b58e-bd7c9a1a0beb

The storage backend is Ceph 16.2.9. This virtual machine has disks from three different types of Ceph pools: the SSD and NVME pools are served by an external Ceph cluster of the same version (another Proxmox hyperconverged cluster), and the only disk with problems is scsi2, from the SATA pool, which is served by the local Ceph of this cluster.

Thanks in advance!
 
