Proxmox 4.4 virtio_scsi regression.

tomtom13
Dec 28, 2016
Hi folks,
I was successfully using proxmox 4.3 with hard drives passed to VMs via virtio_scsi (both single & pci) without any issue and with any mix of configuration flags (aio, threads, etc.).

After upgrading to 4.4, all VMs corrupted their file systems on the passthrough disks (fun).

Right now I've reverted the production systems to 4.3 and run there successfully whatever I could rescue. I use 2 servers as a test ground with proxmox 4.4, and I can confirm that disks connected via built-in SATA, SAS HBA, SAS HBA -> SAS expander, and SAS HBA -> SATA are all affected by this problem. The problem seems to be indiscriminate of hardware (all Xeons and ECC RAM, though), drive connection type (no crazy fibre stuff) and the file system the VM is running (all Linux, though).

Below is a snippet from /var/log/messages from a randomly picked VM

Code:
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#4 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#4 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#4 CDB: Write(16) 8a 00 00 00 00 00 00 20 b0 00 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2142208
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#6 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#6 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#6 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#6 CDB: Write(16) 8a 00 00 00 00 00 00 20 c0 00 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2146304
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#0 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#0 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#0 CDB: Write(16) 8a 00 00 00 00 00 00 20 d0 00 00 00 06 c8 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2150400
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#5 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#5 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#5 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#5 CDB: Write(16) 8a 00 00 00 00 00 00 20 b8 00 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2144256
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#3 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#3 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#3 CDB: Write(16) 8a 00 00 00 00 00 00 20 f0 00 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2158592
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#2 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#2 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#2 CDB: Write(16) 8a 00 00 00 00 00 00 20 e8 00 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2156544
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#15 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#15 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#15 CDB: Write(16) 8a 00 00 00 00 00 00 21 10 00 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2166784
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#17 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#17 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#17 CDB: Write(16) 8a 00 00 00 00 00 00 21 1d 40 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2170176
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#13 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#13 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#13 CDB: Write(16) 8a 00 00 00 00 00 00 21 08 00 00 00 07 50 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2164736
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#18 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#18 Sense Key : Illegal Request [current]
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#18 Add. Sense: Invalid field in cdb
Dec 27 23:01:53 tevva-server kernel: sd 4:0:0:4: [sdc] tag#18 CDB: Write(16) 8a 00 00 00 00 00 00 21 25 40 00 00 08 00 00 00
Dec 27 23:01:53 tevva-server kernel: blk_update_request: critical target error, dev sdc, sector 2172224
 
Nobody? Seriously, is nobody bothered that the most important milestone of paravirtualisation is broken for no good reason in the new proxmox ... and is nobody at proxmox bothered that proxmox took a giant step back?
 
Do you use local disks? If yes, ZFS? lvm-thin? Or files (raw, qcow2)?
Local disks (SATA / SAS) passed through directly to the VM:
scsihw: virtio-scsi-pci (or virtio-scsi-single, gives the same issue on 4.4)
scsiX: /dev/disk/by-id/disk_name,size=XYZ

In the guest operating system, any FS can go on this device: ext3, ext4, xfs, btrfs

(this is what I mean by passing a disk directly to the guest)
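For reference, this is roughly how such a passthrough is set up from the CLI - just a sketch, the VMID 100 and the disk ID below are placeholders rather than my real config:

Code:
# assumption: VM 100 exists and the by-id path points at a whole physical disk
qm set 100 -scsihw virtio-scsi-pci
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL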

Does it also happen when you restore a 4.3 backup to a 4.4 host?
This happens when:
- upgrading 4.3 to 4.4 with a guest VM present and working OK (guest shut down during the upgrade, of course)
- upgrading 4.3 to 4.4 and then installing a guest VM
- doing a fresh 4.4 install and then installing a guest VM inside


FYI, don't get me wrong guys, but that type of passing disks through directly, with pretty much zero overhead, is a holy grail for some ... I can run a high-demand CCTV system with motion detection inside a virtual machine and it runs as smoothly as on bare metal. I can migrate everything to a new machine with little downtime by just shifting the disks across and passing them to a cloned VM image ...
 

Could you post the output of "pveversion -v"? Does the syslog/journal on the host show anything out of the ordinary when the issue occurs? We cannot reproduce this here so far, so it might be a hardware / kernel combination that is at fault. If you still have them, could you try booting with the pre-upgrade kernel and check whether this works or not?
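(A rough sketch of how to gather that, assuming a standard PVE install - nothing here is specific to this setup:)

Code:
# on the host, follow kernel / journal messages while reproducing the issue in the guest
dmesg -w
journalctl -f
# list the installed pve kernels, so the pre-upgrade one can be picked from the boot menu
dpkg -l 'pve-kernel-*'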
 
Code:
pveversion -v
proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-2 (running version: 4.4-2/80259e05)
pve-kernel-4.4.35-1-pve: 4.4.35-76
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-84
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-89
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80

In terms of hardware (just to list a few):

1) HP ProLiant S326M1 Gen6, both SATA and SAS malfunctioning:
00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller
06:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03)

2) HP ProLiant DL180 Gen6, both SATA and SAS malfunctioning:
00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller
03:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

3) mashed-up server from spare bits, SATA controller malfunctioning as well with 4.4 virtio_scsi (not using the Marvell chip - not stupid enough !!!):
00:1f.2 SATA controller: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller (rev 06)
07:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9128 PCIe SATA 6 Gb/s RAID controller with HyperDuo (rev 11)

4) workstation with an Asus Z9PE-D8 WS, dual Xeon E5-2690, ECC RAM, Intel controller (not using the Marvell chip - again, not insane enough)



Example of fully functioning configuration on 4.3:
Code:
cat /etc/pve/qemu-server/100.conf
boot: cdn
bootdisk: scsi0
cores: 8
cpu: host
memory: 40960
name: tevva-cctv
net0: virtio=7A:50:18:9B:50:14,bridge=vmbr0
net1: virtio=C2:80:E5:A2:32:CC,bridge=vmbr0
numa: 1
onboot: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-1,size=10G
scsi1: /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VLJ45PPY,aio=native,size=7814026584K
scsi2: /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VLJ4L7BY,aio=native,size=7814026584K
scsi3: /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VLJ7DPRY,aio=native,size=7814026584K
scsi4: /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VLJ7E99Y,aio=native,size=7814026584K
scsihw: virtio-scsi-pci
smbios1: uuid=e2a97857-3b2d-4994-96fe-26aed2ca1e5a
sockets: 2


Soooo I'm very surprised that you can't replicate this on your end. I've got people coming to me on another forum saying they have exactly the same problem with passthrough disks in proxmox 4.4.
 
Hi Tomtom

I tried to reproduce the problem you mentioned by doing a SCSI LUN passthrough to a VM and could not reproduce your problem:

Steps I used:
* add a SCSI direct disk to a VM
qm set 403 -scsi1 /dev/disk/by-id/wwn-0xFREEBSD_MYDEVID_1
* after partitioning the device, running mkfs and mounting it, run fio on it to hammer it a bit
fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 --name=64kwrite --group_reporting

I don't see anything suspicious in the logs.
Am I missing something here?

Also, can you check on the hosts whether the status of the drives is OK with smartctl?
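(For example - /dev/sdX below stands in for the real device on the host:)

Code:
# on the host: overall health verdict and full SMART report for the passed-through disk
smartctl -H /dev/sdX
smartctl -a /dev/sdX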
 
@manu first of all, you are not passing a real disk (at least it does not look like it from your example); second of all, have you got these settings:
Code:
cpu: host
scsihw: virtio-scsi-pci

S.M.A.R.T. is continuously monitored on all drives and there is no problem with any of them (even the helium level on some quirky drives is perfect).
 
In case of a qemu passthrough bug I would expect this to be reproducible with an iSCSI device, as qemu is not aware of the semantics of the underlying backing device anyway and presents the same controller to the guest. (Yes, I am using virtio-scsi-pci.)

Maybe you hit a rare combination of hardware and kernel version? Are you seeing anything suspicious in the host logs? I would expect any kind of block read error on the guest to be reported on the host side too.
 
In terms of logs - nothing on the host :/

In terms of your passthrough logic - you are slightly wrong: if you use virtio-scsi-pci with image backing, it will emulate SCSI for you (actually using a separate thread for it, OR a thread per device); if you connect to a real device, qemu will pass SCSI directly to the kernel, which passes it on to the SAS disk or translates it through libata for SATA. If in doubt, I can point you towards the proxmox documentation.

Also, as I'm reading your email again now - for iSCSI, qemu will use the network rather than the block (BLK) portion of the kernel - so it's very different.

So to replicate my problem you _need_ to use real disks.
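If you want to double-check which QEMU device type a given scsiX entry ends up as, dumping the generated kvm command line should show it (100 is a placeholder VMID):

Code:
# a raw /dev/... passthrough shows up as a scsi-block device,
# file/volume backed disks show up as scsi-hd
qm showcmd 100 | tr ' ' '\n' | grep scsi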
 
Just FYI,
(since I've had few hours free in new year)
when downgrading the kernel to the version used with proxmox 4.3, it's all the same :/

Code:
uname -a
Linux proxmox-dl180-14bay-2 4.4.19-1-pve #1 SMP Wed Sep 14 14:33:50 CEST 2016 x86_64 GNU/Linux

So the bottom line is that, irrespective of hardware (controller / disk / protocol) and kernel (previous working version used), proxmox 4.4 will corrupt data on directly attached disks passed through via virtio-scsi-pci (or -single).
 
Yep - I discovered it on btrfs and then was able to replicate it with dd directly to the disk device :/ But it seems that if it's not following "the ceph or lvm way" it's not worth the bother - even the devs can't commit to testing it as described in the original post ... they use network iSCSI :/
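(Roughly what that dd check looks like - a sketch only, assuming /dev/sdX inside the guest is a passed-through disk whose contents can be destroyed:)

Code:
# inside the guest: write a known pattern straight to the passed-through device ...
dd if=/dev/urandom of=/tmp/pattern bs=1M count=64
dd if=/tmp/pattern of=/dev/sdX bs=1M oflag=direct
# ... read it back and compare; I/O errors or a mismatch show the problem
dd if=/dev/sdX of=/tmp/readback bs=1M count=64 iflag=direct
cmp /tmp/pattern /tmp/readback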


Can you please list the hardware that you have (down to ECC / non-ECC RAM, controller type, exact CPU model, etc.) AND the config of your VM?
 
Hi Tom,

Here's my environment and conf:

pve:
pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-2-pve)

SATA controller:
00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 05)

vm conf:
Code:
balloon: 8192
bootdisk: virtio0
cores: 3
cpu: host
ide2: none,media=cdrom
memory: 16384
name: omv3-kvm-production
net0: virtio=32:41:XX:XX:XX:XX,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: omvextra
scsi0: /dev/disk/by-id/ata-WDC_WD20EARX-00PASB0_WD-WCAZA9734804,size=1953514584K
scsi1: /dev/disk/by-id/ata-ST32000644NS_9WM1PNPB,size=1953514584K
scsi2: /dev/disk/by-id/ata-WDC_WD20EARS-00J2GB0_WD-WCAYY0101692,size=1953514584K
scsi3: /dev/disk/by-id/ata-ST3000DM001-9YN166_Z1F0LGFR,size=2930266584K
scsihw: virtio-scsi-pci
smbios1: uuid=7d4305ad-93d2-4712-9cd3-60e9d61393c8
sockets: 1
startup: order=2,up=16,down=1
virtio0: local-zfs:vm-103-disk-1,cache=writeback,size=18G

Just passing local hard disks to a VM, which runs openmediavault 3.

After doing the last pve upgrade via apt update, I got lots of errors when accessing the hard disks in the VM:
Code:
[    3.385277] BTRFS info (device sdb1): disk space caching is enabled
[    3.385279] BTRFS info (device sdb1): has skinny extents
[    3.388703] BTRFS info (device sdd1): disk space caching is enabled
[    3.388704] BTRFS info (device sdd1): has skinny extents
[    3.389532] BTRFS info (device sda1): disk space caching is enabled
[    3.389533] BTRFS info (device sda1): has skinny extents
[    3.390574] BTRFS info (device sdc1): disk space caching is enabled
[    3.390576] BTRFS info (device sdc1): has skinny extents
[    3.419708] BTRFS info (device sdd1): bdev /dev/sdd1 errs: wr 0, rd 159, flush 0, corrupt 0, gen 0
[    3.449176] BTRFS info (device sdc1): bdev /dev/sdc1 errs: wr 11, rd 64, flush 0, corrupt 0, gen 0
[    3.458517] BTRFS info (device sda1): bdev /dev/sda1 errs: wr 0, rd 85, flush 0, corrupt 0, gen 0
[    3.476531] BTRFS info (device sdb1): bdev /dev/sdb1 errs: wr 51, rd 0, flush 0, corrupt 0, gen 0
[    3.681998] systemd-journald[482]: Received request to flush runtime journal from PID 1
[    3.976493] BTRFS info (device sdb1): checking UUID tree
[    4.532431] RPC: Registered named UNIX socket transport module.
[    4.532433] RPC: Registered udp transport module.
[    4.532434] RPC: Registered tcp transport module.
[    4.532434] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    4.535112] FS-Cache: Loaded
[    4.539993] FS-Cache: Netfs 'nfs' registered for caching
[    4.545611] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[  141.091842] sd 0:0:0:0: [sda] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  141.091862] sd 0:0:0:0: [sda] tag#13 Sense Key : Illegal Request [current]
[  141.091868] sd 0:0:0:0: [sda] tag#13 Add. Sense: Invalid field in cdb
[  141.091873] sd 0:0:0:0: [sda] tag#13 CDB: Read(10) 28 00 3c 9e 7a 00 00 08 00 00
[  141.091877] blk_update_request: critical target error, dev sda, sector 1017018880
[  141.091932] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 86, flush 0, corrupt 0, gen 0
[  141.092484] sd 0:0:0:0: [sda] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  141.092486] sd 0:0:0:0: [sda] tag#12 Sense Key : Illegal Request [current]
[  141.092487] sd 0:0:0:0: [sda] tag#12 Add. Sense: Invalid field in cdb
[  141.092489] sd 0:0:0:0: [sda] tag#12 CDB: Read(10) 28 00 3c 9e 72 00 00 08 00 00
[  141.092490] blk_update_request: critical target error, dev sda, sector 1017016832
[  141.092526] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 87, flush 0, corrupt 0, gen 0
[  141.758149] sd 0:0:0:0: [sda] tag#5 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  141.758153] sd 0:0:0:0: [sda] tag#5 Sense Key : Illegal Request [current]
[  141.758155] sd 0:0:0:0: [sda] tag#5 Add. Sense: Invalid field in cdb
[  141.758157] sd 0:0:0:0: [sda] tag#5 CDB: Read(10) 28 00 3c 9f 8a 00 00 08 00 00
[  141.758159] blk_update_request: critical target error, dev sda, sector 1017088512
[  141.758210] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 88, flush 0, corrupt 0, gen 0
[  141.790872] sd 0:0:0:0: [sda] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  141.790875] sd 0:0:0:0: [sda] tag#9 Sense Key : Illegal Request [current]
[  141.790877] sd 0:0:0:0: [sda] tag#9 Add. Sense: Invalid field in cdb
[  141.790879] sd 0:0:0:0: [sda] tag#9 CDB: Read(10) 28 00 3c 9f c2 00 00 08 00 00
[  141.790881] blk_update_request: critical target error, dev sda, sector 1017102848
[  141.790932] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 89, flush 0, corrupt 0, gen 0
[  142.004908] sd 0:0:0:0: [sda] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  142.004912] sd 0:0:0:0: [sda] tag#4 Sense Key : Illegal Request [current]
[  142.004914] sd 0:0:0:0: [sda] tag#4 Add. Sense: Invalid field in cdb
[  142.004926] sd 0:0:0:0: [sda] tag#4 CDB: Read(10) 28 00 3c 9f f2 00 00 08 00 00
[  142.004928] blk_update_request: critical target error, dev sda, sector 1017115136
[  142.004972] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 90, flush 0, corrupt 0, gen 0
[  142.005032] sd 0:0:0:0: [sda] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  142.005034] sd 0:0:0:0: [sda] tag#7 Sense Key : Illegal Request [current]
[  142.005035] sd 0:0:0:0: [sda] tag#7 Add. Sense: Invalid field in cdb
[  142.005036] sd 0:0:0:0: [sda] tag#7 CDB: Read(10) 28 00 3c a0 0a 00 00 08 00 00
[  142.005037] blk_update_request: critical target error, dev sda, sector 1017121280
[  142.005078] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 91, flush 0, corrupt 0, gen 0
[  142.183284] sd 0:0:0:0: [sda] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  142.183287] sd 0:0:0:0: [sda] tag#7 Sense Key : Illegal Request [current]
[  142.183294] sd 0:0:0:0: [sda] tag#7 Add. Sense: Invalid field in cdb
[  142.183296] sd 0:0:0:0: [sda] tag#7 CDB: Read(10) 28 00 3c a0 4a 00 00 08 00 00
[  142.183298] blk_update_request: critical target error, dev sda, sector 1017137664
[  142.183347] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 92, flush 0, corrupt 0, gen 0
[  142.403454] sd 0:0:0:0: [sda] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  142.403457] sd 0:0:0:0: [sda] tag#3 Sense Key : Illegal Request [current]
[  142.403459] sd 0:0:0:0: [sda] tag#3 Add. Sense: Invalid field in cdb
[  142.403462] sd 0:0:0:0: [sda] tag#3 CDB: Read(10) 28 00 3c a0 6a 00 00 08 00 00
[  142.403464] blk_update_request: critical target error, dev sda, sector 1017145856
[  142.403512] BTRFS error (device sda1): bdev /dev/sda1 errs: wr 0, rd 93, flush 0, corrupt 0, gen 0
[  167.053365] sd 0:0:0:1: [sdd] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  167.053369] sd 0:0:0:1: [sdd] tag#1 Sense Key : Illegal Request [current]
[  167.053371] sd 0:0:0:1: [sdd] tag#1 Add. Sense: Invalid field in cdb
[  167.053374] sd 0:0:0:1: [sdd] tag#1 CDB: Read(10) 28 00 00 a3 54 00 00 08 00 00
[  167.053376] blk_update_request: critical target error, dev sdd, sector 10703872
[  167.053425] BTRFS error (device sdd1): bdev /dev/sdd1 errs: wr 0, rd 160, flush 0, corrupt 0, gen 0
[  167.152374] sd 0:0:0:1: [sdd] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  167.152378] sd 0:0:0:1: [sdd] tag#13 Sense Key : Illegal Request [current]
[  167.152380] sd 0:0:0:1: [sdd] tag#13 Add. Sense: Invalid field in cdb
[  167.152383] sd 0:0:0:1: [sdd] tag#13 CDB: Read(10) 28 00 00 a3 dc 00 00 08 00 00
[  167.152386] blk_update_request: critical target error, dev sdd, sector 10738688
[  167.152438] BTRFS error (device sdd1): bdev /dev/sdd1 errs: wr 0, rd 161, flush 0, corrupt 0, gen 0
[  167.239771] sd 0:0:0:1: [sdd] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  167.239793] sd 0:0:0:1: [sdd] tag#8 Sense Key : Illegal Request [current]
[  167.239800] sd 0:0:0:1: [sdd] tag#8 Add. Sense: Invalid field in cdb
[  167.239804] sd 0:0:0:1: [sdd] tag#8 CDB: Read(10) 28 00 00 a4 1c 00 00 08 00 00
[  167.239808] blk_update_request: critical target error, dev sdd, sector 10755072
[  167.239860] BTRFS error (device sdd1): bdev /dev/sdd1 errs: wr 0, rd 162, flush 0, corrupt 0, gen 0

More details about my issue:
http://forum.openmediavault.org/ind...-btrfs-errors-and-ro-mount-after-last-apt-upd
Related issues:
https://forum.rockstor.com/t/btrfs-error-and-critical-target-errors-with-kvm-disk-passthrough/2573/4
https://forum.proxmox.com/threads/kvm-loosing-disk-drives.31963/
https://forum.proxmox.com/threads/proxmox-4-4-virtio_scsi-regression.31471/
 
Changed the scsihw mode from virtio-scsi-pci to virtio-scsi-single; it still has errors.
 
I started trying to reproduce this on Friday, and I think I am seeing some results but need to verify them first. It seems it only hits specific hardware and/or file systems though. I'll keep you posted when I find new information.
 
Maybe it's a regression in scsi-block in qemu 2.7?

you can try to edit
/usr/share/perl5/PVE/QemuServer.pm

Code:
sub print_drivedevice_full {

    if (my $info = path_is_scsi($path)) {
        if ($info->{type} == 0) {
            $devicetype = 'block';
and replace "block" with "generic" or "hd"

then restart

systemctl restart pvedaemon


and stop/start your vm.
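Put together, the workaround looks roughly like this (100 is a placeholder VMID; note that a direct edit of QemuServer.pm will be overwritten by the next qemu-server package update):

Code:
# verify the edit took effect
grep -n "devicetype = 'hd'" /usr/share/perl5/PVE/QemuServer.pm
# reload the daemon so newly started VMs pick up the change
systemctl restart pvedaemon
# full stop/start so a new kvm process is launched with the changed device type
qm stop 100
qm start 100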
 
Thank you spirit,
Replacing "block" with "hd" works! I no longer have file system errors while accessing the hard drives.
Replacing "block" with "generic" does not work.

Anyway, it solves my problem temporarily, but I've lost around 40GB of data on this system.
 
Okay, I can definitely reproduce this issue, and it seems like it is a regression in qemu 2.7. I'll start a bisect run later on and hopefully we can find the issue fast.
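(For the curious, such a qemu bisect would look roughly like the sketch below - assuming the qemu 2.6.x shipped with PVE 4.3 is good and 2.7.0 is bad, which is only a working hypothesis here:)

Code:
# in a local qemu source checkout
git bisect start
git bisect bad v2.7.0
git bisect good v2.6.0
# at each step: build, run the disk passthrough test, then mark the result
#   ./configure --target-list=x86_64-softmmu && make -j"$(nproc)"
#   git bisect good    # or: git bisect bad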
 
