4.0 scsi vs virtio driver

stefws

Created a VM with a virtio-mapped image of an LV in an iSCSI-attached VG.
Then, when trying to install inside it, I discovered that the guest OS didn't support the virtio driver, so I tried changing virtio to a SCSI driver, only to discover that this seemed to map the hypervisor's iSCSI device directly rather than the expected LV.
Luckily the VM user didn't install on this device :)

Ended up using the SATA driver okay, but this was pretty scary, and I'm wondering if this is normal or if I have done something wrong.
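
For reference, the disk mapping in the VM config went from something like the following (VMID and storage name here are made up for illustration, not my exact setup):

Code:
# /etc/pve/qemu-server/101.conf (excerpt)
scsihw: virtio-scsi-pci
virtio0: iscsi-lvm:vm-101-disk-1,size=32G

to a SCSI mapping of the same volume:

Code:
scsi0: iscsi-lvm:vm-101-disk-1,size=32G

and it was with the scsi0 variant that the guest suddenly saw the whole hypervisor iSCSI device.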

Code:
# pveversion --verbose
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie
openvswitch-switch: 2.3.2-1
 
I had this experience on PVE 3.4 + the 3.10 Linux kernel, too. It doesn't have to be iSCSI. One of my VMs even happened to boot from the hypervisor disk instead of its own vdisk on top of its assigned LVM volume. I was prepared to reinstall the whole node from scratch, but fortunately (and strangely) no corruption occurred. I guess the guest OS detected something odd and remounted the affected filesystems read-only. I worked around it by using the 2.6.x kernel, but that's a dinosaur now and not available for 4.x.
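
(If anyone wants to verify this in a guest: a quick, generic way to see whether filesystems have been remounted read-only is something along these lines; nothing Proxmox-specific, just standard Linux tooling.)

Code:
# inside the guest: list mounts whose options include "ro"
awk '$4 ~ /(^|,)ro(,|$)/ {print $1, $2, $4}' /proc/mounts
# and check the kernel log for the usual remount message
dmesg | grep -i 'read-only'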
 
I had this experience on PVE 3.4 + the 3.10 Linux kernel, too. It doesn't have to be iSCSI.

I first noticed it on 3.4 also - I couldn't figure out at first when it started happening. But now that you mention it, it would have been after I upgraded the kernel to 3.10.

I have been experiencing this with LVM on local SCSI drives/RAID.
 
Hi,

Currently I'm seeing the same problem in our lab (most recent Proxmox version: pve-manager/4.1-5/f910ef5c, running kernel 4.2.6-1-pve).

When using scsi0 + virtio-scsi I can see the whole VG in the guest machine (and destroy it when writing to it). Moving to a virtio-blk device, it does not happen.
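
A cheap sanity check from inside the guest is to compare the disk size the kernel reports against the size the LV should have (device name assumed to be /dev/sda here, adjust as needed):

Code:
# inside the guest: report the disk size in bytes, whole disk only
lsblk -b -d -o NAME,SIZE,MODEL /dev/sda
# if SIZE matches the host's whole PV/LUN rather than the LV,
# the guest is looking straight through to the host device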

PS: What's the difference between virtio-scsi and virtio-scsi-single?
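
As far as I understand it (someone correct me if this is wrong), plain virtio-scsi attaches all scsiX disks of a VM to one shared controller, while virtio-scsi-single gives every disk its own controller, mainly so each disk can get a dedicated iothread. In the VM config it's just the scsihw setting:

Code:
# one controller shared by all scsiX disks:
scsihw: virtio-scsi-pci
# vs. one controller per disk:
scsihw: virtio-scsi-single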
 
Unfortunately, I also see "through" to the host device:

Code:
root@fileserver ~ > parted -- /dev/sda print
Model: HP HSV400 (scsi)
Disk /dev/sda: 2147GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system     Flags
1      1049kB  2004MB  2003MB  primary   ext4            boot
2      2005MB  2146MB  142MB   extended
5      2005MB  2146MB  142MB   logical   linux-swap(v1)

I also encounter errors like these:

Code:
[    0.987627] sd 2:0:0:0: Attached scsi generic sg0 type 0
[    0.987846] sd 2:0:0:1: Attached scsi generic sg1 type 0
[    0.987985] sd 2:0:0:0: [sda] 4194304000 512-byte logical blocks: (2.14 TB/1.95 TiB)
[    0.988012] sd 2:0:0:1: [sdb] 2147483648 512-byte logical blocks: (1.09 TB/1.00 TiB)
[    0.988336] sd 2:0:0:1: [sdb] Write Protect is off
[    0.988338] sd 2:0:0:1: [sdb] Mode Sense: 63 00 00 08
[    0.988470] sd 2:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    0.988941] sd 2:0:0:0: [sda] Write Protect is off
[    0.988943] sd 2:0:0:0: [sda] Mode Sense: 83 00 10 08
[    0.989299] sd 2:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    0.990821] sd 2:0:0:1: [sdb] Attached SCSI disk
[    0.992180]  sda: sda1 sda2 < sda5 >
[    0.994456] sd 2:0:0:0: [sda] Attached SCSI disk
[    1.044350] sd 2:0:0:0: [sda] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[    1.044357] sd 2:0:0:0: [sda] Sense Key : Aborted Command [current]
[    1.044360] sd 2:0:0:0: [sda] Add. Sense: I/O process terminated
[    1.044364] sd 2:0:0:0: [sda] CDB: Read(10) 28 00 f9 ff ff 80 00 00 08 00
[    1.044367] blk_update_request: I/O error, dev sda, sector 4194303872
[    1.084147] sd 2:0:0:0: [sda] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[    1.084150] sd 2:0:0:0: [sda] Sense Key : Aborted Command [current]
[    1.084152] sd 2:0:0:0: [sda] Add. Sense: I/O process terminated
[    1.084153] sd 2:0:0:0: [sda] CDB: Read(10) 28 00 f9 ff ff 80 00 00 08 00
[    1.084155] blk_update_request: I/O error, dev sda, sector 4194303872
[    1.084199] Buffer I/O error on dev sda, logical block 524287984, async page read

The size of the block device is wrong: the disk should be only 2 GB, not 2 TB (the reported size is the PV size, not the VG size).
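
To see where the bogus 2 TB comes from, comparing sizes on the hypervisor helps; roughly like this (VG/LV names are placeholders):

Code:
# on the hypervisor: PV, VG and LV sizes side by side
pvs -o pv_name,pv_size
vgs -o vg_name,vg_size
lvs -o lv_name,lv_size
# exact byte size the guest disk *should* have:
blockdev --getsize64 /dev/<vgname>/vm-<vmid>-disk-1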
 
Unfortunately, TRIM support is not the only difference; see here, for example. But I agree that, instead of IDE, virtio-blk would be a better choice feature- and performance-wise. Where I experienced this error I switched to it, and it worked just fine in place of virtio-scsi. Anyway, I'm rather surprised that no upstream fix exists yet for this very serious bug.
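
For anyone wanting to do the same switch: re-attaching the existing volume under a virtioX key instead of scsiX is enough; something like this (VMID and volume name are examples only):

Code:
# detach the disk from the SCSI bus, then re-attach it as virtio-blk
qm set 101 --delete scsi0
qm set 101 --virtio0 iscsi-lvm:vm-101-disk-1
# if it is the boot disk, also point the boot order at it:
qm set 101 --bootdisk virtio0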
 
