Possible bug with LVM over iSCSI storage

rickygm

Renowned Member
Sep 16, 2015
Hi, I have a couple of volumes connected over iSCSI to 3 Proxmox nodes (10Gb Ethernet and multipath). One volume is almost 6TB and the other is 1.5TB. By mistake I attached a 2.5TB disk to a VM, and when I try to delete that disk, Proxmox hangs.

lvremove DLOW1:vm-1007-disk-2 (hangs on this command)

I have to forcefully shut down the server. If I check the log when it gets stuck, it shows me this:

dmesg

[68782.193915] print_req_error: I/O error, dev sdd, sector 9965660160
[68782.194180] sd 9:0:0:1: [sdd] tag#4 FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK
[68782.194190] sd 9:0:0:1: [sdd] tag#4 CDB: Unmap/Read sub-channel 42 00 00 00 00 00 00 00 18 00
[68782.194192] print_req_error: I/O error, dev sdd, sector 9965660160
[68782.194455] sd 9:0:0:1: [sdd] tag#5 FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK
[68782.194457] sd 9:0:0:1: [sdd] tag#5 CDB: Unmap/Read sub-channel 42 00 00 00 00 00 00 00 18 00
[68782.194458] print_req_error: I/O error, dev sdd, sector 9965660160
[68782.194721] sd 9:0:0:1: [sdd] tag#6 FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK
[68782.194723] sd 9:0:0:1: [sdd] tag#6 CDB: Unmap/Read sub-channel 42 00 00 00 00 00 00 00 18 00
[68782.194724] print_req_error: I/O error, dev sdd, sector 9965660160
[68782.194985] sd 9:0:0:1: [sdd] tag#7 FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK
[68782.194987] sd 9:0:0:1: [sdd] tag#7 CDB: Unmap/Read sub-channel 42 00 00 00 00 00 00 00 18 00
[68782.194988] print_req_error: I/O error, dev sdd, sector 9965660160
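
In case it is useful, these are example commands to confirm the lvremove is really stuck rather than just slow (illustrative only; the exact process list and messages will differ):

# list processes in uninterruptible sleep (D state), where a hung lvremove usually sits
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
# look for the kernel's hung-task warnings from around the same time
dmesg | grep -i "blocked for more than"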

The following commands are from another node.

sdd is the iSCSI volume.

lsscsi -s
[0:0:1:0] disk ATA INTEL SSDSC2KW12 036C /dev/sda 120GB
[1:0:1:0] disk ATA INTEL SSDSC2KW12 036C /dev/sdb 120GB
[6:0:0:1] disk FreeNAS iSCSI Disk 0123 /dev/sdc 7.30TB
[6:0:0:2] disk FreeNAS iSCSI Disk 0123 /dev/sde 1.64TB
[7:0:0:1] disk FreeNAS iSCSI Disk 0123 /dev/sdd 7.30TB
[7:0:0:2] disk FreeNAS iSCSI Disk 0123 /dev/sdf 1.64TB


multipath -ll
DFAST1 (36589cfc0000001733dbf6c0497b9340e) dm-1 FreeNAS,iSCSI Disk
size=1.5T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 7:0:0:2 sdf 8:80 active ready running
`- 6:0:0:2 sde 8:64 active ready running
DLOW1 (36589cfc000000d2aad2d66d27ebe70a3) dm-0 FreeNAS,iSCSI Disk
size=6.6T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 7:0:0:1 sdd 8:48 active ready running
`- 6:0:0:1 sdc 8:32 active ready running

pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/DFAST1 vDFAST1 lvm2 a-- 1.50t 941.00g
/dev/mapper/DLOW1 DLOW1 lvm2 a-- 6.64t 1.12t
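
For what it's worth, the half-removed volume can be checked for after the forced reboot with something like this (example invocation only; I am assuming the disk is still named vm-1007-disk-2):

# list the logical volumes in the DLOW1 volume group
lvs DLOW1 -o lv_name,lv_size,lv_attr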

Any ideas?
 
pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-20-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-8
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-54
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-5
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-40
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
I would go back a kernel, or use 21-pve, which is in testing.

https://bugzilla.proxmox.com/show_bug.cgi?id=2354


Hi Adamb, I followed the thread and I see that you also suffered this pain, just like me :( What is the best way to install this version?

These are the kernel versions I have installed:

dpkg -l | grep pve-kernel
ii pve-firmware 2.0-7 all Binary firmware code for the pve-kernel
ii pve-kernel-4.15 5.4-8 all Latest Proxmox VE Kernel Image
ii pve-kernel-4.15.18-12-pve 4.15.18-36 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-20-pve 4.15.18-46 amd64 The Proxmox PVE Kernel Image
 
Probably easiest to just move over to the testing kernel.

Snag this file on the server.

http://download.proxmox.com/debian/...ve-kernel-4.15.18-21-pve_4.15.18-47_amd64.deb

Then "dpkg -i pve-kernel-4.15.18-21-pve_4.15.18-47_amd64.deb"

Once that completes, reboot the host and you should be on 21-pve. Good luck!
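
For reference, the full sequence looks roughly like this (sketch only; replace <full-deb-url> with the complete download link above):

# download the testing kernel package
wget <full-deb-url>
# install it
dpkg -i pve-kernel-4.15.18-21-pve_4.15.18-47_amd64.deb
# reboot into the new kernel
reboot
# once the host is back up, confirm the running kernel
uname -r   # should report 4.15.18-21-pve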
 