Multipath iSCSI + HP P2000 G3, can't create LVM physical volume

corefey

Renowned Member
Apr 3, 2017
Hi all!

I'm trying to set up a Proxmox cluster with an HP P2000 G3 iSCSI storage array that uses multipath.
I'm running into a problem when trying to create an LVM physical volume with the pvcreate command.

My steps before running this command were:

1. Create a vdisk and a volume with read-write access on the P2000 storage and map it to all ports (we have two controllers with four ports each).
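For reference, the storage-side commands in the P2000 CLI look roughly like the sketch below. This is only an illustration; the disk ranges, names and exact parameter spellings are assumptions, so check the P2000 G3 CLI reference for your firmware.
Code:
# illustrative sketch only -- adjust names, disks and sizes for your setup
create vdisk level raid5 disks 1.1-1.4 vd01
create volume vdisk vd01 size 100GB volume1
map volume volume1 access read-write lun 0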

2. Install multipath-tools on the Proxmox node:
Code:
aptitude install multipath-tools

3. Edit the /etc/iscsi/iscsid.conf file:
Code:
node.startup = automatic          
node.session.timeo.replacement_timeout = 15
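These defaults are only applied to node records created afterwards, so it is worth double-checking the file before running discovery; a quick sanity check:
Code:
grep -E 'node.startup|replacement_timeout' /etc/iscsi/iscsid.conf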

4. Discover the targets and log in:
Code:
root@test2:~# iscsiadm --mode discovery --type sendtargets --login --portal 192.168.8.101
192.168.8.101:3260,1 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.8.103:3260,2 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.101:3260,3 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.103:3260,4 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.8.102:3260,5 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.8.104:3260,6 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.102:3260,7 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.104:3260,8 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.101,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.103,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.101,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.103,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.102,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.104,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.102,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.104,3260] (multiple)
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.101,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.103,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.101,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.103,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.102,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.8.104,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.102,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.hp:storage.p2000g3.11151257a1, portal: 192.168.9.104,3260] successful.

root@test2:/etc/iscsi# iscsiadm -m discovery -t st -l -p 192.168.8.101
192.168.8.101:3260,1 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.8.103:3260,2 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.101:3260,3 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.103:3260,4 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.8.102:3260,5 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.8.104:3260,6 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.102:3260,7 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
192.168.9.104:3260,8 iqn.1986-03.com.hp:storage.p2000g3.11151257a1
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
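To confirm that one session per portal is actually established (eight in total here), the active sessions can also be listed:
Code:
iscsiadm -m session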

5. Verify that the Proxmox node appears under the "Volume" and "Hosts" sections on the P2000 storage.

6. Reboot the Proxmox node.

7. Run multipath -v3 to find the WWID (UUID) of the LUN:
Code:
...
create: 3600c0ff00012596ddfc3905801000000 undef HP,P2000 G3 iSCSI
...
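The same identifier can also be read directly from any one of the underlying paths with scsi_id; a quick cross-check (assuming /dev/sdb is one of the P2000 paths, the value should match the 3600c0ff0... WWID above):
Code:
/lib/udev/scsi_id --whitelisted --device=/dev/sdb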

8. Create /etc/multipath.conf
Code:
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(td|hd)[a-z]"
        devnode "^dcssblk[0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "EMC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "Universal Xport"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        device {
                vendor "DELL"
                product "Universal Xport"
        }
        device {
                vendor "SGI"
                product "Universal Xport"
        }
        device {
                vendor "STK"
                product "Universal Xport"
        }
        device {
                vendor "SUN"
                product "Universal Xport"
        }
        device {
                vendor "(NETAPP|LSI|ENGENIO)"
                product "Universal Xport"
        }
}
blacklist_exceptions {
        wwid "3600c0ff00012596ddfc3905801000000"
}
devices {
    device {
        vendor "HP"
        product "P2000 G3 FC|P2000G3 FC/iSCSI|P2000 G3 SAS|P2000 G3 iSCSI"
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        hardware_handler "0"
        path_selector "round-robin 0"
        path_grouping_policy group_by_prio
        failback immediate
        rr_weight uniform
        rr_min_io 100
        no_path_retry 18
        path_checker tur
        }
}
multipaths {
  multipath {
        wwid "3600c0ff00012596ddfc3905801000000"
        alias disk1
  }
}
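Before restarting the service it can help to re-run the verbose scan from step 7 and confirm the LUN is not caught by the blacklist, for example:
Code:
multipath -v3 2>/dev/null | grep 3600c0ff00012596ddfc3905801000000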

9. Restart the multipath service:
Code:
service multipath-tools reload
service multipath-tools restart
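Alternatively, the maps can be flushed and rebuilt without restarting the daemon (assuming none of the maps are in use yet):
Code:
multipath -F    # flush all unused multipath device maps
multipath -r    # force a reload of the maps
multipath -ll   # verify the result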

10. Check multipath status:
Code:
root@test2:~# multipath -ll
3600c0ff00012596ddfc3905801000000 dm-3 HP,P2000 G3 iSCSI
size=136G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 5:0:0:0 sdb 8:16  active ready running
| |- 6:0:0:0 sde 8:64  active ready running
| |- 7:0:0:0 sdg 8:96  active ready running
| `- 9:0:0:0 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 2:0:0:0 sda 8:0   active ready running
  |- 3:0:0:0 sdc 8:32  active ready running
  |- 4:0:0:0 sdd 8:48  active ready running
  `- 8:0:0:0 sdf 8:80  active ready running
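The device-mapper node that pvcreate will be pointed at can be cross-checked as well:
Code:
ls -l /dev/mapper/
dmsetup ls --tree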

And here, finally, I tried to create the LVM physical volume (PV):
Code:
root@test2:~# pvcreate /dev/dm-3

After that nothing happened; the command just "freezes".
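In principle pvcreate can also be pointed at the stable name under /dev/mapper instead of the dm-N node; just a naming detail, but for reference:
Code:
pvcreate /dev/mapper/3600c0ff00012596ddfc3905801000000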

Output from the messages log file:
Code:
Apr  3 11:09:01 test2 kernel: [  360.052261] pvcreate        D 0000000000000001     0  1454   1179 0x00000000
Apr  3 11:09:01 test2 kernel: [  360.052265]  ffff880035bdf8f8 0000000000000086 ffff880135894b00 ffff880035be2580
Apr  3 11:09:01 test2 kernel: [  360.052267]  0000000000000046 ffff880035be0000 ffff88013fd16a00 7fffffffffffffff
Apr  3 11:09:01 test2 kernel: [  360.052269]  ffff880035be2580 ffff880035b0c000 ffff880035bdf918 ffffffff81803eb7
Apr  3 11:09:01 test2 kernel: [  360.052271] Call Trace:
Apr  3 11:09:01 test2 kernel: [  360.052279]  [<ffffffff81803eb7>] schedule+0x37/0x80
Apr  3 11:09:01 test2 kernel: [  360.052281]  [<ffffffff818070d1>] schedule_timeout+0x201/0x2a0
Apr  3 11:09:01 test2 kernel: [  360.052286]  [<ffffffff8106033e>] ? kvm_clock_get_cycles+0x1e/0x20
Apr  3 11:09:01 test2 kernel: [  360.052288]  [<ffffffff818034ab>] io_schedule_timeout+0xbb/0x140
Apr  3 11:09:01 test2 kernel: [  360.052291]  [<ffffffff8123aa2c>] do_blockdev_direct_IO+0x10dc/0x2d20
Apr  3 11:09:01 test2 kernel: [  360.052294]  [<ffffffff810c4681>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
Apr  3 11:09:01 test2 kernel: [  360.052297]  [<ffffffff81236f00>] ? I_BDEV+0x20/0x20
Apr  3 11:09:01 test2 kernel: [  360.052299]  [<ffffffff8123c6b3>] __blockdev_direct_IO+0x43/0x50
Apr  3 11:09:01 test2 kernel: [  360.052301]  [<ffffffff81237778>] blkdev_direct_IO+0x58/0x80
Apr  3 11:09:01 test2 kernel: [  360.052304]  [<ffffffff81183999>] generic_file_direct_write+0xb9/0x190
Apr  3 11:09:01 test2 kernel: [  360.052306]  [<ffffffff81183b32>] __generic_file_write_iter+0xc2/0x1f0
Apr  3 11:09:01 test2 kernel: [  360.052308]  [<ffffffff811833e0>] ? generic_file_read_iter+0xd0/0x5d0
Apr  3 11:09:01 test2 kernel: [  360.052310]  [<ffffffff81237a7b>] blkdev_write_iter+0x8b/0x120
Apr  3 11:09:01 test2 kernel: [  360.052312]  [<ffffffff811fcacb>] new_sync_write+0x9b/0xe0
Apr  3 11:09:01 test2 kernel: [  360.052314]  [<ffffffff811fcb36>] __vfs_write+0x26/0x40
Apr  3 11:09:01 test2 kernel: [  360.052315]  [<ffffffff811fd1b9>] vfs_write+0xa9/0x190
Apr  3 11:09:01 test2 kernel: [  360.052317]  [<ffffffff811fc76d>] ? fixed_size_llseek+0x1d/0x20
Apr  3 11:09:01 test2 kernel: [  360.052318]  [<ffffffff811fdff5>] SyS_write+0x55/0xc0
Apr  3 11:09:01 test2 kernel: [  360.052320]  [<ffffffff811fccf2>] ? SyS_lseek+0x92/0xb0
Apr  3 11:09:01 test2 kernel: [  360.052322]  [<ffffffff81807ff2>] entry_SYSCALL_64_fastpath+0x16/0x75
Apr  3 11:11:01 test2 kernel: [  480.052277] pvcreate        D 0000000000000001     0  1454   1179 0x00000000
Apr  3 11:11:01 test2 kernel: [  480.052280]  ffff880035bdf8f8 0000000000000086 ffff880135894b00 ffff880035be2580
Apr  3 11:11:01 test2 kernel: [  480.052283]  0000000000000046 ffff880035be0000 ffff88013fd16a00 7fffffffffffffff
Apr  3 11:11:01 test2 kernel: [  480.052285]  ffff880035be2580 ffff880035b0c000 ffff880035bdf918 ffffffff81803eb7
Apr  3 11:11:01 test2 kernel: [  480.052287] Call Trace:
Apr  3 11:11:01 test2 kernel: [  480.052294]  [<ffffffff81803eb7>] schedule+0x37/0x80
Apr  3 11:11:01 test2 kernel: [  480.052296]  [<ffffffff818070d1>] schedule_timeout+0x201/0x2a0
Apr  3 11:11:01 test2 kernel: [  480.052301]  [<ffffffff8106033e>] ? kvm_clock_get_cycles+0x1e/0x20
Apr  3 11:11:01 test2 kernel: [  480.052303]  [<ffffffff818034ab>] io_schedule_timeout+0xbb/0x140
Apr  3 11:11:01 test2 kernel: [  480.052307]  [<ffffffff8123aa2c>] do_blockdev_direct_IO+0x10dc/0x2d20
Apr  3 11:11:01 test2 kernel: [  480.052310]  [<ffffffff810c4681>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
Apr  3 11:11:01 test2 kernel: [  480.052312]  [<ffffffff81236f00>] ? I_BDEV+0x20/0x20
Apr  3 11:11:01 test2 kernel: [  480.052315]  [<ffffffff8123c6b3>] __blockdev_direct_IO+0x43/0x50
Apr  3 11:11:01 test2 kernel: [  480.052316]  [<ffffffff81237778>] blkdev_direct_IO+0x58/0x80
Apr  3 11:11:01 test2 kernel: [  480.052320]  [<ffffffff81183999>] generic_file_direct_write+0xb9/0x190
Apr  3 11:11:01 test2 kernel: [  480.052322]  [<ffffffff81183b32>] __generic_file_write_iter+0xc2/0x1f0
Apr  3 11:11:01 test2 kernel: [  480.052323]  [<ffffffff811833e0>] ? generic_file_read_iter+0xd0/0x5d0
Apr  3 11:11:01 test2 kernel: [  480.052325]  [<ffffffff81237a7b>] blkdev_write_iter+0x8b/0x120
Apr  3 11:11:01 test2 kernel: [  480.052328]  [<ffffffff811fcacb>] new_sync_write+0x9b/0xe0
Apr  3 11:11:01 test2 kernel: [  480.052329]  [<ffffffff811fcb36>] __vfs_write+0x26/0x40
Apr  3 11:11:01 test2 kernel: [  480.052331]  [<ffffffff811fd1b9>] vfs_write+0xa9/0x190
Apr  3 11:11:01 test2 kernel: [  480.052333]  [<ffffffff811fc76d>] ? fixed_size_llseek+0x1d/0x20
Apr  3 11:11:01 test2 kernel: [  480.052334]  [<ffffffff811fdff5>] SyS_write+0x55/0xc0
Apr  3 11:11:01 test2 kernel: [  480.052336]  [<ffffffff811fccf2>] ? SyS_lseek+0x92/0xb0
Apr  3 11:11:01 test2 kernel: [  480.052337]  [<ffffffff81807ff2>] entry_SYSCALL_64_fastpath+0x16/0x75
Apr  3 11:12:46 test2 kernel: [  585.821773] sd 7:0:0:0: [sdg] FAILED Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
Apr  3 11:12:46 test2 kernel: [  585.821781] sd 7:0:0:0: [sdg] CDB: Write(10) 2a 00 00 00 00 08 00 00 08 00
Apr  3 11:12:46 test2 kernel: [  585.821909] device-mapper: multipath: Failing path 8:96.
Apr  3 11:13:01 test2 kernel: [  600.052263] pvcreate        D 0000000000000001     0  1454   1179 0x00000000
Apr  3 11:13:01 test2 kernel: [  600.052266]  ffff880035bdf8f8 0000000000000086 ffff880135894b00 ffff880035be2580
Apr  3 11:13:01 test2 kernel: [  600.052269]  0000000000000046 ffff880035be0000 ffff88013fd16a00 7fffffffffffffff
Apr  3 11:13:01 test2 kernel: [  600.052270]  ffff880035be2580 ffff880035b0c000 ffff880035bdf918 ffffffff81803eb7
Apr  3 11:13:01 test2 kernel: [  600.052273] Call Trace:
Apr  3 11:13:01 test2 kernel: [  600.052280]  [<ffffffff81803eb7>] schedule+0x37/0x80
Apr  3 11:13:01 test2 kernel: [  600.052282]  [<ffffffff818070d1>] schedule_timeout+0x201/0x2a0
Apr  3 11:13:01 test2 kernel: [  600.052287]  [<ffffffff8106033e>] ? kvm_clock_get_cycles+0x1e/0x20
Apr  3 11:13:01 test2 kernel: [  600.052289]  [<ffffffff818034ab>] io_schedule_timeout+0xbb/0x140
Apr  3 11:13:01 test2 kernel: [  600.052292]  [<ffffffff8123aa2c>] do_blockdev_direct_IO+0x10dc/0x2d20
Apr  3 11:13:01 test2 kernel: [  600.052295]  [<ffffffff810c4681>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
Apr  3 11:13:01 test2 kernel: [  600.052298]  [<ffffffff81236f00>] ? I_BDEV+0x20/0x20
Apr  3 11:13:01 test2 kernel: [  600.052300]  [<ffffffff8123c6b3>] __blockdev_direct_IO+0x43/0x50
Apr  3 11:13:01 test2 kernel: [  600.052302]  [<ffffffff81237778>] blkdev_direct_IO+0x58/0x80
Apr  3 11:13:01 test2 kernel: [  600.052305]  [<ffffffff81183999>] generic_file_direct_write+0xb9/0x190
Apr  3 11:13:01 test2 kernel: [  600.052307]  [<ffffffff81183b32>] __generic_file_write_iter+0xc2/0x1f0
Apr  3 11:13:01 test2 kernel: [  600.052309]  [<ffffffff811833e0>] ? generic_file_read_iter+0xd0/0x5d0
Apr  3 11:13:01 test2 kernel: [  600.052311]  [<ffffffff81237a7b>] blkdev_write_iter+0x8b/0x120
Apr  3 11:13:01 test2 kernel: [  600.052313]  [<ffffffff811fcacb>] new_sync_write+0x9b/0xe0
Apr  3 11:13:01 test2 kernel: [  600.052315]  [<ffffffff811fcb36>] __vfs_write+0x26/0x40
Apr  3 11:13:01 test2 kernel: [  600.052316]  [<ffffffff811fd1b9>] vfs_write+0xa9/0x190
Apr  3 11:13:01 test2 kernel: [  600.052318]  [<ffffffff811fc76d>] ? fixed_size_llseek+0x1d/0x20
Apr  3 11:13:01 test2 kernel: [  600.052319]  [<ffffffff811fdff5>] SyS_write+0x55/0xc0
Apr  3 11:13:01 test2 kernel: [  600.052321]  [<ffffffff811fccf2>] ? SyS_lseek+0x92/0xb0
Apr  3 11:13:01 test2 kernel: [  600.052323]  [<ffffffff81807ff2>] entry_SYSCALL_64_fastpath+0x16/0x75
Apr  3 11:15:01 test2 kernel: [  720.052269] pvcreate        D 0000000000000001     0  1454   1179 0x00000000
Apr  3 11:15:01 test2 kernel: [  720.052272]  ffff880035bdf8f8 0000000000000086 ffff880135894b00 ffff880035be2580
Apr  3 11:15:01 test2 kernel: [  720.052274]  0000000000000046 ffff880035be0000 ffff88013fd16a00 7fffffffffffffff
Apr  3 11:15:01 test2 kernel: [  720.052276]  ffff880035be2580 ffff880035b0c000 ffff880035bdf918 ffffffff81803eb7
Apr  3 11:15:01 test2 kernel: [  720.052278] Call Trace:
Apr  3 11:15:01 test2 kernel: [  720.052286]  [<ffffffff81803eb7>] schedule+0x37/0x80
Apr  3 11:15:01 test2 kernel: [  720.052288]  [<ffffffff818070d1>] schedule_timeout+0x201/0x2a0
Apr  3 11:15:01 test2 kernel: [  720.052292]  [<ffffffff8106033e>] ? kvm_clock_get_cycles+0x1e/0x20
Apr  3 11:15:01 test2 kernel: [  720.052294]  [<ffffffff818034ab>] io_schedule_timeout+0xbb/0x140
Apr  3 11:15:01 test2 kernel: [  720.052298]  [<ffffffff8123aa2c>] do_blockdev_direct_IO+0x10dc/0x2d20
Apr  3 11:15:01 test2 kernel: [  720.052301]  [<ffffffff810c4681>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
Apr  3 11:15:01 test2 kernel: [  720.052304]  [<ffffffff81236f00>] ? I_BDEV+0x20/0x20
Apr  3 11:15:01 test2 kernel: [  720.052306]  [<ffffffff8123c6b3>] __blockdev_direct_IO+0x43/0x50
Apr  3 11:15:01 test2 kernel: [  720.052307]  [<ffffffff81237778>] blkdev_direct_IO+0x58/0x80
Apr  3 11:15:01 test2 kernel: [  720.052311]  [<ffffffff81183999>] generic_file_direct_write+0xb9/0x190
Apr  3 11:15:01 test2 kernel: [  720.052313]  [<ffffffff81183b32>] __generic_file_write_iter+0xc2/0x1f0
Apr  3 11:15:01 test2 kernel: [  720.052315]  [<ffffffff811833e0>] ? generic_file_read_iter+0xd0/0x5d0
Apr  3 11:15:01 test2 kernel: [  720.052316]  [<ffffffff81237a7b>] blkdev_write_iter+0x8b/0x120
Apr  3 11:15:01 test2 kernel: [  720.052319]  [<ffffffff811fcacb>] new_sync_write+0x9b/0xe0
Apr  3 11:15:01 test2 kernel: [  720.052320]  [<ffffffff811fcb36>] __vfs_write+0x26/0x40
Apr  3 11:15:01 test2 kernel: [  720.052322]  [<ffffffff811fd1b9>] vfs_write+0xa9/0x190
Apr  3 11:15:01 test2 kernel: [  720.052324]  [<ffffffff811fc76d>] ? fixed_size_llseek+0x1d/0x20
Apr  3 11:15:01 test2 kernel: [  720.052325]  [<ffffffff811fdff5>] SyS_write+0x55/0xc0
Apr  3 11:15:01 test2 kernel: [  720.052326]  [<ffffffff811fccf2>] ? SyS_lseek+0x92/0xb0
Apr  3 11:15:01 test2 kernel: [  720.052328]  [<ffffffff81807ff2>] entry_SYSCALL_64_fastpath+0x16/0x75
Apr  3 11:17:01 test2 kernel: [  840.052267] pvcreate        D 0000000000000001     0  1454   1179 0x00000000
Apr  3 11:17:01 test2 kernel: [  840.052271]  ffff880035bdf8f8 0000000000000086 ffff880135894b00 ffff880035be2580
Apr  3 11:17:01 test2 kernel: [  840.052273]  0000000000000046 ffff880035be0000 ffff88013fd16a00 7fffffffffffffff
Apr  3 11:17:01 test2 kernel: [  840.052275]  ffff880035be2580 ffff880035b0c000 ffff880035bdf918 ffffffff81803eb7
Apr  3 11:17:01 test2 kernel: [  840.052277] Call Trace:
Apr  3 11:17:01 test2 kernel: [  840.052284]  [<ffffffff81803eb7>] schedule+0x37/0x80
Apr  3 11:17:01 test2 kernel: [  840.052287]  [<ffffffff818070d1>] schedule_timeout+0x201/0x2a0
Apr  3 11:17:01 test2 kernel: [  840.052291]  [<ffffffff8106033e>] ? kvm_clock_get_cycles+0x1e/0x20
Apr  3 11:17:01 test2 kernel: [  840.052293]  [<ffffffff818034ab>] io_schedule_timeout+0xbb/0x140
Apr  3 11:17:01 test2 kernel: [  840.052297]  [<ffffffff8123aa2c>] do_blockdev_direct_IO+0x10dc/0x2d20
Apr  3 11:17:01 test2 kernel: [  840.052300]  [<ffffffff810c4681>] ? __raw_callee_save___pv_queued_spin_unlock+0x11/0x20
Apr  3 11:17:01 test2 kernel: [  840.052302]  [<ffffffff81236f00>] ? I_BDEV+0x20/0x20
Apr  3 11:17:01 test2 kernel: [  840.052304]  [<ffffffff8123c6b3>] __blockdev_direct_IO+0x43/0x50
Apr  3 11:17:01 test2 kernel: [  840.052306]  [<ffffffff81237778>] blkdev_direct_IO+0x58/0x80
Apr  3 11:17:01 test2 kernel: [  840.052309]  [<ffffffff81183999>] generic_file_direct_write+0xb9/0x190
Apr  3 11:17:01 test2 kernel: [  840.052311]  [<ffffffff81183b32>] __generic_file_write_iter+0xc2/0x1f0
Apr  3 11:17:01 test2 kernel: [  840.052313]  [<ffffffff811833e0>] ? generic_file_read_iter+0xd0/0x5d0
Apr  3 11:17:01 test2 kernel: [  840.052315]  [<ffffffff81237a7b>] blkdev_write_iter+0x8b/0x120
Apr  3 11:17:01 test2 kernel: [  840.052317]  [<ffffffff811fcacb>] new_sync_write+0x9b/0xe0
Apr  3 11:17:01 test2 kernel: [  840.052319]  [<ffffffff811fcb36>] __vfs_write+0x26/0x40
Apr  3 11:17:01 test2 kernel: [  840.052321]  [<ffffffff811fd1b9>] vfs_write+0xa9/0x190
Apr  3 11:17:01 test2 kernel: [  840.052322]  [<ffffffff811fc76d>] ? fixed_size_llseek+0x1d/0x20
Apr  3 11:17:01 test2 kernel: [  840.052324]  [<ffffffff811fdff5>] SyS_write+0x55/0xc0
Apr  3 11:17:01 test2 kernel: [  840.052325]  [<ffffffff811fccf2>] ? SyS_lseek+0x92/0xb0
Apr  3 11:17:01 test2 kernel: [  840.052327]  [<ffffffff81807ff2>] entry_SYSCALL_64_fastpath+0x16/0x75
Apr  3 11:18:58 test2 kernel: [  957.854080] sd 5:0:0:0: [sdb] FAILED Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
Apr  3 11:18:58 test2 kernel: [  957.854089] sd 5:0:0:0: [sdb] CDB: Write(10) 2a 00 00 00 00 08 00 00 08 00
Apr  3 11:18:58 test2 kernel: [  957.854196] device-mapper: multipath: Failing path 8:16

It seems something is wrong with multipath, because we get messages like:
Apr 3 11:12:46 test2 kernel: [ 585.821909] device-mapper: multipath: Failing path 8:96.
Apr 3 11:18:58 test2 kernel: [ 957.854196] device-mapper: multipath: Failing path 8:16.

But multipath -ll showed that multipath was working correctly.

So, I don't know what is wrong with this setup.

Proxmox version:
root@test2:~# pveversion
pve-manager/4.1-1/2f9650d4 (running kernel: 4.2.6-1-pve)

I tried it on version 3.4 as well.

I would appreciate any suggestions and advice.
 
The problem turned out to be on the storage side.

I checked the log on the storage and saw some warnings about unwritable cache data.

Checked unwritable-cache:
Code:
#show unwritable-cache
Unwritable System Cache
-----------------------
Percent of unwritable cache in controller A: 98
Percent of unwritable cache in controller B: 98

Cleared cache:
Code:
clear cache

Checked unwritable-cache one more time:
Code:
#show unwritable-cache
Unwritable System Cache
-----------------------
Percent of unwritable cache in controller A: 0
Percent of unwritable cache in controller B: 0

Then I tried to create the LVM physical volume with pvcreate again, and this time it worked!
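For completeness, the next step on the Proxmox side is to put a volume group on top of the new PV (the VG name here is just an example) and then add that VG as LVM storage in the Proxmox GUI:
Code:
vgcreate p2000-vg /dev/mapper/3600c0ff00012596ddfc3905801000000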

Thanks to all!