Multipath-tools Errors in PVE 6.3

Rais Ahmed

Hi,

I have installed and configured a 3-node Proxmox Virtual Environment 6.3-3 cluster with multipath-tools 0.7.9-3+deb10u1, and I am continuously seeing the following errors.

Code:
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdi: couldn't get asymmetric access state
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdq: couldn't get asymmetric access state
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdg: couldn't get asymmetric access state
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdp: couldn't get asymmetric access state
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdf: couldn't get asymmetric access state
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdo: couldn't get asymmetric access state
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdh: couldn't get asymmetric access state
multipathd[10763]: get_alua_info: get_asymmetric_access_state returned -3
multipathd[10763]: sdn: couldn't get asymmetric access state

Why am I seeing these errors?
Do they have any impact on performance or stability?
How can I resolve them?

Please help.
 
Hi,

did you add some LUNs from the storage?
Can you please give us the output of the following commands:

multipath -ll

dmsetup ls

dmsetup deps
 
Hi, thank you for the response.

Yes, the LUNs are connected via iSCSI and configured in multipath.

Code:
multipath -ll
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sdb: couldn't get asymmetric access state
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sdc: couldn't get asymmetric access state
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sdd: couldn't get asymmetric access state
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sde: couldn't get asymmetric access state
mpath0 (36e0008410036ab762422364a00000008) dm-5 HUAWEI,XSG1
size=5.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=-1 status=active
  |- 1:0:0:1 sdb 8:16  active ready running
  |- 2:0:0:1 sdc 8:32  active ready running
  |- 3:0:0:1 sdd 8:48  active ready running
  `- 4:0:0:1 sde 8:64  active ready running
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sdo: couldn't get asymmetric access state
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sdp: couldn't get asymmetric access state
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sdn: couldn't get asymmetric access state
Jan 21 22:44:58 | get_alua_info: get_asymmetric_access_state returned -3
Jan 21 22:44:58 | sdq: couldn't get asymmetric access state
mpatha (36e0008410036ab7624bd8f6c00000009) dm-13 HUAWEI,XSG1
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=-1 status=active
  |- 1:0:0:4 sdo 8:224 active ready running
  |- 2:0:0:4 sdp 8:240 active ready running
  |- 3:0:0:4 sdn 8:208 active ready running
  `- 4:0:0:4 sdq 65:0  active ready running

Code:
dmsetup ls
Huawei5TB-vm--201--disk--0      (253:7)
mpath0-part1    (253:6)
mpath0  (253:5)
Huawei--1TB-vm--804--disk--0    (253:12)
pve-data_tdata  (253:3)
pve-data_tmeta  (253:2)
mpatha  (253:13)
pve-swap        (253:0)
pve-root        (253:1)
Huawei--1TB-vm--101--disk--2    (253:11)
pve-data        (253:4)
Huawei--300GB-vm--702--disk--0  (253:8)
Huawei--1TB-vm--101--disk--1    (253:10)
mpatha-part1    (253:14)
Huawei--1TB-vm--101--disk--0    (253:9)
Code:
dmsetup deps
Huawei5TB-vm--201--disk--0: 1 dependencies      : (253, 6)
mpath0-part1: 1 dependencies    : (253, 5)
mpath0: 4 dependencies  : (8, 64) (8, 48) (8, 32) (8, 16)
Huawei--1TB-vm--804--disk--0: 1 dependencies    : (8, 145)
pve-data_tdata: 1 dependencies  : (8, 3)
pve-data_tmeta: 1 dependencies  : (8, 3)
mpatha: 4 dependencies  : (65, 0) (8, 208) (8, 240) (8, 224)
pve-swap: 1 dependencies        : (8, 3)
pve-root: 1 dependencies        : (8, 3)
Huawei--1TB-vm--101--disk--2: 1 dependencies    : (8, 145)
pve-data: 2 dependencies        : (253, 3) (253, 2)
Huawei--300GB-vm--702--disk--0: 1 dependencies  : (8, 81)
Huawei--1TB-vm--101--disk--1: 1 dependencies    : (8, 145)
mpatha-part1: 1 dependencies    : (253, 13)
Huawei--1TB-vm--101--disk--0: 1 dependencies    : (8, 145)
 
Did you restart the servers, if you can, after adding the LUNs?
Do you have an EMULEX HBA?
Are you using both LUNs, mpath0-part1 and mpatha-part1?

The problem is in the partition table.
You can reload multipath:

systemctl reload multipathd

This will reload the configuration from

/etc/multipath.conf
/etc/multipath/wwids
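
The "couldn't get asymmetric access state" messages suggest the alua prioritizer cannot query the array. If this HUAWEI XSG1 target does not report ALUA, one common workaround is a device override in /etc/multipath.conf. This is only an illustrative sketch; the specific values (path_grouping_policy, path_checker, no_path_retry) are assumptions and should be checked against Huawei's recommendations for your array before use:

Code:
# Illustrative /etc/multipath.conf snippet -- assumption: the HUAWEI XSG1
# target does not implement ALUA, so use a constant priority instead of
# querying the asymmetric access state.
devices {
    device {
        vendor               "HUAWEI"
        product              "XSG1"
        path_grouping_policy multibus
        prio                 "const"   # avoids the get_asymmetric_access_state calls
        path_checker         tur
        no_path_retry        15
    }
}

After editing, reload the configuration again (systemctl reload multipathd or multipath -r) and check whether the messages stop.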
 
Hi,
I have already done the mentioned items:
yes, I am using both LUNs,
and the configuration has been reloaded.
Please help.
 
Hi,

My advice is to restart one node; you say you have three.
If that solves the problem, you can migrate the VMs and restart the second node, and so on ...

If that does not resolve the problem, the only thing you can do is to "delete" the problematic disks from multipath:

Code:
multipathd del path sdb
echo 1 > /sys/block/sdb/device/delete

Run these two commands for each problematic disk in the environment [sdb, sdc, sdd, sde].
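
A minimal shell sketch of that loop (assuming sdb, sdc, sdd and sde are the affected paths; adjust the list for your environment):

Code:
#!/bin/bash
# Remove each problematic path from the multipath map and then
# delete the underlying SCSI device from the kernel.
for dev in sdb sdc sdd sde; do
    multipathd del path "$dev"
    echo 1 > "/sys/block/$dev/device/delete"
done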
 