This is the same error that I posted here. However, the resolutions in that thread are not working. I've narrowed the problem down to a multipath issue, but I'm creating a new thread because this situation seems to be unique.
Nodes A and B were clustered and are running v6.4-13 on identical hardware. They both have /dev/sda and /dev/sdb as local devices, and /dev/sdc and /dev/sdd multipathed and mapped to an iSCSI SAN. These are the same nodes I was having trouble setting up in the post above. As I said in that post, I didn't knowingly set up multipath to the iSCSI SAN volume; it just so happened that addressing multipath solved that issue.
If removing multipath is the easier solution in this case, I would gladly do that, but I need some direction on how to gracefully back out of it.
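For reference, here is my rough understanding of what backing out would look like on Nodes A and B, once nothing is actively using the multipath map. This is a sketch only, untested; the WWID and VG name are taken from my lsblk output below.

Bash:
# Sketch only, untested -- assumes all VMs on the iSCSI volume are
# stopped before touching anything.

# Deactivate the VG sitting on top of the multipath map ("vms" per lsblk)
vgchange -an vms

# Flush the multipath map for the SAN volume
multipath -f 36001a620000142313332304230313538

# Stop and disable the daemon so the map is not recreated on boot
systemctl disable --now multipathd.service multipathd.socket

# (Optionally: apt remove multipath-tools, and delete /etc/multipath.conf)

Whether that ordering is actually safe with a shared VG in a cluster is exactly the part I'm unsure about.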
Now, here is the current situation.
Enter Node C, a completely different server model, which I set up with v7.0-13 and then joined to the cluster. Multipath does not seem to work on it. I was able to migrate a VM to Node C, but starting it fails with the error "Cannot activate LVs in VG while PVs appear on duplicate devices." The same thing happens if I try to destroy the VM. Issuing lsblk shows the VM disks, which reside on the iSCSI volume, under /dev/sdc, while /dev/sdd appears empty. Furthermore, neither device is mapped to the multipath WWID as they are on Nodes A and B.

Node C
Bash:
sdc 8:32 0 2T 0 disk
├─vms-vm--102--disk--1 253:0 0 40G 0 lvm
├─vms-vm--103--disk--0 253:1 0 40G 0 lvm
├─vms-vm--400--disk--0 253:2 0 16G 0 lvm
├─vms-vm--100--disk--0 253:3 0 40G 0 lvm
├─vms-vm--205--disk--0 253:4 0 25G 0 lvm
├─vms-vm--101--disk--0 253:5 0 40G 0 lvm
└─vms-vm--200--disk--0 253:6 0 40G 0 lvm
sdd 8:48 0 2T 0 disk
Nodes A and B
Bash:
sdc 8:32 0 2T 0 disk
└─36001a620000142313332304230313538 253:2 0 2T 0 mpath
├─vms-vm--102--disk--1 253:3 0 40G 0 lvm
├─vms-vm--103--disk--0 253:4 0 40G 0 lvm
├─vms-vm--400--disk--0 253:5 0 16G 0 lvm
├─vms-vm--100--disk--0 253:6 0 40G 0 lvm
├─vms-vm--205--disk--0 253:7 0 25G 0 lvm
├─vms-vm--101--disk--0 253:8 0 40G 0 lvm
└─vms-vm--200--disk--0 253:9 0 40G 0 lvm
sdd 8:48 0 2T 0 disk
└─36001a620000142313332304230313538 253:2 0 2T 0 mpath
├─vms-vm--102--disk--1 253:3 0 40G 0 lvm
├─vms-vm--103--disk--0 253:4 0 40G 0 lvm
├─vms-vm--400--disk--0 253:5 0 16G 0 lvm
├─vms-vm--100--disk--0 253:6 0 40G 0 lvm
├─vms-vm--205--disk--0 253:7 0 25G 0 lvm
├─vms-vm--101--disk--0 253:8 0 40G 0 lvm
└─vms-vm--200--disk--0 253:9 0 40G 0 lvm
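In case it helps with diagnosis: as I understand it, both paths should report the same WWID, which multipath uses to group them into one map. This is how I would check (command path assumed from a standard multipath-tools/udev install):

Bash:
# Each path to the SAN volume should print the same WWID
/lib/udev/scsi_id -g -u -d /dev/sdc
/lib/udev/scsi_id -g -u -d /dev/sdd

# Show the active multipath topology; on Node C I'd expect this
# to come back empty, matching the lsblk output above
multipath -ll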
If I'm understanding the error correctly: since multipath is broken on Node C, LVM sees the same PV on both /dev/sdc and /dev/sdd, flags them as duplicates, and these actions fail.
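From what I've read, the usual way to keep LVM from tripping over duplicate paths is a device filter that only accepts the multipath map. A sketch of what I think that would look like in /etc/lvm/lvm.conf (untested, and the exact patterns are my assumption):

Bash:
# /etc/lvm/lvm.conf -- sketch only, untested
devices {
    # Accept the multipath map and the local disks; reject everything
    # else so raw /dev/sdc and /dev/sdd are never scanned as PVs.
    global_filter = [ "a|^/dev/mapper/36001a620000142313332304230313538$|",
                      "a|^/dev/sda|", "a|^/dev/sdb|",
                      "r|.*|" ]
}

Of course, that only helps while multipath is working; if I disable multipath instead, I assume the filter would have to accept exactly one of the raw paths, which seems fragile.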
Clearly multipath is broken on Node C, but again, if it is possible to safely disable multipath on Nodes A and B, this is perfectly acceptable. I still want to add two more nodes like C (with the intent of removing Nodes A and B) and I need to know the best way to proceed. Thank you in advance.