Proxmox 8.1 HITACHI FC Mount Problem (Solved)

netmacig

New Member
Mar 14, 2024
Hello,
I have 3 Proxmox nodes. I want to build a cluster using Hitachi FC storage, but I couldn't get the FC storage to work. Where am I going wrong? Thanks for helping.

Output of pveversion -v, pvs, vgs, lvs, multipath -ll, cat /etc/multipath.conf and cat /etc/multipath/wwids:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.2
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.4
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.4
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2

-----------------------

root@host1:~# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sda3  pve lvm2 a--  <272.40g 16.00g
----------------------------------
root@host1:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   3   0 wz--n- <272.40g 16.00g
-----------------------------------------------------
root@host1:~# lvs
  LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-aotz-- <166.90g             0.00   0.97
  root pve -wi-ao----  <78.10g
  swap pve -wi-ao----    8.00g
-------------------------------------------------
root@host1:~# multipath -ll
mpath0_E590 (360060e8005721e000000721e00000500) dm-8 HITACHI,OPEN-V*17
size=26T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 4:0:0:0 sdc 8:32 active ready running
  `- 5:0:0:0 sdb 8:16 active ready running
-------------------------------------------------------------------
root@host1:~# cat /etc/multipath.conf
blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "360060e8005721e000000721e00000500"
    wwid "3600144f028f88a0000005037a95d0002"
}

multipaths {
    multipath {
        wwid "360060e8005721e000000721e00000500"
        alias mpath0_E590
    }
}

defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
-----------------------------------------------------------------

root@host1:~# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360060e8005721e000000721e00000500/
 

Attachments

  • Cluster.jpg (214.4 KB)
I have 3 proxmox nodes. I want to cluster with Hitachi FC. But I couldn't get FC to work. Where am I doing wrong? Thanks for helping.
The disk seems to be present, at least on the node you provided the "multipath" output from. So that would mean FC is working.

The first thing to do is to ensure that all 3 nodes can see the disk/s in "lsscsi" and "lsblk".
Then you need to configure "multipath" properly on all 3.
You then need to provision the LVM volume group from one node only. You appear to have a partition on the disk now. Unless you have a logical reason to have it - get rid of it.
All partition and LVM manipulations need to be done on the "mpath" device, not on the "sdX" devices.
After you finish all of the above, you can then add an LVM-type storage in the PVE GUI.
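A rough sketch of that sequence, assuming the mpath device name from your output above (the VG name "vg_e590" is only an example, and this erases whatever is currently on the LUN):

# on ONE node only
wipefs -a /dev/mapper/mpath0_E590          # remove the existing partition table / signatures
pvcreate /dev/mapper/mpath0_E590           # initialize the multipath device as an LVM PV
vgcreate vg_e590 /dev/mapper/mpath0_E590   # create the volume group to be shared
# then: Datacenter -> Storage -> Add -> LVM in the GUI, pick vg_e590 and tick "Shared"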

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I did as you said. Unfortunately, the disk does not appear when I try to add the storage.
 

Attachments

  • no disk add.png (113 KB)
I think something's going on, man. I don't understand where this datadisk came from, but it looks like it's corrupted.
 

Attachments

  • fcdatadisk.png (91.6 KB)
I'd disable the multipath and zero out/wipe one of the underlying devices directly then. Or start from scratch on your storage array by re-creating the LUN.

I guess you can also try to wipe sdb or sdc directly at this point; you have nothing to lose.
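If you go that route, something along these lines should do it (map and path names are taken from the earlier multipath output; double-check the device names first, this is destructive):

multipath -f mpath0_E590                   # flush the multipath map so the paths are released
wipefs -a /dev/sdb                         # erase partition/LVM signatures via one of the paths
# or, more thoroughly:
dd if=/dev/zero of=/dev/sdb bs=1M count=100
multipath -r                               # rebuild the multipath maps afterwards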


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello friend, I created a 2 TB partition with the fdisk command, then mapped the area I created. I'm so close: I can use this storage on Host 1, but it is not active on the other hosts.
fdisk /dev/mapper/mpath0_E590
vgcreate iscsi /dev/mapper/mpath0_E590-part1
 

Attachments

  • azkaldi.jpg (470.1 KB)
I am not exactly sure what your screenshots are showing. In general, screenshots are not the best way to communicate information.
You should use CLI commands/outputs and wrap them in CODE tags.

What I can see is that you have a "?" on the other nodes, which likely indicates that the storage was not made properly available across the cluster.
You need to examine the output of:
lsblk - on each host
multipath -ll - on each host
cat /etc/pve/storage.cfg - should be identical on every host if the cluster is configured properly (see the example entry below)
pvesm status - on each host
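For reference, a shared LVM entry in /etc/pve/storage.cfg would normally look roughly like this (the storage ID "fc-lvm" is made up; the VG name is the one from your vgcreate command):

lvm: fc-lvm
        vgname iscsi
        content images,rootdir
        shared 1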

The volume group will not necessarily be activated everywhere until you actually start using it, i.e. create a slice for a VM running on that node or migrate a VM there.
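A quick way to check that on one of the other nodes (VG name taken from your vgcreate command; the vgchange step is optional, PVE normally activates volumes on demand):

vgs                                        # the VG "iscsi" should at least be visible here
lvs -o lv_name,vg_name,lv_active iscsi     # shows whether individual LVs are active on this node
vgchange -ay iscsi                         # manually activate them if you want to test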



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello
After restarting the server, everything went back to normal. Multipath works very well. Thank you very much, my friend; I'm glad to have you here.
 
Hello friend, I created a 2 TB partition with the fdisk command, then mapped the area I created. I'm so close: I can use this storage on Host 1, but it is not active on the other hosts.
fdisk /dev/mapper/mpath0_E590
vgcreate iscsi /dev/mapper/mpath0_E590-part1
Don't partition unnecessarily. This is a Windows thing; no one else does this. It's unnecessary and just complicates things. Create the volume group directly on the device.
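In other words, after wiping the old partition table, the whole provisioning step can be just this (device and VG names taken from the earlier posts):

pvcreate /dev/mapper/mpath0_E590           # put the LVM PV directly on the multipath device
vgcreate iscsi /dev/mapper/mpath0_E590     # no fdisk / partitioning needed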
 
