Hello
I am migrating a VMware cluster of 3 hosts to a Proxmox cluster, with a storage bay connected to the hosts through 2 SAS HBA cards per host.
I want shared storage so that I can migrate VMs between hosts.
Currently, 2 hosts are joined to the Proxmox cluster and connected to the bay.
I've enabled multipath on both servers:
Bash:
apt install multipath-tools
modprobe dm_multipath
multipath -a 36000d31004000e000000000000000009
multipath -r
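For reference, the WWID passed to multipath -a above is simply the ID reported by the LUN itself; it can be read on each host with scsi_id (the /dev/sdb device name below is just an example, use whichever path device the bay exposes):
Bash:
# read the WWID of one of the LUN's path devices (example device name)
/lib/udev/scsi_id -g -u -d /dev/sdb
# -> 36000d31004000e000000000000000009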
/etc/multipath.conf on the 2 servers:
Bash:
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "36000d31004000e000000000000000009"
}
/etc/multipath/wwids on the 2 servers:
Bash:
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/36000d31004000e000000000000000009/
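As a quick sanity check, the resulting device-mapper node can also be listed on each host; this is the device the ZFS pool is built on further down:
Bash:
# user_friendly_names yes => the multipath device shows up as mpatha
ls -l /dev/mapper/mpatha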
multipath -ll result on the 2 servers:
pve1
Bash:
root@pve1:~# multipath -ll
mpatha (36000d31004000e000000000000000009) dm-2 COMPELNT,Compellent Vol
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 14:0:1:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 14:0:0:1 sda 8:0 active ready running
pve3
Bash:
root@pve3:~# multipath -ll
mpatha (36000d31004000e000000000000000009) dm-5 COMPELNT,Compellent Vol
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 1:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 1:0:1:1 sdc 8:32 active ready running
I created a ZFS pool on the 2 hosts:
Bash:
zpool create -f -o ashift=12 pool-zfs /dev/mapper/mpatha
zpool command results on the 2 servers:
pve1
Bash:
root@pve1:~# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-zfs  4.98T   536K  4.98T        -         -     0%     0%  1.00x    ONLINE  -
root@pve1:~# zpool status
  pool: pool-zfs
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        pool-zfs    ONLINE       0     0     0
          mpatha    ONLINE       0     0     0

errors: No known data errors
pve3
Bash:
root@pve3:~# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-zfs  4.98T   504K  4.98T        -         -     0%     0%  1.00x    ONLINE  -
root@pve3:~# zpool status
  pool: pool-zfs
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        pool-zfs    ONLINE       0     0     0
          mpatha    ONLINE       0     0     0

errors: No known data errors
I then created ZFS storage on the cluster.
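For reference, the resulting storage definition in /etc/pve/storage.cfg should look roughly like this (written from memory, the exact content/options may differ):
Bash:
# relevant part of /etc/pve/storage.cfg (from memory; options may differ)
zfspool: pool-zfs
        pool pool-zfs
        content images,rootdir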
The storage appears on both hosts, but only pve1 contains the VM disks.
Did I miss a step so that all the hosts can see the same storage?
I would be very grateful for any expertise or experience you can share on this problem.
Have a nice day.