Hello community! I have two servers attached to one shared storage system via FC. I set up a shared LVM and multipath, but I have doubts about whether everything is configured the way it should be. Proxmox version 7.4-3.
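For reference, the shared LVM sits on top of the multipath device; a minimal sketch of how such a setup is created (the VG and storage names here are placeholders, not necessarily the exact ones I used):
Code:
# create the PV/VG on the multipath alias, not on an individual path like sdb
pvcreate /dev/mapper/vmdisk
vgcreate vmstore /dev/mapper/vmdisk
# register the VG in Proxmox as a shared LVM storage
pvesm add lvm vmstore --vgname vmstore --shared 1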
multipath.conf:
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600000e00d2c0000002c0c2e00000000"
    wwid "3600000e00d2c0000002c0c2e00010000"
}
multipaths {
    multipath {
        wwid "3600000e00d2c0000002c0c2e00000000"
        alias vmdisk
    }
    multipath {
        wwid "3600000e00d2c0000002c0c2e00010000"
        alias vmbackup
    }
}
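To apply it I restarted multipathd and re-listed the maps, roughly like this (a sketch; exact commands reproduced from memory), which produced the output below:
Code:
# restart multipathd so the edited multipath.conf is picked up
systemctl restart multipathd
# list the resulting multipath maps and their paths
multipath -ll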
vmbackup (3600000e00d2c0000002c0c2e00010000) dm-31 FUJITSU,ETERNUS_DXL
size=10T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 13:0:0:2 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 12:0:0:2 sdc 8:32 active ready running
vmdisk (3600000e00d2c0000002c0c2e00000000) dm-30 FUJITSU,ETERNUS_DXL
size=11T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 13:0:0:1 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 12:0:0:1 sdb 8:16 active ready running
I migrated a number of VMs with their disks to these nodes, and the disks ended up placed rather strangely: some went straight onto the sdb device, while others went onto vmdisk (the multipath device).
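For what it's worth, this is roughly how I check where the LVs actually end up (a generic sketch, nothing here is specific to my setup):
Code:
# show which block device each volume group sits on
pvs -o pv_name,vg_name
# show the whole device stack: paths -> multipath map -> LVs
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT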
I also made a change to /etc/lvm/lvm.conf because the setup articles recommended it, but I don't understand why it is needed. Can you explain what this is for?
Code:
filter = [ "a|/dev/mapper/vm.*|", "r|.*|" ]
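For context, that line lives inside the devices { } section of /etc/lvm/lvm.conf; a minimal sketch of the section with everything else left at defaults:
Code:
devices {
    # accept only the multipath aliases (vmdisk, vmbackup), reject all other block devices
    filter = [ "a|/dev/mapper/vm.*|", "r|.*|" ]
}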
After moving all VMs to the first node, lsblk looks like this