It would probably help the community if you were to provide more data. Start from scratch and show the exact commands you run and the output.
Don't cut things out; if the output is long, use the SPOILER tag.
Start with no LVM or multipath (stop the services if needed). Provide the output of "lsblk", "lsscsi", and "blkid".
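For example, a minimal sketch (assuming the standard Debian multipath-tools units; lsscsi is not installed by default on PVE, so "apt install lsscsi" first):
systemctl stop multipathd multipathd.socket
lsblk
lsscsi
blkid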
Add/enable multipath. Provide your exact configuration file: cat /etc/multipath.conf | egrep -v "^$|^#".
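Something along these lines (assuming the stock Debian packaging):
apt install multipath-tools
systemctl enable --now multipathd
cat /etc/multipath.conf | egrep -v "^$|^#"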
What is in your /etc/multipath/wwids?
What are the OS and PVE versions (pveversion)? Did you install from the PVE ISO or on top of Debian?
What are the package versions for multipath?
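Something like the following covers both:
pveversion -v
dpkg -l | grep -i multipath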
Make sure your system is operational and survives reboot at this point.
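A quick sanity check, for instance:
reboot
# once the node is back up:
multipath -ll
lsblk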
What is in your lvm.conf? cat /etc/lvm/lvm.conf | egrep -v "^$|^#| *#"
Only after that, add LVM. Make sure you show the commands you use to do so, the state of the devices, etc. Follow up with another reboot.
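For example, a minimal sketch ("mpatha" and "vg_san" are placeholder names; use whatever "multipath -ll" actually shows on your node):
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha
pvs
vgs
reboot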
Many thousands of people use this configuration today; something in your workflow must be different, but it's hard to say with limited information.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
As you'd expect, I followed guides from the internet for this and merged information from two posts. It's PVE 8.2.4 (installed from the ISO) on the no-subscription repository for now (I need a POC to justify the 10k CAD cost of a 6-socket Premium subscription to my boss). lsscsi returns "command not found", but I've included lsblk and blkid along with multipath -ll.
I think the problem is the filter in /etc/lvm/lvm.conf (see the filter sketch after the lvm.conf output below). These are the two guides I used:
https://gist.github.com/mrpeardotnet/547aecb041dbbcfa8334eb7ffb81d784
https://pve.proxmox.com/wiki/ISCSI_Multipath#Configuration
Here's the config I run:
/etc/multipath.conf
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    rr_min_io 100
    failback immediate
    no_path_retry queue
    find_multipaths yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9].*"
}
devices {
    device {
        vendor "(HP|HPE)"
        product "MSA [12]0[456]0 (SAN|SAS|FC|iSCSI)"
        path_grouping_policy "group_by_prio"
        prio "alua"
        failback "immediate"
        no_path_retry 18
    }
}
multipaths {
    multipath {
        wwid 3600c0ff000530276c0ea7a6601000000
        alias RAID5
    }
    multipath {
        wwid 3600c0ff00053025039357d6601000000
        alias RAID10
    }
}
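To apply and verify the above, something like:
systemctl reload multipathd
multipath -ll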
/etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/3600c0ff000530276c0ea7a6601000000/
/3600c0ff00053025039357d6601000000/
/etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
RAID5 3600c0ff000530276c0ea7a6601000000
RAID10 3600c0ff00053025039357d6601000000
/etc/lvm/lvm.conf
devices {
    # added by pve-manager to avoid scanning ZFS zvols and Ceph rbds
    global_filter=["r|/dev/zd.*|","r|/dev/rbd.*|"]
}
cat /etc/lvm/lvm.conf | egrep -v "^$|^#| *#"
config {
}
devices {
}
allocation {
}
log {
}
backup {
}
shell {
}
global {
}
activation {
}
dmeventd {
}
devices {
    global_filter=["r|/dev/zd.*|","r|/dev/rbd.*|"]
}
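For what it's worth: the stock PVE filter above only rejects zvols and rbd devices, so LVM will also scan the raw /dev/sdX paths underneath the multipath maps and can grab a PV through the wrong device. A hedged sketch of a stricter whitelist-style global_filter (the /dev/sda boot-disk entry is my assumption; adjust it to your actual local disk, and check with pvs/vgs before rebooting):
devices {
    global_filter = [ "a|/dev/mapper/RAID5|", "a|/dev/mapper/RAID10|", "a|/dev/sda.*|", "r|.*|" ]
}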