Are you doing the iSCSI connection manually, i.e. directly via iscsiadm, or are you adding a Proxmox iSCSI storage pool?
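For reference, the storage-pool variant is what you'd add under Datacenter -> Storage -> Add -> iSCSI in the GUI, and it ends up as an entry in /etc/pve/storage.cfg roughly like the sketch below (the storage ID, portal and target here are only placeholders, not your real values):

iscsi: my-iscsi
        portal <portal_ip>
        target <target_iqn>
        content none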
If it's the former, then you need to set up that connection on each node, as
@LnxBil mentioned. Don't forget to mark the session to auto-start, or it won't be present on reboot.
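A minimal sketch of the manual route, in case it helps; <portal_ip> and <target_iqn> are placeholders for your actual portal and target IQN:

# discover targets and log in (repeat on every node)
iscsiadm -m discovery -t sendtargets -p <portal_ip>
iscsiadm -m node -T <target_iqn> -p <portal_ip>:3260 --login
# mark the node record for automatic startup so the session survives a reboot
iscsiadm -m node -T <target_iqn> -p <portal_ip>:3260 --op update -n node.startup -v automatic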
If it's the latter, please provide the output of:
- cat /etc/pve/storage.cfg
- pvesm status (from each node)
- iscsiadm -m node (from each node)
- iscsiadm -m session (from each node)
- pvesm list [iscsi_storage_name] (from each node)
If you plan to use the iSCSI LUN as shared storage, then yes, you have to create the LVM structure manually, but only from one node (see the sketch below).
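Roughly like this, run on a single node; /dev/sdX, the VG name and the storage ID below are placeholders, so adjust them to whatever device the LUN shows up as (lsblk or iscsiadm -m session -P 3 will tell you):

# create the PV/VG on the iSCSI LUN (one node only)
pvcreate /dev/sdX
vgcreate iscsi_vg /dev/sdX
# register it cluster-wide as shared LVM storage
pvesm add lvm iscsi_lvm --vgname iscsi_vg --content images,rootdir --shared 1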
You'd need to provide the same output that I listed above. However, it'd be cleaner if you opened a separate thread.
Good luck
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
root@pve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: lvm
        vgname pve1
        content rootdir,images
        shared 1

root@pve2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: lvm
        vgname pve1
        content rootdir,images
        shared 1

root@pve3:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: lvm
        vgname pve1
        content rootdir,images
        shared 1
root@pve1:~# pvesm status
Name Type Status Total Used Available %
local dir active 71017632 11527716 55836696 16.23%
local-lvm lvmthin active 148086784 17059597 131027186 11.52%
lvm lvm active 419426304 104857600 314568704 25.00%
root@pve2:~# pvesm status
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Name Type Status Total Used Available %
local dir active 71017632 7254868 60109544 10.22%
local-lvm lvmthin active 148086784 8648268 139438515 5.84%
lvm lvm inactive 0 0 0 0.00%
root@pve3:~# pvesm status
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Name Type Status Total Used Available %
local dir active 71017632 7255032 60109380 10.22%
local-lvm lvmthin active 148086784 8692694 139394089 5.87%
lvm lvm inactive 0 0 0 0.00%
root@pve3:~# iscsiadm -m node
10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0
root@pve2:~# iscsiadm -m node
10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0
root@pve1:~# iscsiadm -m node
10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0
root@pve1:~# iscsiadm -m session
tcp: [1] 10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0 (non-flash)
root@pve2:~# iscsiadm -m session
tcp: [4] 10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0 (non-flash)
root@pve3:~# iscsiadm -m session
tcp: [5] 10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0 (non-flash)
root@pve3:~# pvesm list lvm
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Volid Format Type Size VMID
root@pve2:~# pvesm list lvm
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Volid Format Type Size VMID
root@pve1:~# pvesm list lvm
Volid Format Type Size VMID
lvm:vm-103-disk-0 raw images 53687091200 103
lvm:vm-103-disk-1 raw images 53687091200 103