Here is an example of what I think you are trying to achieve. I will use Blockbridge storage and some shortcuts that will not be available for your NAS; however, I am sure you can find alternative methods to achieve the same result:
1) Create a disk on the NAS
Code:
# bb vss provision -c 32GiB --with-disk --label proxmox --disk-label disk
2) Attach the disk to both nodes in the cluster, i.e. the command is run on each node. (If your NAS has no Proxmox plugin, see the open-iscsi sketch after the code below.)
Code:
# bb host attach -d disk --multi-writer --persist
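If your NAS exposes a plain iSCSI target without a Proxmox plugin, the equivalent attach step would use open-iscsi directly on each node (or you could let PVE manage the session via "pvesm add iscsi"). A rough sketch; the portal IP and target IQN below are placeholders for your environment:
Code:
# discover the targets exported by the NAS
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# log in to the target on this node
iscsiadm -m node -T iqn.2000-01.com.example:nas-lun0 -p 192.168.1.50 --login
# make the login persistent across reboots
iscsiadm -m node -T iqn.2000-01.com.example:nas-lun0 -p 192.168.1.50 --op update -n node.startup -v automatic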
3) Confirm the disk is visible to the OS
Code:
root@pve7demo1:~# lsblk
sdc 8:32 0 32G 0 disk
4) Note that this is an iSCSI disk
Code:
root@pve7demo1:~# iscsiadm -m node
172.16.200.42:3260,1 iqn.2009-12.com.blockbridge:t-pjwafzugcjf-ccngipjn
5) Create the LVM structure
Code:
# sgdisk -N 1 /dev/sdc
# pvcreate /dev/sdc1
# vgcreate vmdata /dev/sdc1
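As an optional sanity check, you can confirm the PV/VG metadata is in place; on the other node a rescan may be needed before the new VG shows up:
Code:
# on the node where the VG was created
pvs /dev/sdc1
vgs vmdata
# on the other node, rescan so the new VG metadata is picked up
vgscan
vgs vmdata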
6) Create a PVE shared storage object
Code:
# pvesm add lvm vmdata --vgname vmdata --content rootdir,images --shared 1
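For reference, this should end up in /etc/pve/storage.cfg (which is cluster-wide) as an entry roughly like the one below; "pvesm status" will then list the storage on every node:
Code:
lvm: vmdata
        vgname vmdata
        content rootdir,images
        shared 1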
7) Create a container with its rootfs placed on Blockbridge iSCSI storage, using our plugin
Code:
pct create 103 local:vztmpl/ubuntu-22.10-standard_22.10-1_amd64.tar.zst --rootfs bb-iscsi:1
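If you do not have the Blockbridge plugin, you could instead place the rootfs on the shared LVM storage created in step 6; something along these lines should work (the 4 is just an example size in GiB):
Code:
pct create 103 local:vztmpl/ubuntu-22.10-standard_22.10-1_amd64.tar.zst --rootfs vmdata:4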
8) Add a data disk to the container. I would normally use the Blockbridge driver in our installation; however, for this example I will use the thick LVM storage created on the raw iSCSI-attached disk in steps 1-5
Code:
# pct set 103 -mp0 vmdata:10,mp=/vmdata
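You can confirm how the mount point was recorded in the container configuration and that the logical volume was created in the shared VG; I would expect an mp0 line along the lines of "mp0: vmdata:vm-103-disk-0,mp=/vmdata,size=10G":
Code:
# show the resulting container configuration
pct config 103
# the LV itself should now exist in the shared VG
lvs vmdata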
9) Let's examine the disk presentation inside the container:
Code:
pct enter 103
root@CT103:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sde 996780 631272 296696 69% /
/dev/mapper/vmdata-vm--103--disk--0 10218772 28 9678072 1% /vmdata
Moving to the second node in the cluster
10) Examine lsblk and notice that the individual iSCSI disk is present and active. The root disk in this case is not present, as the Blockbridge plugin takes care of attaching the disk when it is needed.
Code:
root@pve7demo2:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:33 0 32G 0 disk
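With a shared LVM storage, my understanding is that PVE only activates a logical volume on the node that is actually running the guest. You can check this with lvs (the fifth character of the Attr column shows whether the LV is active):
Code:
# list LVs in the shared VG and their activation state
lvs -o lv_name,lv_attr vmdata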
11) Migrate the container (the command is run from node1)
Code:
pct migrate 103 pve7demo2 --restart
12) Proxmox took care of ensuring that no Proxmox-controlled application on node1 is still accessing the data disk vm-103-disk-0 (on the vmdata VG); access has been handed over to the LXC on node2. The rootfs disk located on Blockbridge storage has been automatically re-attached and re-mounted on node2 as well.
Code:
root@pve7demo2:~# bb host info
== Localhost: pve7demo2
Hostname pve7demo2
Initiator Name iqn.1993-08.org.debian:01:ce28b56bb9dc
== Disks attached to pve7demo2
vss [1] disk capacity paths protocol transport mode device
------------------------- ------- -------- ----- -------- --------- ---------- --------
bb-iscsi:vm-103-disk-0 base 4.0GiB 1 iscsi TCP/IP read-write /dev/sdi
proxmox disk 32.0GiB 1 iscsi TCP/IP read-write /dev/sdb
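To double-check from the node2 side, you can also verify that the container is running there and that the data disk is mounted inside it again (output omitted here):
Code:
# on node2, after the migration
pct status 103
pct enter 103
df -h /vmdata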
To summarize: when using non-Blockbridge iSCSI storage with a PVE cluster, the most straightforward approach is to use LVM and let PVE manage it fully. The resources are presented to the CT as mount points (man pct). The handling and transfer of ownership of these mount points depends on the underlying storage. You can also pass through an entire LUN; there are examples available via forum search, and a short sketch follows below.
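For completeness, a minimal sketch of the LUN pass-through variant, assuming the LUN is already attached and carries a filesystem; the by-id path and mount path are placeholders, and the device must be visible under the same name on any node the CT may run on:
Code:
# pass the whole block device into the container as a device mount point
pct set 103 -mp1 /dev/disk/by-id/scsi-36001405example00000000000000000000,mp=/mnt/lun,backup=0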
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox