Hello,
I have a working cluster of 3 PVE nodes plus 1 storage node (a PVE installation that is not part of the cluster and is used for nothing other than ZFS), which works with ZFS over iSCSI without issues.
I am trying to add a second storage node using the same setup process as for the first (based on my notes), but when I try to create a hard disk on the new storage, I get a popup with the error message "failed to update VM 103: Invalid lun definition in config! (500)".
Executing, on the PVE host, the ssh command that PVE itself uses (I got the exact command from the error message that was initially displayed because of the unverified server host key):
Code:
root@hv01 ~ # /usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/1.2.3.4_id_rsa root@1.2.3.4 zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -d1 -Hp rpool/data
rpool/data - - filesystem 0
rpool/data/vm-103-disk-0 8589934592 - volume -
rpool/data/vm-103-disk-1 5368709120 - volume -
rpool/data/vm-103-disk-2 2147483648 - volume -
rpool/data/vm-103-disk-3 2147483648 - volume -
rpool/data/vm-103-disk-4 2147483648 - volume -
rpool/data/vm-103-disk-5 1073741824 - volume -
root@hv01 ~ #
This correctly returns the list of disks that were requested to be created, with the proper sizes. However, targetcli shows that no block backstores or LUNs have been created for the above volumes (executing targetcli on the storage node):
Code:
root@store02:~# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> ls
o- / .................................................................................. [...]
o- backstores ....................................................................... [...]
| o- block ........................................................... [Storage Objects: 0]
| o- fileio .......................................................... [Storage Objects: 1]
| | o- tmpdsk .............. [/rpool/data/disks/tmpdsk.img (100.0MiB) write-back activated]
| | o- alua ............................................................ [ALUA Groups: 1]
| | o- default_tg_pt_gp ................................ [ALUA state: Active/optimized]
| o- pscsi ........................................................... [Storage Objects: 0]
| o- ramdisk ......................................................... [Storage Objects: 0]
o- iscsi ..................................................................... [Targets: 1]
| o- iqn.2023-11.cloud.myhost.store02:data ............................... [TPGs: 1]
| o- tpg1 ........................................................ [no-gen-acls, no-auth]
| o- acls ................................................................... [ACLs: 0]
| o- luns ................................................................... [LUNs: 1]
| | o- lun0 ......... [fileio/tmpdsk (/rpool/data/disks/tmpdsk.img) (default_tg_pt_gp)]
| o- portals ............................................................. [Portals: 1]
| o- 1.2.3.4:3260 ........................................................ [OK]
o- loopback .................................................................. [Targets: 0]
o- vhost ..................................................................... [Targets: 0]
o- xen-pvscsi ................................................................ [Targets: 0]
/>
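For context on where the LUN definitions should come from: targetcli persists its configuration as JSON (saveconfig.json, typically under /etc/rtslib-fb-target/ on Debian-based systems; that path and the field names below are my assumption from rtslib-fb, so please verify against the real file). A minimal sketch of listing the saved storage objects, using an inlined sample instead of the real file:

```shell
# Hedged sketch: inlined sample standing in for /etc/rtslib-fb-target/saveconfig.json.
# On the real storage node, replace the printf with: cat /etc/rtslib-fb-target/saveconfig.json
cfg='{"storage_objects": [{"name": "tmpdsk", "plugin": "fileio", "dev": "/rpool/data/disks/tmpdsk.img"}]}'
printf '%s\n' "$cfg" | python3 -c '
import json, sys
# Print one line per saved backstore: plugin type, name, backing device/file
for obj in json.load(sys.stdin)["storage_objects"]:
    print(obj["plugin"], obj["name"], obj["dev"])
'
```

On a correctly working setup I would expect one `block` entry per zvol here; on this node only the fileio tmpdsk shows up, matching the targetcli listing above.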
Note that in the PVE GUI, when navigating to this storage on an individual node, both the "Summary" and "VM Disks" panes correctly list the sizes and contents.
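For reference, one quick sanity check that can be run on the storage node (a sketch; the device paths are my assumption based on the rpool/data layout in the zfs output above, since block backstores would normally point at the zvol device nodes):

```shell
# Hedged sketch: check whether the zvol device nodes for the listed disks
# exist at all on the storage node. Paths assumed from the zfs output above.
check_zvol_devs() {
    # Prints one line per path: "<path> present" or "<path> MISSING"
    for dev in "$@"; do
        if [ -e "$dev" ]; then echo "$dev present"; else echo "$dev MISSING"; fi
    done
}
check_zvol_devs /dev/zvol/rpool/data/vm-103-disk-0 /dev/zvol/rpool/data/vm-103-disk-1
```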
Can you please suggest next steps in debugging this issue?
Thank you in advance!