iSCSI question

david.eberle_sts
New Member · Nov 25, 2024
Good day,

I have a PVE cluster that I set up and connected a Synology iSCSI drive to it. I was able to install an OS on my cluster using the iSCSI target for the data drive successfully. Everything was working correctly until I came back into the office today (Monday): for some reason the IP address on the iSCSI device had changed and I lost the HDD for my VM. I was able to remap the iSCSI connection successfully, but I was not able to access that VM HDD (as far as I can tell, nothing has changed other than the IP address). However, when I go to the storage view of the connection/node I can see the VM disk that I want to attach to the VM. My question is: can I attach that disk to the VM and restart the VM with it connected? If so, how do I do that? Or is this a bug that needs to be fixed in a future release?

TIA
 
Hi @david.eberle_sts, welcome to the forum.

To be frank, it's unclear what happened with your environment. It seems that your external storage has re-IP'ed itself, wreaking havoc in the configuration? Your storage should not be changing its IP at will; I don't think any hypervisor would be able to handle this.

I was able to remap the iSCSI connection successfully, but I was not able to access that VM HDD.
It would help if you illustrated this statement with any or all of the following: commands, VM config, storage config, logs, screenshots, etc.

My question is: can I attach that disk to the VM and restart the VM with it connected? If so, how do I do that?
Please provide your current running configuration:
cat /etc/pve/storage.cfg
pvesm status
pvesm list [storage]

Explain, as best as you can, how the previous configuration was different from the current one.

Provide your VM configuration:
qm config [vmid]

Provide relevant logs, or at least:
journalctl -n 100

Provide system state:
iscsiadm -m node
iscsiadm -m session
lsblk
lsscsi
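If lsscsi is not installed on your nodes, it is available from the standard Debian repositories:

Code:
# lsscsi is not part of a default PVE install
apt install lsscsi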

It's unlikely that what you ran into is a Proxmox bug. When bad things happen in the environment, recovery is not always straightforward.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17 Thank you for the welcome. I have been using Proxmox for a while now, but not in this capacity. Here is the info that you requested:

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

iscsi: proxmoxsym
        portal 10.11.12.193
        target iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e
        content images
        nodes proxmox2,proxmox1

lvm: symstorage
        vgname PVE
        base proxmoxsym:0.0.1.scsi-36001405095011e8d863cd45d3daeecdf
        content rootdir,images
        nodes proxmox1,proxmox2
        saferemove 0
        shared 1

pbs: store
        datastore store2
        server 10.11.12.24
        content backup
        fingerprint XXXXXXXX
        nodes proxmox1,proxmox2
        prune-backups keep-all=1
        username root@pam


Code:
pvesm status
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not login to [iface: default, target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: fe80::211:32ff:fea8:9916,3260].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not login to [iface: default, target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: 192.168.2.250,3260].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not login to [iface: default, target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: fe80::211:32ff:fea8:9917,3260].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: fe80::211:32ff:fea8:9916,3260]
Logging in to [iface: default, target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: 192.168.2.250,3260]
Logging in to [iface: default, target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: fe80::211:32ff:fea8:9917,3260]
command '/usr/bin/iscsiadm --mode node --targetname iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e --login' failed: exit code 15
Name              Type     Status           Total            Used       Available        %
local              dir     active        98497780        10153540        83294692   10.31%
local-lvm      lvmthin     active      1792532480               0      1792532480    0.00%
proxmoxsym       iscsi     active               0               0               0    0.00%
store              pbs     active      1055761840          263320      1001795100    0.02%
symstorage         lvm     active      2122313728       262144000      1860169728   12.35%

Code:
qm config 100
agent: 1
boot: order=ide2;net0
cores: 4
cpu: x86-64-v2-AES
memory: 8096
meta: creation-qemu=9.0.2,ctime=1732304920
name: win2022servertest
net0: virtio=BC:24:11:27:34:AD,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=1a78b5f5-312d-4412-ae14-3b8b5497bc4b
sockets: 4
vmgenid: 62542156-ba72-4ed2-b1ca-1890db923fd7

Code:
journalctl -n 100
Nov 25 14:46:30 proxmox1 pvestatd[1092]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e ->
Nov 25 14:46:30 proxmox1 iscsid[38394]: Connection-1:0 to [target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: 192.168.2.250,3260>
Nov 25 14:46:31 proxmox1 pvestatd[1092]: status update time (8.729 seconds)
Nov 25 14:46:33 proxmox1 iscsid[38394]: connection-1:0 cannot make a connection to fe80::211:32ff:fea8:9916:3260 (-1,22)
Nov 25 14:46:33 proxmox1 iscsid[38394]: connection-1:0 cannot make a connection to fe80::211:32ff:fea8:9917:3260 (-1,22)
Nov 25 14:46:35 proxmox1 iscsid[38394]: connect to 10.11.12.224:3260 failed (No route to host)
Nov 25 14:46:36 proxmox1 iscsid[38394]: Connection-1:0 to [target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: fe80::211:32ff:fea>
Nov 25 14:46:36 proxmox1 iscsid[38394]: Connection-1:0 to [target: iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e, portal: fe80::211:32ff:fea>
Nov 25 14:46:37 proxmox1 iscsid[38394]: connect to 192.168.2.250:3260 failed (No route to host)
[and this repeats]

Code:
iscsiadm -m node
10.11.12.193:3260,1 iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e
10.11.12.224:3260,1 iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e
192.168.2.250:3260,1 iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e
[fe80::211:32ff:fea8:9916]:3260,1 iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e
[fe80::211:32ff:fea8:9917]:3260,1 iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e

Code:
iscsiadm -m session
tcp: [1] 10.11.12.224:3260,1 iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e (non-flash)
tcp: [2] 10.11.12.193:3260,1 iqn.2000-01.com.synology:CYACSBCKP.default-target.16fcdf3207e (non-flash)

Code:
lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                      8:0    0  1.8T  0 disk
├─sda1                   8:1    0 1007K  0 part
├─sda2                   8:2    0    1G  0 part /boot/efi
└─sda3                   8:3    0  1.8T  0 part
  ├─pve-swap           252:0    0    8G  0 lvm  [SWAP]
  ├─pve-root           252:1    0   96G  0 lvm  /
  ├─pve-data_tmeta     252:2    0 15.9G  0 lvm 
  │ └─pve-data         252:4    0  1.7T  0 lvm 
  └─pve-data_tdata     252:3    0  1.7T  0 lvm 
    └─pve-data         252:4    0  1.7T  0 lvm 
sdb                      8:16   0  3.6T  0 disk
└─sdb4                   8:20   0  3.6T  0 part
sdc                      8:32   0    2T  0 disk
└─PVE-vm--100--disk--0 252:5    0  250G  0 lvm 
sdd                      8:48   0    2T  0 disk
sr0                     11:0    1 1024M  0 rom

Code:
lsscsi
-bash: lsscsi: command not found

The IP ending in .193 is the "new IP" and .224 is the "old IP" that has the VM HDD on it that I would like to attach to VMID 100.
My setup was a straight OS install of Windows Server 2022; the HDD was not local to the host but was housed on the iSCSI LUN. Nothing that I can tell was changed other than the IP. Attached is a screenshot of the VM HDD that I would like to attach to the VM.

[Attachment: screenshot of the storage view showing the VM disk]
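For reference, which block device (sdc/sdd above) is attached through which portal/session can be checked with something like this (a sketch):

Code:
# show, per iSCSI session, the portal in use and the attached SCSI disk
iscsiadm -m session -P 3 | grep -E 'Target:|Current Portal:|Attached scsi disk'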

I hope this helps.

TIA

David
 
You did not run "pvesm list [iscsi_storage]"; however, your GUI snippet appears to show the LUN.
Run "qm disk rescan", then, if all went well, visit the Hardware page of your VM and assign the "unused0" disk as the appropriate device.

On a side note: your device is reporting multiple iSCSI portals that are not accessible from PVE. This can lead to system confusion down the road.
Additionally, if this were my system, I would try to get to the bottom of why the IP changed.
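If you decide to clean up the stale portal records on the PVE side, something along these lines should do it (a sketch; whether the .224 and 192.168.2.250 portals are really obsolete is an assumption only you can confirm):

Code:
# log out of a stale portal first (only if it is truly unused)
iscsiadm -m node -p 10.11.12.224:3260 -u
# then remove its node record
iscsiadm -m node -p 10.11.12.224:3260 -o delete
# verify what is left
iscsiadm -m node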

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17 I ran "qm disk rescan" and was able to assign that disk to the VM as the HDD, thank you for that. The issue we found was that for some reason the IP had been changed to DHCP rather than a static assignment; we fixed that and it is now a static IP. One last issue that we found is that when we mapped/mounted that iSCSI drive/share to the cluster, one node (proxmox1) sees it and is able to access it, while the other node (proxmox2) cannot and gives us a 500 error with a "storage proxmoxsym not online" message when we try to migrate a VM between the two nodes within the cluster.

TIA

David
 
One last issue that we found is that when we mapped/mounted that iSCSI drive/share to the cluster, one node (proxmox1) sees it and is able to access it, while the other node (proxmox2) cannot and gives us a 500 error with a "storage proxmoxsym not online" message.
Well, I hope it's the last one!

The "not online" means that the PVE health check from proxmox2 is not working. There could be many reasons: network, iscsi authentication, etc.
You can run all the same commands I provided in the comment #2, then examine the output to see if you can narrow down the issue.
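As a starting point, a few quick checks run directly on proxmox2 usually narrow it down (a sketch; the portal 10.11.12.193 is taken from your storage.cfg):

Code:
# can proxmox2 reach the portal at all?
ping -c 3 10.11.12.193
# does the target answer a discovery from this node?
iscsiadm -m discovery -t sendtargets -p 10.11.12.193
# does this node have a session, and does PVE consider the storage online?
iscsiadm -m session
pvesm status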

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
