add new volume with iscsi

rickygm

Renowned Member
Sep 16, 2015
Hi, I am trying to add a new iSCSI volume. The discovery scan shows it correctly, but when I try to log in it shows me the following:

Code:
iscsiadm -m node --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals

discovered:

Code:
iscsiadm -m discovery -t st -p  10.10.10.1
10.10.10.1:3260,2 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b
[root@srv-02 ~]# iscsiadm -m discovery -t st -p  10.10.9.1
10.10.9.1:3260,1 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a


version of pve
Code:
[root@srv-02 ~]# pveversion -v
proxmox-ve: not correctly installed (running kernel: 6.5.13-1-pve)
pve-manager: not correctly installed (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-11
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.2
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.0
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.4
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.4
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2

If I list the disks, I don't see the correct size for sdh and sdi:

Code:
[root@srv-02 ~]# lsscsi -s
[0:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sda    120GB
[1:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sdb    120GB
[3:0:0:0]    disk    ATA      CT240BX500SSD1   052   /dev/sdc    240GB
[6:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdd   1.28TB
[6:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdf   6.15TB
[7:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sde   1.28TB
[7:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdg   6.15TB
[8:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdh    131kB
[9:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdi    131kB


Any ideas? Is this a bug in iSCSI?
 
Hi,
Hi, I am trying to add a new iSCSI volume. The discovery scan shows it correctly, but when I try to log in it shows me the following:

Code:
iscsiadm -m node --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals
If this is the complete output of the command, this just means the initiator already has a session to every known portal, and thus cannot log in to any more portals. Indeed, in your other post [1] it looks like there is a session to each discovered portal.
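To double-check which sessions are already established, and to log in to a single not-yet-connected portal instead of all known nodes, something like the following should work (the target/portal are simply taken from your discovery output):

Code:
# list all currently established iSCSI sessions (portal, target IQN, SID)
iscsiadm -m session
# log in to one specific target/portal only, instead of "iscsiadm -m node --login"
iscsiadm -m node -T iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a -p 10.10.9.1:3260 --login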
version of pve
Code:
[root@srv-02 ~]# pveversion -v
proxmox-ve: not correctly installed (running kernel: 6.5.13-1-pve)
pve-manager: not correctly installed (running version: 8.1.4/ec5affc9e41f1d79)
[...]
This looks like an upgrade from PVE 7 to PVE 8 may have gone wrong -- possibly the wrong apt upgrade was used instead of the correct apt dist-upgrade? Please double-check the upgrade guide [2].
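As a rough sketch (the upgrade guide [2] has the authoritative steps): verify that all APT repositories point to the Debian bookworm / PVE 8 sources, then run a full dist-upgrade to pull in the packages that are still missing:

Code:
# check that all APT sources reference bookworm (PVE 8)
grep -r . /etc/apt/sources.list /etc/apt/sources.list.d/
# complete the upgrade with a full dist-upgrade (not a plain "apt upgrade")
apt update
apt dist-upgrade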
If I list the disks, I don't see the correct size for sdh and sdi:

Code:
[root@srv-02 ~]# lsscsi -s
[0:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sda    120GB
[1:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sdb    120GB
[3:0:0:0]    disk    ATA      CT240BX500SSD1   052   /dev/sdc    240GB
[6:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdd   1.28TB
[6:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdf   6.15TB
[7:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sde   1.28TB
[7:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdg   6.15TB
[8:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdh    131kB
[9:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdi    131kB
131kB looks a bit small indeed. What is the size you would expect? Are the sizes for sdd/sde and sdf/sdg correct?

How did you set up the iSCSI connections? Did you create an iSCSI storage? [3]

It looks like there are multiple paths to the iSCSI targets, so multipath-tools should also be set up, see [4] for more details.
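As a minimal sketch of the multipath side (the device name and WWID below are placeholders; see the wiki [4] for the full setup):

Code:
# show the current multipath topology
multipath -ll
# determine the WWID of a new iSCSI disk (replace /dev/sdX with the actual device)
/lib/udev/scsi_id -g -u -d /dev/sdX
# whitelist that WWID and reload the multipath maps
multipath -a <WWID>
multipath -r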

Please post/attach the output of the following commands:
Code:
iscsiadm -m session -P3
cat /etc/pve/storage.cfg
lsblk -o +FSTYPE,MODEL,TRAN,WWN

[1] https://forum.proxmox.com/threads/137953/post-718238
[2] https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
[3] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_open_iscsi
[4] https://pve.proxmox.com/wiki/Multipath
 
Hi,

If this is the complete output of the command, this just means the initiator already has a session to every known portal, and thus cannot log in to any more portals. Indeed, in your other post [1] it looks like there is a session to each discovered portal.

Yes, I have two different SANs with FreeNAS; Proxmox finds them correctly.


This looks like an upgrade from PVE 7 to PVE 8 may have gone wrong -- possibly the wrong apt upgrade was used instead of the correct apt dist-upgrade? Please double-check the upgrade guide [2].

It's strange, because I always follow the official Proxmox guide and the correct procedures. I'm going to update that host.


131kB looks a bit small indeed. What is the size you would expect? Are the sizes for sdd/sde and sdf/sdg correct?


Yes, the values for those disks are correct. Whenever we add a new SAN via iSCSI, we make sure that it has multiple paths to avoid any hardware failure.

Code:
[0:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sda    120GB
[1:0:1:0]    disk    ATA      KINGSTON SA400S3 B1H5  /dev/sdb    120GB
[3:0:0:0]    disk    ATA      CT240BX500SSD1   052   /dev/sdc    240GB
[6:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdd   1.28TB
[6:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdf   6.15TB
[7:0:0:6]    disk    FreeNAS  iSCSI Disk       0123  /dev/sde   1.28TB
[7:0:0:12]   disk    FreeNAS  iSCSI Disk       0123  /dev/sdh   6.15TB
[8:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdg    131kB
[9:0:0:1]    disk    FreeNAS  iSCSI Disk       0123  /dev/sdi    131kB

The problem I have is that when adding the new SAN, the disks show the size incorrectly (sdg and sdi); they should show 2.8 TB.


How did you set up the iSCSI connections?

Do you mean the configuration in FreeNAS?

Did you create an iSCSI storage? [3]


Yes, I'll show you part of the configuration in FreeNAS.

It looks like there are multiple paths to the iSCSI targets, so multipath-tools should also be set up, see [4] for more details.

Please post/attach the output of the following commands:
Code:
iscsiadm -m session -P3

Code:
[root@srv-02 ~]# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 2.1.8
Target: iqn.2021-08.com.storage2.domain.com:tgt-iscsi-b (non-flash)
    Current Portal: 10.10.10.10:3260,2
    Persistent Portal: 10.10.10.10:3260,2
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:ecd7afb25c9b
        Iface IPaddress: 10.10.10.2
        Iface HWaddress: default
        Iface Netdev: default
        SID: 1
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 5
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 131072
        FirstBurstLength: 131072
        MaxBurstLength: 16776192
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 6    State: running
        scsi6 Channel 00 Id 0 Lun: 12
            Attached scsi disk sdi        State: running
        scsi6 Channel 00 Id 0 Lun: 6
            Attached scsi disk sdd        State: running
Target: iqn.2021-08.com.storage2.domain.com:tgt-iscsi-a (non-flash)
    Current Portal: 10.10.9.10:3260,1
    Persistent Portal: 10.10.9.10:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:ecd7afb25c9b
        Iface IPaddress: 10.10.9.2
        Iface HWaddress: default
        Iface Netdev: default
        SID: 2
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 5
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 131072
        FirstBurstLength: 131072
        MaxBurstLength: 16776192
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 7    State: running
        scsi7 Channel 00 Id 0 Lun: 12
            Attached scsi disk sdh        State: running
        scsi7 Channel 00 Id 0 Lun: 6
            Attached scsi disk sde        State: running
Target: iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b (non-flash)
    Current Portal: 10.10.10.1:3260,2
    Persistent Portal: 10.10.10.1:3260,2
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:ecd7afb25c9b
        Iface IPaddress: 10.10.10.2
        Iface HWaddress: default
        Iface Netdev: default
        SID: 3
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 15
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 131072
        FirstBurstLength: 131072
        MaxBurstLength: 16776192
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 8    State: running
        scsi8 Channel 00 Id 0 Lun: 1
            Attached scsi disk sdf        State: running
Target: iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a (non-flash)
    Current Portal: 10.10.9.1:3260,1
    Persistent Portal: 10.10.9.1:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:ecd7afb25c9b
        Iface IPaddress: 10.10.9.2
        Iface HWaddress: default
        Iface Netdev: default
        SID: 4
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 15
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 131072
        FirstBurstLength: 131072
        MaxBurstLength: 16776192
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 9    State: running
        scsi9 Channel 00 Id 0 Lun: 1
            Attached scsi disk sdg        State: running
[root@srv-02 ~]#


cat /etc/pve/storage.cfg

Code:
[root@srv-02 ~]# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

lvm: vDFAST3
    vgname vDFAST3
    content images,rootdir
    shared 1

lvm: DLOW2
    vgname vDLOW2
    content images,rootdir
    shared 1

nfs: DATOSBCK
    export /backupdirvm
    path /mnt/pve/DATOSBCK
    server 192.168.11.75
    content iso,images,backup
    prune-backups keep-all=1

nfs: DATOS
    export /backupdirvm
    path /mnt/pve/DATOS
    server 192.168.11.19
    content vztmpl,images,backup,rootdir,iso
    options vers=3
    prune-backups keep-all=1


lsblk -o +FSTYPE,MODEL,TRAN,WWN

Code:
[root@srv-02 ~]# lsblk -o +FSTYPE,MODEL,TRAN,WWN
NAME         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS FSTYPE MODEL TRAN   WWN
sda            8:0    0 111.8G  0 disk                     KINGS ata    0x50026b7784a13fdc
├─sda1         8:1    0  1007K  0 part                                  0x50026b7784a13fdc
├─sda2         8:2    0   512M  0 part              vfat                0x50026b7784a13fdc
└─sda3         8:3    0 111.3G  0 part              zfs_me              0x50026b7784a13fdc
sdb            8:16   0 111.8G  0 disk                     KINGS ata    0x50026b7784a13b74
├─sdb1         8:17   0  1007K  0 part                                  0x50026b7784a13b74
├─sdb2         8:18   0   512M  0 part              vfat                0x50026b7784a13b74
└─sdb3         8:19   0 111.3G  0 part              zfs_me              0x50026b7784a13b74
sdc            8:32   0 223.6G  0 disk                     CT240 ata    0x500a0751e591f9d6
sdd            8:48   0   1.2T  0 disk              LVM2_m iSCSI iscsi  0x6589cfc00000058e92c2b98251118c68
└─DFAST3     252:1    0   1.2T  0 mpath             LVM2_m
  ├─vDFAST3-vm--1010--disk--0
  │          252:20   0    40G  0 lvm
  ├─vDFAST3-vm--1013--disk--0
  │          252:21   0    45G  0 lvm
  ├─vDFAST3-vm--2001--disk--0
  │          252:22   0   300G  0 lvm
  ├─vDFAST3-vm--2008--disk--0
  │          252:23   0    50G  0 lvm
  ├─vDFAST3-vm--2001--disk--1
  │          252:24   0    90G  0 lvm
  ├─vDFAST3-vm--1012--disk--0
  │          252:25   0    40G  0 lvm
  ├─vDFAST3-vm--1016--disk--0
  │          252:26   0    40G  0 lvm
  ├─vDFAST3-vm--1006--disk--0
  │          252:27   0    36G  0 lvm
  └─vDFAST3-vm--1014--disk--0
             252:28   0    32G  0 lvm
sde            8:64   0   1.2T  0 disk              LVM2_m iSCSI iscsi  0x6589cfc00000058e92c2b98251118c68
└─DFAST3     252:1    0   1.2T  0 mpath             LVM2_m
  ├─vDFAST3-vm--1010--disk--0
  │          252:20   0    40G  0 lvm
  ├─vDFAST3-vm--1013--disk--0
  │          252:21   0    45G  0 lvm
  ├─vDFAST3-vm--2001--disk--0
  │          252:22   0   300G  0 lvm
  ├─vDFAST3-vm--2008--disk--0
  │          252:23   0    50G  0 lvm
  ├─vDFAST3-vm--2001--disk--1
  │          252:24   0    90G  0 lvm
  ├─vDFAST3-vm--1012--disk--0
  │          252:25   0    40G  0 lvm
  ├─vDFAST3-vm--1016--disk--0
  │          252:26   0    40G  0 lvm
  ├─vDFAST3-vm--1006--disk--0
  │          252:27   0    36G  0 lvm
  └─vDFAST3-vm--1014--disk--0
             252:28   0    32G  0 lvm
sdf            8:80   0   128K  0 disk                     iSCSI iscsi  0x6589cfc000000dedda78f5ae5b5d88da
sdg            8:96   0   128K  0 disk                     iSCSI iscsi  0x6589cfc000000dedda78f5ae5b5d88da
sdh            8:112  0   5.6T  0 disk              LVM2_m iSCSI iscsi  0x6589cfc000000fb7fd3a78e8b4b2bb5c
└─DLOW2      252:0    0   5.6T  0 mpath             LVM2_m
  ├─vDLOW2-vm--1005--disk--0
  │          252:2    0    42G  0 lvm
  ├─vDLOW2-vm--2002--disk--0
  │          252:3    0    70G  0 lvm
  ├─vDLOW2-vm--1011--disk--0
  │          252:4    0    40G  0 lvm
  ├─vDLOW2-vm--2006--disk--0
  │          252:5    0   300G  0 lvm
  ├─vDLOW2-vm--1016--disk--0
  │          252:6    0   350G  0 lvm               LVM2_m
  ├─vDLOW2-vm--2000--disk--0
  │          252:7    0    65G  0 lvm
  ├─vDLOW2-vm--1001--disk--0
  │          252:8    0    40G  0 lvm
  ├─vDLOW2-vm--1009--disk--0
  │          252:9    0    35G  0 lvm
  ├─vDLOW2-vm--2003--disk--0
  │          252:10   0    70G  0 lvm
  ├─vDLOW2-vm--2005--disk--0
  │          252:11   0    70G  0 lvm
  ├─vDLOW2-vm--1022--disk--0
  │          252:12   0    35G  0 lvm
  ├─vDLOW2-vm--1019--disk--0
  │          252:13   0   120G  0 lvm
  ├─vDLOW2-vm--1000--disk--0
  │          252:14   0    40G  0 lvm
  ├─vDLOW2-vm--1008--disk--0
  │          252:15   0    25G  0 lvm
  ├─vDLOW2-vm--2011--disk--0
  │          252:16   0   190G  0 lvm
  ├─vDLOW2-vm--1002--disk--0
  │          252:17   0    40G  0 lvm
  ├─vDLOW2-vm--1003--disk--0
  │          252:18   0    40G  0 lvm
  └─vDLOW2-vm--1004--disk--0
             252:19   0    35G  0 lvm
sdi            8:128  0   5.6T  0 disk              LVM2_m iSCSI iscsi  0x6589cfc000000fb7fd3a78e8b4b2bb5c
└─DLOW2      252:0    0   5.6T  0 mpath             LVM2_m
  ├─vDLOW2-vm--1005--disk--0
  │          252:2    0    42G  0 lvm
  ├─vDLOW2-vm--2002--disk--0
  │          252:3    0    70G  0 lvm
  ├─vDLOW2-vm--1011--disk--0
  │          252:4    0    40G  0 lvm
  ├─vDLOW2-vm--2006--disk--0
  │          252:5    0   300G  0 lvm
  ├─vDLOW2-vm--1016--disk--0
  │          252:6    0   350G  0 lvm               LVM2_m
  ├─vDLOW2-vm--2000--disk--0
  │          252:7    0    65G  0 lvm
  ├─vDLOW2-vm--1001--disk--0
  │          252:8    0    40G  0 lvm
  ├─vDLOW2-vm--1009--disk--0
  │          252:9    0    35G  0 lvm
  ├─vDLOW2-vm--2003--disk--0
  │          252:10   0    70G  0 lvm
  ├─vDLOW2-vm--2005--disk--0
  │          252:11   0    70G  0 lvm
  ├─vDLOW2-vm--1022--disk--0
  │          252:12   0    35G  0 lvm
  ├─vDLOW2-vm--1019--disk--0
  │          252:13   0   120G  0 lvm
  ├─vDLOW2-vm--1000--disk--0
  │          252:14   0    40G  0 lvm
  ├─vDLOW2-vm--1008--disk--0
  │          252:15   0    25G  0 lvm
  ├─vDLOW2-vm--2011--disk--0
  │          252:16   0   190G  0 lvm
  ├─vDLOW2-vm--1002--disk--0
  │          252:17   0    40G  0 lvm
  ├─vDLOW2-vm--1003--disk--0
  │          252:18   0    40G  0 lvm
  └─vDLOW2-vm--1004--disk--0
             252:19   0    35G  0 lvm
[root@srv-02 ~]#



Hi,

I updated the host to the latest version of Proxmox.

I hope this information is helpful and that you can find the solution.
 

Attachments

  • Captura de pantalla 2024-11-07 a la(s) 5.27.28 a.m..png
  • Captura de pantalla 2024-11-07 a la(s) 5.42.08 a.m..png
  • Captura de pantalla 2024-11-07 a la(s) 5.42.16 a.m..png
  • Captura de pantalla 2024-11-07 a la(s) 5.42.25 a.m..png
  • Captura de pantalla 2024-11-07 a la(s) 5.42.33 a.m..png
  • Captura de pantalla 2024-11-07 a la(s) 5.44.40 a.m..png
Code:
[root@srv-02 ~]# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-2-pve)
pve-manager: 8.2.7 (running version: 8.2.7/3e0176e6bb2ade3b)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-11
proxmox-kernel-6.8: 6.8.12-2
proxmox-kernel-6.8.12-2-pve-signed: 6.8.12-2
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.3
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.1
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.10
libpve-storage-perl: 8.2.5
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-4
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.2.0
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.0.7
pve-firmware: 3.13-2
pve-ha-manager: 4.0.5
pve-i18n: 3.2.3
pve-qemu-kvm: 9.0.2-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
 
Code:
[root@srv-02 ~]# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-2-pve)
pve-manager: 8.2.7 (running version: 8.2.7/3e0176e6bb2ade3b)
[...]
This looks better!
Do you mean the configuration in FreeNAS?
No, my question was directed at the Proxmox VE side, i.e., whether you have configured an iSCSI storage on your PVE node (see the admin guide [1]). From the /etc/pve/storage.cfg you posted, it doesn't look like it (there is no entry with storage type iscsi). This means you have to set up Open-iSCSI manually for discovery and logins to your targets, which is possible, but possibly a bit cumbersome (especially in a cluster). If you define an iSCSI storage in PVE, PVE will take care of discovery and logins. See the admin guide [1] for more information.
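For example, assuming the target and portal names from your discovery output, the two new targets could be added on the CLI roughly like this (the storage IDs are just examples, and "content none" is used when the LUN only carries LVM on top):

Code:
# one iSCSI storage entry per portal/target; PVE then handles discovery and login
pvesm add iscsi san1-a --portal 10.10.9.1 --target iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a --content none
pvesm add iscsi san1-b --portal 10.10.10.1 --target iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b --content none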

Also, as you have multiple redundant paths to the iSCSI targets, you'd also need to set up multipath-tools on the PVE side to use them correctly, see the wiki article [2].

I found my mistake: when defining the size of the zvol, I had left it at 128K :confused:
I see, good to hear you found it! Does everything work as expected now?
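For anyone else hitting this: after correcting the zvol size on the FreeNAS side, the initiator typically needs a rescan before the new capacity becomes visible, e.g.:

Code:
# rescan all logged-in iSCSI sessions so the kernel picks up the new LUN size
iscsiadm -m session --rescan
# verify the reported sizes
lsscsi -s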

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_open_iscsi
[2] https://pve.proxmox.com/wiki/Multipath
 
