Hi, I'm trying to add a new iSCSI volume. Discovery shows it correctly, but when I try to log in I get the following:
Code:
iscsiadm -m node --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals
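Since the error says the sessions already exist, a fresh login shouldn't be needed; this is what I run to list the active sessions and re-probe their LUNs (standard open-iscsi usage, guarded so it's a no-op on a machine without iscsiadm):

```shell
# List active sessions and ask the kernel to re-probe LUNs on them;
# guard in case open-iscsi is not installed on the machine running this.
if command -v iscsiadm >/dev/null 2>&1; then
  iscsiadm -m session || true          # one line per portal/target pair
  iscsiadm -m session --rescan || true # re-read LUNs without logout/login
else
  echo "iscsiadm not available here"
fi
```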
Discovery output:
Code:
[root@srv-02 ~]# iscsiadm -m discovery -t st -p 10.10.10.1
10.10.10.1:3260,2 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-b
[root@srv-02 ~]# iscsiadm -m discovery -t st -p 10.10.9.1
10.10.9.1:3260,1 iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a
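If it helps narrow things down, this is how I'd log into just one of the discovered targets explicitly instead of a blanket `--login` (the IQN and portal are copied from the discovery output above; guarded in case iscsiadm is absent):

```shell
# Log into a single discovered target/portal instead of all nodes;
# guard so this is a no-op on machines without open-iscsi.
if command -v iscsiadm >/dev/null 2>&1; then
  iscsiadm -m node \
    -T iqn.2024-11.com.storage1.domain.com:tgt-iscsi-a \
    -p 10.10.9.1:3260 --login || true
else
  echo "iscsiadm not available here"
fi
```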
PVE version:
Code:
[root@srv-02 ~]# pveversion -v
proxmox-ve: not correctly installed (running kernel: 6.5.13-1-pve)
pve-manager: not correctly installed (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-11
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.2
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.0
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.4
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.4
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2
If I list the disks, I don't see the correct size for sdh and sdi:
Code:
[root@srv-02 ~]# lsscsi -s
[0:0:1:0] disk ATA KINGSTON SA400S3 B1H5 /dev/sda 120GB
[1:0:1:0] disk ATA KINGSTON SA400S3 B1H5 /dev/sdb 120GB
[3:0:0:0] disk ATA CT240BX500SSD1 052 /dev/sdc 240GB
[6:0:0:6] disk FreeNAS iSCSI Disk 0123 /dev/sdd 1.28TB
[6:0:0:12] disk FreeNAS iSCSI Disk 0123 /dev/sdf 6.15TB
[7:0:0:6] disk FreeNAS iSCSI Disk 0123 /dev/sde 1.28TB
[7:0:0:12] disk FreeNAS iSCSI Disk 0123 /dev/sdg 6.15TB
[8:0:0:1] disk FreeNAS iSCSI Disk 0123 /dev/sdh 131kB
[9:0:0:1] disk FreeNAS iSCSI Disk 0123 /dev/sdi 131kB
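For scale: the 131 kB that lsscsi shows is exactly 256 sectors of 512 bytes (131,072 bytes), which would mean the kernel really got back a tiny capacity from the target rather than mis-rendering a large one. A quick arithmetic check, plus a cross-check against sysfs (the 512-byte sector size is my assumption; device names are taken from the listing above):

```shell
# 131 kB (decimal), as lsscsi -s prints it, is 131072 bytes = 256 * 512:
sectors_to_bytes() { echo $(( $1 * 512 )); }
sectors_to_bytes 256

# Cross-check the sector count the kernel believes, straight from sysfs
# (device names from the lsscsi listing above; skipped if not present):
for d in sdh sdi; do
  [ -r "/sys/block/$d/size" ] && echo "$d: $(cat /sys/block/$d/size) sectors"
done || true
```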
Any ideas? Is this a bug in iSCSI?