PVE 8.3 - Fresh install, can't get iSCSI disk

Chico008 (New Member), Nov 26, 2024, France - IDF
Hi,

I'm brand new to Proxmox; I have to test it on one of our servers to evaluate alternatives to VMware.

I have a Dell R630 running, connected to two Dell EMC SCv3000 SANs.

The PVE install is OK, and I can connect to the admin web UI.
I installed lsscsi and multipath-tools.

I want to use the SAN to store VM disks, using iSCSI.

On my PVE host, I have two links connected to the two Dell EMC SANs.

My network config is:
en0 > admin / prod IP
en3 > iSCSI IP 1
en4 > iSCSI IP 2

I can ping both EMC arrays.
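
Since I installed multipath-tools and have two iSCSI links, here is a minimal /etc/multipath.conf sketch as a starting point (not vendor-verified; Dell's SCv3000 host-setup guide has the exact device section for these arrays):

```text
# Minimal /etc/multipath.conf starting point; values are generic defaults,
# not Dell-verified. Restart multipathd after editing.
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
```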

On the Dell EMC side, I added the Proxmox server and presented a volume to it.

On PVE, under Datacenter > Storage, I added an iSCSI storage using the portal and IQN.
But now what?
I can see the SAN storage, but I can't see what's in it.
I can't create an LVM on this storage.

What did I miss?
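
For reference, the CLI equivalent of that GUI step is roughly this (a sketch; the storage name, portal, and IQN here are examples from my setup, adjust to yours):

```shell
# Add the iSCSI storage from the shell
# (equivalent to Datacenter > Storage > Add > iSCSI).
pvesm add iscsi SAN-HDV-disqueB \
    --portal 192.168.168.70 \
    --target iqn.2002-03.com.compellent:5000d310046f1a3b

# Then check that the storage is active and whether any LUNs show up:
pvesm status
pvesm list SAN-HDV-disqueB
```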
 
Hi @Chico008 , welcome to the forum.

You need to provide more details:
- output of: cat /etc/pve/storage.cfg
- output of: pvesm status
- output of: pvesm list [iscsi_storage_name]
- output of: iscsiadm -m node
- output of: iscsiadm -m session
- output of: journalctl -n 50 (immediately after running the "list")



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi,
Here are the results:

Code:
root@s-esx1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

iscsi: SAN-HDV-disqueB
        portal 192.168.168.70:3260
        target iqn.2002-03.com.compellent:5000d310046f1a3b
        content images

root@s-esx1:~# pvesm status
Name                   Type     Status           Total            Used       Available        %
SAN-HDV-disqueB       iscsi     active               0               0               0    0.00%
local                   dir     active        81435064         2901712        74350724    3.56%
local-lvm           lvmthin     active       179220480               0       179220480    0.00%
root@s-esx1:~# pvesm list SAN-HDV-disqueB
Volid Format  Type      Size VMID
root@s-esx1:~# iscsiadm -m node
192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3b
192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3c
192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3d
192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3e
192.168.168.90:3260,0 iqn.2002-03.com.compellent:5000d310046f1c3b
192.168.168.90:3260,0 iqn.2002-03.com.compellent:5000d310046f1c3c
192.168.168.90:3260,0 iqn.2002-03.com.compellent:5000d310046f1c3d
192.168.168.90:3260,0 iqn.2002-03.com.compellent:5000d310046f1c3e
root@s-esx1:~# iscsiadm -m session
tcp: [1] 192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3b (non-flash)
tcp: [2] 192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3c (non-flash)
tcp: [3] 192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3d (non-flash)
tcp: [4] 192.168.168.70:3260,0 iqn.2002-03.com.compellent:5000d310046f1a3e (non-flash)
root@s-esx1:~# journalctl -n 50
Nov 26 13:01:38 s-esx1 pvedaemon[1437]: <root@pam> successful auth for user 'root@pam'
Nov 26 13:17:01 s-esx1 CRON[23765]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Nov 26 13:17:01 s-esx1 CRON[23766]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Nov 26 13:17:01 s-esx1 CRON[23765]: pam_unix(cron:session): session closed for user root
Nov 26 13:17:38 s-esx1 pvedaemon[1437]: <root@pam> successful auth for user 'root@pam'
Nov 26 13:33:38 s-esx1 pvedaemon[1435]: <root@pam> successful auth for user 'root@pam'
Nov 26 13:39:26 s-esx1 pmxcfs[5260]: [dcdb] notice: data verification successful
Nov 26 13:49:37 s-esx1 pvedaemon[1437]: <root@pam> successful auth for user 'root@pam'
Nov 26 13:52:13 s-esx1 pveproxy[14352]: worker exit
Nov 26 13:52:13 s-esx1 pveproxy[1444]: worker 14352 finished
Nov 26 13:52:13 s-esx1 pveproxy[1444]: starting 1 worker(s)
Nov 26 13:52:13 s-esx1 pveproxy[1444]: worker 30036 started
Nov 26 14:04:37 s-esx1 pvedaemon[1436]: <root@pam> successful auth for user 'root@pam'
Nov 26 14:07:56 s-esx1 pveproxy[15122]: worker exit
Nov 26 14:07:56 s-esx1 pveproxy[1444]: worker 15122 finished
Nov 26 14:07:56 s-esx1 pveproxy[1444]: starting 1 worker(s)
Nov 26 14:07:56 s-esx1 pveproxy[1444]: worker 32854 started
Nov 26 14:09:43 s-esx1 pveproxy[17021]: worker exit
Nov 26 14:09:43 s-esx1 pveproxy[1444]: worker 17021 finished
Nov 26 14:09:43 s-esx1 pveproxy[1444]: starting 1 worker(s)
Nov 26 14:09:43 s-esx1 pveproxy[1444]: worker 33157 started
Nov 26 14:17:01 s-esx1 pvedaemon[34465]: starting termproxy UPID:s-esx1:000086A1:00101F12:6745CA4D:vncshell::root@pam:
Nov 26 14:17:01 s-esx1 pvedaemon[1436]: <root@pam> starting task UPID:s-esx1:000086A1:00101F12:6745CA4D:vncshell::root@pam:
Nov 26 14:17:01 s-esx1 pvedaemon[1437]: <root@pam> successful auth for user 'root@pam'
Nov 26 14:17:01 s-esx1 login[34468]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Nov 26 14:17:01 s-esx1 systemd[1]: Created slice user-0.slice - User Slice of UID 0.
Nov 26 14:17:01 s-esx1 systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Nov 26 14:17:01 s-esx1 systemd-logind[1075]: New session 22 of user root.
Nov 26 14:17:01 s-esx1 systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Nov 26 14:17:01 s-esx1 systemd[1]: Starting user@0.service - User Manager for UID 0...
Nov 26 14:17:01 s-esx1 (systemd)[34474]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Nov 26 14:17:01 s-esx1 CRON[34490]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Nov 26 14:17:01 s-esx1 systemd[34474]: Queued start job for default target default.target.
Nov 26 14:17:01 s-esx1 CRON[34491]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Nov 26 14:17:01 s-esx1 CRON[34490]: pam_unix(cron:session): session closed for user root
Nov 26 14:17:01 s-esx1 systemd[34474]: Created slice app.slice - User Application Slice.
Nov 26 14:17:01 s-esx1 systemd[34474]: Reached target paths.target - Paths.
Nov 26 14:17:01 s-esx1 systemd[34474]: Reached target timers.target - Timers.
Nov 26 14:17:01 s-esx1 systemd[34474]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
Nov 26 14:17:01 s-esx1 systemd[34474]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
Nov 26 14:17:01 s-esx1 systemd[34474]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
Nov 26 14:17:01 s-esx1 systemd[34474]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Nov 26 14:17:01 s-esx1 systemd[34474]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Nov 26 14:17:01 s-esx1 systemd[34474]: Reached target sockets.target - Sockets.
Nov 26 14:17:01 s-esx1 systemd[34474]: Reached target basic.target - Basic System.
Nov 26 14:17:01 s-esx1 systemd[34474]: Reached target default.target - Main User Target.
Nov 26 14:17:01 s-esx1 systemd[34474]: Startup finished in 156ms.
Nov 26 14:17:01 s-esx1 systemd[1]: Started user@0.service - User Manager for UID 0.
Nov 26 14:17:01 s-esx1 systemd[1]: Started session-22.scope - Session 22 of User root.
Nov 26 14:17:01 s-esx1 login[34493]: ROOT LOGIN  on '/dev/pts/0'
root@s-esx1:~#
 
Alright,
I managed to get the LUN to appear on the iSCSI SAN storage,
and managed to create an LVM on it.

But now, how can I upload an ISO file, create a folder, etc. on this storage,
in order to set up my first VM?
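
In shell terms, that step looks roughly like this (a sketch; the device path and names are examples, not my actual values - use lsblk or multipath -ll to find the real LUN device):

```shell
# /dev/sdb is only an example; find the block device backing the
# iSCSI LUN first with lsblk or multipath -ll.
pvcreate /dev/sdb
vgcreate vg-san /dev/sdb

# Register the volume group as an LVM storage pool in PVE:
pvesm add lvm san-lvm --vgname vg-san --content images,rootdir
```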
 
Yeah, I remembered I had to create an LV volume.
But still, even with that, I can mount the LV on my server and upload ISOs with WinSCP, but PVE won't see the ISOs to install a VM.
I have to upload them to local storage / ISOs :s

I was expecting something more like VMware: you attach SAN volumes, then you create an ISO folder or whatever, and you can use ISOs from there.
 
PVE storage access is somewhat similar to ESXi at the 10,000-foot level.

ESXi uses a Datastore that points to NFS; PVE uses a storage pool that points to NFS. Both can store files, so ISOs can go there.

ESXi has a Datastore that connects to iSCSI; Proxmox has a storage pool type that connects to iSCSI. Even though ESXi then places VMFS on top, that's not, normally, where people store ISOs.

In PVE you either need a storage pool of type Directory, where you would store ISOs and other files, or, if you want a central shared location, NFS/CIFS.
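
As a rough sketch, both options look like this from the CLI (names, paths, and the NFS server address are placeholders):

```shell
# Option 1: a Directory storage on a filesystem you created on the SAN LV.
# The LV must be formatted and mounted first (e.g. via /etc/fstab);
# the mount point here is an example.
pvesm add dir san-iso --path /mnt/san-iso --content iso,vztmpl

# Option 2: a shared NFS storage (server and export are placeholders):
pvesm add nfs nfs-iso --server 192.168.1.50 --export /export/iso --content iso
```

Note that a plain filesystem on a SAN LUN is not cluster-safe: only ever mount it on one node at a time. That is why NFS/CIFS is the usual choice for a shared ISO location.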


