LUN allocation from Dell storage directly to VM with zero interruption

scottd

New Member
Apr 10, 2025
Hi guys.
I have been hunting around the internet for a few days now. I am helping some of our team with a test Proxmox setup. The issue we have is this: in the Symmetrix/PowerMax world, direct management from a deployed management VM requires assigning some LUNs from the storage array directly to the VM, just like raw devices (RDM) in VMware.

This allows us to send our syscalls over the device connection to the arrays and execute commands on the array via our CLI solution. We need these devices to be seen as Symmetrix "Vendor" devices so the CLI knows where they come from. We are trying to achieve the following layout, where Proxmox just passes the disk on as a raw device.
direct.png

What I should be seeing on the VM side is similar to this:

lsblk on a working VM host:
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0 5.6M  0 disk
sdc    8:32   0 5.6M  0 disk

When I list the disks on the VM, I want to be able to see sdb and sdc as below, from my symcli command.

Code:
Device                 Product                         Device
---------------------  ------------------------------  ------------------
Name             Type  Vendor  ID         Rev          Ser Num   Cap (GB)
---------------------  ------------------------------  ------------------
/dev/sdb         GK    EMC     SYMMETRIX  6079         1XXXXX000      0.0
/dev/sdc         GK    EMC     SYMMETRIX  6079         1XXXXX000      0.0


The VM configuration is as below.
vm1.png

When we go to the client VM and list the disks with lsblk, we see them as sdc to sdi.
lsblk.png

When we then try to run an inquiry on the disks, we see them with the QEMU vendor and ID.
syminq.png

We need the disks to be "ported", for lack of a better word, directly to the VM, bypassing the hypervisor.
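
For context, what we have been attempting is the plain disk passthrough from the wiki, roughly as follows (a sketch only; the VM ID and WWN are placeholders):

Code:
# Attach the raw FC device to the VM by its stable /dev/disk/by-id path
qm set 105 -scsi10 /dev/disk/by-id/wwn-0x60000970000xxxxx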

I have already reviewed:

https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)

https://pve.proxmox.com/wiki/Performance_Tweaks

https://pve.proxmox.com/pve-docs/qm.1.html

I was unable to make the system do what we need after reviewing the above, so I am reaching out to you superhumans in the hope that you can point me in the right direction.

Thank you all so much for all your help and support.
 
Hi @scottd, welcome to the forum.

If you want to see a native FC raw device in the VM, you must use pass-through.
You said that you've reviewed the relevant documentation; however, you have not presented the results of following it. Providing the CLI commands/outputs and the results in the VM would help with any advice.
Please remember to use CODE tags </>.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi @bbgeek17,

Thank you. I'm part of the testing team and work with @scottd.
Below are sample VM config files; we tried both Linux- and Windows-based deployments. Appreciate your help.

Code:
root@proxhost:/etc/pve/qemu-server# pveversion
pve-manager/8.3.0/c1689ccb1065a83b (running kernel: 6.8.12-4-pve)
root@proxhost:/etc/pve/qemu-server#

Code:
---Linux VM---

boot: order=scsi0;scsi1
cores: 4
cpu: x86-64-v2-AES
machine: pc,viommu=virtio
memory: 16384
meta: creation-qemu=9.0.2,ctime=1743658557
name: slen-plxmgt004-Unisphere
net0: virtio=BC:24:11:C2:DE:E0,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: cephdatastore:vm-105-disk-0,iothread=1,size=25G
scsi1: cephdatastore:vm-105-disk-1,iothread=1,size=5G
scsi10: /dev/disk/by-id/wwn-0x60000970000xxxxxx5533030413839,backup=0,size=7680K
scsi2: /dev/disk/by-id/scsi-3600009700002xxxxxx533030413745,cache=writethrough,size=9600K
scsi3: /dev/disk/by-id/scsi-3600009700002xxxxxx533030413746,size=9600K
scsi4: /dev/disk/by-id/scsi-3600009700002xxxxxx533030413830,size=9600K
scsi5: /dev/disk/by-id/scsi-3600009700002xxxxxx533030413831,size=9600K
scsi6: /dev/disk/by-id/scsi-3600009700002xxxxxx533030413835,size=9600K
scsi7: /dev/disk/by-id/scsi-3600009700002xxxxxx533030413836,size=9600K
scsi8: /dev/disk/by-id/scsi-3600009700002xxxxxx533030413837,size=9600K
scsi9: /dev/disk/by-id/scsi-36000097000029xxxxx30413838,aio=io_uring,backup=0,cache=writethrough,replicate=0,size=7680K
scsihw: virtio-scsi-single
smbios1: uuid=b7f7b38b-54e0-4ff9-b984-944fe480b6ea
sockets: 1
vmgenid: 244dec8b-15ab-47fa-84b7-dc670d7948f7

---Windows VM1---
bios: ovmf
boot: order=ide0
cores: 2
cpu: x86-64-v2-AES
efidisk0: cephdatastore:vm-113-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: cephdatastore:vm-113-disk-2,size=80G
ide2: none,media=cdrom
machine: pc-q35-9.0
memory: 4096
meta: creation-qemu=9.0.2,ctime=1744070257
name: slen-winSETest01
net0: e1000=BC:24:11:70:F2:51,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: /dev/disk/by-id/scsi-3600009xxxxxx33030413842,cache=directsync,size=9600K
scsi1: /dev/disk/by-id/scsi-3600009xxxxxx33030413843,cache=writethrough,size=9600K
scsihw: virtio-scsi-single
smbios1: uuid=c1a6ab3e-054c-499a-bdc9-5d74dee4fb96
sockets: 1
tpmstate0: cephdatastore:vm-113-disk-1,size=4M,version=v2.0
vmgenid: c311ae91-6878-46dd-8fd1-4379b63e9b17

---Windows VM2---
bios: ovmf
boot: order=ide0
cores: 2
cpu: x86-64-v2-AES
efidisk0: cephdatastore:vm-114-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: cephdatastore:vm-114-disk-2,size=80G
ide1: /dev/disk/by-id/wwn-0x60000xxxx533030413844,cache=directsync,size=9600K
ide2: none,media=cdrom
ide3: /dev/disk/by-id/wwn-0x6000097xxxx5533030414130,cache=writethrough,size=9600K
machine: pc-q35-9.0,viommu=virtio
memory: 4096
meta: creation-qemu=9.0.2,ctime=1744070185
name: slen-winSETest02
net0: e1000=BC:24:11:C1:6A:CD,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=b6238a84-ed22-4e56-9140-85c9f9a4c039
sockets: 1
tpmstate0: cephdatastore:vm-114-disk-1,size=4M,version=v2.0
vmgenid: b31a1ea2-f646-4e21-aea8-fe8eb30ff824

---Windows VM3---
bios: ovmf
boot: order=ide0
cores: 2
cpu: x86-64-v2-AES
efidisk0: cephdatastore:vm-115-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: cephdatastore:vm-115-disk-2,size=80G
ide1: /dev/disk/by-id/wwn-0x6000097000xxxxx30414131,cache=directsync,size=9600K
ide2: none,media=cdrom
ide3: /dev/disk/by-id/wwn-0x600009700002xxxx030414132,cache=writethrough,size=9600K
machine: pc-q35-9.0
memory: 4096
meta: creation-qemu=9.0.2,ctime=1744070289
name: slen-winSETest03
net0: e1000=BC:24:11:19:6C:EE,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=eff50167-7ade-496f-b66d-35bbc26d7fc2
sockets: 1
tpmstate0: cephdatastore:vm-115-disk-1,size=4M,version=v2.0
vmgenid: a0feca10-8af9-4975-bd14-f1a875c1a6eb

We attached the passthrough drives using the qm set command:

Code:
root@proxhost:~# qm set 105 -scsi10 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413839
update VM 105: -scsi10 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413839

root@proxhost:~# qm set 113 -virtio0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx13843
update VM 113: -virtio0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx13843
root@proxhost:~# qm set 113 -virtio1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413842
update VM 113: -virtio1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413842
 
root@proxhost:~# qm set 114 -virtio0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413844
update VM 114: -virtio0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413844
root@proxhost:~# qm set 114 -virtio1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414130
update VM 114: -virtio1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414130
 
root@proxhost:~# qm set 115 -virtio0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414131
update VM 115: -virtio0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414131
root@proxhost:~# qm set 115 -virtio1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414132
update VM 115: -virtio1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414132
 
 
root@proxhost:~# qm set 113 -scsi0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413842
update VM 113: -scsi0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413842
root@proxhost:~# qm set 113 -scsi1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413843
update VM 113: -scsi1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413843
 
 
root@proxhost:~# qm set 114 -scsi0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413844
update VM 114: -scsi0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx413844
root@proxhost:~# qm set 114 -scsi1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414130
update VM 114: -scsi1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414130
 
 
root@proxhost:~# qm set 115 -scsi0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414131
update VM 115: -scsi0 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414131
root@proxhost:~# qm set 115 -scsi1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414132
update VM 115: -scsi1 /dev/disk/by-id/wwn-0x60000970000xxxxx330xx414132
 
 
root@proxhost:~# qm set 113 -scsi0 /dev/disk/by-id/scsi-360000970000xxxxx330xx413842
update VM 113: -scsi0 /dev/disk/by-id/scsi-360000970000xxxxx330xx413842
root@proxhost:~# qm set 113 -scsi1 /dev/disk/by-id/scsi-360000970000xxxxx330xx413843
update VM 113: -scsi1 /dev/disk/by-id/scsi-360000970000xxxxx330xx413843
 
Hi @shankarsubramanian, welcome to the forum as well.

Your only choice may be to pass through the entire FC controller. I'd venture to guess that this is not a desirable solution. If only Dell/EMC would move the Symm out of the '80s :)
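
For reference, whole-controller passthrough would look roughly like this (a sketch; the PCI address is a placeholder, and IOMMU must be enabled on the host):

Code:
# Locate the FC HBA on the host
lspci | grep -i fibre
# Hand the entire HBA to the VM (PCI address is a placeholder)
qm set 105 -hostpci0 0000:81:00.0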

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I think you may be able to use an FC HBA virtual function to pass an FC interface through into the VM, then use FC zoning and SYMMETRIX volume mapping to present the SYMMETRIX volume directly to the guest VM. Just for reference.
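
If the HBA supports SR-IOV, that flow might look roughly like this (a sketch only; the PCI addresses are placeholders, and many FC HBAs do not expose virtual functions):

Code:
# Create one virtual function on the HBA (driver/firmware must support SR-IOV)
echo 1 > /sys/bus/pci/devices/0000:81:00.0/sriov_numvfs
# The VF shows up as a new PCI device; find its address
lspci | grep -i fibre
# Pass the VF through to the VM
qm set 105 -hostpci0 0000:81:00.2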
 
Looks like a Proxmox problem, not a Dell problem.
Free software does not have to cater to proprietary hardware used by people who only came because they were priced out of another proprietary product.
There is nothing in it for Proxmox in developing special sauce for a dead product.
 
Looks like a Proxmox problem, not a Dell problem.
I still remember that VMware RDM has both a physical compatibility mode and a virtual compatibility mode. Judging from the output you posted, Proxmox's disk passthrough function [https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)] behaves more like virtual compatibility mode. So if you use Fibre Channel SR-IOV virtual functions to pass one of the HBA's virtual interfaces through into the VM, letting the VM query the FC device's attributes directly, that could possibly meet your requirement. Alternatively, you can follow bbgeek17's recommendation and pass the entire FC HBA through into the VM; it depends on whether more than one VM needs to see the storage devices' vendor-related attributes.
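
Either way, a quick check inside the guest would show whether the native inquiry data is coming through (assuming lsscsi and SYMCLI are installed in the VM):

Code:
# Vendor/model should now report EMC SYMMETRIX rather than QEMU
lsscsi
# The gatekeeper devices should then be recognized by the CLI
syminq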
 