Sharing iSCSI NICs between host and VMs

hashman

New Member
Jun 11, 2025
Coming from a VMware environment (which we will need to keep around for a while), we use iSCSI SAN connections for the majority of our datastores. In some instances we also have iSCSI connections directly to the VMs, primarily to overcome VMware's 62 TB size limit for VMDKs. In VMware we can simply put a VMkernel port and a port group on the same virtual switch; how would we accomplish the same thing in Proxmox? Note that we have two iSCSI connections per host for redundancy and want to keep both going forward (so no dedicating one to the host and one to the VMs). The end goal is to be able to take iSCSI volumes attached to VMware VMs, move them to Proxmox VMs, and move them back to VMware if needed.
 
Hi hashman:
I don't think there is a problem if a VM needs direct access to an iSCSI volume. There are two methods that can do it.
1. Pass a physical disk through to the VM; please refer to https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM). However, I'm not sure whether VMware and Proxmox VE can have the same iSCSI volume attached at the same time to let you move the VM between both platforms.
2. Use a software iSCSI initiator in the VM's OS to attach the external iSCSI volume. I think this avoids the uncertainty above, because you can make sure the IP address and iSCSI CHAP credentials in the VM do not change when you move the VM between VMware and Proxmox VE. Rough command sketches for both approaches follow this list.
For iSCSI connection redundancy, VMware uses its own multipathing to balance traffic and fail over between the two IP addresses of the iSCSI array. Proxmox VE can do the same with the multipath package; the detailed setup steps at https://pve.proxmox.com/wiki/ISCSI_Multipath can be used as a reference.
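Roughly, the two approaches look like this on the command line. The VM ID, device path, portal IP and IQN below are placeholders for your environment, not values from this thread.
Code:
# Method 1: pass an iSCSI-backed block device that the Proxmox host already sees through to a VM
ls -l /dev/disk/by-id/                                      # pick a stable identifier for the LUN
qm set 101 -scsi1 /dev/disk/by-id/dm-uuid-mpath-XXXXXXXX    # attach it to VM 101 as scsi1

# Method 2: connect from inside the guest with its own software initiator (Debian/Ubuntu guest shown)
# (the guest needs a vNIC that can reach the storage network)
apt install open-iscsi
iscsiadm -m discovery -t sendtargets -p 192.168.50.10       # discover targets on the portal
iscsiadm -m node -T iqn.2005-10.org.example:vol1 -p 192.168.50.10 --login
iscsiadm -m node -T iqn.2005-10.org.example:vol1 --op update -n node.startup -v automatic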
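Condensed, that wiki page amounts to roughly the following on each Proxmox VE host (the example WWID is a placeholder):
Code:
apt install multipath-tools
/lib/udev/scsi_id -g -u -d /dev/sda                  # print the WWID of one path to the LUN
multipath -a 3600a098038303053673f4d2f70493245      # whitelist that WWID (placeholder value)
systemctl restart multipath-tools.service
multipath -ll                                        # both paths should now appear under one mpath device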
 
Hi David, thanks for the response. I work with the OP here, so I can offer some more details. One important distinction is that we aren't passing physical volumes through to the VM; all of our storage is iSCSI based. We have two separate 10 Gb ports on each host that are connected to an entirely separate network dedicated to storage/iSCSI access. I was able to assign static IPs to those ports and set up iSCSI connections back to our iSCSI volumes without an issue. I used the multipath setup referenced above and it worked great.
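For reference, the working host-side config corresponds to roughly the following stanzas in /etc/network/interfaces; the interface names and addresses are placeholders rather than our real ones:
Code:
auto eno3
iface eno3 inet static
        address 192.168.50.21/24        # first iSCSI path, no gateway on this interface

auto eno4
iface eno4 inet static
        address 192.168.51.21/24        # second iSCSI path, its own IP for multipath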

The problem is that I need to be able to create a network port on a VM inside the host that can access that same storage network. We have to keep the static IPs on the iSCSI ports themselves (not bonded, two separate IPs) so that the iSCSI connections can add storage to the hosts. Unfortunately, I can't create a Linux bridge from ports that already have static IPs. I attempted to move the static IPs to the Linux bridge, but when I do a tracert it shows traffic trying to go over the VM's other port. My guess is that's because the default gateway is attached to that port.
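What I tried, moving the address from the physical port onto a bridge that enslaves it, looks roughly like this (again with placeholder names and addresses):
Code:
auto eno3
iface eno3 inet manual                  # no IP on the physical port any more

auto vmbr1
iface vmbr1 inet static
        address 192.168.50.21/24        # host keeps its iSCSI IP on the bridge itself, no gateway
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
The idea is that the host's iSCSI sessions ride on the bridge address while a VM gets a second vNIC attached to vmbr1 with its own IP on the storage network.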

The overall goal is to have the iSCSI adapters available to the host, and also to be able to assign an adapter to a VM so it can connect to the iSCSI network. One caveat is that the storage/iSCSI network isn't VLAN tagged, as it's the default/untagged network on those ports. I attempted to use SDNs, but setting those up for VLANs requires tagging the traffic, which breaks connectivity.