Multipath configuration to Pure Storage array (VLANs)

elnino54

New Member
Oct 29, 2024
Hi all, we're looking at moving to Proxmox from VMware, so I've taken a host down for testing and I'm trying to connect it to our Pure Storage array.

What I'm trying to do is basically this: https://forum.proxmox.com/threads/iscsi-network-considerations-best-practices.167084/post-776167

I have 2 x 25Gb NICs in the server, connected via 2 x Cisco Nexus switches to the Pure (among other things). These are the same NICs that I use for hosting the VMs.

I have something like this:
vmbr0 - nic0 and nic1

vlan10 bound to vmbr0 -> Main Server LAN
vlan20 bound to vmbr0 -> DMZ
vlan30 bound to vmbr0 with an IP for management.

iscsi_A vlan 100, nic0 ip 10.20.30.100/24
iscsi_B vlan 101, nic1 ip 10.20.31.100/24

The SAN is on 10.20.30.200 and 10.20.31.200.

This is basically replicating the existing setup in VMware, so I know the VLANs work and I'm certain the NICs are properly identified, but I can't even seem to ping the SAN. Is there something I'm misunderstanding? Is there a fundamental mistake I've made in the network config of the host?
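For readers following along, the layout described above would presumably look something like this in /etc/network/interfaces (interface names are taken from the post; the management address, the bridge options, and how the guest VLANs 10/20 are handled are assumptions):

Code:
auto nic0
iface nic0 inet manual

auto nic1
iface nic1 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports nic0 nic1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes        # assumption: guest VLANs 10/20 are tagged at the VM NICs

auto vmbr0.30
iface vmbr0.30 inet static
        address <management IP>/24   # management on VLAN 30 (address not given in the post)

auto nic0.100
iface nic0.100 inet static
        address 10.20.30.100/24      # iscsi_A

auto nic1.101
iface nic1.101 inet static
        address 10.20.31.100/24      # iscsi_B

This is the layout the replies below pick apart: nic0 and nic1 are ports of vmbr0 while also carrying the iSCSI VLAN subinterfaces directly.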
 
Your SAN and iscsi_* interfaces are on the same VLANs, so if you can't ping, your L2 isn't working. Check tcpdump for ARP and fix the firewall/network config.
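For example, something like this (interface names assumed from the post):

Code:
# watch ARP on the A-side iSCSI interface...
tcpdump -ni nic0.100 arp
# ...while pinging the array's A-side portal from another shell
ping -I nic0.100 10.20.30.200
# ARP requests with no replies point at the switch/VLAN side;
# no ARP going out at all points at the host config or firewall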
 
@elnino54, how does your Proxmox VE host pass iSCSI traffic from VLAN 10/20/30 to VLAN 100/101? Via a router? As I recall, that is not a good idea.
You should set up VLAN interfaces with IP addresses for VLAN 100 and VLAN 101 on your Proxmox VE host. You could use the following network topology:

Code:
nic0(MTU 9000) + nic1(MTU 9000) -> bond0(MTU 9000)
bond0 -> bond0.10 -> vmbr10
bond0 -> bond0.20 -> vmbr20
bond0 -> bond0.30 -> vmbr30
bond0 -> bond0.100(MTU 9000) -> vmbr100(MTU 9000)
bond0 -> bond0.101(MTU 9000) -> vmbr101(MTU 9000)
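A rough /etc/network/interfaces sketch of that topology might be the following (interface names, bond mode, and addresses are assumptions; the nic0/nic1 stanzas with MTU 9000 and the vmbr10/vmbr20 bridges, which follow the same pattern as vmbr30 but without a host IP, are omitted):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves nic0 nic1
        bond-mode 802.3ad        # assumes LACP/MLAG on the switch side
        bond-miimon 100
        mtu 9000

auto vmbr30
iface vmbr30 inet static
        address <management IP>/24
        gateway <gateway IP>
        bridge-ports bond0.30
        bridge-stp off
        bridge-fd 0

auto vmbr100
iface vmbr100 inet static
        address 10.20.30.100/24
        bridge-ports bond0.100
        bridge-stp off
        bridge-fd 0
        mtu 9000

auto vmbr101
iface vmbr101 inet static
        address 10.20.31.100/24
        bridge-ports bond0.101
        bridge-stp off
        bridge-fd 0
        mtu 9000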
 
how does your Proxmox VE host pass iSCSI traffic from VLAN 10/20/30 to VLAN 100/101?
Based on the OP's description, I don't see a reason for iSCSI traffic to ever be generated from 10/20/30 or passed to 100/101, so I do not think this would come up as an issue.
if you try to put a vlan on a bond member-nic
I also do not see where the OP mentions using a bond. It seems to me they have isolated ports connected to individual Cisco switches (?). The NICs are then assigned to the virtual bridge vmbr0, and later the VLANs/IPs are assigned to the virtual bridge.

If my reading is correct: while this is trying to mimic an ESXi vSwitch, it is not a correct configuration for PVE/Linux. Dedicated iSCSI ports should not be part of vmbr0; they should have the VLAN/IP assigned directly.
If there are only two ports and traffic must share the interfaces, the OP needs LACP+MLAG. Or, perhaps, assign the VLANs to the raw ports, then assign the virtual VLAN interface to the bridge. However, I have not tested this personally and I am not sure this will work.
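A rough sketch of that last (untested) idea, with assumed names: the iSCSI VLAN subinterfaces get their IPs directly on the raw ports, and the bridges get VLAN subinterfaces as ports rather than the NICs themselves:

Code:
auto nic0.100
iface nic0.100 inet static
        address 10.20.30.100/24   # iscsi_A directly on the port, no bridge involved

auto nic1.101
iface nic1.101 inet static
        address 10.20.31.100/24   # iscsi_B directly on the port

auto vmbr10
iface vmbr10 inet manual
        bridge-ports nic0.10      # Main Server LAN for guests (no NIC redundancy without a bond)
        bridge-stp off
        bridge-fd 0

auto vmbr20
iface vmbr20 inet manual
        bridge-ports nic0.20      # DMZ for guests
        bridge-stp off
        bridge-fd 0

The VM side loses NIC redundancy in this sketch unless the non-iSCSI VLANs sit on a bond instead, which is where the LACP+MLAG suggestion comes in.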


P.S. I don't think you can have a functioning IP assigned directly to a NIC that is part of a virtual bridge.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
P.S. I don't think you can have a functioning IP assigned directly to a NIC that is part of a virtual bridge.
You can. It works. It's a bad idea, but it can be done. (As an aside, it's a bad idea on VMware too; iSCSI interfaces don't like to share the link.)

The interfaces would look like this:

Code:
iface nic0 inet manual
        mtu 9000

iface nic1 inet manual
        mtu 9000

auto bond0
iface bond0 inet manual
        bond-slaves nic0 nic1 # change to actual interfaces
        bond-miimon 100 # change for your link and switch capabilities
        bond-mode 802.3ad
        ... etc

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports bond0 # you CAN leave it as interfaces, but then you'll need to STP one of them which means it wont do what you actually want
        bridge-stp off                                                                                                          
        bridge-fd 0
        mtu 1500   # this is gonna cause you issues; even if Linux allows you to have different MTUs on subinterfaces, most switches will ignore this setting

auto nic0.100
iface nic0.100 inet static
        address 10.20.30.100
        netmask 255.255.255.0
        mtu 9000

auto nic1.101
iface nic1.101 inet static
        address 10.20.31.100
        netmask 255.255.255.0
        mtu 9000
 
The interfaces would look like this:
I don't think it's a valid config. You may be able to assign an IP to a NIC that is part of the bridge, but you definitely can't assign an IP to a NIC that is part of a bond.

Your VLAN would need to sit on the bond, but if each switch is handling only one VLAN, this will not work either.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
It will. I've used this kind of config in the lab before. Like I said, not ideal for a production environment, but it can be done.

Edit: I'll explain the logic. nic0 has mac1, nic1 has mac2.

In the above config, either mac1 or mac2 is present on vmbr0; mac1 is present on VLAN 100 (nic0.100), and mac2 on VLAN 101 (nic1.101). No conflicts.
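A quick way to see which MAC ends up where (standard iproute2/bridge commands; the interface names are the ones from the config above):

Code:
# list all interfaces with their MAC addresses (nic0, nic1, bond0, nic0.100, nic1.101, vmbr0)
ip -br link
# show which MACs the bridge has learned on which port
bridge fdb show br vmbr0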
 
Would using NFS be better, and/or an option? We have an X20 showing up in a month or two and I'll be using NFS. As much as I would love to run iSCSI/FC instead... the lack of something like VMFS seems to make those a bit more problematic than I like.

The old RHEL setup I had just had a dedicated interface for each VLAN (no bonding, bridge, etc.) and it worked like a charm.

I don't expect NFS performance to be earth-shattering, but hopefully quite a bit better than what I have.
 
The old RHEL setup I had just had a dedicated interface for each VLAN (no bonding, bridge, etc.) and it worked like a charm.
Sure. The problem isn't with the networking per se; it's that you're shoving 3 VLANs into two NICs, 2 of which really want their own interface. This would be very simply solved if you just added another interface (or 2) for non-iSCSI traffic.

the lack of something like VMFS seems to make those a bit more problematic than I like.
That should probably be the first layer in your decision making. If LVM-thick and its limitations are not suitable for your use case, then the rest of the discussion is academic. NFS works, and tuned properly, it performs too.
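For what it's worth, if NFS ends up being the choice, attaching an export to PVE is a single pvesm call, roughly like this (storage name, server address, export path, and NFS version are placeholders, not Pure-specific recommendations):

Code:
pvesm add nfs pure-nfs \
        --server 10.20.40.200 \
        --export /proxmox-ds01 \
        --content images,rootdir \
        --options vers=4.1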
 
Would using NFS be better, and/or an option?
You have an L2/L3/L4 issue, not L7. If nothing else changes - it will not make a difference whether you use NFS or iSCSI.
The old RHEL setup I had just had a dedicated interface for each VLAN (no bonding, bridge, etc.) and it worked like a charm.
Were you also running a hypervisor and placing the VMs on this host at the same time?



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
You have an L2/L3/L4 issue, not L7. If nothing else changes - it will not make a difference whether you use NFS or iSCSI.

Were you also running a hypervisor and placing the VMs on this host at the same time?




Nah, vanilla RHEL host running Oracle. Fabulous performance. Dedicated interfaces for iSCSI. Agree on the L2/L3/L4 comment.
 
If this were my environment, if I were limited in the number of NICs that could be added to the host, and if I were not looking to create a one-of-a-kind config, I would:

- MLAG the network switches
- configure LACP across the switches (either with MTU 9000 network-wide, or if you can't reconfigure the rest of the environment, 1500)
- add VLANs as needed to the LACP bond
- create a virtual bridge for VM network access using one of the VLAN interfaces that sit on the bond.

This way you will have L1/L2 redundancy for both iSCSI and VMs, as well as traffic isolation for security.
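A hedged /etc/network/interfaces sketch of that layout (interface names, addresses, and MTU are assumptions; the MLAG/vPC and LACP configuration on the Nexus side is not shown):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves nic0 nic1
        bond-mode 802.3ad         # LACP across both switches (requires MLAG/vPC)
        bond-miimon 100
        mtu 9000

# iSCSI VLAN interfaces directly on the bond - no bridge needed for storage traffic
auto bond0.100
iface bond0.100 inet static
        address 10.20.30.100/24
        mtu 9000

auto bond0.101
iface bond0.101 inet static
        address 10.20.31.100/24
        mtu 9000

# VM/management bridge on one of the VLAN interfaces that sits on the bond
auto vmbr0
iface vmbr0 inet static
        address <management IP>/24
        gateway <gateway IP>
        bridge-ports bond0.30
        bridge-stp off
        bridge-fd 0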


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
If this were my environment, if I were limited in the number of NICs that could be added to the host, and if I were not looking to create a one-of-a-kind config, I would:

- MLAG the network switches
- configure LACP across the switches (either with MTU 9000 network-wide, or if you can't reconfigure the rest of the environment, 1500)
- add VLANs as needed to the LACP bond
- create a virtual bridge for VM network access using one of the VLAN interfaces that sit on the bond.

This way you will have L1/L2 redundancy for both iSCSI and VMs, as well as traffic isolation for security.

I thought iSCSI and bonds were a "no-no" but I'd admit my knowledge could be a bit dated. I thought there could be troubles with link flapping etc.
 
I thought iSCSI and bonds were a "no-no" but I'd admit my knowledge could be a bit dated. I thought there could be troubles with link flapping etc.
Sounds like something that would be perpetuated by a storage vendor who couldn't figure out how to build a good iSCSI stack and/or support an LACP bond :-)


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I thought iSCSI and bonds were a "no-no" but I'd admit my knowledge could be a bit dated. I thought there could be troubles with link flapping etc.
Having iSCSI links separated is IDEAL, and works in most instances. iSCSI over LACP can, AT BEST, match the performance of separated links, but in practice this can be quite challenging.

Sounds like something that would be perpetuated by a storage vendor who couldn't figure out how to build a good iSCSI stack and/or support an LACP bond
Storage vendors rarely have any control over the end-user network stack. As mentioned above, separate links/VLANs always work.
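Once the L2/L3 path works, connecting to the Pure on both portals is the standard open-iscsi/multipath sequence, roughly like this (portal IPs are from the thread; no Pure-specific multipath.conf tuning is implied):

Code:
# discover the targets via each portal
iscsiadm -m discovery -t sendtargets -p 10.20.30.200
iscsiadm -m discovery -t sendtargets -p 10.20.31.200
# log in to all discovered portals
iscsiadm -m node --login
# verify both paths show up under one multipath device
multipath -ll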
 
That was it! Thanks.

As a heads up, this will break on PVE 9+

We are currently building out our network to two separate VLANs to work as Proxmox recommends. It's pretty easy to set up on the Pure; it kind of "just works" when assigning different subnets to the iSCSI interfaces.
 