Best Practice for Multipath iSCSI

damiengm

New Member
Jun 4, 2025
Hi
I’m wondering what the best practice setup is for an iSCSI NAS backend with multipath. We are coming from a VMware environment and are migrating over to Proxmox VE.

Environment is:
NAS: Synology SA3200D, a dual-controller NAS with RAID, exposed over iSCSI (NFS is also possible). 3x 10GbE ports on each controller; the controllers work in master/slave mode. One 10GbE port is used for the production data network for file sharing, leaving the remaining two for iSCSI traffic.

2x 10GbE switches

2x servers, with 4x 10GbE ports on each.

With the multiple network connections, is it recommended to isolate the two different paths on their own network segments and VLANs?
(Hmm, I can't put an image in, so in words:
NAS m/s nic 1: 172.24.1.1/24 <-> switch 1 <-> server1 nic 1: 172.24.1.10/24 & server2 nic 1: 172.24.1.11/24
NAS m/s nic 2: 172.23.2.1/24 <-> switch 2 <-> server1 nic 2: 172.23.2.10/24 & server2 nic 2: 172.23.2.11/24
or is it OK if they are all in the same network segment?)
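
To make that concrete, I assume server1 would end up with something like this in /etc/network/interfaces (the interface names eno3/eno4 are placeholders, and the jumbo-frame MTU only applies if it is enabled end-to-end):

    # iSCSI path A (via switch 1)
    auto eno3
    iface eno3 inet static
        address 172.24.1.10/24
        mtu 9000

    # iSCSI path B (via switch 2)
    auto eno4
    iface eno4 inet static
        address 172.23.2.10/24
        mtu 9000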

And I’ve seen that it’s only possible to use LVM on iSCSI… so no thin provisioning, which means we will need more space...
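
If I read the docs correctly, the storage definition would then look roughly like this in /etc/pve/storage.cfg (storage and volume group names are placeholders):

    lvm: san-lvm
        vgname vg_san
        shared 1
        content images,rootdir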

Also, I’ve read that the quorum network should be set up on its own switch to reduce latency. Is there any reason I can’t use a private VLAN on an existing switch?
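
What I have in mind is a dedicated corosync link on its own VLAN, plus a fallback link, roughly like this fragment of /etc/pve/corosync.conf (addresses and the node name are placeholders):

    nodelist {
      node {
        name: server1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1   # dedicated quorum VLAN
        ring1_addr: 172.24.1.10  # fallback over another network
      }
    }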
Regards
Damien
 
The setup is the same as for any other hypervisor or Linux environment; there is nothing special about PVE. PVE does not have any logic for providing this, you need to configure it as you would on any other Linux.
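
As a rough sketch, the standard open-iscsi/multipath-tools flow looks something like this (portal IPs taken from your example; target details omitted):

    # discover and log in to both portals
    iscsiadm -m discovery -t sendtargets -p 172.24.1.1
    iscsiadm -m discovery -t sendtargets -p 172.23.2.1
    iscsiadm -m node --login

    # make the logins persistent across reboots
    iscsiadm -m node -o update -n node.startup -v automatic

    # verify that multipathd shows one map with two paths
    multipath -ll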
 
@damiengm ,
You may find this article helpful: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox

Thank you for that great article!

It does not yet take into account that snapshots are now possible with the new PVE9 version, right?

What I am missing a little are topics like size limitations of LVM volumes, or recommendations such as not exceeding specific limits in order to prevent problems. Also other common tasks during production, such as resizing LUNs on the SAN side and what that means for the PVE hosts: iscsiadm session rescans, multipath map refreshes, and the corresponding pvesm commands if necessary.
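
Something like the following is what I have in mind for growing a LUN, assuming a standard open-iscsi/multipath stack (the map name mpatha is a placeholder):

    iscsiadm -m session --rescan     # re-read LUN sizes on all sessions
    multipathd resize map mpatha     # propagate the new size to the multipath map
    pvresize /dev/mapper/mpatha      # grow the LVM physical volume
    pvesm status                     # check that PVE sees the extra space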
 
It does not yet take into account that snapshots are now possible with the new PVE9 version, right?
For this, the fact that the new snapshot feature is still "experimental" might play a part. Blockbridge aims at the enterprise sector as target customers, and you don't want to rely on "experimental" for production-critical data. Don't get me wrong: I think it's great that one of the biggest reasons for companies to stick with other hypervisors is being addressed, it just needs some (hopefully short) time to reach stability before it can be adopted for business use cases.
 
Thank you for that great article!
You are welcome, happy we could help.

@johannes is correct. The main reason we haven't updated the lvm-shared-storage KB article is that the lvm snapshot bits are not yet ready for production. A considerable amount of focused development and testing is needed for it to be reliable in the way we expect.

If you are looking for a technical description and our initial viability assessment of the Technology Preview, you can find it here: https://kb.blockbridge.com/technote/proxmox-qcow-snapshots-on-lvm/

When PVE9 came out, we put a great deal of effort into understanding how this might fit one of our customers' needs, alongside Blockbridge's native integration. The KB article was the product of the initial assessment for a customer.

LVM snapshots are under active development, and we'll continue to help provide technical feedback.

What I am missing a little are topics like size limitations of LVM volumes, or recommendations such as not exceeding specific limits in order to prevent problems. Also other common tasks during production, such as resizing LUNs on the SAN side and what that means for the PVE hosts: iscsiadm session rescans, multipath map refreshes, and the corresponding pvesm commands if necessary.
Keep in mind that 99.9% of PVE+Blockbridge deployments don't use LVM. Only our legacy deployments use LVM (i.e., those deployed more than 5 years ago). And even in those systems, multipathing, mapping, authentication, and resizing were handled automatically.

Multipath management is highly vendor-specific and tied to the underlying HA architecture of the storage solution. "Understanding LVM Shared Storage in Proxmox" is a vendor-agnostic article that focuses on enabling you to understand your storage system and account for the unique differences between deployments.
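
To illustrate how vendor-specific this gets: an active/passive dual-controller array like the one described above would typically want a device section along these lines in /etc/multipath.conf. The inquiry strings and settings below are placeholders, not a recommendation; always use the values your array vendor documents:

    devices {
        device {
            vendor "SYNOLOGY"              # placeholder; match your array's SCSI inquiry strings
            product ".*"
            path_grouping_policy group_by_prio
            prio alua                      # assuming the array reports ALUA path states
            failback immediate
            no_path_retry queue            # queue I/O during controller failover
        }
    }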


 