How to configure iSCSI shared storage for thin snapshots properly

PmoxUser1
Apr 1, 2026
I am planning a migration from VMware to Proxmox and I am in the process of learning and testing Proxmox to see how it would work in our environment. I will be using the following hardware:

- 2 Dell server hosts with the same hardware config in the same cluster
- 1 qdevice (for quorum)
- 1 md3200i iSCSI SAN for shared VM storage

I need to be able to have the following features:

- Shared Storage between the hosts for VMs using the MD3200i
- Live VM migration for VMs on the same MD3200i storage from one host to another
- Ability to take temporary "thin" snapshots of VMs at the host level (NOT at the storage level)
- Not a requirement but nice to have: Thin VM disk provisioning
- Not a requirement but nice to have: Live VM storage migration for VMs from the MD3200i to another iSCSI SAN (ME5224) storage that I will add down the road.
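For reference, the basic shared-LVM-over-iSCSI setup implied by the requirements above looks roughly like this. This is only a sketch: the portal IP, device name, and VG/storage names are placeholders, and on an MD3200i with dual controllers you would normally also configure multipath before creating the PV.

```shell
# Discover and log in to the iSCSI target on EACH cluster node
# (192.168.130.101 is a placeholder portal IP):
iscsiadm -m discovery -t sendtargets -p 192.168.130.101
iscsiadm -m node --login

# On ONE node only: initialize the LUN (assumed here to appear as /dev/sdb)
# as an LVM physical volume and create a volume group on it:
pvcreate /dev/sdb
vgcreate vg_md3200i /dev/sdb

# Register the VG cluster-wide as shared LVM storage for VM disks:
pvesm add lvm md3200i-lvm --vgname vg_md3200i --shared 1 --content images
```

With `--shared 1`, every node sees the same VG, which is what enables live migration between the hosts without moving disk data.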

I was able to get almost everything working in Proxmox except the snapshots. It turns out that since I wanted shared storage I had to configure the MD3200i as LVM, not LVM-Thin. So the shared features worked, but not snapshotting. I then turned on the "snapshot-as-volume-chain" option and, after working through some problems with the VMs' TPM and EFI disks, I discovered that I could take snapshots of the machines stored on the MD3200i through Proxmox and still have them shared across hosts, BUT the snapshots were the same size as the disks of the machines. I will eventually have VMs with disks that are 2 to 4 TB in size, which would mean that taking a snapshot of one of these VMs would require that much space PER snapshot on the storage unit. Is this right? Is there a way around this? It seems like I am missing something in terms of creating snapshots that only track the delta and not the whole disk. I am happy to go back and re-read the documentation if someone can point me in the right direction.
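For context, the configuration described above corresponds roughly to a storage definition like the following (storage and VG names are placeholders, not the poster's actual names). As far as I understand the volume-chain feature, each snapshot becomes its own qcow2-formatted volume, but on thick LVM every logical volume is allocated at the disk's full virtual size up front, even though the qcow2 inside only records changed blocks; that would match the full-size snapshots being observed.

```
# sketch of an /etc/pve/storage.cfg entry -- names are placeholders
lvm: md3200i-lvm
        vgname vg_md3200i
        content images
        shared 1
        snapshot-as-volume-chain 1
```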

Thank-you
 
Seeing that the MD3200i is a legacy SAN, it may be cheaper to migrate to Ceph. I have been migrating VMware legacy-SAN infrastructure to Proxmox Ceph. IMO you need a minimum of 5 nodes, so you can lose 2 nodes and still have quorum. Use isolated switches for Corosync and Ceph, and keep all hardware the same. No PERCs; I am using Dell HBA330s. This is all with 13th-gen Dells. YMMV. You can see my previous posts on optimizations.
 
Seeing that the MD3200i is a legacy SAN, it may be cheaper to migrate to Ceph.
That is almost never so. Ceph requires fast networks and SSDs to be performant. The MD3200i doesn't even support anything faster than 1 Gb, yet it will yield more satisfying results at its cost of entry (likely free or next to free). For your use case an MD3200 is a non-starter, but for the OP's (2 servers) it should be fine.

Is there a different way around this?
have a look here: https://forum.proxmox.com/threads/free-starwind-x-proxmox-san-storage-plugin.180377/
 
have a look here: https://forum.proxmox.com/threads/free-starwind-x-proxmox-san-storage-plugin.180377/
This is very promising. Thank you for sharing it!