ZFS over iSCSI plugin

The main problem is that multipathing with a lot of LUNs (1 VM disk = 1 LUN) is really painful to manage (removing volumes, resizing volumes, etc.).

In the past, I have also seen the multipath daemon stuck at 100% CPU when a lot of LUNs are present.

For now, the best way is to use bonding instead.
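As a rough illustration, bonding two NICs for the storage network on a Proxmox/Debian host can be declared in /etc/network/interfaces. This is only a sketch; the interface names (eno1/eno2), the address, and the choice of LACP mode are assumptions, not taken from the thread:

```
# /etc/network/interfaces fragment (sketch; eno1/eno2 and the address are assumptions)
auto bond0
iface bond0 inet static
    address 10.0.0.10/24
    bond-slaves eno1 eno2
    bond-mode 802.3ad            # LACP; requires switch-side support
    bond-miimon 100              # link monitoring interval in ms
    bond-xmit-hash-policy layer3+4
```

With a bond like this, the single iSCSI session rides on the aggregated link, so link redundancy is handled below iSCSI instead of by dm-multipath.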
 

OK, I see how this is working now. I was looking at it from a different perspective: the hypervisor managing the multipathing and only having a few LUNs, e.g. a maximum of 9 TB in size with 30-50 VMs per LUN.

This would make the VM disk just a sparse file sitting on block storage formatted with something like LVM/ext4.
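For illustration, such a sparse disk image can be created with `truncate`; the path, VM ID, and size below are just example values. The file reports its full apparent size but initially occupies almost no blocks on the filesystem:

```shell
# Create a 10 GiB sparse file (path and size are example values only)
truncate -s 10G /var/lib/vz/images/100/vm-100-disk-0.raw

# Compare apparent size with actually allocated space
ls -lh /var/lib/vz/images/100/vm-100-disk-0.raw   # apparent size: 10G
du -h  /var/lib/vz/images/100/vm-100-disk-0.raw   # allocated: ~0
```

Blocks are only allocated in the backing filesystem as the guest writes to the disk, which is what makes thin provisioning on a file-backed store cheap.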

This is how VMware handles its iSCSI connections to a SAN or NAS presenting block storage. I now understand better how KVM, or should I say QEMU, manages this: each VM disk is a LUN, which is very similar to VMware's vVols technology (https://kb.vmware.com/s/article/2113013).

OK, now I understand this a lot better.

Do you know if normal iSCSI connectivity uses multipath?

Cheers
G
 
