Disk image being assigned to multiple VMs

kchouinard

New Member
Feb 15, 2023
We are using iSCSI disk images rather than LVM. I notice that it is possible to connect a disk image to multiple VMs, or multiple times to a single VM. This, of course, will cause corruption. There doesn't seem to be a way to prevent or warn when this happens.

Full disclosure: We are transitioning from using KVM natively to Proxmox. Our shared storage is Nimble, which does not offer NAS (only SAN). Nimble also does block-level, multi-data-center replication, which is a requirement we have for DR. So I realize we are not using Proxmox in a typical way, but at this time I don't have a choice.

That said, each drive on a VM is a separate iSCSI disk image. KVM's VMmanager app tracks what drives are attached to guests and warns when you attach a drive to multiple guests.

Is there a way to prevent or warn in this scenario in Proxmox? I think it's only a matter of time before we unintentionally attach a drive to more than one VM and cause corruption. I suppose I could write a hook script to check all other guests at pre-start. I'm hoping there is a better solution.
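For what it's worth, the core of the check I'm imagining is just parsing the disk lines out of each VM config under /etc/pve/qemu-server and flagging any volume that appears more than once. A rough, untested Python sketch of that idea (the config-key regex and the cdrom filtering are simplifying assumptions, not a complete parser of PVE config syntax):

```python
import re

# Matches disk config keys like "scsi0:", "virtio1:", "sata2:", "ide3:".
DISK_KEY = re.compile(r"^(scsi|virtio|sata|ide)\d+:\s*(.+)$")

def disk_volumes(conf_text):
    """Return the volume part (before any ,option=value) of each disk line."""
    vols = []
    for line in conf_text.splitlines():
        m = DISK_KEY.match(line.strip())
        if not m:
            continue
        spec = m.group(2)
        if "media=cdrom" in spec:  # ISO drives are fine to share
            continue
        vols.append(spec.split(",", 1)[0])
    return vols

def find_duplicates(configs):
    """configs: {vmid: config text}. Return {volume: [vmids]} for any volume
    attached more than once (across VMs, or twice to the same VM)."""
    owners = {}
    for vmid, text in configs.items():
        for vol in disk_volumes(text):
            owners.setdefault(vol, []).append(vmid)
    return {vol: ids for vol, ids in owners.items() if len(ids) > 1}
```

On a node you'd feed it the real configs, e.g. `configs = {p.stem: p.read_text() for p in pathlib.Path("/etc/pve/qemu-server").glob("*.conf")}`. Since a shared LUN can be intentional (cluster-aware file systems), a warning may be more appropriate than a hard block.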

Thanks ahead for any thoughts on this odd question.
 
So it sounds like you are using the "direct LUN" approach, with a LUN per VM disk/image. This is something we are familiar with, as that's exactly how our first Proxmox customer used our storage before we built our native Proxmox storage plugin.

I am afraid there is nothing in PVE today that would safeguard you from shooting yourself in the foot. You are not alone in using such an approach, but it's probably not common enough for Proxmox developers to spend time on. There are many variables here, e.g. a disk could be presented via multiple iSCSI targets, so the only way to recognize that it's the same disk is via its signature/ID.

On the other hand, there could be legitimate reasons for sharing a LUN across VMs. For example, if the VMs are running a cluster-aware file system or use other cluster software that implements disk-access arbitration.
I'd say that automation is your best bet. Taking a human aspect out of the equation will reduce the risk surface.

Good luck.

P.S. You can always add an enhancement request here: https://bugzilla.proxmox.com/ , so at least it's being tracked and others can weigh in if they need it.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
BBGeek17 is right: if you are going to share LUNs, even to your PVE hosts, you need a locking file system that presents the storage as a mounted directory. Examples include GFS2 and OCFS2, which I have used successfully in the past and really like.

I'm not sure I understand why or how you would erroneously attach a LUN to two VMs at the same time; that kind of automation is clearly concerning. It's best either to prevent it from happening at all, or to have it always happen and use a locking file system, which is what Hyper-V and VMware vSAN do with their own native proprietary solutions.

Cheers,


Tmanok
 
if you are going to share LUNs even to your PVE hosts you need a locking file system that presents the storage as a mounted directory. Some examples of this include GFS2 and OCFS2 which I have used successfully in the past and really like.
@kchouinard is using direct raw LUN pass-through; there is no hypervisor file system involved. The arbitrator of access in this case is Proxmox. When no misconfiguration exists (i.e. a LUN is not assigned to multiple VMs), PVE ensures that a VM only runs on a single node.
This is a perfectly fine approach, supported by PVE.

I'm not sure I understand why or how you would erroneously attach a LUN to two VMs at the same time
One could simply run "qm set [vmid] scsi0 samepath" on two VMs. This can happen with any disk, even qcow. There is no bumper in place. Depending on the method the OP is using for iSCSI connectivity, they could even misclick in the UI when selecting the LUN.


 
I wrote a hook script to check whether any other VMs have the same disks attached. It's a very first pass and somewhat specific to my environment, but it could be easily adapted. Thought I would share in case someone finds this post in the future.

Thanks for all the input from everyone. :)
 

Attachments

  • check_iscsi_duplicates.pl.txt
    1.8 KB
