VM Shared Disk Between Guests

riaanp

Jun 15, 2022
Good Day,

We are currently investigating moving from oVirt to Proxmox. We run a lot of Oracle RAC clusters with shared disks between the nodes. Sharing a disk between guests is easy in oVirt, but Proxmox seems to require a lot of manual changes to make this work.

My question is: does anyone have a documentation link or a current method for sharing disks with Proxmox 7.2?
 
Hi,

What do you mean exactly by sharing disks? Do you want to use the same virtual disk in multiple VMs?

What is your use case for this?
 
Oracle RAC (Real Application Clusters) uses a shared disk or disks between nodes. Here is a simple diagram to explain the concept.

[Diagram: virtual-rac.jpg – RAC nodes accessing a shared virtual disk]

So effectively one machine owns the disk, but it is shared and accessed by both at the same time.
 
So effectively one machine owns the disk, but it is shared and accessed by both at the same time.
It is technically possible to create a virtual disk for, say, VM 100 and attach it to VM 101 (by manually editing the configuration file).

To make it accessible between different nodes, you'd have to use some kind of shared storage, e.g. NFS or SMB/CIFS.

Though I would be careful with writing to the disk from both VMs, as that could cause inconsistencies depending on the filesystem that you put on the disk.
 
The point of the shared disk is that both VMs access that disk at the same time and read/write to it. Oracle RAC deals with this; it knows when a disk should not be written to, etc.

We have done this in VMware as well as oVirt (KVM), and even VirtualBox for demo purposes. This is why I was wondering what the procedure is to do the same thing with Proxmox.

Is there documentation somewhere that explains this with Proxmox?
 
This is why I was wondering what the procedure is to do the same thing with Proxmox.
You can test it like the following:

1. create a VM (let's say ID 100) with a distro of your choosing
2. create a second VM (ID 101)
3. go to VM 100 in the GUI, click the "Hardware" tab and add a new hard disk. This will create an entry for that disk inside the configuration file (located in /etc/pve/qemu-server/100.conf)
4. in a shell, edit the configuration file for VM 101, e.g. vim /etc/pve/qemu-server/101.conf, and copy the line with your virtual disk from 100.conf into it (a sketch of what that line can look like is below)
5. reboot both VMs

The disk should then be accessible from both VMs, and you can do your testing there.
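For illustration only, the copied line might look roughly like this (the storage name, bus slot and size below are placeholders, not values from this thread):

Code:
scsi1: local-lvm:vm-100-disk-1,size=32G

If scsi1 is already taken in 101.conf, pick a free slot; what matters is that both configuration files reference the same volume (here local-lvm:vm-100-disk-1).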
 
A similar question has just recently been posted to the forum:
https://forum.proxmox.com/threads/r...ared-to-2-virtual-machine.110893/#post-477816

Keep in mind that while the procedure that @oguz outlined may work, there are no bumpers in place for such a configuration. Removing/destroying one VM will remove the shared disk, as "Referenced disks will always be destroyed."
Since it sounds like you have an application that may call for a Proxmox cluster, the underlying storage must also be multi-host/shared capable. Most likely, it would have to be some kind of storage external to PVE.

As I mentioned in the other thread, absent official end-to-end support in PVE it is much safer to attach the storage directly to the VM, i.e. in the iSCSI case connect from the VM to the storage, or, as the other poster is doing, pass through the FC disk.


 
We don't own a NAS or SAN (this is why we do the shared local disk). We know this works since we have done and tested this on pretty much all hypervisor technologies. This is not for a production environment either, just for DEV/TEST/QA. So we know the KVM shared disk works; I just didn't know if it does for Proxmox.
 
Just for DEV/TEST/QA.
On a solo Proxmox node it should work fine with local or even local-lvm.

You don't even need to edit files:
Code:
# allocate a small test volume owned by VM 2002 (size is in KiB unless a suffix like G is used)
pvesm alloc local-lvm 2002 vm-2002-disk-3 1
# attach the same volume to both VMs
qm set 2003 -scsihw virtio-scsi-pci --scsi2 local-lvm:vm-2002-disk-3
qm set 2002 -scsihw virtio-scsi-pci --scsi2 local-lvm:vm-2002-disk-3

But the "destroy" warning still applies. Removing VM 2002 will remove all disks, including the shared one. You may want to have a placeholder VM.


 
I also ran such a setup, and it works with one disk shared to multiple VMs. Keep in mind that you also cannot snapshot the VMs, because you would snapshot the shared disk twice (once from each VM). To solve that problem, we built a fully virtualized multi-node RAC with snapshotting and solved the multiple-snapshot problem by using a storage VM (with iSCSI) that shares the ASM disks. In such a setup you can snapshot everything and roll back/forward to e.g. various patch levels and even different database or grid versions on the same VMs. Great for testing out stuff.
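As a rough sketch of what such a storage VM can look like, here is how a plain Linux LIO target could export one ASM disk via targetcli; the device path, IQNs and names are made up for illustration and are not from the setup described above:

Code:
# on the storage VM: expose a block device as an iSCSI LUN
targetcli /backstores/block create name=asm1 dev=/dev/sdb
targetcli /iscsi create iqn.2023-01.local.storage:asm
targetcli /iscsi/iqn.2023-01.local.storage:asm/tpg1/luns create /backstores/block/asm1
# allow the initiators of both RAC nodes
targetcli /iscsi/iqn.2023-01.local.storage:asm/tpg1/acls create iqn.2023-01.local.rac:node1
targetcli /iscsi/iqn.2023-01.local.storage:asm/tpg1/acls create iqn.2023-01.local.rac:node2
targetcli saveconfig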
 
I also ran such a setup, and it works with one disk shared to multiple VMs. Keep in mind that you also cannot snapshot the VMs, because you would snapshot the shared disk twice (once from each VM). To solve that problem, we built a fully virtualized multi-node RAC with snapshotting and solved the multiple-snapshot problem by using a storage VM (with iSCSI) that shares the ASM disks. In such a setup you can snapshot everything and roll back/forward to e.g. various patch levels and even different database or grid versions on the same VMs. Great for testing out stuff.
This is the other option I am looking at: using native iSCSI or TrueNAS for easier management. So far I am just researching what is already being done vs. what might be done in the future (iSCSI being the better option).
 
This is the other option I am looking at: using native iSCSI or TrueNAS for easier management. So far I am just researching what is already being done vs. what might be done in the future (iSCSI being the better option).
Yes, I also went your way ... first the shared disk, then "the snapshot problem" arose, and now it's working like a charm. I can recommend using QCOW2 for tree-like snapshots.
 
I also ran such a setup, and it works with one disk shared to multiple VMs. Keep in mind that you also cannot snapshot the VMs, because you would snapshot the shared disk twice (once from each VM). To solve that problem, we built a fully virtualized multi-node RAC with snapshotting and solved the multiple-snapshot problem by using a storage VM (with iSCSI) that shares the ASM disks. In such a setup you can snapshot everything and roll back/forward to e.g. various patch levels and even different database or grid versions on the same VMs. Great for testing out stuff.

Hi, did you ever test the performance difference between sharing the disk directly vs going through an intermediate iSCSI host VM?
 
Hi, did you ever test the performance difference between sharing the disk directly vs going through an intermediate iSCSI host VM?
Yes I did. The native disk performance will always be faster.

But iSCSI performance is not that bad either. I've been running iSCSI with Oracle databases for a while now since I posted this question, and so far the performance is not bad. Keep in mind this is a dev/test environment, so I don't need that much performance. If you are going the iSCSI route, I would suggest installing 2.5GbE/10GbE networking (a dedicated network for iSCSI would be better) for better iSCSI performance in production.
 
Yes I did. The native disk performance will always be faster.

But iSCSI performance is not that bad either. I've been running iSCSI with Oracle databases for a while now since I posted this question, and so far the performance is not bad. Keep in mind this is a dev/test environment, so I don't need that much performance. If you are going the iSCSI route, I would suggest installing 2.5GbE/10GbE networking (a dedicated network for iSCSI would be better) for better iSCSI performance in production.
Thanks, would you happen to have measured the difference in performance, or can you estimate how big a difference it made?
 
Thanks, would you happen to have measured the difference in performance, or can you estimate how big a difference it made?
Sadly I cannot remember the exact numbers. It was something like 125 MB/s; with a bonded network it went up to about 250 MB/s. This was a 1GbE network, hence my recommendation of a 10GbE network. I have NVMe storage on this server, and the native performance is obviously not reached over iSCSI. With the Oracle cluster itself I can't remember; the DBAs gave me the stats. The point was that they were happy, and it works beautifully with iSCSI. We are in the process of upgrading the networking to at least 2.5GbE, so we will get an iSCSI performance boost.
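For context, those figures line up with the raw link speeds (back-of-the-envelope math, ignoring TCP/iSCSI overhead):

Code:
# 1 GbE:           1000 Mbit/s / 8  ~ 125 MB/s   (the single-link figure above)
# 2 x 1 GbE bond:                   ~ 250 MB/s   (the bonded figure)
# 10 GbE:          10000 Mbit/s / 8 ~ 1250 MB/s  (why faster networking helps iSCSI)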
 
I am also looking for examples of setting up a two-node Oracle RAC on Proxmox.
With all the options for storage, I could not find the keyword "concurrent access".
In VMware terms, the disk mode should be shared and independent-consistent.
Is there an equivalent disk type/mode in the Proxmox world?
Thanks!
--Kang
 
I could not find the keyword "concurrent access"
In Proxmox, disks are not meant to be concurrently accessed with any built-in solution. Whether it's thick/shared LVM, Ceph, or local ZFS, a Proxmox virtual disk (image) is a subset of the underlying storage (an LVM slice, a Ceph RBD volume, a ZFS zvol) and is only accessible to one VM at a time.

As was posted in #10 almost a year ago, you can achieve concurrent access with almost all storage types; however, there are no bumpers in place. It will be very easy to lose data without strict change controls in place.

By "independent-consistent" did you mean "Independent Persistent"? I.e. :
- Independent mode, which is unaffected by snapshots
- Persistent – The disk operates normally except that changes to the disk are permanent even if the virtual machine is reverted to a snapshot.
- Nonpersistent – The disk appears to operate normally, but whenever the virtual machine is powered off or reverted to a snapshot, the contents of the disk return to their original state. All later changes are discarded.

In Proxmox there is no direct match for any of the above. All disks operate normally, all disks can be snapshotted (if the underlying storage supports it), all disks will revert to snapshot state if rolled back, and disks will not revert on power off.

If one of our customers were building something as picky as Oracle running on PVE and they needed concurrent disk access, we would recommend a direct iSCSI or NVMe/TCP connection inside the VM for that particular disk, i.e. bypassing the hypervisor.

[centos@cluster-client ~]$ bb host attach -d disk-1 --persist --multi-writer
=============================================================================
service-2/disk-1 attached (read-write) to cluster-client.localnet as /dev/sda
=============================================================================
...
[centos@features-client ~]$ bb host attach -d disk-1 --persist --multi-writer
==============================================================================
service-2/disk-1 attached (read-write) to features-client.localnet as /dev/sda
==============================================================================
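Outside of Blockbridge, the same "attach inside the guest" idea can be tested with plain open-iscsi from each RAC VM; the portal address and IQN below are placeholders for whatever target you actually run:

Code:
# discover and log in to the target from inside each VM
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2023-01.local.storage:asm -p 192.168.1.50 --login
# reconnect automatically after a reboot
iscsiadm -m node -T iqn.2023-01.local.storage:asm -p 192.168.1.50 --op update -n node.startup -v automatic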


 
In Proxmox, disks are not meant to be concurrently accessed with any built-in solution. Whether it's thick/shared LVM, Ceph, or local ZFS, a Proxmox virtual disk (image) is a subset of the underlying storage (an LVM slice, a Ceph RBD volume, a ZFS zvol) and is only accessible to one VM at a time.
That's the way the GUI works. You already gave the way to do it in comment #10, and I answered that it works like this; I have been using it for many years now, with the downsides already discussed. For it to work, you have to set caching to "none" in the disk options for the shared disk. This may not be a supported way to operate PVE, but it works on any KVM-based hypervisor and works like a charm. I still suggest running your own iSCSI portal for an Oracle cluster, as described in comment #11.
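For reference, the cache setting mentioned here is just a disk option, so on the CLI it could be set roughly like this (reusing the illustrative VM IDs and volume name from the earlier example):

Code:
qm set 2002 --scsi2 local-lvm:vm-2002-disk-3,cache=none
qm set 2003 --scsi2 local-lvm:vm-2002-disk-3,cache=none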
 
