VM Shared Disk Between Guests

I answered that it works like this and that I have been using it for many years, with the downsides presented.
One other thing that came to mind - if the operator decides that they need to "live migrate" or "HA" the VMs, that can lead to chaos as well. Depending on the method of connectivity, PVE can de-activate storage on migration/failover, which will yank the disk away from the remaining node, or the two VMs could move to different hosts. The operator needs to have a deep understanding of the technologies in use (PVE) and their limitations. Or, as you said, connect the storage directly to the VMs, even if an intermediary head is involved.
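For context, sharing one raw volume between two guests usually means referencing the same volume from both VM configs. A rough sketch (VM IDs, storage name, and volume name are all hypothetical; PVE does not arbitrate concurrent access, so the guests themselves, e.g. via ASM or a cluster filesystem, must handle it):

```shell
# Attach the shared data disk to the first RAC node
# (cache=none to avoid host-side caching, backup=0 to keep it out of vzdump)
qm set 101 --scsi1 vg_shared:vm-101-disk-1,cache=none,backup=0

# Reference the very same volume from the second RAC node
qm set 102 --scsi1 vg_shared:vm-101-disk-1,cache=none,backup=0
```

Note that this is exactly the configuration that HA/live migration can break, as described above: if the two VMs end up on different hosts, the storage must be reachable from both.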


One other thing that came to mind - if the operator decides that they need to "live migrate" or "HA" the VMs, that can lead to chaos as well. Depending on the method of connectivity, PVE can de-activate storage on migration/failover, which will yank the disk away from the remaining node, or the two VMs could move to different hosts.
Good remark, yet AFAIK as long as the volume is open, it cannot be deactivated... so, as always, it depends.

An iSCSI portal is the way to go. I run my 4-node test cluster on qcow2 with different OS versions and different Grid versions, and can switch as I like to test something. I don't know about performance, but it's a cluster for testing the cluster technology, not for running a real workload. License-wise, Oracle is still very cheap on real hardware and a nightmare in virtualization, apart from Oracle VM, which is the only partitioning-allowed virtualization platform according to Oracle LMS.
 
I am also looking for examples of setting up a two-node Oracle RAC on Proxmox.
Among all the storage options, I could not find the keyword "concurrent access".
In VMware terms, the disk mode should be shared and independent-persistent.
Is there an equivalent disk type/mode in the Proxmox world?
Thanks!
--Kang

On my Proxmox cluster, the approach, after checking with our Oracle DBA team, was to have a "storage VM" (I believe this is what others refer to as an iSCSI portal) that exports the drives to the two RAC nodes. My understanding is that Oracle ASM handles concurrent access to the data files. Performance over a 25G network is pretty decent, even under some heavy automated application test loads.
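With that layout, each RAC guest logs in to the storage VM's targets itself. A minimal open-iscsi sketch from inside a guest (portal IP and IQN are hypothetical):

```shell
# Discover the targets exported by the storage VM
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

# Log in to the discovered target
iscsiadm -m node -T iqn.2024-01.local.storage:rac-disks -p 10.0.0.50:3260 --login

# Make the session persistent across reboots
iscsiadm -m node -T iqn.2024-01.local.storage:rac-disks -p 10.0.0.50:3260 \
    --op update -n node.startup -v automatic
```

The same commands run on both RAC nodes, so both guests see the same LUNs and ASM can take over from there.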
 
On my Proxmox cluster, the approach, after checking with our Oracle DBA team, was to have a "storage VM" (I believe this is what others refer to as an iSCSI portal) that exports the drives to the two RAC nodes.
Yes, iSCSI and (direct) NFS are the two supported modes of access. You could also pass through FC HBAs and build your own FC-based SAN, but that is another beast. I tried it for a proof of concept.

My understanding is that Oracle ASM handles concurrent access to the data files.
Yes, ASM is awesome and ACFS even more so. Sad to see that the open-source world lacks such a technology.
 
Yes, iSCSI and (direct) NFS are the two supported modes of access. You could also pass through FC HBAs and build your own FC-based SAN, but that is another beast. I tried it for a proof of concept.


Yes, ASM is awesome and ACFS even more so. Sad to see that the open-source world lacks such a technology.
Great info!
Sorry, I know this thread is old, but it's hard to find valuable info elsewhere.

I ran into some issues setting up a two-node Oracle 19c RAC (Oracle Linux 7.8) on Proxmox VMs. I use TrueNAS iSCSI targets for the Oracle ASM shared disks.
Everything seems fine: I am able to use oracleasm to create ASM disks and list them on both nodes. But when I install Grid, the Grid GUI cannot discover the disks, and the Grid installation log has no errors. This is really driving me crazy; probably some Oracle bug?

INFO: [Jul 12, 2024 11:06:32 AM] ... discoveryString = /dev/oracleasm/disks/*
INFO: [Jul 12, 2024 11:06:32 AM] Determining the number of disks...
INFO: [Jul 12, 2024 11:06:32 AM] Executing [/u01/app/grid/19.3.0/gridhome_1/bin/kfod.bin, nohdr=true, verbose=true, disks=all, op=disks, shallow=true, asm_diskstring='/dev/oracleasm/disks/*']
INFO: [Jul 12, 2024 11:06:33 AM] Starting Output Reader Threads for process /u01/app/grid/19.3.0/gridhome_1/bin/kfod.bin
INFO: [Jul 12, 2024 11:06:33 AM] Parsing Shallow discovery returned 1 devices
INFO: [Jul 12, 2024 11:06:33 AM] The process /u01/app/grid/19.3.0/gridhome_1/bin/kfod.bin exited with code 0
INFO: [Jul 12, 2024 11:06:33 AM] Waiting for output processor threads to exit.
INFO: [Jul 12, 2024 11:06:33 AM] Output processor threads exited.
INFO: [Jul 12, 2024 11:06:33 AM] Discovering the disks...
INFO: [Jul 12, 2024 11:06:33 AM] Executing [/u01/app/grid/19.3.0/gridhome_1/bin/kfod.bin, nohdr=true, verbose=true, disks=all, status=true, op=disks, asm_diskstring='/dev/oracleasm/disks/*']
INFO: [Jul 12, 2024 11:06:33 AM] Starting Output Reader Threads for process /u01/app/grid/19.3.0/gridhome_1/bin/kfod.bin
INFO: [Jul 12, 2024 11:06:33 AM] The process /u01/app/grid/19.3.0/gridhome_1/bin/kfod.bin exited with code 0
INFO: [Jul 12, 2024 11:06:33 AM] Waiting for output processor threads to exit.
INFO: [Jul 12, 2024 11:06:33 AM] Output processor threads exited.

It seems Grid is able to detect the disks during shallow discovery ("Parsing Shallow discovery returned 1 devices"), but the subsequent full discovery returns none.


Any ideas please? Thanks!
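One way to narrow this down (a sketch; the binary path and discovery string are taken from the log above) is to run the same kfod discovery by hand as the grid user on each node and compare the output with and without status checking:

```shell
# Repeat the installer's full discovery manually, as the grid user
/u01/app/grid/19.3.0/gridhome_1/bin/kfod.bin nohdr=true verbose=true \
    disks=all status=true op=disks asm_diskstring='/dev/oracleasm/disks/*'

# Verify the devices exist and are readable by the grid owner on BOTH nodes
ls -l /dev/oracleasm/disks/
```

If the shallow pass finds a device but the status pass drops it, permissions or ownership on the device nodes are a common culprit.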
 
I ran into some issues setting up a two-node Oracle 19c RAC (Oracle Linux 7.8) on Proxmox VMs. I use TrueNAS iSCSI targets for the Oracle ASM shared disks.
Everything seems fine: I am able to use oracleasm to create ASM disks and list them on both nodes. But when I install Grid, the Grid GUI cannot discover the disks, and the Grid installation log has no errors. This is really driving me crazy; probably some Oracle bug?

Any ideas please? Thanks!
I don't use oracleasm (ASMLib): too many problems and too few "features" compared to the plain Linux block devices we have used for decades.
I have a test RAC running inside PVE using a simple iSCSI multipath disk, and I see no difference from a real hardware setup, which is still the much, much more cost-effective way of running a valid (and license-audit-compliant) setup. The iSCSI server is also a VM, so with snapshots I can jump around and reset the whole cluster, including storage, which is a very nice thing to be able to do.

FYI: Oracle Linux 7 is EOL.
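For the plain-block-device approach, the usual replacement for oracleasm is a udev rule that gives the shared disks stable names and the right ownership. A sketch (the multipath WWID, symlink name, and grid user/group are hypothetical and must match your own setup):

```
# /etc/udev/rules.d/99-oracle-asm.rules
# Match the multipath device by its WWID and hand it to the grid owner
KERNEL=="dm-*", ENV{DM_UUID}=="mpath-360000000000000000e00000000010001", \
  SYMLINK+="asmdisks/data1", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

After `udevadm control --reload-rules && udevadm trigger`, ASM can then be pointed at the symlinks with `asm_diskstring='/dev/asmdisks/*'` instead of the oracleasm paths.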
 
