Proxmox VE and Oracle RAC

Thread starter: kenp (Guest)
Hi, I have been struggling with this for some time. I have the following setup:

1. Proxmox host
2. 2 CentOS 5.9 VMs
3. QNAP NAS with iSCSI targets

I want to set up shared storage for Oracle ASM across the VMs (I will likely add more VMs to test scalability of the Oracle RAC).

On my Proxmox host, I have set up an iSCSI target and LVM group:

root@abhimanyu:~# pvesm status
>> iscsi_crsvote iscsi 1 0 0 0 100.00%
local dir 1 19685656 15763600 2922072 84.86%
>> lv_crsvote lvm 0 0 0 0 100.00%
vg_vm lvm 1 2047995904 0 1995567104 0.50%

I can see the volume group on the Proxmox host:

root@abhimanyu:~# vgscan
Reading all physical volumes. This may take a while...
>> Found volume group "vg_crsvote" using metadata type lvm2
Found volume group "vg_local" using metadata type lvm2

However, I cannot see this from either of the CentOS VMs. I cannot use the 'Add Hard Disk' option for the VM because Oracle ASM needs an unformatted block device. The 'Add Hard Disk' option forces the use of an LVM group as a base for creating virtual disks on it (right?), whereas I want to present the whole LV to ASM so it can manage the storage itself.

Not sure if I've explained it well but happy to answer any questions to get this sorted.

Thanks!
 
whereas I want to present the whole LV to ASM so it can manage the storage itself.

You do get the whole LV (logical volume)! But maybe you want the whole VG (volume group)? If so, you need to add the iSCSI LUN directly.
 
First, thanks for the quick reply Dietmar.

I want the VG, but my guest VMs cannot see it. I've tried running vgscan, pvscan etc. from the guests (CentOS) but no luck! My question really is how to make VGs (or LVs) visible from the guest VMs.

Thanks.
 
Please post your VM config and /etc/pve/storage.cfg

root@abhimanyu:/etc/iscsi# cat /etc/pve/storage.cfg
lvm: vg_vm
	vgname vg_local
	shared
	content images

dir: local
	path /var/lib/vz
	shared
	content images,iso,vztmpl,rootdir
	maxfiles 0

iscsi: is_crsvote
	target iqn.2004-04.com.qnap:ts-569l:iscsi.crsvote.d6aff0
	portal 192.168.0.100
	content none

lvm: lv_crsvote
	vgname vg_crsvote
	base is_crsvote:0.0.0.scsi-36001405fc2daf1edbfb4d4404d8ff3de
	shared
	content images

root@abhimanyu:/etc/iscsi# qm config 101
bootdisk: virtio0
cores: 2
ide2: cdrom,media=cdrom
memory: 2048
name: oralinvm01
net0: e1000=9A:A7:98:CA:B7:DB,bridge=vmbr0
net1: e1000=F2:64:9A:D4:AF:F1,bridge=vmbr0
ostype: l26
sockets: 1
vga: cirrus
virtio0: vg_vm:vm-101-disk-1,size=20G
virtio1: vg_vm:vm-101-disk-2,size=30G

Thanks!
 
Do not use the vg_vm storage. Instead, set the 'Use LUNs directly' flag on the iscsi storage 'is_crsvote'.

Then you can use the whole LUN as VM disk.
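For reference, a minimal sketch of those two steps from the command line (the volume ID is the one shown in the storage.cfg above; the flag can also be toggled in the GUI under Datacenter -> Storage):

```shell
# In /etc/pve/storage.cfg, allowing direct LUN use on the iSCSI storage
# corresponds to 'content images' instead of 'content none':
#
#   iscsi: is_crsvote
#       target iqn.2004-04.com.qnap:ts-569l:iscsi.crsvote.d6aff0
#       portal 192.168.0.100
#       content images

# Then attach the whole LUN to the VM as an additional disk:
qm set 101 -virtio2 is_crsvote:0.0.0.scsi-36001405fc2daf1edbfb4d4404d8ff3de
```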
 
Do not use the vg_vm storage. Instead, set the 'Use LUNs directly' flag on the iscsi storage 'is_crsvote'.

Then you can use the whole LUN as VM disk.

Ok, I will try that. Thanks.

In fact, I was able to create an LVM group and (somehow) it was visible to one of the guest VMs. However, I can't see it from the other one! Both VMs are identical.

Cheers.
 
Hi Dietmar,

Another thing ... vg_vm is the local volume group where the primary disks for the VMs are created, i.e. /dev/sda, /dev/sdb etc.

I want to use the iSCSI targets from my NAS to simulate a 'raw device' for Oracle which can be shared across the VMs (all running Oracle databases in a RAC configuration, i.e. Oracle's clustered database). Oracle needs shared storage for its RAC setup.
 
I want to use the iSCSI targets from my NAS to simulate a 'raw device' for Oracle which can be shared across the VMs (all running Oracle databases in a RAC configuration, i.e. Oracle's clustered database). Oracle needs shared storage for its RAC setup.

I thought I just answered that question?
 
Yes you did, and I tried turning 'Use LUNs directly' on, and yet I can see it only from one of the guest VMs, not from the other. Is there any setting on the guest VM that I need to change?

This is what I see from VM #1:
[root@oralinvm01 ~]# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root 9 Aug 1 18:43 ata-QEMU_DVD-ROM_QM00003 -> ../../hdc
lrwxrwxrwx 1 root root 9 Aug 1 18:43 scsi-36001405fc2daf1edbfb4d4404d8ff3de -> ../../sda

where sda is the iSCSI LUN ... and,

this is from VM #2:
[root@oralinvm02 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Aug 1 18:43 ata-QEMU_DVD-ROM_QM00003 -> ../../hdc

no sda showing here!

Both VMs are identical and on the same proxmox host (i have only one proxmox host.)
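One thing worth checking from the host side (a guess, since only VM 101's config was posted): whether the LUN disk line actually exists in both VM configs. Proxmox attaches the LUN per VM, so each node needs its own disk entry pointing at the same volume. VMID 102 for the second node is an assumption:

```shell
# Check both configs for the LUN:
qm config 101 | grep 36001405
qm config 102 | grep 36001405

# If the second VM has no such line, attach the same LUN there too; Proxmox
# does not arbitrate concurrent access, which is fine here since ASM does.
qm set 102 -virtio2 is_crsvote:0.0.0.scsi-36001405fc2daf1edbfb4d4404d8ff3de,cache=none
```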
 
Hi,


I recently discovered Proxmox VE 5, so I am a newbie; sorry for any mistakes.
I come from VirtualBox....

I have a Gen8 MicroServer (16 GB RAM + Xeon + 6 TB).
I'd like to create database servers (OEL 7.2) via KVM.
Creating a standalone Oracle server is OK, and easy to do.

But now I am trying to create an Oracle RAC cluster (on the same PVE host; I have only one Proxmox host).

Two KVM VMs are required (on the same PVE) plus some shared devices to implement the ASM storage.
On PVE 5 I'll create:
KVMserver1 => Oracle RAC node 1
KVMserver2 => Oracle RAC node 2

A storage device must be shared by the Oracle cluster nodes (read/write concurrent access by each node).
Is it possible to create virtual devices on PVE 5 that can be used as shared devices by both KVMs at the same time?

I can do it with VirtualBox, without using Openfiler and without any NAS or iSCSI server:

Example:

$ # Create the disks and associate them with VirtualBox as virtual media.
$ VBoxManage createhd --filename asm1.vdi --size 20480 --format VDI --variant Fixed
$ VBoxManage createhd --filename asm2.vdi --size 20480 --format VDI --variant Fixed

$ # Make them shareable.
$ VBoxManage modifyhd asm1.vdi --type shareable
$ VBoxManage modifyhd asm2.vdi --type shareable

$ # Connect them to the first VM : node1 .
$ VBoxManage storageattach ol7-122-rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
$ VBoxManage storageattach ol7-122-rac1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable


$ # Connect them to the second VM : node2
$ VBoxManage storageattach ol7-122-rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
$ VBoxManage storageattach ol7-122-rac2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable



Thanks for your thoughts and directions.

Regards

Mac
 
The easiest setup is to use a dedicated shared-storage VM, which exports a disk via iSCSI to the RAC nodes. This way you can snapshot the entire RAC.

So you need:

  • one VM with an iSCSI target and one shared disk (for a 12.2 RAC, you need at least 38 GB according to the installer)
  • (at least) two VMs as nodes, each with an iSCSI initiator, a connection to the target, and at least 8 GB RAM (the 12.2 minimum for GI)

Setup works fine, I'm running multiple non-production clusters for testing.
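A rough sketch of that layout, assuming LIO/targetcli in the storage VM and open-iscsi on the nodes (all IQNs, device names, and the 192.168.0.50 address are placeholders; ACL/auth setup is omitted):

```shell
## On the storage VM: export one big virtual disk (/dev/vdb here) as a LUN
targetcli /backstores/block create name=racshared dev=/dev/vdb
targetcli /iscsi create iqn.2017-08.local.lab:racshared
targetcli /iscsi/iqn.2017-08.local.lab:racshared/tpg1/luns create /backstores/block/racshared
targetcli saveconfig

## On each RAC node VM: discover and log in to the target
iscsiadm -m discovery -t sendtargets -p 192.168.0.50
iscsiadm -m node -T iqn.2017-08.local.lab:racshared -p 192.168.0.50 --login
# The shared LUN then shows up under /dev/disk/by-id/ and can be handed to ASM.
```

Since every node logs in to the same target, the disk is concurrently writable from all of them; ASM handles the coordination.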