Shared virtual disk on VMs

remzi akyuz

New Member
Feb 5, 2016
Hi,

Can I use shared disks in VMs? Is it possible?

I want to use these two disks in two VMs:

virtio1: depo0:8201/vm-8201-disk-3.qcow2,size=2G
virtio2: depo0:8201/vm-8201-disk-2.qcow2,size=2G

How can I share them?

Thanks.


root@proxmox:~# qm config 8201
agent: 1
balloon: 2048
bootdisk: ide0
cores: 4
hotplug: memory,cpu,network,disk,usb
ide0: depo0:8100/base-8100-disk-1.raw/8201/vm-8201-disk-1.qcow2,cache=writeback,size=64G
memory: 8704
name: w2k12cls1
net0: virtio=66:65:38:63:32:65,bridge=vmbr0
net1: virtio=66:37:66:34:38:39,bridge=vmbr1
numa: 1
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=ad84ea64-d9ce-4152-a4e0-02be1ec31807
sockets: 1
vcpus: 2
vga: cirrus
virtio1: depo0:8201/vm-8201-disk-3.qcow2,size=2G
virtio2: depo0:8201/vm-8201-disk-2.qcow2,size=2G
root@proxmox:~#

root@proxmox:~# qm config 8202
agent: 1
balloon: 2048
bootdisk: ide0
cores: 4
hotplug: memory,cpu,network,disk,usb
ide0: depo0:8100/base-8100-disk-1.raw/8202/vm-8202-disk-1.qcow2,cache=writeback,size=64G
memory: 8704
name: w2k12cls2
net0: virtio=32:61:63:36:61:39,bridge=vmbr0
net1: virtio=66:61:63:64:63:39,bridge=vmbr1
numa: 1
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=73f21ed5-f1bd-4bf3-b063-d330f5a0ddd3
sockets: 1
vcpus: 2
vga: cirrus
 
Thanks for the information.
But it is not enough for us, because a cluster file system requires a shared disk.
 
OK, well I've not tried it, but you could try simply editing the VMs' config files so that they use the same disk image (e.g. both using vm-8201-disk-1.qcow2 for ide0).

You can't do that from the GUI though - you'll have to edit the VMs' configs directly in /etc/pve/nodes/proxmox/qemu-server/

Whether this actually works may depend on the guest though - I might try it myself on my test machine as I'm curious to see what might happen.
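
Something like this, for example (untested sketch - whether two guests can safely open the same qcow2 at once is exactly the open question here). Edit /etc/pve/nodes/proxmox/qemu-server/8202.conf and point VM 8202 at the volumes VM 8201 already owns:

virtio1: depo0:8201/vm-8201-disk-3.qcow2,size=2G
virtio2: depo0:8201/vm-8201-disk-2.qcow2,size=2G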
 
Hi JonathanB19 and fireon,
Thanks for your advice.

I could use GlusterFS/iSCSI or something similar, but that solution takes more resources:

Proxmox <<-->> iSCSI server VM under Proxmox (FreeNAS/OpenFiler) <<-->> MS Windows iSCSI client 1
                                                                 <<-->> MS Windows iSCSI client 2
I don't like editing the config files by hand.

I will try a CentOS qemu-kvm solution. Maybe CentOS qemu-kvm will be useful this time.
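
What I have in mind on CentOS is roughly this (only a sketch - the image paths and the vg_shared/quorum device are made-up examples; both guests would attach the same raw block device with cache=none):

qemu-kvm -m 4096 -smp 4 \
  -drive file=/var/lib/libvirt/images/cls1.qcow2,if=virtio \
  -drive file=/dev/vg_shared/quorum,format=raw,if=virtio,cache=none

qemu-kvm -m 4096 -smp 4 \
  -drive file=/var/lib/libvirt/images/cls2.qcow2,if=virtio \
  -drive file=/dev/vg_shared/quorum,format=raw,if=virtio,cache=none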
 
Hi,

I'm testing sharing virtual disks between VMs in order to deploy an OCFS2 cluster.

I have Proxmox with shared LVM storage.

I've modified the configuration files by hand, simply copying the disk configuration lines and disabling the cache.

This works fine, but I experienced an issue when migrating the VMs. In fact, with a two-node Proxmox cluster and one of these disk-sharing VMs on each node, if I force a live migration of VM1 from node 1 to node 2, the migration succeeds and the task status is OK; but when I move VM1 back to node 1, the migration completes and the task status is ERROR.

The failure is in deactivating the LVs, I think because Proxmox tries to deactivate the LVs even though the same LVs are currently in use by VM2.
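
If I'm right, the same message can be reproduced by hand on the node where VM2 is still running (illustrative session; device path taken from the logs below):

root@proxmox:~# lvchange -an /dev/LVM_PROXOMOX_1/vm-1010-disk-4
  Logical volume LVM_PROXOMOX_1/vm-1010-disk-4 in use.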

Logs:
May 17 09:38:19 migration status: completed
can't deactivate LV '/dev/LVM_PROXOMOX_1/vm-1010-disk-4': Logical volume LVM_PROXOMOX_1/vm-1010-disk-4 in use.
can't deactivate LV '/dev/LVM_PROXOMOX_1/vm-1010-disk-2': Logical volume LVM_PROXOMOX_1/vm-1010-disk-2 in use.
can't deactivate LV '/dev/LVM_PROXOMOX_1/vm-1010-disk-3': Logical volume LVM_PROXOMOX_1/vm-1010-disk-3 in use.
can't deactivate LV '/dev/LVM_PROXOMOX_1/vm-1010-disk-5': Logical volume LVM_PROXOMOX_1/vm-1010-disk-5 in use.
May 17 09:38:47 ERROR: volume deativation failed: LVM_proxmox_1:vm-1010-disk-4 LVM_proxmox_1:vm-1010-disk-2 LVM_proxmox_1:vm-1010-disk-3 LVM_proxmox_1:vm-1010-disk-5 at /usr/share/perl5/PVE/Storage.pm line 932.
May 17 09:38:52 ERROR: migration finished with problems (duration 00:01:30)
TASK ERROR: migration problems

Confs:

VM1
bootdisk: virtio0
cores: 4
ide2: zfs_iso_temp:iso/V100082-01.iso,media=cdrom
memory: 4096
name: mailserver1.1
net0: virtio=66:35:32:37:38:64,bridge=vmbr2,tag=67
numa: 0
ostype: l26
smbios1: uuid=fb5680ee-c49c-4e8e-a3c7-b63d507a38b2
sockets: 1
virtio0: LVM_proxmox_1:vm-1010-disk-1,size=8G
virtio1: LVM_proxmox_1:vm-1010-disk-2,cache=none,size=8G
virtio2: LVM_proxmox_1:vm-1010-disk-3,cache=none,size=8G
virtio3: LVM_proxmox_1:vm-1010-disk-4,cache=none,size=8G
virtio4: LVM_proxmox_1:vm-1010-disk-5,cache=none,size=8G


VM2
bootdisk: virtio0
cores: 4
ide2: zfs_iso_temp:iso/V100082-01.iso,media=cdrom
memory: 4096
name: mailserver1.2
net0: virtio=36:65:30:31:31:39,bridge=vmbr2,tag=67
numa: 0
ostype: l26
smbios1: uuid=f5f9bbea-54a5-4ae6-8115-dda742479b5c
sockets: 1
virtio0: LVM_proxmox_1:vm-1011-disk-1,size=8G
virtio1: LVM_proxmox_1:vm-1010-disk-2,cache=none,size=8G
virtio2: LVM_proxmox_1:vm-1010-disk-3,cache=none,size=8G
virtio3: LVM_proxmox_1:vm-1010-disk-4,cache=none,size=8G
virtio4: LVM_proxmox_1:vm-1010-disk-5,cache=none,size=8G
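
A quick way to confirm which LVs a node is holding open (hypothetical output - the 'o' in the attribute field marks an open volume; here disks 2-5 are held by VM2 while VM1's boot disk is inactive):

root@proxmox:~# lvs -o lv_name,lv_attr LVM_PROXOMOX_1 | grep vm-1010
  vm-1010-disk-1 -wi-------
  vm-1010-disk-2 -wi-ao----
  vm-1010-disk-3 -wi-ao----
  vm-1010-disk-4 -wi-ao----
  vm-1010-disk-5 -wi-ao----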
 
