Can't share disk between VMs

chgonzalez

New Member
Apr 3, 2017
I need to share a virtual disk between three VMs with CentOS Linux. The filesystem will be GFS2. We are using both a Ceph cluster and a GlusterFS cluster as storage for Proxmox.

I've created an RBD image in Ceph and mapped it to all Proxmox nodes, then added it to the three VMs (I had to edit the /etc/pve/qemu-server/<id>.conf files by hand). After that, Proxmox shows the disk attached to the three VMs, but only the first VM can actually access the disk (it appears under "fdisk -l").
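For reference, the extra line I add by hand to each VM's config looks like this (the volume name comes from my setup, shown further down); I believe `qm set` can attach an already-existing volume as well, but I have only tried editing the files directly:
Code:
# line added to /etc/pve/qemu-server/<id>.conf on each of the three VMs
virtio1: ceph:vm-13101-disk-1,size=100G
# possible CLI equivalent (not verified here):
qm set <id> --virtio1 ceph:vm-13101-disk-1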

The same happens with a qcow2 image stored in GlusterFS: after editing the <id>.conf files, only the first VM can actually use it.

Does Proxmox support sharing a disk between VMs? We know we need a cluster filesystem to avoid data loss (hence GFS2), but after trying both Ceph and GlusterFS, our guess is that the problem lies in Proxmox itself.

PS: Sorry for my English.
 
Proxmox does not restrict access to a shared (raw) disk, so I don't think this is a Proxmox problem. But using a shared .qcow2 disk is technically impossible!
 
OK, so Proxmox does not restrict access to a shared disk, but still only the first VM "sees" the shared disk.

Is there a log or anything else I can check to debug this? I've grepped /var/log but haven't found anything useful. Any hint would be very welcome.
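In case it matters, these are the checks I can run on the Proxmox/Ceph side (the pool name is a placeholder, ours may be different):
Code:
# confirm the disk is attached in the VM config
qm config 13101 | grep virtio1
# see which clients have the RBD image open and whether it is locked
rbd status <pool>/vm-13101-disk-1
rbd lock list <pool>/vm-13101-disk-1
# list the image features (exclusive-lock etc.)
rbd info <pool>/vm-13101-disk-1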

Thanks in advance.
 
File /etc/pve/qemu-server/13101.conf in first Proxmox server:
Code:
bootdisk: virtio0
cores: 4
ide2: local:iso/CentOS-6.8-x86_64-minimal.iso,media=cdrom
memory: 4096
name: conf-web1
net0: virtio=46:D8:57:DE:FD:48,bridge=vmbr0,tag=131
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=7bbbc520-38e1-4ce7-a9b7-f95dc2e4efdd
sockets: 1
virtio0: neon-lvm:vm-13101-disk-1,size=100G
virtio1: ceph:vm-13101-disk-1,size=100G

File /etc/pve/qemu-server/13102.conf in second Proxmox server:
Code:
bootdisk: virtio0
cores: 4
ide2: local:iso/CentOS-6.8-x86_64-minimal.iso,media=cdrom
memory: 4096
name: conf-web2
net0: virtio=2E:40:E9:3C:93:CC,bridge=vmbr0,tag=131
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=245aa9cd-b45b-4b95-9c6a-1095411028fa
sockets: 1
virtio0: hidrogeno-lvm:vm-13102-disk-1,size=100G
virtio1: ceph:vm-13101-disk-1,size=100G

Both VMs have a local disk (LVM) and a second, shared disk (Ceph), but only the first VM can "see" the second disk.
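Inside the guests I'm simply checking for the second disk like this (CentOS 6, so nothing fancy):
Code:
# the shared disk should show up as /dev/vdb
cat /proc/partitions
fdisk -l /dev/vdb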
 
Maybe some problem with Ceph; in general this works perfectly with LVM. But be aware that you cannot snapshot the VMs correctly, because the second machine uses a disk that belongs to the first one.

I solved a similar problem by using a third VM as an iSCSI target, exporting one disk to a number of machines to simulate a SAN, so that everything is snapshottable.
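With targetcli, for example, the export looks roughly like this (IQNs, backstore name and device are just placeholders, adapt them to your environment):
Code:
# export the disk you want to share as a block backstore
targetcli /backstores/block create name=shared0 dev=/dev/vdb
# create the target and publish the backstore as a LUN
targetcli /iscsi create iqn.2017-04.com.example:shared0
targetcli /iscsi/iqn.2017-04.com.example:shared0/tpg1/luns create /backstores/block/shared0
# allow each client VM's initiator IQN
targetcli /iscsi/iqn.2017-04.com.example:shared0/tpg1/acls create iqn.2017-04.com.example:client1
targetcli saveconfig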
 
I didn't know about GFS2. Thanks for this post, I will try it someday ;)

I see you have GlusterFS storage. Wouldn't it be easier to mount a Gluster share on every VM to access the data, instead of using a shared disk? For the record, that's my setup at home to share my home directory between the different VMs.

But if it works with a shared disk and GFS2, that could be an alternative.
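With the Gluster native client it's just a mount inside each VM; something like this (server and volume names are examples):
Code:
yum install -y glusterfs-fuse
mkdir -p /mnt/shared
mount -t glusterfs gluster-server:/data /mnt/shared
# or permanently in /etc/fstab:
# gluster-server:/data  /mnt/shared  glusterfs  defaults,_netdev  0 0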
 
Maybe some problem with Ceph.
The same problem appears when using GlusterFS, so I'm not sure whether Ceph is to blame here.
I solved a similar problem by using a third VM as an iSCSI target, exporting one disk to a number of machines to simulate a SAN, so that everything is snapshottable.
Guess I'll have to test this option. Sadly, it requires an additional VM.

I'm still puzzled as to why Proxmox doesn't allow an RBD on Ceph or a qcow2 image on GlusterFS to be shared between two or more VMs.
 
Do you get any output when you start the VMs from the command line via `qm start $vmid`?
 
No, there is zero output when starting the VMs via `qm start`. But something even stranger happens: now the second VM can "see" the shared disk, but the first VM has "lost" it.

This is the output of "dmesg" in the first VM:
Code:
end_request: I/O error, dev vdb, sector 0
Buffer I/O error on device vdb, logical block 0
Buffer I/O error on device vdb, logical block 1
Buffer I/O error on device vdb, logical block 2
Buffer I/O error on device vdb, logical block 3
end_request: I/O error, dev vdb, sector 0
Buffer I/O error on device vdb, logical block 0
end_request: I/O error, dev vdb, sector 209715192
Buffer I/O error on device vdb, logical block 26214399
end_request: I/O error, dev vdb, sector 209715192
Buffer I/O error on device vdb, logical block 26214399
end_request: I/O error, dev vdb, sector 0
Buffer I/O error on device vdb, logical block 0
end_request: I/O error, dev vdb, sector 0
Buffer I/O error on device vdb, logical block 0
Buffer I/O error on device vdb, logical block 1
Buffer I/O error on device vdb, logical block 2
Buffer I/O error on device vdb, logical block 3
end_request: I/O error, dev vdb, sector 0
Buffer I/O error on device vdb, logical block 0
end_request: I/O error, dev vdb, sector 209715192
Buffer I/O error on device vdb, logical block 26214399
end_request: I/O error, dev vdb, sector 209715192
Buffer I/O error on device vdb, logical block 26214399
end_request: I/O error, dev vdb, sector 0
Buffer I/O error on device vdb, logical block 0
 
OK, I have no idea what is going on, but I started the second VM with `qm start`, then stopped and started the first and third VMs... and now all three VMs can "see" the shared disk!

So, as far as I can tell, wbumiller's suggestion produced some unknown change in my Proxmox cluster :)
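Next step for us is the GFS2 filesystem itself. Roughly what I plan to run once the cluster stack (cman/dlm) is configured on the three nodes; the cluster and filesystem names below are placeholders:
Code:
# on one node only: 3 journals, one per VM; cluster name must match cluster.conf
mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 3 /dev/vdb
# on every node
mkdir -p /mnt/gfs2
mount -t gfs2 /dev/vdb /mnt/gfs2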
 
