OK, I have no idea what is going on, but I started the second VM with "qm start", then stopped and started the first and third VMs... and now all three VMs can "see" the shared disk!
So, AFAIK, wbumiller's suggestion has produced an unknown change in my Proxmox cluster :)
No, there is zero output when starting the VM via "qm start". But something even stranger happens: now the second VM can "see" the shared disk, but the first VM has "lost" it.
This is the output of "dmesg" in the first VM:
end_request: I/O error, dev vdb, sector 0
Buffer I/O...
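For anyone hitting the same errors, this is roughly how I check whether the guest can actually reach the block device (/dev/vdb is the shared disk in my case; adjust to yours):

```shell
# Inside the affected VM: is the shared disk visible at all?
lsblk /dev/vdb

# Try a small raw read; with the I/O errors above this fails immediately
dd if=/dev/vdb of=/dev/null bs=4k count=1 iflag=direct

# Check the most recent kernel messages for vdb errors
dmesg | grep -i vdb | tail -n 20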
The same problem appears when using GlusterFS, so I'm not sure whether Ceph is to blame here.
Guess I'll have to test this option. Sadly, it requires an additional VM.
I'm still puzzled why Proxmox doesn't allow an RBD over Ceph or a qcow2 image over GlusterFS to be shared between two or more VMs.
I assume you're talking about this: https://docs.gluster.org/en/v3/Administrator%20Guide/Setting%20Up%20Clients/#using-nfs-to-mount-volumes
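In case it helps anyone following along: assuming the volume is called gv0 and one Gluster node is reachable as gluster1 (both names are just placeholders for my setup), mounting over NFS from inside a VM would look roughly like this (Gluster's built-in NFS server speaks NFSv3 only):

```shell
# Inside the VM: mount the Gluster volume via NFS instead of the native client
# (gluster1 and gv0 are placeholders for your own node and volume names)
mkdir -p /mnt/shared
mount -t nfs -o vers=3,nolock gluster1:/gv0 /mnt/shared
```

Though, as I said, this only works if the VMs can actually route to the storage network.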
Problem is the VMs don't "see" the storage network where GlusterFS lives. Every Proxmox node has access to the storage network (Ceph and GlusterFS), but...
OK, so Proxmox does not restrict access to the shared disk, but still only the first VM "sees" it.
Is there a log or anything I can check to debug this? I've grep'ed /var/log but didn't find anything useful. Any hint would be very helpful for us.
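For the record, these are the places I've been looking on the Proxmox node so far (nothing conclusive yet; the VMID 101 is just an example, and paths may differ on your version):

```shell
# On the Proxmox node: dump the exact kvm command line used for the VM
qm showcmd 101

# System log entries since boot mentioning kvm or the VM
journalctl -b | grep -i kvm

# Ceph client logs, if any were written for this RBD
ls -l /var/log/ceph/
```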
Thanks in advance.
I need to share a virtual disk between three VMs with CentOS Linux. The filesystem will be GFS2. We are using both a Ceph cluster and a GlusterFS cluster as storage for Proxmox.
I've created an RBD in Ceph and mapped it to all Proxmox nodes, then added it to the three VMs (need to edit the...
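To give the full picture, this is roughly the sequence I'm attempting. VMIDs, the pool name (ceph-pool) and the volume name are from my setup and purely illustrative; also, Proxmox may refuse to attach another VM's volume via "qm set", which is why the attach step can end up being a manual edit of /etc/pve/qemu-server/&lt;vmid&gt;.conf:

```shell
# On a Proxmox node: attach the same RBD-backed volume to each VM.
# cache=none matters for a disk shared between guests.
qm set 101 -virtio1 ceph-pool:vm-101-disk-1,cache=none
qm set 102 -virtio1 ceph-pool:vm-101-disk-1,cache=none
qm set 103 -virtio1 ceph-pool:vm-101-disk-1,cache=none

# Inside one VM (with the cluster stack and DLM running): format for GFS2,
# one journal per node (-j 3), lock table is <clustername>:<fsname>
mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 3 /dev/vdb

# Then on every VM:
mount -t gfs2 /dev/vdb /mnt/shared
```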