Hi,
I was able to test a storage cluster backend for Proxmox yesterday. Since I only had two servers, which should run a replicated setup with the option to add more servers later, I chose GlusterFS.
The setup was really easy, and I could add the storage both via the PVE GUI and locally on the command line.
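For reference, this is roughly how I added it locally (the storage ID "gluster-store", the server address and the volume name "gv0" are just placeholder examples, not my real values):

    # Add the GlusterFS volume as a PVE storage, equivalent to the GUI dialog
    pvesm add glusterfs gluster-store --server 192.168.0.1 --volume gv0 --content images

    # This ends up in /etc/pve/storage.cfg roughly as:
    # glusterfs: gluster-store
    #     server 192.168.0.1
    #     volume gv0
    #     content images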
I have two servers, each running a RAID10 with 6 SAS disks, and they are directly connected via 1 Gb Ethernet. Normally I would have done this with DRBD, but I wanted to be able to expand this cluster setup later.
As I understand it, Ceph only makes sense starting with 3 nodes?
So here's my problem: all VMs running raw images run quite well and the speed is OK; I didn't expect a miracle, because I don't have a distributed setup. But every time I choose a qcow2 image (during image creation, or when moving a disk from local to remote storage), I get massive I/O wait and the RAID disks run wild. Proxmox reports lock timeouts, and I have to wait a couple of minutes for Gluster to fsync all pending writes.
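To illustrate, the qcow2 images are created the standard way on the FUSE mount; I'm wondering whether preallocating the qcow2 metadata would at least reduce the metadata/fsync load (path, VM ID and size are just examples):

    # Standard qcow2 creation on the FUSE-mounted Gluster storage
    qemu-img create -f qcow2 /mnt/pve/gluster-store/images/101/vm-101-disk-1.qcow2 32G

    # Variant I'd like to try: preallocate the qcow2 metadata up front
    qemu-img create -f qcow2 -o preallocation=metadata /mnt/pve/gluster-store/images/101/vm-101-disk-1.qcow2 32G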
From what I've read, GlusterFS 3.4 setups should work very well with QEMU. Should I avoid using the storage as a filesystem mount and instead create the VM manually on Gluster using libgfapi?
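What I mean is letting QEMU talk to Gluster directly through its gluster:// block driver instead of going through the FUSE mount, something like this (server, volume and image names are again just placeholders):

    # Create the image directly on the Gluster volume via libgfapi
    qemu-img create -f qcow2 gluster://192.168.0.1/gv0/images/101/vm-101-disk-1.qcow2 32G

    # Attach it the same way, so qemu uses the gluster block driver
    # instead of the FUSE mount
    qemu-system-x86_64 ... -drive file=gluster://192.168.0.1/gv0/images/101/vm-101-disk-1.qcow2,if=virtio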
Will there be an option in the PVE GUI for this, or is the non-FUSE mount of the storage enough?
Maybe someone could add their Gluster experience to the Proxmox wiki.