Hi,
We have been using GlusterFS with Proxmox VE on a 3-host cluster for close to a year now for low-IOPS VMs without an issue.
Now we plan to build a new Proxmox cluster for a customer. Each server will have a Xeon E5-200 v4 (20 cores), 256 GB DDR4 RAM, a 120 GB SSD ZFS mirror for Proxmox, 4 × 1.92 TB SSDs (VM storage) plus 8 × 1.8 TB 10K RPM SAS disks for the storage pools, and 40 Gbps QDR InfiniBand for cluster/storage traffic.
Which would perform better on this hardware, Ceph or GlusterFS?
Now that Gluster supports sharded volumes and VM image files can be split into smaller chunks, the old problem of healing/syncing a single large file is no longer there.
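For context, this is roughly how sharding is turned on for a Gluster volume (the volume name `vmstore` and the shard size are just examples, not settings from our cluster):

```shell
# Enable sharding so large VM images are stored as fixed-size chunks
gluster volume set vmstore features.shard on

# Optionally tune the shard size (64MB is the default)
gluster volume set vmstore features.shard-block-size 64MB
```

With sharding on, a self-heal after a brick outage only has to sync the shards that changed, not the whole multi-GB image.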
Which is easier / faster to recover from a disk crash?
How much of a performance penalty do erasure-coded volumes have vs. replicated volumes?
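To make the comparison concrete, these are sketches of the two Gluster layouts in question (host and brick paths are hypothetical, for a 3-host setup like ours):

```shell
# Replicated volume: full 3-way mirror, 3x space overhead
gluster volume create vm-rep replica 3 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1

# Dispersed (erasure-coded) volume: 3 bricks, survives 1 brick failure,
# ~1.5x space overhead but extra encode/decode CPU cost on every write
gluster volume create vm-ec disperse 3 redundancy 1 \
    host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2
```

The question is how much the erasure coding math and the read-modify-write on partial stripes hurt random VM I/O in practice.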
What about maintaining multiple snapshots of the VMs on secondary storage outside the storage cluster (Ceph or GlusterFS), similar to what pve-zsync does with ZFS? For example: snapshots every 15 minutes kept for 1 day, every 4 hours kept for a week, weekly kept for a month, etc.
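To give a sense of the scale involved, a quick back-of-the-envelope count of how many snapshots that schedule keeps around per VM (the function is just illustrative arithmetic, not part of any tool):

```python
# Count snapshots retained at a given interval over a retention window.
def snapshots_retained(interval_hours: float, retention_days: float) -> int:
    """Number of snapshots kept when taken every `interval_hours`
    and retained for `retention_days`."""
    return int(retention_days * 24 / interval_hours)

fifteen_min = snapshots_retained(0.25, 1)   # every 15 min, kept 1 day
four_hourly = snapshots_retained(4, 7)      # every 4 hours, kept 1 week
weekly = snapshots_retained(24 * 7, 30)     # weekly, kept ~1 month

print(fifteen_min, four_hourly, weekly)     # 96 42 4
print(fifteen_min + four_hourly + weekly)   # 142 snapshots per VM
```

So the secondary storage would need to hold on the order of 140+ snapshots per VM, which matters for choosing between ZFS snapshots, Ceph RBD snapshots, or qcow2-on-Gluster.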