Hi, I'm curious whether anyone can comment on the following.
I'm looking at setting up a modest-sized Proxmox cluster for a client. It will run in an OVH hosted environment, i.e. we have access to various pre-configured server builds, and my goal is a config that balances monthly spend against features (CPU, RAM, and storage).
The client has a requirement for HA VM storage. My options for this appear to be:
(a) A hyper-converged storage solution built in at the Proxmox node level. I don't think Ceph is a great fit given the size of their environment (roughly 5 physical server nodes), and we lack true 10GbE for Ceph; the 'vRack' OVH offers now seems to cap at around 3Gbit, i.e. roughly 30% of a 10Gbit pipe, for the most part.
(b) Possibly something like LizardFS is a good candidate. I've used it before in collaboration with a colleague, but not as HA primary storage for VMs with Proxmox; rather as a "VM backup storage target", where a few dedicated nodes acted as storage servers. The Lizard server roles were not on Proxmox at all, and the Proxmox hosts acted purely as Lizard clients (see the sketch after this list). In that config it has been smooth and stable, but I do realize that primary VM storage is a very different role and config.
(c) Not sure if I'm missing some other 'good option' which is actually stable and manageable.
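For context on (b): in that earlier setup the LizardFS share was simply mounted on each Proxmox host and registered as a plain directory storage, restricted to backups. Roughly along these lines (master hostname, mount path, and storage name are placeholders from memory, not an exact copy of that config):

    # mount the LizardFS export on each Proxmox node (master address is a placeholder)
    mfsmount /mnt/lizardfs -H lizard-master.example.lan
    # register it in Proxmox as a directory storage, backups only, marked shared
    pvesm add dir lizard-backup --path /mnt/lizardfs --content backup --shared 1

That worked fine because a backup target tolerates latency and brief outages in a way that live VM disks obviously don't.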
I am not keen on anything based on DRBD. I tested it a few years ago and it always felt too delicate whenever a server went offline, and the recovery process was sometimes outright painful, at least in my testing. Possibly I just didn't do things right, but at the end of the day it was enough hassle that I'm not open to going this route.
I realize there are now more ZFS integration features in Proxmox under the hood than there were a few years ago, including async ZFS replication, which is 'pretty close to HA' for many use cases. I don't think (?) there is a realtime synchronous ZFS HA setup I can easily do which is also reliable and easy, but if I'm wrong on this please let me know.
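To be clear, by async replication I mean the built-in storage replication jobs (pvesr) for local ZFS disks; something along these lines, where the VM ID, target node, and interval are made up purely for illustration:

    # replicate the local ZFS disks of VM 100 to node 'pve2' every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule "*/15"
    # check replication state across the cluster
    pvesr status

My understanding is that on failover you can lose up to one replication interval of writes, which is why I call it 'pretty close to HA' rather than true HA.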
Anyhow. Any comments (ideally grounded in real-world experience) are very much appreciated.
Thanks,
Tim