Hi all,
I am planning to branch out from basic web hosting and start providing specialised VPS solutions to certain customers, and Proxmox VE seems like a good platform to build it on.
The initial cloud will probably consist of 3-4 nodes linked by GbE. Live migration will be a requirement, and probably HA as well. Hopefully the cluster will grow to more nodes before too long.
What I can't decide is how to provision the storage (I will be using standard servers configured for serving storage, not high-end NAS/SAN hardware).
Firstly, NFS seems the simplest and most flexible in terms of backups, and in terms of switching over to a spare storage server should there ever be a problem. I'm just not sure whether the performance and scalability are there, especially as the number of VMs on each host increases. What are your thoughts and experiences?
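For reference, the sort of NFS setup I have in mind is just a plain entry in /etc/pve/storage.cfg, something along these lines (the storage name, server address and export path here are placeholders, not a real setup):

```
nfs: nfs-vmstore
        server 192.168.0.10
        export /srv/vmstore
        path /mnt/pve/nfs-vmstore
        content images
        options vers=3
```

Part of the appeal is that failing over to a spare storage server would, in principle, just mean restoring the data and repointing the `server` line.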
Next up is iSCSI. I haven't seen much to suggest it's any more scalable than NFS would be, and it's more complicated to implement and to move around if there is a problem. I also haven't seen anything to suggest improved I/O performance for the added complexity, although that may be inaccurate.
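As I understand it, the usual Proxmox VE approach here is to export a LUN over iSCSI and layer LVM on top so volumes can be used from all nodes, which is already more moving parts than the NFS case. A rough storage.cfg sketch of what I mean (portal address, target IQN and VG name are all made up for illustration):

```
iscsi: san-target
        portal 192.168.0.20
        target iqn.2011-10.com.example:vmstore
        content none

lvm: san-lvm
        vgname vg_vmstore
        content images
        shared 1
```

Here the volume group `vg_vmstore` would be created on the iSCSI LUN beforehand, and the `shared` flag tells the cluster the LVM storage is visible from every node.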
Moving on from that, there are the full-blown SAN options, but those are too expensive at the moment. Maybe later, if things go well and it can be justified.
Finally, there are the "new" storage solutions like GlusterFS (very slow currently), Ceph and Sheepdog, with Proxmox VE support for the latter two imminent.
So I would be very interested to hear from anyone running larger clusters, and from anyone who has tested the "new" storage solutions: what are your thoughts/experiences with things like Ceph or Sheepdog? (From an architecture perspective, Sheepdog seems a good fit because it has no need for a metadata server.)
I think a detailed discussion on storage may be useful to others searching for information; I couldn't find anything comprehensive.
Thanks.