Hello,
we are currently running Proxmox 1.9 with 4 cluster nodes and about 100 VMs (mainly KVM, some OpenVZ). Hardware configuration for each node:
2x QuadCore Intel Xeon E5620 @ 2.60 GHz with Hyperthreading
32 GB RAM
6x 300 GB SAS Drives (Hardware RAID10)
In the next few weeks we need to move our cluster to a new datacenter. The new cluster will be based on Proxmox 2; we will back up each VM and transfer it to the new location. With the new setup, we would also like a flexible storage solution. Unfortunately, there are many options around: hardware SAN, distributed replicated storage (GlusterFS, MooseFS, Ceph, Sheepdog, ...), software SAN like Nexenta, NFS, and more.
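For the per-VM part of the move, our plan is roughly the standard backup-and-restore route with Proxmox's own tools. A minimal sketch (VMID 101, the dump directory, the archive name, and the target hostname/storage are all placeholders):

```shell
# On an old 1.9 node: create a compressed full backup of VM 101
vzdump 101 --compress --dumpdir /mnt/backup

# Copy the dump archive to a new node (hostname is a placeholder)
scp /mnt/backup/vzdump-qemu-101-*.tgz root@new-node:/var/tmp/

# On the new Proxmox 2 node: restore the archive as VM 101
# onto whatever storage we end up choosing ("local" here as an example)
qmrestore /var/tmp/vzdump-qemu-101-*.tgz 101 --storage local
```

OpenVZ containers would go the same way with `vzrestore` instead of `qmrestore`, unless we convert them to KVM first.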
A hardware SAN is too expensive. Each of the new nodes will have 2x GBit NICs. We have already tried some distributed filesystems, but they were either too slow in terms of I/O or had too many bugs and were not recommended for production use at the time. We want a flexible solution, so that faulty hardware can be removed without interrupting the cluster and the storage can be extended easily at any time. As mentioned, we also run some OpenVZ systems, but if necessary we would replace them with regular KVM images to get a solid, working solution. A software SAN built with 10 TB of storage from the start that cannot be extended later is no solution for us.
I know that Proxmox 2 has beta support for Ceph and Sheepdog, but I'm not sure whether they are ready for production use yet. If someone has already built a working cluster with 250-500 VMs on distributed storage, I would really like to hear your recommendations for the configuration and why you chose a specific storage backend.
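If Ceph does turn out to be production-ready, the Proxmox 2 side of the configuration at least looks simple; as far as I can tell an RBD entry in /etc/pve/storage.cfg would look roughly like this (storage name, monitor address, and pool are placeholders):

```
rbd: ceph-vm
        monhost 192.168.0.10
        pool rbd
        username admin
        content images
```

The hard part, I assume, is sizing and operating the Ceph cluster itself over 2x GBit links, which is exactly where I'd like to hear real-world experience.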
Michael