I'm currently planning the architecture for a private cloud and am leaning towards a Proxmox + Ceph setup: Proxmox for the VMs, and Ceph to provide an *expandable*, cost-effective, shared storage pool (as well as REST-based, S3-compatible object storage). Before committing to the hardware I want to fully understand how Proxmox uses storage on the compute nodes [not answered by: http://pve.proxmox.com/wiki/Storage_Model]. Specifically:
1. Under a SAN environment, is the KVM "image" copied from the SAN to the compute node before it is started (similarly to OpenStack)? That would require enough local storage on each node to cover its running VMs. Or, alternatively, are the KVM images run directly from the SAN (e.g. via an NFS or CephFS mount)?
2. Do Proxmox KVM instances have "ephemeral" root storage (the instance itself + runtime changes) and rely on mounted "volumes" [e.g. Amazon EBS] for persistent storage (similarly to OpenStack)?
3. Can Ceph's RBD be expanded (e.g. storage capacity increased by adding disks/OSDs) while Proxmox is actively using it as a disk? I'm still unsure which aspects of Ceph Proxmox is leveraging per http://pve.proxmox.com/wiki/Storage:_Ceph, but I assume it attaches to the RBD as an iSCSI target?
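For reference, this is the sort of online growth I'm hoping is possible. A sketch only, assuming a recent Ceph CLI; the device path and image name are placeholders, and I haven't run this against a live cluster:

```shell
# Grow the pool's raw capacity by adding a new OSD on a storage node
# (ceph-volume is the current tool; older releases used ceph-disk).
ceph-volume lvm create --data /dev/sdg   # /dev/sdg is a placeholder disk

# Grow an individual RBD image while it is in use; the guest OS still
# has to resize its partition/filesystem afterwards.
rbd resize --size 4T rbd/vm-100-disk-0   # image name is a placeholder
```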
My preliminary setup looks like:
3-node Proxmox cluster (each node: 96GB RAM, 2x E5-2609, 1x 128GB SSD [OS], 2x 2TB 7200RPM [instance storage??])
1-node database server (SSD-backed local storage)
3-node Ceph cluster (each node: 1x 128GB SSD [OS], 6x 2TB 7200RPM [Ceph OSDs])
1-node Ceph RADOS Gateway + front-end proxy
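As a sanity check on the storage math for the Ceph tier (a sketch only, assuming Ceph's default 3x replication and ignoring filesystem/journal overhead):

```shell
# 3 Ceph nodes x 6 OSDs per node x 2 TB per OSD
raw_tb=$((3 * 6 * 2))        # 36 TB raw
# Replicated pools keep 3 copies of every object by default
usable_tb=$((raw_tb / 3))    # ~12 TB usable
echo "raw: ${raw_tb} TB, usable: ~${usable_tb} TB"
```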
Please let me know if something looks awry. Thanks for your pointers!
~ Brice