Which storage type?

svensven

New Member
Oct 5, 2014
We want to use Proxmox in a production environment for a server that receives a lot of web requests, serving a website and a REST service backed by a MySQL and a MongoDB database. The MongoDB data files are relatively big and change frequently, and the server constantly gets many small requests from many users. With Proxmox we aim to be able to do a live migration to another host in case our server hardware fails. We have 2 host servers with SSDs, both located at the same data centre, and our idea is to create a KVM-based guest with Proxmox and to distribute the VM disk image so that a fast live migration is possible.

As far as I can see, we will need a storage type that allows 2 hosts to share the storage. Looking at the Proxmox wiki I found several supported types, including GlusterFS and RBD/Ceph. (While browsing the web I also found DRBD, but that doesn't seem to be supported by Proxmox; is that correct?) However, I didn't find any information about which type may be the best for our scenario.
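From the wiki it looks like a shared storage is just an entry in /etc/pve/storage.cfg, for example for GlusterFS (the server address and volume name here are just placeholders I made up, not our real setup):

    glusterfs: gluster-vms
        server 192.168.1.10
        volume vmdata
        content images

Is that roughly how it would look for a 2-host setup like ours?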

Does anyone have experience with these? Which storage type would you prefer?

As a side question: do any of them support compression? MongoDB creates relatively big data files which compress very well. I could imagine that this would help during the network transfer, and maybe it would also let us use the space on the drives more efficiently...?
 
You could also consider ZFS as a storage option. Be aware that due to a kernel bug you will not be able to live-migrate containers (OpenVZ) deployed on GlusterFS.

As for your question regarding compression: no matter what you choose for storage, Proxmox will not be able to apply compression during migration (storage migration). For host migration this is not an issue, since host migration only transfers the configuration and, in the case of live migration, also the memory.
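If you end up on ZFS, compression can instead be enabled on the storage itself, independent of Proxmox. For example (the pool/dataset name is just an example):

    zfs set compression=lz4 tank/vmdata

lz4 costs very little CPU, and data like MongoDB files usually compresses well, so you save disk space without Proxmox having to know anything about it.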
 
You could also consider ZFS as a storage option.

Thanks a lot for the tip, but in the wiki it looked like ZFS is only supported as local storage and not for replication/distribution. If we use ZFS, would that mean we need to distribute it manually, i.e. without involving Proxmox?

We plan to use KVM



Be aware that due to a kernel bug you will not be able to live-migrate containers (OpenVZ) deployed on GlusterFS.

OK, but we are currently planning to use KVM anyway, because as far as I know this makes live migration easier. Is that correct?



Of Ceph, GlusterFS, and ZFS: what would you recommend?
 
Thanks a lot for the tip, but in the wiki it looked like ZFS is only supported as local storage and not for replication/distribution. If we use ZFS, would that mean we need to distribute it manually, i.e. without involving Proxmox?
That is a misunderstanding: it is the other way around. ZFS is not supported as local storage; it is only supported as remote storage, e.g. from OmniOS or Nexenta appliances. ZoL (ZFS on Linux) can be used but is not recommended for enterprise use. Read more here: http://pve.proxmox.com/wiki/Storage:_ZFS
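To give you an idea, a remote ZFS storage is just another entry in /etc/pve/storage.cfg, roughly like this (all values here are made-up examples for an OmniOS box exporting over iSCSI via COMSTAR):

    zfs: omnios-store
        portal 192.168.1.20
        target iqn.2010-08.org.illumos:02:vmstore
        pool tank
        iscsiprovider comstar
        blocksize 4k
        content images

Proxmox then creates a zvol per VM disk on the pool and accesses it over iSCSI, so both nodes see the same storage.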
OK, but we are currently planning to use KVM anyway, because as far as I know this makes live migration easier. Is that correct?
Well, kind of, since live migration is not really possible with OpenVZ until Proxmox supports ploop: http://openvz.org/Ploop
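With a KVM guest whose disks sit on shared storage it is a single command (the VM ID and node name are placeholders):

    qm migrate 101 node2 --online

Only the configuration and the memory are transferred; the disks stay where they are on the shared storage.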
Of Ceph, GlusterFS, and ZFS: what would you recommend?
If you have the money I would recommend Nexenta (ZFS), since Nexenta supports clustered storage (HA). Ceph is also an option, but if you need a lot of IO and throughput it requires at least 3 dedicated Ceph servers with a lot of disks. The cheap way, if you can live without clustered storage, is OmniOS (ZFS). If you are not experienced with Solaris, I can recommend installing a web interface named napp-it. If you are planning to use it in production, I would also recommend buying the add-on monitoring extension for napp-it, since it gives you very detailed live health and performance statistics. See http://www.napp-it.org/extensions/monitoring_en.html
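For completeness, if you try Ceph, the RBD storage entry in /etc/pve/storage.cfg looks roughly like this (monitor addresses, pool, and user are placeholders):

    rbd: ceph-vms
        monhost 192.168.1.10 192.168.1.11 192.168.1.12
        pool rbd
        username admin
        content images

But as said, for decent performance plan on at least 3 dedicated Ceph nodes, not just your 2 hosts.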