Disk images vs. Raw LVM (Logical Volume) vs. Shared storage

michaeljk

Renowned Member
Oct 7, 2009
Hello,


After some tests on our new hardware, we would like to choose our new storage model. Currently we're running Proxmox 2.x on our production systems, mostly with raw disk images stored as simple files on ext3. These servers were installed with the standard Proxmox installation CD without any special settings.


We want to switch all current vServers to KVM virtualization; OpenVZ will no longer be used. This opens up several possibilities for the new infrastructure, with two main questions:


1. Will KVM I/O be noticeably faster on a raw LVM logical volume than on a raw/qcow2 file image? Our vServers will mostly contain simple LAMP installations (Apache2, PHP5, MySQL5, Postfix, ProFTPd, web frontend) with a lot of small read/write requests. Unfortunately, we have no way to compare our big vServers, which run on normal raw file images, with the same workload on an LVM LV. Does anyone here have real-world experience with such a setup and can tell us whether performance will increase? An image file is essentially a virtual filesystem inside an image on a host filesystem, so I would guess it is always slower in terms of I/O.
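For what it's worth, below is roughly the quick probe we would run against both backends before deciding. It is only a minimal sketch (fio would be the proper tool), and the two target paths at the bottom are placeholders for one of our raw image files and a test LV:

Code:
#!/usr/bin/env python3
# Minimal O_DIRECT random-read probe: bypasses the page cache so the
# raw image file and the LV are compared on actual disk I/O.
import mmap, os, random, time

BLOCK = 4096   # O_DIRECT needs aligned, block-sized requests
READS = 2000   # random reads per target

def probe(path):
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        buf = mmap.mmap(-1, BLOCK)        # page-aligned buffer for O_DIRECT
        blocks = size // BLOCK
        start = time.perf_counter()
        for _ in range(READS):
            off = random.randrange(blocks) * BLOCK
            os.preadv(fd, [buf], off)     # uncached read straight from disk
        elapsed = time.perf_counter() - start
        print(f"{path}: {READS / elapsed:.0f} random {BLOCK}-byte reads/s")
    finally:
        os.close(fd)

# Placeholder targets -- substitute a real image file and a test LV:
probe("/var/lib/vz/images/100/vm-100-disk-1.raw")
probe("/dev/pve/vm-100-disk-1")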


By default, the Proxmox installer creates one big volume group ("pve") with one big logical volume ("/dev/pve/data") for the image files. What would be the preferred setup if we choose local raw LVM as the storage model? Should we (a) install the Proxmox nodes from the installer CD and simply remove/shrink the "data" LV afterwards, or (b) would it be better to install a minimal Debian Wheezy with Proxmox on top of it?
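In case option (a) is the way to go, this is a rough sketch of the post-install steps as we currently understand them. It assumes /var/lib/vz (backed by /dev/pve/data) has already been unmounted and taken out of /etc/fstab; the storage ID "vmstore" is just a name we made up, and I'd be glad to hear if the pvesm invocation is wrong:

Code:
#!/usr/bin/env python3
# Sketch: reclaim the "data" LV and register the freed space in the
# "pve" VG as LVM storage for VM disks. Destructive -- test node only!
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

run(["lvremove", "-f", "/dev/pve/data"])          # free the space in "pve"
run(["pvesm", "add", "lvm", "vmstore",            # register the VG so VM
     "--vgname", "pve", "--content", "images"])   # disks become LVs in it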


2. Should we use local storage on each node (either image files or LVM volume groups), or switch to shared storage? The big advantages of shared storage would be live migration and central backups taken directly from the storage node. But it can also be a single point of failure, and I'm mostly concerned about the resulting speed over the network. If we use plain iSCSI on a separate VLAN, with a 10 Gbit connection on the storage node (and 1 Gbit or 2 Gbit bonded on the Proxmox nodes), together with LVM on top, would that be sufficient for several hundred vServers? If yes, which hardware should be used on the storage side?
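To make the bandwidth question concrete, this is the back-of-envelope arithmetic we've been doing; the 8 KiB request size and the round figure of 300 vServers are only assumptions for a LAMP-style workload:

Code:
# Wire-level ceiling of the 10 Gbit storage uplink for small random I/O.
link_gbit = 10        # storage-node uplink
vservers  = 300       # round figure for "several hundred"
req_bytes = 8 * 1024  # assumed average request size (MySQL-ish)

link_bytes_s = link_gbit * 1e9 / 8                 # ~1.25 GB/s raw
wire_iops    = link_bytes_s / req_bytes
print(f"wire limit: ~{wire_iops:,.0f} IOPS, "
      f"~{wire_iops / vservers:,.0f} IOPS per vServer")
# -> wire limit: ~152,588 IOPS, ~509 IOPS per vServer

By that math the link itself is rarely the bottleneck for small random I/O; the disks behind the storage node would saturate long before ~150k IOPS, which is why I'm also asking about the hardware on the storage side.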


Instead of iSCSI we could also switch to a clustered storage solution (Ceph, Sheepdog, GlusterFS, ...). Development on these solutions moves really fast, and as far as I can see, Proxmox 3 already integrates Ceph, while Sheepdog and GlusterFS are not yet considered production-ready? The wiki article on Ceph also seems out of date. Will RBD storage be nearly as fast as LVM over iSCSI, and how flexible is it, particularly when extending the storage or replacing a faulty node?
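To illustrate what I mean by flexibility, here is a small sketch using the python-rados/python-rbd bindings; the pool name "rbd" and the image name are placeholders, and it assumes a reachable cluster with /etc/ceph/ceph.conf in place:

Code:
#!/usr/bin/env python3
# Create a thin-provisioned RBD image and grow it online -- the sort of
# operation that takes LVM/iSCSI juggling on the alternative setup.
import rados, rbd

GiB = 1024 ** 3

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")     # pool that would hold VM disks
    try:
        rbd.RBD().create(ioctx, "test-disk", 10 * GiB)
        img = rbd.Image(ioctx, "test-disk")
        try:
            img.resize(20 * GiB)          # grow online, no downtime
            print("size is now", img.size() // GiB, "GiB")
        finally:
            img.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()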
 
