Confused! (KVM) disk raw or qcow2, NFS or iSCSI - please help me decide

kumarullal

Renowned Member
Jun 17, 2009
LA, USA
Hi all,
I need some help in deciding what to do.
I have set up a 2-node cluster (Dell 710, 64 GB RAM) with Proxmox 2.1 with HA (fencing configured).
I was so excited to move all my VMs from VMware to Proxmox. I have already migrated 5 VMs (Windows 2003 servers). The disks are qcow2 and the storage is NFS (Iomega StorCenter ix4-200d).
Then I fired up all the VMs. Everything was good. But when I compared the performance of the same VMs against VMware, I found that they were running much faster in VMware than in Proxmox.
So I attributed the difference to the storage, since VMware was using iSCSI and Proxmox was using NFS.
I decided to move the VMs in Proxmox from NFS to iSCSI and then do a comparison.
So I added iSCSI targets to Proxmox and then created LVM volumes on top of the iSCSI LUNs.
I soon realized that I could not simply move my VMs from NFS to iSCSI (LVM), because the disks were in qcow2 format and LVM storage only supports raw.
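Roughly, that step comes down to something like this on each node (the portal address, IQN, device name, and volume group name below are only placeholders, not my actual values):

```shell
# Discover and log in to the iSCSI target (portal IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2012-06.com.example:storage.lun1 -p 192.168.1.50 --login

# The LUN shows up as a new block device (e.g. /dev/sdb);
# turn it into an LVM volume group that Proxmox can use
pvcreate /dev/sdb
vgcreate vg_iscsi /dev/sdb
```

In the Proxmox GUI you then add the iSCSI target as storage and put an LVM storage on top of the volume group.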
So I converted the disks from qcow2 to raw. The performance improved. However...
I found there were several disadvantages of raw compared to qcow2.
The disk file in qcow2, for instance, was only 45G (even though the disk inside the Windows VM was 250G).
Because of that, backups and restores were very quick with qcow2.
The moment I converted the disk from qcow2 to raw, the disk size grew from 45G to 270G.
Backups and restores now take hours instead of minutes.
So here is the question.
If I want to retain the flexibility of qcow2 (faster backups and restores) and still achieve the performance of iSCSI storage, how do I do it?
Does anyone have a comparison of iSCSI on Openfiler versus NFS with ZFS on FreeNAS?
My main goal is good performance with fast backups.
Please advise...
 
No One???

Hi,
each "format" has pros and cons - like you wrote. I normally use LVM storage (FC/DRBD/iSCSI) with "big" VM disks. The OS disks are normally small (6-60GB), and the additional VM data disks are labeled with backup=no. That way I can do fast weekly backups with Proxmox (to get the VM back if something strange happens) and run a backup client inside the VM (in my case Bacula) to back up the important data nightly.
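In the VM config that looks something like this (a sketch only - the VM ID, storage name, and sizes are made-up examples; in the config file the GUI's backup=no appears as backup=0):

```
# /etc/pve/qemu-server/101.conf (excerpt)
virtio0: vg_iscsi:vm-101-disk-1,size=20G           # small OS disk, included in backups
virtio1: vg_iscsi:vm-101-disk-2,size=500G,backup=0 # big data disk, skipped by vzdump
```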

Udo
 
Don't know if I have any technical advice, but I can say that I find the disk bloat you describe to be a deal breaker. I am primarily a web application developer and admin. At work, when we adopted VMware as our virtualization standard, I was pretty happy, and so I went out and set up a rig at home to tinker with as well. Since that time it has been a couple of years of issue after issue at work, space being a major one (which blows my mind).

That whole time I was managing to accomplish things at home with ease that were complicated and a mess at work when the admin there did them, if he even could. In time I discovered why: I had gone with an NFS approach at home, while they went with LVM at work. I was just using the filesystem like a filesystem, while they were trapped in the methods associated with ESXi's hefty and limiting file system.

I already have a bias towards flexibility; I think its virtues shine through here. The performance loss is always a balancing act... I find that I can offset it in web applications by distributing resources, aggressive caching, virtual RAIDs, and 64k clusters in the places where that kind of grab is needed.

There are many ways to improve application performance, but very few ways to deal with another two terabytes gone in a week =^)

So I stay with the versatile file system, and put some logical savvy in the design to compensate.

If that helps at all.