KVM vs. OpenVZ vzdump performance

yccj

New Member
Apr 25, 2011
I've looked through the threads regarding performance with vzdump and haven't seen exactly this question answered. I am seeing a dramatic performance difference between vzdump backups of KVM and OpenVZ machines. I think the numbers speak for themselves pretty well:
Code:
VMID  STATUS  TIME      SIZE     FILENAME
101   ok      00:23:24   1.10GB  /mnt/pve/Backup_Server/vzdump-openvz-101-2011_04_23-11_35_01.tgz
102   ok      00:11:02   5.58GB  /mnt/pve/Backup_Server/vzdump-qemu-102-2011_04_23-11_58_25.tgz
103   ok      06:17:39  61.41GB  /mnt/pve/Backup_Server/vzdump-openvz-103-2011_04_23-12_09_27.tgz
104   ok      00:18:38  12.09GB  /mnt/pve/Backup_Server/vzdump-qemu-104-2011_04_23-18_27_06.tgz
106   ok      00:11:17   5.51GB  /mnt/pve/Backup_Server/vzdump-qemu-106-2011_04_23-18_45_44.tgz
The KVM images are all raw disks local to the machine. The OpenVZ containers are also stored locally. Both are being backed up to a dedicated remote backup server that is under no other load during this process. I am guessing this has something to do with tarring and then compressing the OpenVZ containers, but when a 5.6GB compressed KVM backup finishes in 11 minutes while a 1.1GB compressed OpenVZ container takes 23 minutes, something seems wrong.
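For context, the log works out to very different effective output rates. A quick back-of-the-envelope in Python (illustrative arithmetic only; it measures compressed bytes written per second, treating GB as GiB, not how fast the source disk was read):

```python
# Effective backup throughput implied by the vzdump log above,
# in MB of compressed output per second of wall-clock time.
def throughput_mb_s(size_gb, hours, minutes, seconds):
    return size_gb * 1024 / (hours * 3600 + minutes * 60 + seconds)

kvm_102    = throughput_mb_s(5.58,  0, 11,  2)  # ~8.6 MB/s
openvz_101 = throughput_mb_s(1.10,  0, 23, 24)  # ~0.8 MB/s
openvz_103 = throughput_mb_s(61.41, 6, 17, 39)  # ~2.8 MB/s

print(f"KVM 102:    {kvm_102:.1f} MB/s")
print(f"OpenVZ 101: {openvz_101:.1f} MB/s")
print(f"OpenVZ 103: {openvz_103:.1f} MB/s")
```

So the KVM backups run roughly 3-10x faster per byte of output than the OpenVZ ones.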

I am interested in any suggestions to speed this up and I'd really like to understand the reason for such a huge discrepancy in performance. I will probably experiment with pigz as described in this thread and see if it helps.
 
Hi,
I don't think it comes down to the compression. I suspect the bottleneck is the many small files that get backed up in the OpenVZ case. With KVM, the process can continuously read one big file (or a few big files).

Udo
 
I tend to agree with you, Udo. It's probably the tarring stage of the tar-and-gzip pipeline that is slowing it down, but it's slowing it down a LOT. Is this just disk I/O, and therefore a physical hardware limitation?

FWIW, the server is running dual Xeon 5660s with a PERC 6/i RAID controller and 6 x 300GB 10k RPM SAS drives.
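One way to get a feel for whether per-file overhead alone explains the gap is to tar one big file versus many small files of the same total size and compare wall-clock times. This is just an illustrative sketch (throwaway paths under /tmp, made-up sizes, archive written to /dev/null so network and compression are out of the picture); absolute numbers will depend entirely on your disks and page cache:

```python
# Illustrative: archive one 64MB file vs. 1024 x 64KB files (same total
# size) to /dev/null, to see the per-file overhead of the tar stage.
import os
import shutil
import tarfile
import time

base = "/tmp/iotest"
shutil.rmtree(base, ignore_errors=True)
os.makedirs(f"{base}/one")
os.makedirs(f"{base}/many")

with open(f"{base}/one/big.bin", "wb") as f:
    f.write(b"\0" * (64 * 1024 * 1024))      # one 64 MB file
for i in range(1024):
    with open(f"{base}/many/f{i}", "wb") as f:
        f.write(b"\0" * (64 * 1024))         # 1024 files of 64 KB each

def archive_time(src):
    """Seconds taken to tar `src` into /dev/null (no compression)."""
    start = time.perf_counter()
    with tarfile.open(os.devnull, "w") as tar:
        tar.add(src)
    return time.perf_counter() - start

print("one big file   :", archive_time(f"{base}/one"))
print("many small ones:", archive_time(f"{base}/many"))
shutil.rmtree(base)
```

On a container with hundreds of thousands of small files the metadata lookups and per-file seeks add up, which would fit the numbers in the first post.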