KVM Backup Speed

matthew

Renowned Member
Jul 28, 2011
I have always worked with OpenVZ containers up till now. LXC has no user quotas, so I guess I am forced to move to KVM. If I do a KVM backup with suspend, how much downtime would I typically have?

I am thinking of using ext4 instead of ZFS because of a server crash, and I am not sure it's stable. With ext4, snapshots are not possible, are they?

How do KVM backups work anyway? Does the compressed file contain all the files or just an image? Is it possible to mount the image to see the file system?
 
Two easy things: if you have a hardware RAID controller, use ext4; if you don't, use ZFS. ZFS is also fast and stable.

Snapshots aren't a problem; they work fine with ext4 and ZFS. ZFS may be faster because it uses the raw format. Snapshots on ext4 are available, but only with qcow2.
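
For example, once the VM disk is on qcow2 (or on ZFS), a snapshot can also be taken and rolled back from the command line; VMID 100 and the snapshot name below are just placeholders:
Code:
qm snapshot 100 before_upgrade
qm listsnapshot 100
qm rollback 100 before_upgrade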

The compressed file contains the image and the config file. And yes, you can mount the image. But it is better to have an extra file-level backup as well, for example with BackupPC (open-source backup server, recommended), or on Windows directly via shadow copies with, for example, BackupAssist (costs money and is not centralized).

Here is some info about it: https://pve.proxmox.com/wiki/Backup_and_Restore
BTW: we have used KVM in Proxmox for a long, long time, and backup and restore have always worked fine.
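
A backup run from the command line looks roughly like this (VMID 100 and the storage name "backup" are only examples; adjust them to your setup):
Code:
vzdump 100 --mode snapshot --compress lzo --storage backup
# --mode can also be suspend or stop; "backup" is just an example storage name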

Test it, do it.
Best Regards
 
How do KVM backups work anyway? Does the compressed file contain all the files or just an image? Is it possible to mount the image to see the file system?

KVM backup doesn't need snapshot support from the storage.

The backup job reads all blocks and writes them out sequentially (with optional compression). If a VM write occurs on a block that has not been backed up yet, the old block is backed up first, then the new write replaces it.

The backup is the image state at the start time of the backup job.

You can't mount the backup directly; you need to restore it first.
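
As a rough sketch (the archive name, VMID 999 and the paths below are just placeholders), a backup can be restored to a scratch VMID and its disk then inspected read-only, for example with qemu-nbd if the restored disk is a qcow2/raw file:
Code:
# restore the archive to a scratch VMID (999 is just an example)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2016_01_01-12_00_00.vma.lzo 999
# attach the restored disk read-only and mount a partition from it
modprobe nbd max_part=8
qemu-nbd --read-only --connect=/dev/nbd0 /var/lib/vz/images/999/vm-999-disk-1.qcow2
mount -o ro /dev/nbd0p1 /mnt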
 
I am new to ZFS. Say I use the Proxmox 4.1 install ISO/USB and install on two or more 2TB SATA drives in a ZFS software RAID 1 type array for redundancy. What are the next steps to make everything work and use the local drives for VMs? The documentation seems a bit short on this. I think I have got it working; I am just doubting whether what I did was right. One machine had a kernel panic on reboot too, so I am really doubting it.

In Proxmox 3.x, with a single local drive or hardware RAID, it just worked after install. I have actually installed Proxmox 3.x in the past on a single SSD, then mounted /var/lib/vz on two additional local drives in a software RAID 1 array, and it worked pretty well.

Thanks!
 
Just add a new ZFS pool in the Storage tab; I think it is called "pve-1". For easier administration you can also install the system on two SSDs and then add an extra pool with your HDDs; I do it that way on bigger systems. You can't store VMs directly on a ZFS pool, only on a dataset. For example:

Code:
zfs list
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
rpool                                                  10.4G  16.4G   144K  /rpool
rpool/ROOT                                             6.85G  16.4G   144K  /rpool/ROOT
rpool/ROOT/pve-1                                       6.85G  16.4G  2.95G  /
rpool/ROOT/pve-1/vm-108-disk-1                         3.50G  16.4G  3.50G  -
rpool/swap                                             3.59G  20.0G  17.6M  -
v-machines                                             3.09T  2.17T   104K  /v-machines
v-machines/home                                        2.90T  2.17T  2.82T  /v-machines/home
v-machines/subvol-109-disk-1                            321M  7.69G   321M  /v-machines/subvol-109-disk-1
v-machines/vm-100-disk-2                               5.97G  2.17T  5.89G  -
v-machines/vm-101-disk-1                               15.5G  2.17T  14.9G  -
v-machines/vm-102-disk-1                               3.43G  2.17T  3.23G  -
v-machines/vm-102-state-vor_grafischem_Paketinstaller   765M  2.17T   765M  -
v-machines/vm-103-disk-2                               35.1G  2.17T  34.4G  -
v-machines/vm-104-disk-1                               40.3G  2.18T  38.7G  -
v-machines/vm-105-disk-1                               6.46G  2.17T  6.46G  -
v-machines/vm-106-disk-1                               41.3G  2.18T  39.4G  -
v-machines/vm-107-disk-1                               40.3G  2.17T  39.1G  -
v-machines/vm-110-disk-1                               5.00G  2.17T  5.00G  -

The pool from the Proxmox installer is "rpool", and the dataset you can use to store VMs is "pve-1". In this case, "v-machines" is an extra pool with HDDs. It looks like this:
Code:
zpool status 
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       sda3    ONLINE       0     0     0
       sdb3    ONLINE       0     0     0

errors: No known data errors

  pool: v-machines
 state: ONLINE
  scan: resilvered 1.09T in 4h45m with 0 errors on Sat May 23 02:48:52 2015
config:

    NAME                                            STATE     READ WRITE CKSUM
    v-machines                                      ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D0KRWP  ONLINE       0     0     0
       ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0343538    ONLINE       0     0     0
     mirror-1                                      ONLINE       0     0     0
       ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D688XW  ONLINE       0     0     0
       ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D63WM0  ONLINE       0     0     0
     mirror-2                                      ONLINE       0     0     0
       ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0381420    ONLINE       0     0     0
       ata-WDC_WD20EURS-63S48Y0_WD-WMAZA9381012    ONLINE       0     0     0

errors: No known data errors
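
A rough sketch of how such an extra mirrored pool could be created and registered as Proxmox storage (the disk IDs and the storage ID below are placeholders; use your own entries from /dev/disk/by-id/):
Code:
# create a mirrored pool with 4k sectors (ashift=12); disk IDs are examples only
zpool create -o ashift=12 v-machines mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
# register it as ZFS storage in Proxmox, with thin provisioning enabled
pvesm add zfspool v-machines --pool v-machines --content images,rootdir --sparse 1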
The thing is, with ZFS everything should match perfectly: use the same disk type, and Pro or Enterprise disks (SAS type) are recommended (check the feature list). The default on Proxmox is a 4k physical sector size, and every HDD must have the same sector size. A real SATA/SAS controller is needed: no fake RAID, no SATA controller with its own BIOS... In our tests we also had kernel panics without real SATA controllers; we tested with BSD/NAS4Free/FreeNAS and Solaris as well.

So when you have a problem with a ZFS installation, most of the time it is the wrong hardware. We have a lot of servers running ZFS; we used Solaris or NAS4Free, and for some time now we have been using Proxmox on all our physical servers, with hardware RAID or with ZFS, mixed as we need. Some hardware is simply not compatible.

As for what you need for backup: it is the same as with your installation on a single disk or with software RAID.
 
I did a fresh install of Proxmox 4.1 on two identical 4TB SATA drives in ZFS RAID 1. I now have this:

Code:
root@k6:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             16.7G  3.50T    96K  /rpool
rpool/ROOT         802M  3.50T    96K  /rpool/ROOT
rpool/ROOT/pve-1   802M  3.50T   802M  /
rpool/swap        15.9G  3.51T    64K  -
root@k6:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
     mirror-0   ONLINE       0     0     0
       sda2     ONLINE       0     0     0
       sdb2     ONLINE       0     0     0

errors: No known data errors
root@k6:~# df -h
Filesystem        Size  Used  Avail Use% Mounted on
udev               10M     0    10M   0% /dev
tmpfs             3.2G  8.9M   3.2G   1% /run
rpool/ROOT/pve-1  3.5T  802M   3.5T   1% /
tmpfs             7.9G   31M   7.9G   1% /dev/shm
tmpfs             5.0M     0   5.0M   0% /run/lock
tmpfs             7.9G     0   7.9G   0% /sys/fs/cgroup
rpool             3.5T  128K   3.5T   1% /rpool
rpool/ROOT        3.5T  128K   3.5T   1% /rpool/ROOT
tmpfs             100K     0   100K   0% /run/lxcfs/controllers
cgmfs             100K     0   100K   0% /run/cgmanager/fs
/dev/fuse          30M   12K    30M   1% /etc/pve

Now, do I go into the GUI and create a new ZFS storage, use "rpool/ROOT/pve-1" as the ZFS pool, select thin provisioning, and use something like "zfs-pool" as the ID?
 
By default in the GUI after install there is something in storage called "local", and it states it is a directory, "/var/lib/vz". For content it has everything selected besides vzdump. Why can't this be used for containers and VMs?

It does not work when I try to create containers on local, though; it ends with this error:

Warning, had trouble writing out superblocks.TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /var/lib/vz/images/100/vm-100-disk-1.raw' failed: exit code 144

Storing LXC templates and ISOs on local does work fine, though. Creating containers on the zfs_storage I created in the GUI on "rpool/ROOT/pve-1" also seems to work fine. I am just trying to understand everything.
 
By default in the GUI after install there is something in storage called "local", and it states it is a directory, "/var/lib/vz". For content it has everything selected besides vzdump. Why can't this be used for containers and VMs?


AFAIK, in order to use the standard "local" storage as created by the Proxmox installer, all you need to do is go to "Datacenter > Storage > select local > click Edit", open the "Content" drop-down list and select "Disk image (for KVM)" and "Container (for LXC)". Blue means selected, white means deselected. Then of course press "OK".


I use that regularly for testing VMs and containers.
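
The same thing can presumably be done on the command line; this is only a sketch, with the content list mirroring the GUI selection described above:
Code:
# enable disk images (KVM) and container rootfs (LXC) on the "local" storage,
# in addition to ISOs and container templates
pvesm set local --content images,rootdir,iso,vztmpl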
 
Using "rpool/ROOT/pve-1" does not appear to work.

mounting container failed - command 'mount -o bind /rpool/ROOT/pve-1/subvol-100-disk-1 /var/lib/lxc/100/rootfs/' failed: exit code 32
 
Nice... yes, I think so. I've tested adding a new LXC container to the rpool here on my machine, and I get exactly the same error. Looks like a bug.
I have always used an extra pool for these things, which is why I never noticed it. Add an extra dataset and test it again.
Code:
zfs create rpool/LXC
Don't forget to add this in the storage tab.
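
On the command line that could look roughly like this (the storage ID "lxc-zfs" is just an example name):
Code:
# register the new dataset as ZFS storage for container root filesystems
pvesm add zfspool lxc-zfs --pool rpool/LXC --content rootdir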
 
I have never worked with KVM much. I am running Proxmox 4.x on ZFS and I want to create a CentOS 7 VM. If I specify 500GB for the disk size, will it actually use 500GB of disk space? If the 500GB runs low, is there an easy way to add space to the VM and expand its drive?

Stuff like that was easy on OpenVZ. Now, with no user quotas on LXC, I am being forced to move to something else.
 
If I specify 500GB for disk size will it actually use 500GB of disk space?
No, it's thin provisioned; it takes 0 space at creation.

If the 500GB runs low is there an easy way to add space to the VM and expand its drive?
Click the disk resize button ;)
(You still need to resize the partitions if you have any, and extend the filesystem.)
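
For example (the VMID, disk name and device paths below are just placeholders, and this assumes a simple layout with a single XFS partition, as a default CentOS 7 install uses XFS; with LVM you would additionally need pvresize/lvextend):
Code:
# grow the virtual disk by 100G on the host
qm resize 100 virtio0 +100G
# then, inside the guest, grow the partition and the filesystem
growpart /dev/vda 1        # from the cloud-utils-growpart package
xfs_growfs /               # or resize2fs for ext4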
 
