Can I use LVM Snapshot backup for KVM in 3.1?

e100

Renowned Member
Nov 6, 2010
1,268
45
88
Columbus, Ohio
ulbuilder.wordpress.com
The new KVM LiveBackup feature is great, but I believe it is causing issues for some Windows VMs.

Something changed in 3.0 that is causing issues during backup.
After doing my best to solve it, I am left with one last suspect: the new backup method.

I want to use the old LVM style backup for a couple weeks so I can see if the problem goes away.
If it does, then we know that these recent backup issues I and others are having are related to the new backup method.

How can I install the older version of vzdump that used LVM snapshots when backing up KVM?

pveversion -v
Code:
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
 
I am not the only person who has had problems since the introduction of the new backup method for KVM.
To determine whether it is the new backup method or some other issue, I need to be able to use the old LVM backup with KVM machines while running Proxmox 3.1.

Is there any way to accomplish this?

There is a problem, and we need to get to the bottom of it.
Right now the only workaround is to NOT back up some VMs, which is not acceptable.
 
I haven't seen any problems yet with Windows Server 2012 VMs, but my systems are all very low load right now, so I'm not a valid test case. I'm assuming you have some other imaging solution to run on the VMs so you aren't totally exposed to failure?
 
We back up three times a week; you can see the backup spikes in CPU load (green = average, orange = peak).
last 3 months:
[attached graph: loadchange2.png]

Last 6 months:
[attached graph: loadchange3.png]

You can see that at the beginning of August the load during the backups is much higher than it has been in the past.
The peak and average load have doubled; clearly something changed for the worse.
This problem exists on all of our servers with varying hardware.
Some VMs are ok with the slower operations, some are not.

I have checked everything I could think of, tuned everything I can think of, still this problem exists.
Either something in KVM or the kernel is to blame, or it is this new backup method.

I think it is the new backup method. To prove or disprove that, I need to be able to use the old KVM LVM snapshot backup method inside 3.1.
If the problem goes away, the backup method is to blame; if it stays, then it must be something in the kernel or KVM.


I would never leave myself totally exposed to failure, but not having whole-VM backups makes restoring more difficult.
I wanted to create a signature yesterday, but they must have that feature disabled. This is what I wanted to add:

Your backups must be tested
So you know they work as expected
Offline is best
So you can rest
When lightning strikes unexpected
 
You can see that at the beginning of August the load during the backups is much higher than it has been in the past.
The peak and average load have doubled; clearly something changed for the worse.

I assume the backup is simply much faster now?
 
I assume the backup is simply much faster now?
Actually it is slower with the new backup method.

The old method made a 166GB lzo file in 00:46:40:
Code:
156: Jun 30 23:00:40 INFO: Starting Backup of VM 156 (qemu)
156: Jun 30 23:00:40 INFO: status = running
156: Jun 30 23:00:41 INFO: update VM 156: -lock backup
156: Jun 30 23:00:41 INFO: backup mode: snapshot
156: Jun 30 23:00:41 INFO: ionice priority: 7
156: Jun 30 23:00:41 INFO: creating archive '/backup/dump/vzdump-qemu-156-2013_06_30-23_00_40.vma.lzo'
------
156: Jun 30 23:46:59 INFO: transferred 354334 MB in 2778 seconds (127 MB/s)
156: Jun 30 23:46:59 INFO: archive file size: 166.63GB
156: Jun 30 23:46:59 INFO: delete old backup '/backup/dump/vzdump-qemu-156-2013_05_03-23_01_57.vma.lzo'
156: Jun 30 23:47:20 INFO: Finished Backup of VM 156 (00:46:40)

The new backup method made a 167GB lzo file in 01:56:38:
Code:
156: Sep 08 23:02:03 INFO: Starting Backup of VM 156 (qemu)
156: Sep 08 23:02:03 INFO: status = running
156: Sep 08 23:02:03 INFO: update VM 156: -lock backup
156: Sep 08 23:02:04 INFO: backup mode: snapshot
156: Sep 08 23:02:04 INFO: ionice priority: 7
156: Sep 08 23:02:04 INFO: creating archive '/backup/dump/vzdump-qemu-156-2013_09_08-23_02_03.vma.lzo'
------
156: Sep 09 00:57:18 INFO: transferred 354334 MB in 6914 seconds (51 MB/s)
156: Sep 09 00:57:18 INFO: archive file size: 167.32GB
156: Sep 09 00:57:18 INFO: delete old backup '/backup/dump/vzdump-qemu-156-2013_07_07-23_00_45.vma.lzo'
156: Sep 09 00:57:56 INFO: delete old backup '/backup/dump/vzdump-qemu-156-2013_08_07-23_02_33.vma.lzo'
156: Sep 09 00:58:41 INFO: Finished Backup of VM 156 (01:56:38)
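Just to quantify that, the average rates can be recomputed directly from the "transferred ... MB in ... seconds" lines of the two logs above (the numbers below are copied from those logs):

```shell
# Average transfer rate, old vs new backup method,
# from the vzdump log excerpts above
echo "old: $((354334 / 2778)) MB/s"   # prints: old: 127 MB/s
echo "new: $((354334 / 6914)) MB/s"   # prints: new: 51 MB/s
```

which matches the 127 MB/s and 51 MB/s the logs report: the same amount of data takes about 2.5x as long with the new method on this host.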

Same hardware; the only difference is the Proxmox version.

I have seen instances on other servers where it was slightly faster.
The speed of the backup is not my problem; even if it were much faster, I would still be unhappy if it is causing problems.

Faster and causing apps in my VM to crash is not an improvement.
Faster and more load causing my VMs to run slower during backup is not an improvement.

I like the new design; it allows snapshot backups even without LVM, and the new method is good in many respects.
But I suspect it has some downsides too. I would like to use the old KVM LVM snapshot method so I can prove or disprove that the new backup method is causing my problems.
If it turns out that the new method is causing my problems, my proposal would be to let us choose between the new method and the LVM snapshot method.

Right now all I want to do is provide some conclusive evidence that the new method is or is not to blame for my recent problems, can you help me do that?
 
Right now all I want to do is provide some conclusive evidence that the new method is or is not to blame for my recent problems, can you help me do that?

I need a way to reproduce the problems. Here, the new method is faster. Maybe you can post at least the VM config, so that I can play around with that here? What do you run inside the VM - is it very active during backup?
 
Windows 2003 seems to have the most problems.
Some of my 2008 servers have issues where they are just horribly slow during the backup; they did not have this problem with the old backup (see the screenshot at the bottom).

This particular VM, running Windows 2003, is idle nearly all the time.
Code:
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: XXXXXXXXX
net0: e1000=9A:C6:99:5C:4A:DF,bridge=vmbr0
onboot: 1
ostype: wxp
sockets: 1
virtio0: vm14-vm13:vm-156-disk-1,cache=none

The application running in the VM is Adobe Connect, it is idle during the backup time.

When the backup starts I see this in the Event Viewer (I made this screenshot a few days ago from a different 2003 VM, but the error is the same in both):
[attached screenshot: Event Viewer errors]


During that same time, the application running in the VM does a health check about once a minute; that check times out, and the timestamps in the logs jump ahead a couple of minutes.

I have a suggestion on comparing old vs new backup in your lab.

Do some disk IO benchmarks inside the VM during the backup.
I suspect that disk IO is much slower when using the new backup method.
That would explain the problems I am seeing; I just have no way to test it myself because I have no idea how to use the old backup method in 3.1.
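As a rough sketch of such an in-guest benchmark (assuming a Linux test guest; the path and size here are arbitrary, fio would give more meaningful numbers than dd, and a Windows guest would need something like diskspd or CrystalDiskMark instead):

```shell
# Quick sequential-write check inside a Linux test guest while a
# backup of that guest is running; dd prints the achieved MB/s on
# its last output line. Illustrative only, not a rigorous benchmark.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest
```

Running the same command with and without a backup in progress, on both Proxmox versions, would show whether guest write throughput drops under the new method.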

My backup media is a single SATA disk connected to the motherboard.
It is encrypted using LUKS with cryptsetup.
The encrypted volume is formatted ext4 and mounted like this:
Code:
mount -o barrier=0,noatime,data=writeback /dev/mapper/backup /backup


Regarding my slow 2008 VMs during backup: one runs MSSQL.
We recently upgraded this machine to 128GB of RAM and did lots of tuning over the last couple of weeks (dirty write buffers, THP), which did help, as shown in the last two weeks.
But we also had a couple of backup failures in the last two weeks due to the disk being full, and you can see the Friday backup this week and last week are still pretty bad, and horribly worse than before the new backup method.

This is a graph of how long it takes to log into a website that uses this 2008 DB server.
You can see that, starting in early August, logins take much longer during backups.
Backups run three times a week (Mon, Wed, Fri); we have had this same schedule for years.

[attached graph: logintime.png]

NOTE: The server running the 2008 DB VM had both Proxmox and hardware upgraded in early August; the servers running the 2003 VMs only had Proxmox upgraded.

EDIT: We did have some bad RAM in the two DRBD-paired servers where this 2008 DB server resides. We had temporarily moved the VM to the other node, which has a slightly different backup schedule; that is why the spikes do not line up with Mon, Wed, Fri consistently. The three large spikes per week in August are the days this 2008 DB VM was backed up.
 
I just have no way to test it myself because I have no idea how to use the old backup method in 3.1.

That is simply not possible, so you need to run an old version if you want the old backup code.

My backup media is a single SATA disk connected to the motherboard.
It is encrypted using LUKS with cryptsetup.
The encrypted volume is formatted ext4 and mounted like this:

Can you please test with a reasonably fast disk without encryption? (Maybe AIO does not work well with LUKS.)

Also test whether it helps to use the CFQ scheduler (the default is 'deadline' now).
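For reference, a sketch of checking and switching the elevator at runtime (sda is a placeholder for the actual backup disk; the change is not persistent across reboots):

```shell
# Show the available schedulers; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# Switch this disk to CFQ for testing (placeholder device name)
echo cfq > /sys/block/sda/queue/scheduler
```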
 
Can you please test with a reasonably fast disk without encryption? (Maybe AIO does not work well with LUKS.)
I will see if I can perform some tests without encryption. But even if that helps, it does not solve the problem.
If the new backup method performs badly because of encryption while the old method worked fine with it, then the new backup method is the problem, not the encryption.

From my logs above, the old backup method wrote 166.63GB in 2778 seconds.
That is an average of about 60MB/sec, and that includes reading, compressing, and writing the data; it seems fast enough to me.
I have benchmarked the encryption before and easily get over 100MB/sec of sequential writes.
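That figure can be recomputed from the log values above (166.63 GB written in 2778 seconds, treating GB as 1024 MB):

```shell
# Average archive write rate of the old backup run
awk 'BEGIN { printf "%.1f MB/s\n", 166.63 * 1024 / 2778 }'   # prints: 61.4 MB/s
```

so the "about 60MB/sec" estimate is accurate, well under the 100MB/sec the encrypted disk can sustain.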

If the issue is AIO related from the new backup method then it is the new backup method that is causing problems.

Also test whether it helps to use the CFQ scheduler (the default is 'deadline' now).
Been there, done that, did not help at all.

As I have stated before, I have tried everything there is to try.
The only remaining item that makes any logical sense as the cause is the new backup method.

This is what needs to be tested:
Code:
if (problem_with_old_backup == False) and (problem_with_new_backup == True):
    print("new backup method is to blame, investigate why the new method causes problems")
else:
    print("go look somewhere else for the problem")
 
I need more info about disk 'vm14-vm13:vm-156-disk-1,cache=none':

- what kind of storage is vm14-vm13
- how large is that disk exactly
 
Perhaps Proxmox 3 should add backup verification. I lost one VM this week; here is the story:

I made a backup with no errors, then deleted the VM. I moved the backup file to another Proxmox 3 server and the restore failed!
Now I verify each backup file manually.
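A manual verification like that can be scripted. A minimal sketch (the filename here is just an example), recording a checksum before the copy and checking it on the target:

```shell
# On the source host: record a checksum next to the dump
md5sum vzdump-qemu-36189.vma.lzo > backup.md5
# After copying both files to the target host: verify the copy
md5sum -c backup.md5
# lzop can also test the archive's own internal checksums
lzop -t vzdump-qemu-36189.vma.lzo
```

If `md5sum -c` passes on the target but the restore still fails, the archive was already bad before the copy.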
 
I have the same story :(
Since upgrading to version 3.x I can't successfully restore VMs that were backed up without errors (I use two different types of storage: LVM over DRBD and NFS).

Is there any backup format that is 100% recoverable for now?
 
I do not know why you and kotakomputer are having issues with restoring; maybe the backup was corrupted when being copied, as dietmar suggested.

We test our backups regularly and just tested restoring over 100 VMs that were backed up using the new 3.1 backup method.
Every single one restored perfectly fine.
The only problem I have is performance issues during the backup process.

If you want to discuss restore failures, please start a new thread so we do not have two very different topics in this thread.
 
And you verified the md5sum after that manual move?
You may think the backup file was corrupted during the move, so I created a new backup and restored it on the same server, and that restore failed too.

As @Whatever said, my server was also upgraded from Proxmox 2 to 3.

Here are my backup and restore logs from the same server:
Code:
INFO: starting new backup job: vzdump 36189 --remove 0 --mode snapshot --compress lzo --storage local --node dsu-036202
INFO: Starting Backup of VM 36189 (qemu)
INFO: status = stopped
INFO: update VM 36189: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-36189-2013_09_15-11_17_22.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '31da69b7-9b4c-4b9e-a82a-df105e5861b8'
INFO: status: 0% (128909312/26843545600), sparse 0% (2596864), duration 3, 42/42 MB/s
INFO: status: 1% (273481728/26843545600), sparse 0% (4575232), duration 6, 48/47 MB/s
INFO: status: 2% (548864000/26843545600), sparse 0% (13647872), duration 18, 22/22 MB/s
INFO: status: 3% (824508416/26843545600), sparse 0% (20217856), duration 32, 19/19 MB/s
INFO: status: 4% (1081737216/26843545600), sparse 0% (35422208), duration 50, 14/13 MB/s
INFO: status: 5% (1352794112/26843545600), sparse 0% (41238528), duration 69, 14/13 MB/s
INFO: status: 6% (1636368384/26843545600), sparse 0% (45727744), duration 78, 31/31 MB/s
INFO: status: 7% (1923940352/26843545600), sparse 0% (48869376), duration 81, 95/94 MB/s
INFO: status: 8% (2201092096/26843545600), sparse 0% (50540544), duration 94, 21/21 MB/s
INFO: status: 12% (3227451392/26843545600), sparse 3% (922673152), duration 97, 342/51 MB/s
INFO: status: 16% (4449697792/26843545600), sparse 7% (2112880640), duration 108, 111/2 MB/s
INFO: status: 19% (5325914112/26843545600), sparse 10% (2852564992), duration 111, 292/45 MB/s
INFO: status: 20% (5368709120/26843545600), sparse 10% (2853507072), duration 119, 5/5 MB/s
INFO: status: 21% (5679022080/26843545600), sparse 10% (2867904512), duration 132, 23/22 MB/s
INFO: status: 22% (5912330240/26843545600), sparse 10% (2878582784), duration 144, 19/18 MB/s
INFO: status: 23% (6240272384/26843545600), sparse 10% (2886692864), duration 155, 29/29 MB/s
INFO: status: 24% (6442778624/26843545600), sparse 10% (2888937472), duration 165, 20/20 MB/s
INFO: status: 25% (6760759296/26843545600), sparse 10% (2902261760), duration 182, 18/17 MB/s
INFO: status: 26% (6998065152/26843545600), sparse 10% (2910633984), duration 194, 19/19 MB/s
INFO: status: 27% (7290355712/26843545600), sparse 10% (2914455552), duration 207, 22/22 MB/s
INFO: status: 28% (7572946944/26843545600), sparse 10% (2916986880), duration 218, 25/25 MB/s
INFO: status: 29% (7788036096/26843545600), sparse 10% (2923110400), duration 232, 15/14 MB/s
INFO: status: 30% (8053063680/26843545600), sparse 10% (2925330432), duration 247, 17/17 MB/s
INFO: status: 31% (8346730496/26843545600), sparse 10% (2925916160), duration 266, 15/15 MB/s
INFO: status: 32% (8650096640/26843545600), sparse 10% (2931679232), duration 270, 75/74 MB/s
INFO: status: 33% (8934129664/26843545600), sparse 10% (2935578624), duration 283, 21/21 MB/s
INFO: status: 34% (9196273664/26843545600), sparse 10% (2940821504), duration 295, 21/21 MB/s
INFO: status: 35% (9412214784/26843545600), sparse 10% (2945323008), duration 305, 21/21 MB/s
INFO: status: 41% (11137122304/26843545600), sparse 17% (4591091712), duration 308, 574/26 MB/s
INFO: status: 48% (13022134272/26843545600), sparse 24% (6476103680), duration 311, 628/0 MB/s
INFO: status: 65% (17556570112/26843545600), sparse 41% (11010531328), duration 314, 1511/0 MB/s
INFO: status: 75% (20387921920/26843545600), sparse 51% (13733494784), duration 317, 943/36 MB/s
INFO: status: 83% (22524133376/26843545600), sparse 58% (15769849856), duration 320, 712/33 MB/s
INFO: status: 86% (23085449216/26843545600), sparse 60% (16331165696), duration 323, 187/0 MB/s
INFO: status: 94% (25282674688/26843545600), sparse 69% (18528391168), duration 326, 732/0 MB/s
INFO: status: 100% (26843545600/26843545600), sparse 74% (20089257984), duration 327, 1560/0 MB/s
INFO: transferred 26843 MB in 327 seconds (82 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 3.88GB
INFO: Finished Backup of VM 36189 (00:05:31)
INFO: Backup job finished successfully
TASK OK

Restore log:

Code:
restore vma archive: lzop -d -c  /var/lib/vz/dump/vzdump-qemu-36189-2013_09_15-11_17_22.vma.lzo|vma  extract -v -r /var/tmp/vzdumptmp90231.fifo - /var/tmp/vzdumptmp90231
CFG: size: 287 name: qemu-server.conf
DEV: dev_id=1 size: 26843545600 devname: drive-ide0
CTIME: Sun Sep 15 11:17:25 2013
Formatting  '/var/lib/vz/images/123/vm-123-disk-1.qcow2', fmt=qcow2  size=26843545600 encryption=off cluster_size=65536  preallocation='metadata' lazy_refcounts=off 
new volume ID is 'local:123/vm-123-disk-1.qcow2'
map 'drive-ide0' to '/var/lib/vz/images/123/vm-123-disk-1.qcow2' (write zeros = 0)
progress 1% (read 268435456 bytes, duration 1 sec)
progress 2% (read 536870912 bytes, duration 3 sec)
progress 3% (read 805306368 bytes, duration 4 sec)
progress 4% (read 1073741824 bytes, duration 9 sec)
progress 5% (read 1342177280 bytes, duration 17 sec)
progress 6% (read 1610612736 bytes, duration 25 sec)
progress 7% (read 1879048192 bytes, duration 32 sec)
progress 8% (read 2147483648 bytes, duration 37 sec)
progress 9% (read 2415919104 bytes, duration 41 sec)
progress 10% (read 2684354560 bytes, duration 41 sec)
progress 11% (read 2952790016 bytes, duration 41 sec)
progress 12% (read 3221225472 bytes, duration 47 sec)
progress 13% (read 3489660928 bytes, duration 47 sec)
progress 14% (read 3758096384 bytes, duration 47 sec)
progress 15% (read 4026531840 bytes, duration 47 sec)
progress 16% (read 4294967296 bytes, duration 47 sec)
progress 17% (read 4563402752 bytes, duration 47 sec)
progress 18% (read 4831838208 bytes, duration 47 sec)
progress 19% (read 5100273664 bytes, duration 47 sec)
progress 20% (read 5368709120 bytes, duration 50 sec)
progress 21% (read 5637144576 bytes, duration 57 sec)
progress 22% (read 5905580032 bytes, duration 65 sec)
progress 23% (read 6174015488 bytes, duration 73 sec)
progress 24% (read 6442450944 bytes, duration 119 sec)
progress 25% (read 6710886400 bytes, duration 141 sec)
progress 26% (read 6979321856 bytes, duration 162 sec)
progress 27% (read 7247757312 bytes, duration 175 sec)
progress 28% (read 7516192768 bytes, duration 189 sec)
progress 29% (read 7784628224 bytes, duration 203 sec)
progress 30% (read 8053063680 bytes, duration 222 sec)
progress 31% (read 8321499136 bytes, duration 225 sec)
progress 32% (read 8589934592 bytes, duration 242 sec)
lzop: /var/lib/vz/dump/vzdump-qemu-36189-2013_09_15-11_17_22.vma.lzo: Checksum error

** (process:90234): ERROR **: restore failed - short vma extent (3014656 < 3797504)
/bin/bash: line 1: 90233 Exit 1                  lzop -d -c /var/lib/vz/dump/vzdump-qemu-36189-2013_09_15-11_17_22.vma.lzo
     90234 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp90231.fifo - /var/tmp/vzdumptmp90231
temporary volume 'local:123/vm-123-disk-1.qcow2' sucessfuly removed
TASK  ERROR: command 'lzop -d -c  /var/lib/vz/dump/vzdump-qemu-36189-2013_09_15-11_17_22.vma.lzo|vma  extract -v -r /var/tmp/vzdumptmp90231.fifo - /var/tmp/vzdumptmp90231'  failed: exit code 133

Could the HDD be corrupted? I have a /var/lib/vz/dump/vzdump-qemu-winxp-fff.vma.lzo backup made with Proxmox 2 (before I upgraded); I tried to restore it and it succeeded:

Code:
restore vma archive: lzop -d -c  /var/lib/vz/dump/vzdump-qemu-winxp-fff.vma.lzo|vma extract -v -r  /var/tmp/vzdumptmp92019.fifo - /var/tmp/vzdumptmp92019
CFG: size: 279 name: qemu-server.conf
DEV: dev_id=1 size: 26843545600 devname: drive-ide0
CTIME: Wed Aug 28 04:17:58 2013
Formatting  '/var/lib/vz/images/444/vm-444-disk-1.qcow2', fmt=qcow2  size=26843545600 encryption=off cluster_size=65536  preallocation='metadata' lazy_refcounts=off 
new volume ID is 'local:444/vm-444-disk-1.qcow2'
map 'drive-ide0' to '/var/lib/vz/images/444/vm-444-disk-1.qcow2' (write zeros = 0)
progress 1% (read 268435456 bytes, duration 1 sec)
progress 2% (read 536870912 bytes, duration 3 sec)
progress 3% (read 805306368 bytes, duration 4 sec)
progress 4% (read 1073741824 bytes, duration 5 sec)
progress 5% (read 1342177280 bytes, duration 5 sec)
progress 6% (read 1610612736 bytes, duration 5 sec)
progress 7% (read 1879048192 bytes, duration 5 sec)
progress 8% (read 2147483648 bytes, duration 5 sec)
progress 9% (read 2415919104 bytes, duration 5 sec)
progress 10% (read 2684354560 bytes, duration 5 sec)
progress 11% (read 2952790016 bytes, duration 5 sec)
progress 12% (read 3221225472 bytes, duration 5 sec)
progress 13% (read 3489660928 bytes, duration 5 sec)
progress 14% (read 3758096384 bytes, duration 5 sec)
progress 15% (read 4026531840 bytes, duration 5 sec)
progress 16% (read 4294967296 bytes, duration 5 sec)
progress 17% (read 4563402752 bytes, duration 5 sec)
progress 18% (read 4831838208 bytes, duration 6 sec)
progress 19% (read 5100273664 bytes, duration 6 sec)
progress 20% (read 5368709120 bytes, duration 6 sec)
progress 21% (read 5637144576 bytes, duration 8 sec)
progress 22% (read 5905580032 bytes, duration 10 sec)
progress 23% (read 6174015488 bytes, duration 11 sec)
progress 24% (read 6442450944 bytes, duration 12 sec)
progress 25% (read 6710886400 bytes, duration 13 sec)
progress 26% (read 6979321856 bytes, duration 14 sec)
progress 27% (read 7247757312 bytes, duration 15 sec)
progress 28% (read 7516192768 bytes, duration 17 sec)
progress 29% (read 7784628224 bytes, duration 17 sec)
progress 30% (read 8053063680 bytes, duration 18 sec)
progress 31% (read 8321499136 bytes, duration 18 sec)
progress 32% (read 8589934592 bytes, duration 18 sec)
progress 33% (read 8858370048 bytes, duration 18 sec)
progress 34% (read 9126805504 bytes, duration 18 sec)
progress 35% (read 9395240960 bytes, duration 18 sec)
progress 36% (read 9663676416 bytes, duration 18 sec)
progress 37% (read 9932111872 bytes, duration 18 sec)
progress 38% (read 10200547328 bytes, duration 18 sec)
progress 39% (read 10468982784 bytes, duration 18 sec)
progress 40% (read 10737418240 bytes, duration 18 sec)
progress 41% (read 11005853696 bytes, duration 18 sec)
progress 42% (read 11274289152 bytes, duration 18 sec)
progress 43% (read 11542724608 bytes, duration 18 sec)
progress 44% (read 11811160064 bytes, duration 18 sec)
progress 45% (read 12079595520 bytes, duration 18 sec)
progress 46% (read 12348030976 bytes, duration 18 sec)
progress 47% (read 12616466432 bytes, duration 18 sec)
progress 48% (read 12884901888 bytes, duration 19 sec)
progress 49% (read 13153337344 bytes, duration 19 sec)
progress 50% (read 13421772800 bytes, duration 19 sec)
progress 51% (read 13690208256 bytes, duration 19 sec)
progress 52% (read 13958643712 bytes, duration 19 sec)
progress 53% (read 14227079168 bytes, duration 19 sec)
progress 54% (read 14495514624 bytes, duration 19 sec)
progress 55% (read 14763950080 bytes, duration 19 sec)
progress 56% (read 15032385536 bytes, duration 19 sec)
progress 57% (read 15300820992 bytes, duration 19 sec)
progress 58% (read 15569256448 bytes, duration 19 sec)
progress 59% (read 15837691904 bytes, duration 19 sec)
progress 60% (read 16106127360 bytes, duration 19 sec)
progress 61% (read 16374562816 bytes, duration 19 sec)
progress 62% (read 16642998272 bytes, duration 19 sec)
progress 63% (read 16911433728 bytes, duration 19 sec)
progress 64% (read 17179869184 bytes, duration 19 sec)
progress 65% (read 17448304640 bytes, duration 19 sec)
progress 66% (read 17716740096 bytes, duration 19 sec)
progress 67% (read 17985175552 bytes, duration 19 sec)
progress 68% (read 18253611008 bytes, duration 19 sec)
progress 69% (read 18522046464 bytes, duration 19 sec)
progress 70% (read 18790481920 bytes, duration 19 sec)
progress 71% (read 19058917376 bytes, duration 20 sec)
progress 72% (read 19327352832 bytes, duration 20 sec)
progress 73% (read 19595788288 bytes, duration 20 sec)
progress 74% (read 19864223744 bytes, duration 20 sec)
progress 75% (read 20132659200 bytes, duration 20 sec)
progress 76% (read 20401094656 bytes, duration 20 sec)
progress 77% (read 20669530112 bytes, duration 20 sec)
progress 78% (read 20937965568 bytes, duration 20 sec)
progress 79% (read 21206401024 bytes, duration 20 sec)
progress 80% (read 21474836480 bytes, duration 20 sec)
progress 81% (read 21743271936 bytes, duration 20 sec)
progress 82% (read 22011707392 bytes, duration 20 sec)
progress 83% (read 22280142848 bytes, duration 20 sec)
progress 84% (read 22548578304 bytes, duration 20 sec)
progress 85% (read 22817013760 bytes, duration 20 sec)
progress 86% (read 23085449216 bytes, duration 20 sec)
progress 87% (read 23353884672 bytes, duration 20 sec)
progress 88% (read 23622320128 bytes, duration 20 sec)
progress 89% (read 23890755584 bytes, duration 20 sec)
progress 90% (read 24159191040 bytes, duration 20 sec)
progress 91% (read 24427626496 bytes, duration 20 sec)
progress 92% (read 24696061952 bytes, duration 20 sec)
progress 93% (read 24964497408 bytes, duration 20 sec)
progress 94% (read 25232932864 bytes, duration 20 sec)
progress 95% (read 25501368320 bytes, duration 20 sec)
progress 96% (read 25769803776 bytes, duration 20 sec)
progress 97% (read 26038239232 bytes, duration 20 sec)
progress 98% (read 26306674688 bytes, duration 20 sec)
progress 99% (read 26575110144 bytes, duration 20 sec)
progress 100% (read 26843545600 bytes, duration 20 sec)
total bytes read 26843545600, sparse bytes 23585402880 (87.9%)
space reduction due to 4K zero bocks 1.41%
TASK OK


And here is my pveversion -v:
Code:
root@dsu-036202:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
root@dsu-036202:~#
 
