lvdisplay /dev/pve/vzsnap*
--- Logical volume ---
LV Name /dev/pve/vzsnap-ns227086-0
VG Name pve
LV UUID aDHUMn-mkWJ-P0ct-a2es-OxUF-tH9D-za708x
LV Write Access read/write
LV snapshot status active destination for /dev/pve/data
LV Status available
# open 1
LV Size 500.00 GiB
Current LE 128000
COW-table size 1.00 GiB
COW-table LE 256
Allocated to snapshot 0.02%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
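For anyone watching this during a backup: the COW-table numbers above put a ceiling on how much the origin can change before the snapshot fills and LVM invalidates it. A rough check, using the 1 GiB table and 4 KiB chunk size from the output above (this ignores COW metadata overhead, so real capacity is slightly lower):

```shell
# COW table capacity from the lvdisplay output above:
# 1.00 GiB table / 4.00 KiB chunks = max origin chunks that can change
# before the snapshot overflows (metadata overhead ignored).
cow_bytes=$((1024 * 1024 * 1024))   # COW-table size: 1.00 GiB
chunk_bytes=$((4 * 1024))           # Snapshot chunk size: 4.00 KiB
max_chunks=$((cow_bytes / chunk_bytes))
echo "snapshot survives ~${max_chunks} changed chunks (~1 GiB of origin writes)"
```

You can watch the fill rate live while vzdump runs via the "Allocated to snapshot" line in lvdisplay, or with lvs.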
vmserver:/var/log/vzdump# cat qemu-107.log
Nov 29 23:50:20 INFO: Starting Backup of VM 107 (qemu)
Nov 29 23:50:20 INFO: running
Nov 29 23:50:20 INFO: status = running
Nov 29 23:50:21 INFO: backup mode: snapshot
Nov 29 23:50:21 INFO: bandwidth limit: 65536 KB/s
Nov 29 23:50:21 INFO: ionice priority: 7
Nov 29 23:50:21 INFO: Logical volume "vzsnap-vmserver-0" created
Nov 29 23:50:21 INFO: creating archive '/offsite/vzdump-qemu-107-2011_11_29-23_50_20.tar'
Nov 29 23:50:21 INFO: adding '/offsite/vzdumptmp147138/qemu-server.conf' to archive ('qemu-server.conf')
Nov 29 23:50:21 INFO: adding '/dev/array/vzsnap-vmserver-0' to archive ('vm-disk-virtio0.raw')
Nov 30 00:20:26 INFO: Total bytes written: 86851864576 (45.89 MiB/s)
Nov 30 00:20:26 INFO: archive file size: 80.89GB
Nov 30 00:20:26 INFO: delete old backup '/offsite/vzdump-qemu-107-2011_11_28-23_46_34.tar'
Nov 30 00:20:45 INFO: Logical volume "vzsnap-vmserver-0" successfully removed
Nov 30 00:20:45 INFO: Finished Backup of VM 107 (00:30:25)
vmserver:/var/log/vzdump# cat qemu-108.log
Nov 30 00:20:45 INFO: Starting Backup of VM 108 (qemu)
Nov 30 00:20:45 INFO: running
Nov 30 00:20:45 INFO: status = running
Nov 30 00:20:46 INFO: backup mode: snapshot
Nov 30 00:20:46 INFO: bandwidth limit: 65536 KB/s
Nov 30 00:20:46 INFO: ionice priority: 7
Nov 30 00:20:46 INFO: Logical volume "vzsnap-vmserver-0" created
Nov 30 00:20:46 INFO: creating archive '/offsite/vzdump-qemu-108-2011_11_30-00_20_45.tar'
I see that. However, I'm not sure it's actually affecting anything for these kvm/lvm snapshot images; I had it set to 65536 and it didn't seem to limit i/o on any device. Regardless, I have changed it to 25000 and will report back after tonight's backup.
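For reference, the change I made: vzdump reads its default bandwidth limit (in KB/s) from /etc/vzdump.conf, so lowering it looks like this (any other lines in that file are specific to my setup):

```
# /etc/vzdump.conf -- vzdump defaults; bwlimit is in KB/s
bwlimit: 25000
```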
Interesting. Watching with iostat -xk 1, I see i/o on devices much higher than that throughout the process...
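To make tonight's comparison easier, I'll log iostat and pull out the per-device write column afterwards. A sketch below; the sample line is fabricated and the column position is from my sysstat build, so check the header of your own iostat -xk 1 output before trusting the field number:

```shell
# Fabricated sample standing in for one device row of 'iostat -xk 1' output;
# on my sysstat build, wkB/s is the 7th whitespace-separated field.
sample='sdd 0.00 12.00 1.50 420.00 6.00 58000.00 274.90 1.90 4.50 0.80 35.00'
wkbs=$(echo "$sample" | awk '{ print $7 }')
echo "sdd writing ${wkbs} kB/s"
```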
Adding '/dev/array/vzsnap-vmserver-0' to archive ('vm-disk-virtio0.raw')
I'm wondering why the backup affects system performance so much. Is lvm working hard to maintain the snapshot? There's not much i/o going on in the guests during backup... Would a faster backup target drive improve things?
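My working theory on the slowdown (hedged, since I haven't traced it): with a snapshot active, the first write to each 4 KiB chunk of the origin forces LVM to copy the old chunk into the COW area before the write completes, so one guest write can turn into roughly three physical i/os on the same spindles the backup is already reading from. Back-of-envelope:

```shell
# Assumed CoW cost per first-touch write: read origin chunk + write it to the
# COW area + complete the origin write = ~3x physical i/o. Rewrites of an
# already-copied chunk cost only the normal 1x.
guest_write_kbs=10240                    # hypothetical 10 MiB/s of guest writes
physical_kbs=$((guest_write_kbs * 3))
echo "worst case: ${physical_kbs} kB/s of physical i/o"
```

That would explain why even modest guest write activity hurts while the snapshot exists, independent of the backup target's speed.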
CPU BOGOMIPS: 44756.82
REGEX/SECOND: 772976
HD SIZE: 7.14 GB (/dev/mapper/pve-root)
BUFFERED READS: 141.48 MB/sec
AVERAGE SEEK TIME: 0.25 ms
FSYNCS/SECOND: 149.38
DNS EXT: 53.63 ms
DNS INT: 42.97 ms (praece.com)
CPU BOGOMIPS: 44756.82
REGEX/SECOND: 788643
HD SIZE: 2750.67 GB (/dev/sdd1)
BUFFERED READS: 87.43 MB/sec
AVERAGE SEEK TIME: 14.88 ms
FSYNCS/SECOND: 21.73
DNS EXT: 51.28 ms
DNS INT: 52.18 ms (praece.com)
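One more data point from the qemu-107 log above suggesting the bwlimit wasn't the bottleneck: the 65536 KB/s cap works out to 64 MiB/s, but the archive only wrote at 45.89 MiB/s, which lines up better with the slow backup target (87.43 MB/sec buffered reads and only 21.73 fsyncs/second in the second pveperf run):

```shell
# Compare the configured cap with the rate vzdump actually reported.
limit_kbs=65536
limit_mibs=$((limit_kbs / 1024))
echo "cap: ${limit_mibs} MiB/s; achieved: 45.89 MiB/s (from the backup log)"
```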