I'm not only talking about a load of 2 for some moments, but also about a much higher load that only occurs during the backup and makes our server nearly unusable for some hours...
Quote:
Does it help if you decrease chunk size (rsize and wsize)?

Our server is an HP ProLiant DL180 G6 with three drives for the Proxmox installation and nine drives for the VMs, each 500 GB, both sets in a RAID 5. CPU: Intel Xeon E5504, quad-core at 2 GHz. 48 GB RAM. It is connected via Gigabit Ethernet to our backup server with these mount options:

Code:
10.162.32.7:/backup/hd2/proxmox on /mnt/pve/ninja type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.162.32.7,mountvers=3,mountport=39276,mountproto=udp,local_lock=none,addr=10.162.32.7)

Here's the output of pveperf:

Code:
root@proxmox:~# pveperf
CPU BOGOMIPS:      31997.12
REGEX/SECOND:      817650
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    112.32 MB/sec
AVERAGE SEEK TIME: 9.16 ms
FSYNCS/SECOND:     2411.21
DNS EXT:           33.75 ms
DNS INT:           1.01 ms (mydomain)
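One way to try the chunk-size suggestion is to remount the share with smaller rsize/wsize values. A sketch, assuming 32 KiB chunks (the current mount negotiated 1 MiB) and that no backup is running; run as root on the Proxmox host:

```shell
# Remount the backup share with smaller NFS chunks.
# 32768 is an illustrative value, not a recommendation; the current
# mount uses 1048576 (1 MiB). Smaller chunks mean more, shorter NFS
# requests, which can smooth load spikes at some cost in throughput.
umount /mnt/pve/ninja
mount -t nfs -o rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768 \
    10.162.32.7:/backup/hd2/proxmox /mnt/pve/ninja
```

For a mount that Proxmox manages itself, the persistent place for such tweaks would be the storage's `options` line in /etc/pve/storage.cfg rather than a manual remount (assuming the share is defined there as an NFS storage).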
You cannot back up to an iSCSI LUN; the backup target must be a filesystem.

Just curious: are all these problems related to backing up to an NFS share? Has somebody with these problems tried backing up to an iSCSI LUN?
Quote:
The VM being backed up would be limited to 30 MB/sec write speed

So write speed in the VM being backed up is limited to the write speed of the backup storage?
I elect your post for quote of the day. Once more: actions on the VM host should not affect any VM internals. The backup happening on the host should be invisible to the virtual machine. Otherwise, something is very wrong...
I really wonder why somebody claims such nonsense. Both issues are fixable by using temporary storage on the local hard disk (as LVM does).

Quote:
There are still two fundamental flaws in KVM Live Backup:
1. If the backup process IO stalls, IO in the VM stalls.
2. The write IO of a VM is limited to the speed of the backup media when writing to any un-archived block.
I am sure the developers could fix issue #1, but there is nothing they can do to fix issue #2.
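Flaw #2 follows from the copy-before-write behavior described above: before the guest may overwrite a block that has not yet been archived, the old contents must first be written out to the backup target. A toy back-of-the-envelope sketch of the resulting cap (all speeds are illustrative assumptions, not measurements from this thread):

```shell
# Toy model of flaw #2 (all numbers are illustrative assumptions).
# A guest write to an un-archived block must first push the old block
# to the backup target, so during the backup the guest cannot sustain
# writes faster than the backup media accepts them.
local_mb_s=110    # assumed local array write speed
backup_mb_s=30    # assumed NFS backup target write speed

# Effective guest write speed is bounded by the slower path.
if [ "$local_mb_s" -lt "$backup_mb_s" ]; then
    effective=$local_mb_s
else
    effective=$backup_mb_s
fi
echo "guest write speed during backup: ~${effective} MB/s"
```

With a temporary staging area on local disk (the LVM-snapshot approach mentioned above), the copied-out blocks land on the fast local array first, so the guest's writes are decoupled from the backup media's speed.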