Whole system slows down while backup is running

AdrianW

New Member
Aug 30, 2013
Hello,

I'm currently running Proxmox 3.1 on one of my servers.
As soon as a backup or clone is started, the whole host system and all other virtual machines slow down
(sometimes they become practically unusable).
[One of the VMs is ~300 GB and its backup takes about 70 minutes; during that time the VM is unusable, and all
the other VMs are heavily slowed down.]

Is there a way to fix this, or at least to limit the file transfer rate?

Backup settings :
- local, snapshot, lzo
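
(For reference, those settings should roughly correspond to a manual vzdump call like the sketch below - VMID 100 is just a placeholder.)

Code:
# 100 is a placeholder VMID
vzdump 100 --storage local --mode snapshot --compress lzo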

Hardware:
- Intel Xeon E3-1230 (4 x 3.2 GHz + HT)
- 16 GB ECC RAM
- 4 x 2 TB (Seagate ST2000DM001) in RAID 10 @ Adaptec 6405E
- HDD cache: write cache disabled (write-through)
- Controller cache: write-cache mode enabled (write-back), write-cache setting enabled (write-back)
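
If you want to double-check those cache settings from within the host OS, the Adaptec arcconf CLI should be able to show them - a sketch, assuming arcconf is installed and the 6405E is controller 1:

Code:
# assumes the Adaptec arcconf utility is installed and the 6405E is controller 1
arcconf GETCONFIG 1 LD    # logical device info, incl. read/write cache mode
arcconf GETCONFIG 1 PD    # physical device info, incl. per-disk write cache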

Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      51197.60
REGEX/SECOND:      1297992
HD SIZE:           3536.13 GB (/dev/mapper/pve-data)
BUFFERED READS:    333.42 MB/sec
AVERAGE SEEK TIME: 12.97 ms
FSYNCS/SECOND:     1999.28
DNS EXT:           31.39 ms
DNS INT:           8.27 ms

Code:
pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2



Can you help me?
Also, the fsyncs/second look a bit low to me - or are they fine?


Thank you very much,

greetings,
Adrian
 
Hi Adrian, we have the same problem. I was just about to post almost the same thread. ;-)
For us it only happens when we back up Windows Server 2008 in snapshot mode. I haven't noticed it when backing up Linux VMs.
I have seen this in VE 2.2, 2.3, and 3.1.
We are storing on NFS shares, and it happens with different NFS shares. Can someone shed some light on this issue and what to do about it? Uwe
 
Well,
I back up to local storage, but the same thing happened when I backed up to an additional SATA HDD.

And currently I'm only running Linux VMs (all Debian Wheezy).

greetings, Adrian

 
I'm currently testing with the 300 GB VM backup (no compression):

Code:
iostat -d 5 -m -x
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda             122.40  5947.60 1006.40  257.40    54.80    54.32   176.82     3.01    2.39    1.51    5.83   0.79  99.98
dm-0              0.00     0.00    0.00    2.20     0.00     0.01     8.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-2              0.00     0.00 1128.80 6199.40    54.80    53.68    30.32    28.68    3.95    1.39    4.42   0.14  99.98
nb0               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.0

Maybe this helps?
The HDDs are at 100% utilization - is there a way to limit that?


vzdump supports these settings - how can I set them within Proxmox?
Code:
-bwlimit   integer (0 - N)

                    Limit I/O bandwidth (KBytes per second).
-ionice    integer (0 - 8)

                    Set CFQ ionice priority.

Or won't this help?



EDIT: /etc/vzdump.conf seems to be the right place - would this help?
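
If it is, a minimal /etc/vzdump.conf might look like the sketch below; the key names match the vzdump options above, and the values are only examples to experiment with, not recommendations.

Code:
# /etc/vzdump.conf - node-wide defaults for vzdump
# example values only - tune to your setup
# limit backup I/O bandwidth to roughly 40 MB/s
bwlimit: 40000
# lowest CFQ ionice priority (only effective with the CFQ I/O scheduler)
ionice: 7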
 
We have this as well.

We use both local storage and NFS storage. Backing up to local storage slows the VMs down, but not hugely; backing up to NFS causes the VMs to hang until the backup has finished.
We are using Proxmox 3.1.
 
OK, this was a good idea, thanks. I have the same problem: HDD performance slows the system down by a factor of 6 while the backup is running (lzo/snapshot).
Limiting the backup dump speed might do the trick. I will do some more tests over the next few days. Can someone from the Proxmox team answer as well?
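
As a quick test before touching /etc/vzdump.conf, the same limits can apparently also be passed per job on the vzdump command line (VMID 100 and the values below are only placeholders):

Code:
# 100 is a placeholder VMID; 40000 KB/s and ionice 7 are example values
vzdump 100 --mode snapshot --compress lzo --bwlimit 40000 --ionice 7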
 
Hi,

I have a production 1.9 cluster with better components and a non-production test system, also on 1.9, with cheaper hardware.
I'm not seeing any performance issues on the production servers, only on the test system.
The test system has one 3 GHz quad-core Xeon, 8 GB RAM, a cheap RAID controller and 4 SATA disks configured as RAID 10.
The production servers have one 2.8 GHz six-core Xeon, 48 GB RAM, an Areca SAS RAID controller and 8 Seagate Cheetah SAS disks configured as RAID 6.
Both write their backups via NFS to the same destination, and only the test system shows a noticeable drop in performance during backups - so I would say it's related to the hardware components used.

Sure, it's the old 1.9 version, and I'm just in the process of setting up and testing 3.1, but I think the result will be the same.

Alex
 
I experienced the same problem with both local and LVM shared storage during backups to a remote NFS server. It may seem odd, but I fixed the issue by adding "vm.swappiness = 0" to /etc/sysctl.conf.
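
For reference, a minimal way to make that setting persistent and apply it right away:

Code:
# append the setting to /etc/sysctl.conf and reload it
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p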
 
