Hi
After setting up CEPH storage running directly on the 3 Proxmox nodes and moving all VMs to the CEPH storage, we noticed very slow backup speeds to our NFS storage.
Backing up the same VM from local storage to exactly the same NFS storage is much faster.
How can we improve the backup speed?
Backup from CEPH storage to NFS
Code:
INFO: starting new backup job: vzdump 103 --remove 0 --mode snapshot --compress lzo --storage pvebackup-archive --node drz-pve01-02
INFO: Starting Backup of VM 103 (qemu)
INFO: status = running
INFO: update VM 103: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/pvebackup-archive/dump/vzdump-qemu-103-2015_04_28-09_03_27.vma.lzo'
INFO: started backup task 'a4359115-6214-4154-9947-bbe4094d514f'
INFO: status: 1% (203620352/12884901888), sparse 1% (137728000), duration 3, 67/21 MB/s
INFO: status: 2% (336265216/12884901888), sparse 1% (149213184), duration 6, 44/40 MB/s
...
INFO: status: 99% (12794789888/12884901888), sparse 86% (11095097344), duration 213, 63/0 MB/s
INFO: status: 100% (12884901888/12884901888), sparse 86% (11185209344), duration 215, 45/0 MB/s
INFO: transferred 12884 MB in 215 seconds (59 MB/s)
INFO: archive file size: 755MB
INFO: Finished Backup of VM 103 (00:03:37)
INFO: Backup job finished successfully
TASK OK
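(If it helps narrow things down, we could also read the VM disk straight from the Ceph pool and discard the data, to separate the Ceph read path from the NFS write path. A rough sketch below; the image name vm-103-disk-1 is only a guess and would need to be checked with rbd -p drz-pveceph01 ls.)
Code:
# Read the whole disk image from the Ceph pool and throw the data away,
# so the elapsed time reflects only the Ceph read path (no NFS involved).
# NOTE: the image name vm-103-disk-1 is assumed - verify with: rbd -p drz-pveceph01 ls
time rbd -p drz-pveceph01 export vm-103-disk-1 - > /dev/null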
Backup from local storage to NFS
Code:
INFO: starting new backup job: vzdump 103 --remove 0 --mode snapshot --compress lzo --storage pvebackup-archive --node drz-pve01-02
INFO: Starting Backup of VM 103 (qemu)
INFO: status = running
INFO: update VM 103: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/pvebackup-archive/dump/vzdump-qemu-103-2015_04_28-09_10_17.vma.lzo'
INFO: started backup task '66f1667d-436e-4dab-a70b-866e354e3d9e'
INFO: status: 3% (421920768/12884901888), sparse 1% (161206272), duration 3, 140/86 MB/s
INFO: status: 6% (832438272/12884901888), sparse 1% (170385408), duration 6, 136/133 MB/s
...
INFO: status: 92% (11920015360/12884901888), sparse 79% (10220331008), duration 24, 863/0 MB/s
INFO: status: 100% (12884901888/12884901888), sparse 86% (11185209344), duration 25, 964/0 MB/s
INFO: transferred 12884 MB in 25 seconds (515 MB/s)
INFO: archive file size: 755MB
INFO: Finished Backup of VM 103 (00:00:26)
INFO: Backup job finished successfully
TASK OK
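(To put the two logs side by side: same 12 GiB image, same NFS target, only the source storage differs. A quick back-of-the-envelope check of the reported numbers:)
Code:
# logical throughput reported by vzdump for the two runs
echo "CEPH  -> NFS: $((12884 / 215)) MB/s"   # ~59 MB/s
echo "local -> NFS: $((12884 / 25)) MB/s"    # ~515 MB/s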
CEPH write performance
Code:
root@drz-pve01-02:~# rados -p drz-pveceph01 bench -b 4194304 60 write -t 32 --no-cleanup
Maintaining 32 concurrent writes of 4194304 bytes for up to 60 seconds or 0 objects
Object prefix: benchmark_data_drz-pve01-02_995542
Total time run: 60.835211
Total writes made: 4196
Write size: 4194304
Bandwidth (MB/sec): 275.893
Stddev Bandwidth: 80.7099
Max bandwidth (MB/sec): 444
Min bandwidth (MB/sec): 0
Average Latency: 0.463263
Stddev Latency: 0.214722
Max latency: 1.69095
Min latency: 0.115577
root@drz-pve01-02:~#
CEPH read performance
Code:
root@drz-pve01-02:~# rados -p drz-pveceph01 bench -b 4194304 60 seq -t 32 --no-cleanup
Total time run: 8.652504
Total reads made: 4196
Read size: 4194304
Bandwidth (MB/sec): 1939.785
Average Latency: 0.0657323
Max latency: 0.128604
Min latency: 0.0228346
root@drz-pve01-02:~#
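(Note that this read benchmark runs 32 reads of 4 MB in parallel, while vzdump presumably reads the disk image as more or less a single sequential stream, so the ~1.9 GB/s above is probably not what the backup actually sees. A single-threaded run, reusing the benchmark objects left in the pool by the --no-cleanup write run, might be a closer proxy:)
Code:
# single-stream sequential read, closer to how one backup job reads the image
# (reuses the benchmark objects left behind by the earlier 'write ... --no-cleanup' run)
rados -p drz-pveceph01 bench 60 seq -t 1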
PVE Version
Code:
root@drz-pve01-02:~# pveversion -v
proxmox-ve-2.6.32: 3.4-150 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-3 (running version: 3.4-3/2fc72fee)
pve-kernel-2.6.32-37-pve: 2.6.32-150
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.4-3
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-32
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Many thanks in advance for your help
Best
Andreas