Proxmox 4.1-5 Backup NFS

Alvaro Fernandez

Mar 9, 2016
Hi All,

With the new version of Proxmox 4.1, backups take much longer to complete. I attach the log here:

Code:
mar 09 10:51:17 INFO: Starting Backup of VM 429 (lxc)
mar 09 10:51:17 INFO: status = running
mar 09 10:51:17 INFO: mode failure - some volumes does not support snapshots
mar 09 10:51:17 INFO: trying 'suspend' mode instead
mar 09 10:51:17 INFO: backup mode: suspend
mar 09 10:51:17 INFO: ionice priority: 7
mar 09 10:51:17 INFO: starting first sync /proc/12183/root// to /mnt/pve/pollo_backup3tb/dump/vzdump-lxc-429-2016_03_09-10_51_17.tmp
mar 09 13:03:59 INFO: Number of files: 72,421 (reg: 62,504, dir: 6,454, link: 3,431, dev: 2, special: 30)
mar 09 13:03:59 INFO: Number of created files: 72,420 (reg: 62,504, dir: 6,453, link: 3,431, dev: 2, special: 30)
mar 09 13:03:59 INFO: Number of deleted files: 0
mar 09 13:03:59 INFO: Number of regular files transferred: 62,490
mar 09 13:03:59 INFO: Total file size: 8,456,298,598 bytes
mar 09 13:03:59 INFO: Total transferred file size: 8,454,384,503 bytes
mar 09 13:03:59 INFO: Literal data: 8,454,430,719 bytes
mar 09 13:03:59 INFO: Matched data: 0 bytes
mar 09 13:03:59 INFO: File list size: 1,900,392
mar 09 13:03:59 INFO: File list generation time: 0.001 seconds
mar 09 13:03:59 INFO: File list transfer time: 0.000 seconds
mar 09 13:03:59 INFO: Total bytes sent: 8,460,839,252
mar 09 13:03:59 INFO: Total bytes received: 1,245,124
mar 09 13:03:59 INFO: sent 8,460,839,252 bytes  received 1,245,124 bytes  1,062,742.15 bytes/sec
mar 09 13:03:59 INFO: total size is 8,456,298,598  speedup is 1.00
mar 09 13:03:59 INFO: first sync finished (7962 seconds)
mar 09 13:03:59 INFO: suspend vm
mar 09 13:03:59 INFO: starting final sync /proc/12183/root// to /mnt/pve/pollo_backup3tb/dump/vzdump-lxc-429-2016_03_09-10_51_17.tmp
mar 09 13:04:23 INFO: Number of files: 72,421 (reg: 62,504, dir: 6,454, link: 3,431, dev: 2, special: 30)
mar 09 13:04:23 INFO: Number of created files: 0
mar 09 13:04:23 INFO: Number of deleted files: 0
mar 09 13:04:23 INFO: Number of regular files transferred: 13
mar 09 13:04:23 INFO: Total file size: 8,449,000,147 bytes
mar 09 13:04:23 INFO: Total transferred file size: 100,035,294 bytes
mar 09 13:04:23 INFO: Literal data: 13,138,871 bytes
mar 09 13:04:23 INFO: Matched data: 86,896,423 bytes
mar 09 13:04:23 INFO: File list size: 0
mar 09 13:04:23 INFO: File list generation time: 0.001 seconds
mar 09 13:04:23 INFO: File list transfer time: 0.000 seconds
mar 09 13:04:23 INFO: Total bytes sent: 14,983,707
mar 09 13:04:23 INFO: Total bytes received: 169,330
mar 09 13:04:23 INFO: sent 14,983,707 bytes  received 169,330 bytes  618,491.31 bytes/sec
mar 09 13:04:23 INFO: total size is 8,449,000,147  speedup is 557.58
mar 09 13:04:23 INFO: final sync finished (24 seconds)
mar 09 13:04:23 INFO: resume vm
mar 09 13:04:23 INFO: vm is online again after 24 seconds
mar 09 13:04:24 INFO: creating archive '/mnt/pve/pollo_backup3tb/dump/vzdump-lxc-429-2016_03_09-10_51_17.tar.lzo'
mar 09 13:36:20 INFO: Total bytes written: 8574781440 (8.0GiB, 4.3MiB/s)
mar 09 13:36:24 INFO: archive file size: 6.91GB
mar 09 14:00:24 INFO: Finished Backup of VM 429 (03:09:07)

vzdump.conf

Code:
# vzdump default settings

#tmpdir: /tmp
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N

bwlimit: 0
size: 512
ionice: 8
#tmpdir: /tmp
 
Where/what kind is your backup storage? If it's some kind of network storage, you should set the tmpdir in vzdump.conf, otherwise you are copying all the data to the network storage with rsync, then reading it back (over the network) to the node to create the (compressed) tar file which is copied over the network again to the backup storage. Also, 1,062,742.15 bytes/sec is not very fast, so either something else is slowing down your I/O performance on the node, or you are copying over a slow/congested network link.
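As a sketch, pointing vzdump's tmpdir at a local directory in /etc/vzdump.conf could look like this (/var/tmp/vzdump is just an example path; it needs enough free local space to hold the container's data during the backup):

```
# /etc/vzdump.conf -- example only; pick any local filesystem with enough space
tmpdir: /var/tmp/vzdump
```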
 
Hi fabian,

First of all, thanks for your reply.

I have two backup storages, a 2 TB NFS (JFS) and a 3 TB NFS (JFS). I have a heavy CT with 1 TB of storage, and I can't set tmpdir on the rootfs because I have no free space left on it.

My network is 100/1000 Mbit, and I think the rsync to the backup storage is what is so slow. Would it be better to create a private LAN between the Proxmox node and the backup storage?
 
If using network storage, it's always a good idea to separate the storage, cluster and external networks (if possible). Otherwise your backups and I/O will interfere with external (e.g., client) access and vice versa. Also, some storage technologies are really latency sensitive; in those cases it might not be a question of slowing down but of breaking completely.

You can avoid the round trip for backup if you use snapshottable storage for the container (e.g., ZFS locally, Ceph for shared storage). That should be faster than the suspend backup you are currently using, because the backup archive is created directly from the snapshot, without first rsyncing to a temporary directory. But getting rid of your bottleneck would still be advisable ;) The first step is probably to find out what exactly is the cause: your networking hardware (NICs, switches, cables), your networking setup (it seems you use one network for everything), your (local?) container storage (no idea what you are using here), your backup storage (which is probably not the limit at the moment - I hope your backup storage can write at more than 1 MB/s), or a combination of those.
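For illustration, a snapshot-capable container storage might be set up roughly like this (a sketch only; the pool name `tank`, the storage ID `tank-zfs` and the device `/dev/sdX` are placeholders, and the commands assume a spare disk):

```
# Create a ZFS pool on a spare disk and register it as container storage
zpool create tank /dev/sdX
pvesm add zfspool tank-zfs -pool tank -content rootdir
# Containers on that storage can then use snapshot-mode backups:
vzdump 429 --mode snapshot --storage pollo_backup3tb --compress lzo
```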
 
Backup storage disk model: WDC WD30EFRX-68EUZN0

1 GB write test to the backup storage:

Code:
dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.09195 s, 262 MB/s

So I rule out errors on the backup storage.

And yes, I only use one network for everything; maybe the next step is to set up a private network between the NFS server and the Proxmox node, with a dedicated gigabit switch.

My local storage is RAID 1 over mdadm, and the fs is ext4.

Thanks again, fabian
 
That is not a valid benchmark at all (you are neither reading from nor writing to a disk there, but calculating the md5sum of zeroes in memory). If you want a simple and quick test of the I/O performance, copy a big file (not /dev/zero, but one with real content) locally on your NFS server and then from your Proxmox node to the NFS server, and also test the network performance between your node and the NFS server (for example with iperf).
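Along those lines, a quick I/O and network test could look something like this (a sketch; the file paths and bs/count values are arbitrary examples, and conv=fdatasync forces the data to actually hit the disk instead of staying in the page cache):

```
# 1) Write a real (non-zero) 1 GB file locally on the NFS server:
dd if=/dev/urandom of=/path/on/server/testfile bs=1M count=1024 conv=fdatasync
# 2) Copy that file from the Proxmox node to the NFS mount:
dd if=/root/testfile of=/mnt/pve/pollo_backup3tb/testfile bs=1M conv=fdatasync
# 3) Raw network throughput (run "iperf -s" on the server first):
iperf -c 172.168.1.252 -t 60
```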
 
NFS Server:

Code:
iperf -c 172.168.1.252 -t 60
------------------------------------------------------------
Client connecting to 172.168.1.252, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[  3] local 172.168.1.254 port 35487 connected with 172.168.1.252 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.59 GBytes   944 Mbits/sec

Proxmox Node:

Code:
[  5] local 172.168.1.252 port 5001 connected with 172.168.1.254 port 35487
[  5]  0.0-60.2 sec  6.59 GBytes   941 Mbits/sec
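A back-of-the-envelope comparison (a sketch, using only the figures already posted in this thread: the iperf result above and the first-sync numbers from the original log) shows the network itself is far from saturated:

```python
# Compare the measured gigabit link capacity with the rsync throughput
# reported in the first backup log. All figures come from this thread.
link_bytes_per_sec = 941e6 / 8            # iperf: 941 Mbit/s ~= 118 MB/s

rsync_bytes = 8_460_839_252               # "Total bytes sent" (first sync)
rsync_seconds = 7962                      # "first sync finished (7962 seconds)"
rsync_bytes_per_sec = rsync_bytes / rsync_seconds

print(f"link capacity:    {link_bytes_per_sec / 1e6:6.1f} MB/s")
print(f"rsync achieved:   {rsync_bytes_per_sec / 1e6:6.1f} MB/s")
print(f"link utilisation: {rsync_bytes_per_sec / link_bytes_per_sec:.1%}")
```

The first sync ran at under 1% of what the gigabit link can carry, which points away from the network and toward the rsync-to-tmpdir round trip or disk I/O.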
 
I have the same problem. Very slow backups (lzo) on Proxmox 4.1. Version 3.x works quickly. Is there a solution?
 
I am having the same issue. Backups are extremely slow with PVE 4.2, but copying large files (16 GB) from local disk to the NFS-mounted backup folder is quite fast. Something is up with vzdump.
 
I am also experiencing the same in Proxmox 4.4. When this container used to be a KVM VM, the backup took 2 hours or so; now it takes 16 hours, backing up to NFS.
 
without posting any details, nobody will be able to help you.

please post
  • pveversion -v
  • container configuration
  • storage configuration
  • vzdump configuration
  • vzdump command line
  • vzdump log of a backup
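Those details can be gathered with commands along these lines (container ID 429 is taken from the log earlier in this thread; the log path is an assumption based on vzdump's default log location):

```
pveversion -v                       # package versions
pct config 429                      # container configuration
cat /etc/pve/storage.cfg            # storage configuration
cat /etc/vzdump.conf                # vzdump configuration
cat /var/log/vzdump/lxc-429.log     # last vzdump log for this container
```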
 
