Backup errors

Well, maybe your machine is just too slow (1.4MiB/s). You can try to increase snapshot size again, but I would simply go for the new machine (or use 'stop' mode).
 
I'll transfer both containers to the new node tomorrow morning.
I'll also increase the backup size in the hope the backups work tonight, because it's a nice feeling to have a backup before you begin a migration ;).

btw, what exactly does the 'size' parameter specify?
 
Well backup succeeded, so I migrated the container to the new server.

This server has 8 sata disks running in raid 10, 8 xeon cores, 16GB ram.
When empty, it showed pveperf buffered reads of 450MB/s.
That has dropped to 400MB/s, and Proxmox shows I/O delays :eek:
 
First backup on the new server:

Dec 08 01:00:02 INFO: Starting Backup of VM 107 (openvz)
Dec 08 01:00:02 INFO: status = CTID 107 exist mounted running
Dec 08 01:00:02 INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap')
Dec 08 01:00:02 INFO: Logical volume "vzsnap" created
Dec 08 01:00:02 INFO: mounting lvm snapshot
Dec 08 01:00:04 INFO: creating archive '/backup/vzdump-107.dat' (/mnt/vzsnap/private/107)
Dec 08 01:36:59 INFO: tar: ./var/qmail/mailnames/ecochip.nl/name/Maildir/new/1153752583.86760.admin18.123xs.com: Warning:
Cannot savedir: Input/output error
Dec 08 01:36:59 INFO: tar: ./var/qmail/mailnames/ecochip.nl/name/Maildir/new/1153752583.86760.admin18.123xs.com: Warning:
Cannot close: Bad file descriptor

...etc. (27KB of these errors).
The file ends with:

Dec 08 03:28:42 INFO: Total bytes written: 21123225600 (20GiB, 2.3MiB/s)
Dec 08 03:28:42 INFO: file size 19.67GB
Dec 08 03:28:46 INFO: Logical volume "vzsnap" successfully removed
Dec 08 03:28:47 INFO: Finished Backup of VM 107 (02:28:45)

The log file is 27KB; the backup finished, but the performance is obviously very low.
There's only one VE on this big server, and it takes two and a half hours to back up.
 
First, try a larger snapshot size in /etc/vzdump.conf:
Code:
size: 3072
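To answer the earlier question about what 'size' specifies: it is the size (in MB) of the temporary LVM snapshot volume vzdump creates. If the container writes more data than that while the backup runs, the snapshot overflows and is invalidated, which produces I/O errors like the ones in your log. Roughly, vzdump does something like the following (a sketch only; the exact commands, flags, and paths vzdump uses internally are assumptions based on the log output):

```shell
# Sketch of what vzdump does with 'size: 3072' (names/paths illustrative)
lvcreate --snapshot --size 3072M --name vzsnap /dev/pve/data  # snapshot of pve-data
mount /dev/pve/vzsnap /mnt/vzsnap
# ...tar the container's private area from the frozen snapshot...
umount /mnt/vzsnap
lvremove -f /dev/pve/vzsnap
```

So a larger 'size' buys you more headroom for writes that happen during the backup window.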
Second, what are you using for your I/O scheduler? I bet you're using the default one, which performs incredibly poorly for me.

See which scheduler you're currently using:
cat /sys/block/sda/queue/scheduler

Try deadline, it works well for me:
/bin/echo "deadline" > /sys/block/sda/queue/scheduler
 
There's only one VE on this big server, and it takes two and a half hours to back up.

Today I got the same error here - I will try to debug it.

But the slow speed is really strange. Try the 'deadline' scheduler as suggested by tog.
 
Thanks Tog, for the suggestion:

node1:~# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]

I guess it already uses deadline?
 
This is all new to me, but from what I read on Google, I'm using cfq now..?
If I run /bin/echo "deadline" > /sys/block/sda/queue/scheduler do I need to restart anything?
 
You are/were using cfq; the scheduler currently in use is the one shown in square brackets.

No, you don't have to restart, the change takes immediate effect. Your I/O scheduler is changed to deadline as soon as you do that echo command.

If it works out for you and you'd like to make it permanent, put that echo command in /etc/rc.local
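For example, something like this (a sketch; device names vary, so repeat the echo for sdb, sdc, etc. on a multi-disk system):

```shell
#!/bin/sh -e
# /etc/rc.local -- runs at the end of boot; keep 'exit 0' as the last line
echo deadline > /sys/block/sda/queue/scheduler
exit 0
```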
 
I changed it.
before the change (this is the 'big' server)
node2:~# pveperf
CPU BOGOMIPS: 32003.60
REGEX/SECOND: 741463
HD SIZE: 246.08 GB (/dev/pve/root)
BUFFERED READS: 285.08 MB/sec
AVERAGE SEEK TIME: 28.25 ms
FSYNCS/SECOND: 590.66
DNS EXT: 105.46 ms
DNS INT: 4.43 ms (xxxx)

After the change

CPU BOGOMIPS: 32003.60
REGEX/SECOND: 695497
HD SIZE: 246.08 GB (/dev/pve/root)
BUFFERED READS: 346.97 MB/sec
AVERAGE SEEK TIME: 20.34 ms
FSYNCS/SECOND: 3015.55
DNS EXT: 7.66 ms
DNS INT: 4.78 ms (xxxx)
 
Looks good, but of course the real test will be how quick your vzdump goes.

2.3MB/s is awful; even my simple little 2x 7200RPM 500GB SATA drive mirror, with the performance of a single 7200RPM drive, gets about 7-17MB/sec.
 
I have no idea, I hope you can tell me :)

edit: I've tried pveperf a few times in a row, mostly the seektime is between 8 and 13 ms...
 

what hard drives do you use, model number?
 
Seek times between 8-20ms using 7200RPM SATA drives is fine and normal. If you really required a fast disk subsystem for an I/O-heavy workload you'd have to get some 10k RPM Raptors or use 15k RPM SAS or SCSI drives or use a SAN.

To be honest if my 7200RPM disk mirror ever became consistently overwhelmed with my workload I'd probably just put up a Solaris server using ZFS and use it as an iSCSI target rather than throw expensive hardware RAID stuff at Linux. I just love when one disk in a hardware RAID goes bad and overwrites your good copy of the data with bad data, really makes me enthusiastic about buying more $400-$600 hardware RAID controllers in the future.

For most typical workloads, normal 7200RPM drives are fine. Backups slower than 5MB/s are quite abnormal; hopefully changing your scheduler from cfq helped with that.

If you have a gigabit LAN you should do as the Proxmox guys suggest and stream your vzdumps to another server over an SMB or NFS mount.
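A sketch of that setup, assuming a backup host named 'backupserver' exporting /export/dumps over NFS (both names are hypothetical):

```shell
# Mount the remote export and point vzdump at it
mount -t nfs backupserver:/export/dumps /mnt/remote-backup

# Then in /etc/vzdump.conf, set the dump directory to the mount:
#   dumpdir: /mnt/remote-backup
```

This way the tar stream goes out over the gigabit LAN instead of competing with the snapshot reads on the same local disks.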
 
I've changed the servers to the new io scheduler.
This is the log from tonight on the 2 disk raid-1 server:

Dec 09 01:00:02 INFO: Starting Backup of VM 103 (openvz)
Dec 09 01:00:02 INFO: status = CTID 103 exist mounted running
Dec 09 01:00:02 INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap')
Dec 09 01:00:03 INFO: Logical volume "vzsnap" created
Dec 09 01:00:03 INFO: mounting lvm snapshot
Dec 09 01:00:04 INFO: creating archive '/backup/vzdump-103.dat' (/mnt/vzsnap/private/103)
Dec 09 03:03:19 INFO: Total bytes written: 19841679360 (19GiB, 2.6MiB/s)
Dec 09 03:03:19 INFO: file size 18.48GB
Dec 09 03:03:48 INFO: Logical volume "vzsnap" successfully removed
Dec 09 03:03:48 INFO: Finished Backup of VM 103 (02:03:46)

Looking back at my old backup logs: the same server on 30 September got 5.3MiB/s, but with far fewer sites (the backup was 5GB).

This is the server with 8 sata disks in raid-10 tonight:

Dec 09 01:00:02 INFO: Starting Backup of VM 107 (openvz)
Dec 09 01:00:02 INFO: status = CTID 107 exist mounted running
Dec 09 01:00:02 INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap')
Dec 09 01:00:03 INFO: Logical volume "vzsnap" created
Dec 09 01:00:03 INFO: mounting lvm snapshot
Dec 09 01:00:03 INFO: creating archive '/backup/vzdump-107.dat' (/mnt/vzsnap/private/107)
Dec 09 01:56:10 INFO: tar: ./home/httpd/vhosts/somesite.nl/httpdocs/Leagues/Assets/..(6: Warning: Cannot stat: No such file or directory
Dec 09 02:10:10 INFO: Total bytes written: 21304832000 (20GiB, 4.9MiB/s)
Dec 09 02:10:10 INFO: file size 19.84GB
Dec 09 02:10:49 INFO: Logical volume "vzsnap" successfully removed
Dec 09 02:10:50 INFO: Finished Backup of VM 107 (01:10:48)

I bought this server with the idea of hosting at least 20 customer VPSs. Now it appears to be maxed out by one 20GB container of low-traffic websites (75 to 85GB of traffic a month).
It seems I have a problem :(
 
My only guess at this point about why your backups are slow is that you have a heavy I/O workload during the backup process. An 8-disk RAID-10 shouldn't be that slow to do backups. Maybe you don't have write caching on and your writes are really slow even though your reads are fast?

If you could use an SMB or NFS mount to another server and write the vzdumps there, that would probably help speed things up.

I don't understand what exactly you mean by your server being full with one 20GB container. Do you mean your disk I/O capacity is exhausted, or your free disk space is used up, or something else?

If your disks are being beaten on that much you may need to consider one of my earlier suggestions about getting a faster I/O subsystem setup. You could throw faster 10k RPM Raptor drives at the problem (using the existing SATA RAID controller) or setup a fast NAS server to place your containers on.
 
