Proxmox and backup strategy

rethus

Active Member
Feb 13, 2010
Hi all,
I use Proxmox, and it's really, really great. Thanks for this great software.

I'm now trying to set up a backup strategy for my server. I have two HDDs in the server and one external 100 GB FTP storage.

I tried to install Tartarus to back up the whole root filesystem to the FTP storage, but one thing is strange: /var/lib/vz/ is empty after I restore the backup.

How would you back up the system to that FTP storage? Would you use the Proxmox backup to a directory and then save those backups to the FTP storage via Tartarus?

The problem is that Proxmox can't save directly to an external FTP storage, which is why I have to use Tartarus.

Hope to get some tips.
 
My Proxmox is at a data center. What I do is let Proxmox run its backup, which goes into /var/lib/vz/backups, at 3 am every Sunday; then I run a cron job at 5 am to FTP the files to a remote location.

Job done.

There are lots of scripts available on the net; some back up Proxmox itself as well as your VMs.

This script looks useful, if I could just find the time to look at it.
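The dump-then-upload approach described above could be sketched roughly like this. The host name, credentials, and remote path are placeholders, and ncftpput comes from the "ncftp" package; this is only an illustration, not the poster's actual script:

```shell
#!/bin/sh
# Sketch of the "vzdump at 3 am, FTP at 5 am" approach above.
# Host, credentials, and remote path are placeholders -- adjust them.

# newest_dumps DIR: list vzdump archives in DIR modified within the last day
newest_dumps() {
    find "$1" -type f -name 'vzdump-*' -mtime -1
}

BACKUP_DIR=${BACKUP_DIR:-/var/lib/vz/backups}

# Upload last night's dumps with ncftpput (from the "ncftp" package).
DUMPS=$(newest_dumps "$BACKUP_DIR" 2>/dev/null || true)
if [ -n "$DUMPS" ]; then
    ncftpput -u "$FTP_USER" -p "$FTP_PASS" ftp.example.com /backups $DUMPS
fi
```

A matching crontab entry for 5 am on Sundays would be: 0 5 * * 0 /usr/local/bin/offsite-backup.sh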
 

I would use rsync over SSH; it's more secure than FTP.
You could search Google for "howto: backup with rsync and ssh".
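A minimal sketch of that idea, assuming key-based SSH login is already set up (the host and paths below are made up):

```shell
#!/bin/sh
# sync_dumps SRC DEST: mirror SRC to DEST with rsync over SSH.
# -a preserves permissions/times, -z compresses in transit,
# --delete removes files on the target that no longer exist locally.
sync_dumps() {
    rsync -az --delete -e ssh "$1" "$2"
}

# Example (hypothetical host and path):
# sync_dumps /var/lib/vz/dump/ backupuser@backup.example.com:/srv/proxmox-dumps/
```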
 
I am also at OVH, and I use Duply for backups. It's highly configurable: you can verify backups, purge old ones, etc. Installation instructions: http://trick77.com/2010/01/01/how-to-ftp-backup-a-linux-server-duply/

After installation, create a backup profile and configure its "conf" file for use with FTP. I am attaching my conf so you can see how I've set it up in this particular OVH environment:
Code:
# gpg key data (for symmetric encryption comment out GPG_KEY), examples:
#  GPG_KEY='disabled' - disables encryption alltogether
#  GPG_KEY='01234567'; GPG_PW='passphrase' - public key encryption
#  GPG_PW='passphrase' - symmetric encryption using passphrase only
GPG_KEY='disabled'
GPG_PW='_GPG_PASSWORD_'
# gpg options passed from duplicity to gpg process (default='')
# e.g. "--trust-model pgp|classic|direct|always" 
#   or "--compress-algo=bzip2 --bzip2-compress-level=9"
#GPG_OPTS=''

# credentials & server address of the backup target (URL-Format)
# syntax is
#   scheme://[user:password@]host[:port]/[/]path
# probably one out of
#   file:///some_dir
#   ftp://user[:password]@other.host[:port]/some_dir
#   hsi://user[:password]@other.host/some_dir
#   cf+http://container_name
#   imap://user[:password]@host.com[/from_address_prefix]
#   imaps://user[:password]@host.com[/from_address_prefix]
#   rsync://user[:password]@other.host[:port]::/module/some_dir
#   rsync://user[:password]@other.host[:port]/relative_path
#   rsync://user[:password]@other.host[:port]//absolute_path
#   # for the s3 user/password are AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY
#   s3://[user:password]@host/bucket_name[/prefix]
#   s3+http://[user:password]@bucket_name[/prefix]
#   scp://user[:password]@other.host[:port]/some_dir
#   ssh://user[:password]@other.host[:port]/some_dir
#   tahoe://alias/directory
#   webdav://user[:password]@other.host/some_dir
#   webdavs://user[:password]@other.host/some_dir 
# ATTENTION: characters other than A-Za-z0-9.-_.~ in user,password,path have 
#            to be replaced by their url encoded pendants, see
#            http://en.wikipedia.org/wiki/Url_encoding 
#            if you define the credentials as TARGET_USER, TARGET_PASS below 
#            duply will url_encode them for you
TARGET='ftp://xXxUserNamexXx:xXxMyPassxXx@xXxNameOfOVHFtpServerxXx.ovh.net/vzbackup'
# optionally the username/password can be defined as extra variables
# setting them here _and_ in TARGET results in an error
#TARGET_USER='_backend_username_'
#TARGET_PASS='_backend_password_'

# base directory to backup
SOURCE='/data/backup'

# Time frame for old backups to keep, Used for the "purge" command.  
# see duplicity man page, chapter TIME_FORMATS)
# defaults to 1M, if not set
MAX_AGE=2W

# Number of full backups to keep. Used for the "purge-full" command. 
# See duplicity man page, action "remove-all-but-n-full".
# defaults to 1, if not set 
#MAX_FULL_BACKUPS=1


# verbosity of output (error 0, warning 1-2, notice 3-4, info 5-8, debug 9)
# default is 4, if not set
#VERBOSITY=5

# temporary file space. at least the size of the biggest file in backup
# for a successful restoration process. (default is '/tmp', if not set)
#TEMP_DIR=/tmp

# sets duplicity --time-separator option (since v0.4.4.RC2) to allow users 
# to change the time separator from ':' to another character that will work 
# on their system.  HINT: For Windows SMB shares, use --time-separator='_'.
# NOTE: '-' is not valid as it conflicts with date separator.
# ATTENTION: only use this with duplicity < 0.5.10, since then default file 
#            naming is compatible and this option is pending depreciation 
#DUPL_PARAMS="$DUPL_PARAMS --time-separator _ "

# activates duplicity --short-filenames option, when uploading to a file
# system that can't have filenames longer than 30 characters (e.g. Mac OS 8)
# or have problems with ':' as part of the filename (e.g. Microsoft Windows)
# ATTENTION: only use this with duplicity < 0.5.10, later versions default file 
#            naming is compatible and this option is pending depreciation
#DUPL_PARAMS="$DUPL_PARAMS --short-filenames "
 
# activates duplicity --full-if-older-than option (since duplicity v0.4.4.RC3) 
# forces a full backup if last full backup reaches a specified age, for the 
# format of MAX_FULLBKP_AGE see duplicity man page, chapter TIME_FORMATS
# Uncomment the following two lines to enable this setting.
#MAX_FULLBKP_AGE=1M
#DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE " 

# sets duplicity --volsize option (available since v0.4.3.RC7)
# set the size of backup chunks to VOLSIZE MB instead of the default 25MB.
# VOLSIZE must be number of MB's to set the volume size to.
# Uncomment the following two lines to enable this setting. 
VOLSIZE=200
DUPL_PARAMS="$DUPL_PARAMS --volsize $VOLSIZE "

# more duplicity command line options can be added in the following way
# don't forget to leave a separating space char at the end
#DUPL_PARAMS="$DUPL_PARAMS --put_your_options_here "
In this case I am backing up the /data/backup folder to the /vzbackup folder on the OVH FTP server.
You configure your credentials (user ID/password) in the line TARGET='ftp://.....
Then put it into cron via
crontab -e
to run it automatically. (I tried to edit cron.d manually, but that didn't work.)
Please bear in mind that I didn't secure the transmission: putting the credentials in the URL this way can leave traces of the user ID and password in logs. Additionally, I verify every week or so that the backups are indeed OK.
Hope this helps,
Piotr
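To automate the profile above, a crontab entry along these lines could work. The profile name "vzftp" and the log path are assumptions; duply accepts chained commands such as backup_verify_purge:

```
# run nightly at 02:30; --force lets purge actually delete old sets
30 2 * * * /usr/bin/duply vzftp backup_verify_purge --force >> /var/log/duply.log 2>&1
```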
 
Hi,

First of all, thank you, Proxmox team, for such a great solution. I just started using Proxmox and am very impressed.
I would also like to contribute my backup strategy, which I worked out after some consideration.
Because of its simplicity and suspected long-term feasibility, I use vzdump to create dumps of all VZs (which are stored on LVs) to a local folder.
Afterwards, duplicity is used to stream incremental delta backups to an ftp server (could be a lot of other storage backends as well, e.g. Amazon S3, see the duplicity documentation). I believe duplicity is the best complement to vzdump so far because of the following reasons:
  • suspected long-term feasibility: Used as a backend for the default Ubuntu backup solution dejadup
  • amazingly idiot proof for a Linux program (I am very impressed about this), i.e. it tries to intelligently handle orphaned configuration files of deleted backups
  • very efficient incremental backups (only about 1 MB compressed delta size between two VZ snapshots; the delta was due to log files etc.). Be sure to disable compression in vzdump, as otherwise duplicity's incremental backups would work very inefficiently (I guess; not tested)
  • just transfers deltas over the wire, so incremental backups are rather fast (and small)
  • built-in encryption based on PGP
I do not use duply in addition because it adds more complexity here instead of making things a lot easier.

To get a basic setup running, try the following:
Code:
> apt-get install mmv duplicity ncftp

Create the following script, /usr/bin/backupVZs.sh:

Code:
#!/bin/sh
cd /mnt/localbackup/vzDumps/
## rename the VZ images (if any) to a common name scheme (so that consecutive images get the same name) overwriting the previous image
## This is necessary for the incremental backup with duplicity, it removes all timestamps from the filenames. Unfortunately, it seems there is currently no way to tell vzdump to always use the same name
mmv -d "vzdump-qemu-*-*.*" "vzdump-qemu-#1.#3"
mmv -d "vzdump-openvz-*-*.*" "vzdump-openvz-#1.#3"
 
## transfer previous VZ images to ftp (if any)
PASSPHRASE=arbitraryPasswordForEncryptingTheArchives FTP_PASSWORD=passwordForFTPServer duplicity /mnt/localbackup/vzDumps/ ftp://ftpuser@backupserver.org
 
## create new VZ images
## they are transferred just before the next backup
vzdump --dumpdir /mnt/localbackup/vzDumps/ --snapshot --all --bwlimit 0
Make the script executable:
Code:
> chmod u+x /usr/bin/backupVZs.sh

You can now use the cron daemon to run backups on a regular basis.
Code:
>crontab -e
Add this line to run the backup every day at 00:00:
Code:
0 0 * * * /usr/bin/backupVZs.sh
Initially, the script will create a new snapshot backup of all your VZs on the local backup storage (here: /mnt/localbackup/vzDumps/).
On the next invocation, it will normalize the snapshot file names (so that incremental backups work) and transfer the snapshots to the FTP server.
After that, new snapshots are created. If you have limited space on the local backup storage, you could delete the previous snapshots (which have already been transferred to the FTP server) before this step.
You might want to change the order of the entire procedure if you prefer to transfer the most recent version to the FTP server directly. There are also a lot of other options you might want to adjust, so take a look at the duplicity docs.
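For the space-saving variant just mentioned, the deletion could be a small helper called in backupVZs.sh right before the vzdump line. This is only a sketch; the helper name and call site are assumptions:

```shell
#!/bin/sh
# clear_dumps DIR: remove already-uploaded vzdump snapshots from DIR.
# Only safe to call after duplicity has finished transferring them.
clear_dumps() {
    rm -f "$1"/vzdump-*
}

# e.g. in backupVZs.sh, before creating new images:
# clear_dumps /mnt/localbackup/vzDumps
```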
Of course, there is always room for improvement, and I'd like to make some suggestions in case some of the Proxmox or duplicity gurus are listening ;)
  • I don't know how complicated this would get, but an in-memory backup directly to the remote storage (without having to store the images locally first) would be fantastic. I guess this requires a much deeper integration of vzdump and duplicity, but maybe a pipe would be enough? I.e., instead of tarring, vzdump could pipe the contents to duplicity (currently unsupported, I believe), which would do the rest...
  • Second, this should be controllable from the web GUI, so no more shell crontab fiddling...
  • Restoring individual files from the images is a bit of a pain right now. A nice web-based version chooser (which image version to restore) would be nice too...
Cheers,
fatzopilot
 
Thanks for all the answers.
Those are good hints for a full backup of the Proxmox server images.
But do you only back up those? Or do you also back up some data (maybe databases, web space, mail) from within the server images?

Because if one user wants a backup from last month, but everything else on the server should keep running as before, that can't be done with a VZ restore; that way, the whole server is set back to "last month".

What about this backup strategy?
 
On my Proxmox at an OVH data center:
1) Every VM does its own daily backup according to its own purpose and stores it in a local folder.
2) Proxmox makes a daily backup image of each VM on a dedicated NAS.
3) A cron shell script backs up the main config dirs of the Proxmox machine to the NAS.

If a customer needs a partial restore, I use the local daily backups; if I need a complete VM restore, I use the Proxmox image on the NAS.

Luc
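Step 3 could look roughly like the helper below. The NAS mount point and the directory list are assumptions (on Proxmox, /etc/pve holds the VM and cluster configs):

```shell
#!/bin/sh
# backup_dirs DEST NAME DIR...: tar the given dirs into DEST/NAME-YYYYMMDD.tar.gz
backup_dirs() {
    dest=$1; name=$2; shift 2
    mkdir -p "$dest"
    # 2>/dev/null hides tar's "Removing leading /" warning for absolute paths
    tar czf "$dest/$name-$(date +%Y%m%d).tar.gz" "$@" 2>/dev/null
}

# Example for step 3 (NAS mount point is an assumption):
# backup_dirs /mnt/nas/proxmox-config pve-config /etc/pve /etc/network
```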
 

Hi Luc,

I recently started using Proxmox.
Can you assist me in configuring backups and a couple of other things?

BR
 
Please excuse me if any of the following is wrong or doesn't make sense. I am pretty new to Proxmox and this is my first post to this forum, but I wanted to share my backup strategy and hope for comments/improvements:

1) Daily backup cron jobs of all VMs plus the host machine via Duply, excluding '/var/lib/vz/images' on the host, with MAX_AGE and MAX_FULLBKP_AGE set to '1M' for the VMs and '2W' for the host
2) Set up Proxmox to do weekly snapshot backups of all VMs
3) Add 'find /var/lib/vz/backup/dump/* -mtime +30 -exec rm {} \;' to the 'post' execution file of Duply on the host to delete snapshots 30 days old and older

This keeps me quite comfortably within the 500 GB backup space limit that my FTP server provides...
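The cleanup in step 3 could also live in a tiny helper inside the Duply 'post' file; a sketch using the same path and age as in the post above:

```shell
#!/bin/sh
# purge_old DIR DAYS: delete regular files in DIR older than DAYS days
purge_old() {
    find "$1" -type f -mtime +"$2" -exec rm -f {} \;
}

# In the Duply profile's 'post' file (path as in the post above):
# purge_old /var/lib/vz/backup/dump 30
```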
 
