Backup folder

nick

Renowned Member
Mar 28, 2007
Hi All,

I have a question regarding backup folders. I have 3 PVE servers connected in a cluster, and I am now trying to create backup jobs on each machine into a specific folder. On the master I can create a backup job in any folder/subfolder I want (e.g. VM1 backs up into /backup/VM1 and VM2 into /backup/VM2).

Now, on the slaves I created a /backup folder and, inside it, a specific folder for each machine I intend to back up. For example, on Slave 1 I have VM3 and I try to create the backup job in the folder /backup/VM3, but every time I try to create the job it says:

"Error: Destination directory '/backup/VM3' does not exist"

On the slaves I can only create backup jobs in the folder /backup.

What can I do?
 
I have a new observation: a backup folder on a slave is accepted only if the same folder also exists on the master.

Is that normal?
 
OK! Good to remember! So, for anyone who intends to create specific backup folders on slaves: create the same folders on the master first! That way the backup job can be created!
 
Now I need to find out whether it is possible, and how, to automatically copy the backup folder from a slave to the master.

What I want:
- on the slave, a backup job is defined that saves the VM3 files into the folder /backup/VM3;
- after the job ends, this folder is copied to /backup/VM3 on the master.

The procedure is currently done manually, but I want to turn it into an automatic job.

Can this be done? Any suggestions?
 
I have done something similar.

I publish one folder on each node via NFS (or Samba, if you prefer); that folder is automounted at startup on the other node under a new path. The result is that each node has two folders: one mounted remotely via NFS and one published via NFS. Then you can cross the backups: configure your backup to be written into the remote folder and enjoy this simple solution.
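A minimal sketch of that cross-mount, assuming hypothetical node names and paths (node1, node2, /backup/node1, /backup/remote are placeholders, not the poster's actual configuration):

```shell
# Hypothetical cross-mount sketch (all names and paths are assumptions).
# /etc/exports on node1 - publish its local backup folder via NFS:
#   /backup/node1  node2(rw,no_root_squash,sync)
# /etc/fstab on node2 - automount node1's export at startup:
#   node1:/backup/node1  /backup/remote  nfs  rw,hard,intr  0 0
exportfs -ra          # reload exports on the exporting node
mount /backup/remote  # bring up the remote folder on the importing node
```

Repeat the same two entries in the other direction so each node ends up with one published folder and one remotely mounted folder.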
 
If I understand correctly, I need to create a second job folder and have the results saved into a remote folder (NFS or Samba).

From my point of view, I would prefer a solution that copies only to the master (not to external NFS servers, at least not at the moment) and notifies me via e-mail about the results!

Maybe rsync can be used inside the cluster...
 

My remote folder (NFS or Samba) can be on the master.

This method has an advantage over rsync in cron: you only need to select the folder that has the remote share mounted when you define the backup task (in the GUI).

I have two folders mounted on each pair of nodes because I want to cross the backups between the nodes.
If you only want one share, export an NFS folder on your master (it can be /vz/backups) and mount it on the other node(s) at the same path (/vz/backups). That's all.
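A sketch of that single-share variant under the same assumptions (MASTER_IP is a placeholder; the export options are mine, not verified against the poster's setup):

```shell
# Hypothetical single-share sketch (placeholders, not a tested config).
# /etc/exports on the master - publish the backup folder:
#   /vz/backups  *(rw,no_root_squash,sync)
# /etc/fstab on each slave - mount it at the same path:
#   MASTER_IP:/vz/backups  /vz/backups  nfs  rw,hard,intr  0 0
exportfs -ra       # on the master, after editing /etc/exports
mount /vz/backups  # on each slave
```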
 
Personally I prefer to use the FTP protocol; I am running some tests with different solutions and I will post the results! Maybe the information will be useful...

If someone has a better idea, or perhaps examples, please post...
 
Does anyone know the rsync syntax for working with a Windows shared folder (AD) when I need to specify domain/user and password?

I am writing the procedure (and script) for an internal PVE cluster (from slaves to master) - it will be posted soon ;)! It's a beta version... :D
 
Now, how do I mount a Windows 2003 shared folder? I don't want to install anything extra or interfere with PVE services...

I tried smbmount but it's not installed!
 
OK, I am back with my first conclusions about using rsync.

For people who are looking for more details, I will first describe the scenario; I intend to keep a backup in 3 different places.


  • We have 3 PVE servers (1 master and 2 slaves)
  • The master has an extra high-capacity disk for backup
  • The slaves have only 1 disk (or 2 in RAID1)
Now, the backup places are:

1) Local disk of each server
2) Extra disk mounted on the master
3) A Windows 2003 AD share mounted on the master (in my scenario placed in another building) - use this only if you have enough bandwidth to synchronise the content.

Case 1) Backup to the local disk on the root partition - defined in the PVE backup jobs - very easy to use and to define.

Case 2) For this we use an rsync command defined in cron; this command is set on the slaves to synchronise the local backup folders with the extra disk on the master.

The command is:

rsync -auvz -e ssh /BACKUP_DIR MASTER_IP:/MASTER_PATH --delete

Before letting cron execute the command, run it manually once, because you need to import the SSH host key (RSA key fingerprint) of the master on the slave. On the first run the system will ask you to permanently accept the SSH key; confirm with yes and wait until the transfer ends.
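For an unattended cron run the slave also needs password-less SSH to the master. A sketch of that one-time setup plus a sample crontab line (the key-based auth, the 02:30 schedule, and the placeholder paths are my assumptions):

```shell
# One-time setup on each slave, run as root (sketch, placeholders only):
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa  # create a key (skip if one exists)
ssh-copy-id root@MASTER_IP                    # installs the key and imports the host key
# Sample crontab entry (crontab -e) - sync every night at 02:30:
# 30 2 * * * rsync -auvz --delete -e ssh /BACKUP_DIR MASTER_IP:/MASTER_PATH
```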

In the near future I will post the script, after I finish the backup-rotation part (keep only the last 7 backups - the number being defined by the user).
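Since that rotation script is not posted yet, here is a minimal sketch of the idea (keep only the newest N files); the function name and the no-spaces-in-filenames restriction are my assumptions, not the author's script:

```shell
# keep_last N DIR - delete all but the N newest files in DIR.
# Sketch only: assumes backup file names contain no spaces or newlines.
keep_last() {
    keep="$1"
    dir="$2"
    # list files newest-first, skip the first $keep, remove the rest
    ls -1t "$dir" | tail -n +"$((keep + 1))" | while read -r f; do
        rm -f -- "$dir/$f"
    done
}
```

It could then be called after the rsync finishes, e.g. keep_last 7 /backup/VM3.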

Case 3) Synchronise the backup folder from the extra disk mounted on the master with an SMB folder.


Step 1: Create the folder where we will mount the shared folder (as described in the Proxmox Wiki)
mkdir /winshare
Step 2: Now we modify the fstab file and add the line to connect to the Windows 2003 Active Directory share
nano /etc/fstab

//windows-or-samba-server-name/sharename /winshare cifs username=yourusername,password=yoursecretpassword,domain=yourdomainname 0 0
Save and exit!

OBSERVATION: I recommend using a standard user; DO NOT USE an administrator account! When you share the folder, add this user to the share permissions as well.

Step 3: Now we test the settings and mount the shared folder manually
mount /winshare
You will now see in /winshare all the content of the Windows shared folder. If not, review your settings (domain/user name or password).

Step 4: Add to cron the same command as in case 2. The only difference is that instead of the master IP we use localhost (or 127.0.0.1).
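Adapted to case 3, that cron entry might look like this (schedule and paths are placeholders of mine; note that since /winshare is a local mount point, a plain local rsync without the ssh round-trip would also work):

```shell
# Sample crontab entries on the master (placeholders, not a tested config):
# 0 4 * * * rsync -auvz --delete -e ssh /BACKUP_DIR localhost:/winshare
# Simpler local variant - /winshare is mounted locally, so ssh is optional:
# 0 4 * * * rsync -auvz --delete /BACKUP_DIR /winshare
```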


That's it! Enjoy! I hope the info is useful.

I will update the post when the script is done. If you already have a backup-rotation procedure, please help me define the script!
 
dcalvache-

Are you describing a situation where, in a cluster, any backups set up and run from the web GUI can be destined for one of the other machines in the cluster, via NFS?

This would save duplication of backups, right?

I'm going to try and set this up now with my rather limited skills.
 
But imagine if something happens to the remote host...

I think it's safer to have a local backup and synchronise the backup folder with the remote host! If the rsync doesn't end as you expect, you at least have a local backup!
 
Hi,

I have done something similar.

I publish one folder on each node via NFS (or Samba, if you prefer); that folder is automounted at startup on the other node under a new path.

I've done something similar too...
But mounting the remote backup folder via fstab has one danger.
If the remote system goes down, the remote folder won't be automounted again after the remote machine comes back up. So the backup will be dropped into the local folder that serves as the NFS mount point.
If this folder is on the root filesystem, it may fill your root filesystem to 100%, which will lead to trouble.

So I'm using autofs for the NFS mount; it will re-mount the NFS share if the remote server goes down and comes back.
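A minimal autofs sketch of that idea (the map file name, mount point, timeout and server are my assumptions, not Uwe's actual configuration):

```shell
# Hypothetical autofs sketch (placeholders only).
# /etc/auto.master - point a mount root at a map file:
#   /backup  /etc/auto.backup  --timeout=60
# /etc/auto.backup - mount the remote share on demand:
#   remote  -rw,hard,intr  MASTER_IP:/backup
/etc/init.d/autofs reload  # apply the new map
# the share then appears at /backup/remote on first access,
# and is re-mounted automatically after the server comes back
```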

cheers,
Uwe
 
I've tried it and it seems to work well.

nick: don't you get an email anyway if the backup fails? (Not that I've worked out how to do that yet.)

Similarly, wouldn't a 'hard' NFS mount in fstab, e.g.:

192.168.11.4:/backup/proxmox1 /backup/proxmox1 nfs rw,hard,intr 0 0

produce an error causing your backup to fail?

If not, you could always set it to back up to a subfolder of the mount which doesn't exist behind the mount point; then it would fail anyway?
 
I use this solution and it works perfectly for me! I also wrote a script that sends me the results from a log file. This way I keep all rsync activity in a log file and receive the results by mail.
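A wrapper of that kind could be sketched roughly like this (the log path, the mailx command and the address are my assumptions, not the poster's actual script):

```shell
# run_and_log CMD LOGFILE - run CMD, append its output to LOGFILE,
# and print OK or FAILED so the result can be piped to a mail command.
run_and_log() {
    cmd="$1"
    log="$2"
    printf '%s starting: %s\n' "$(date)" "$cmd" >>"$log"
    if sh -c "$cmd" >>"$log" 2>&1; then
        status=OK
    else
        status=FAILED
    fi
    printf '%s finished: %s\n' "$(date)" "$status" >>"$log"
    echo "$status"
}

# Typical use (mailx availability is an assumption):
# run_and_log "rsync -auvz --delete -e ssh /backup MASTER_IP:/backup" \
#     /var/log/backup-sync.log | mailx -s "Backup sync" admin@example.com
```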