Backup on a remote server

  • Thread starter: langloispy
Hello!

I have a client who wants a backup of his VMs stored on his own server. To achieve that, I have created a mount point in fstab that looks like this:

Code:
sshfs#user@server.com:/remotepath /backup/client1 fuse user,auto,noatime 0 0
Then in PVE, I have set up a new backup storage that points to /backup/client1. Everything works fine until the remote server goes down. When that happens, the backup is written locally on the host, which fills the / filesystem and leads to a "no space left on device" problem...

There are a couple of solutions that could fix this problem. I would like to get your opinion on them:

Option 1:
Change the permissions of /backup/client1 to 555 and make the backup script run as a user other than root. That is my preferred solution since I find it simple. The problem is that I don't know how to make the backup script run as another user...

Option 2: Create a new tiny partition and mount it on /backup, then mount the remote share on /backup/client1. If the share goes down, the /backup partition gets filled without affecting the / filesystem (a rough fstab sketch is shown after this list).

Option 3: Like option 2 but with a tmpfs

Option 4: Any other suggestion would be appreciated.
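
For option 2, the fstab entries could look roughly like the sketch below (the device name /dev/sdb1 is only a placeholder, not my actual setup):

Code:
# small dedicated partition as the backup staging area
# (/dev/sdb1 is only a placeholder device name)
/dev/sdb1  /backup  ext3  defaults,noatime  0  2
# remote share mounted on top, as before
sshfs#user@server.com:/remotepath  /backup/client1  fuse  user,auto,noatime  0  0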

Thanks!
 
Ok thanks.

I am trying option 3 with a tmpfs. This option is a little bit complex since I need to recreate the folder hierarchy on the tmpfs at each reboot and then mount the remote share, so I need to add a script to the boot sequence that takes care of that task.

I need to do that because the system is in production and I can't mess with the partitions for now. In parallel, I am building a PVE cluster connected to a SAN. I will use option 2 (tiny partition) on that infrastructure since it will be easier to mount via fstab. When that infrastructure is ready, I will migrate our production PVE to that cluster and say bye bye to tmpfs...
 
There is a very simple solution: target the backups at a directory inside /backup/client1, i.e. /backup/client1/daily. If the /backup/client1 mount point is not mounted, the inner directory will be missing, causing the backup process to fail quickly. Of course this only works if the backup process doesn't insist on creating the target folder.
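For a directory storage defined in /etc/pve/storage.cfg, the entry would simply point at the inner folder, roughly like this (the storage name "client1-daily" is just an example, and the exact syntax may differ on older PVE versions):

Code:
dir: client1-daily
        path /backup/client1/daily
        content backup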
 
The backup script creates the folder if it does not exist...
 
Just a follow-up on this thread...

I have created a script that mounts a tmpfs, under which I mount my remote drive. This setup does not work 100%: if the remote connection goes down, the tmpfs gets filled during the backup process and the root filesystem stays safe (that's the good part). The problem is that the backup script does not detect that the disk is full and never dies... so all the other backups never run :( I tried to kill the script, but /mnt/vzsnap could not be unmounted; the only solution was to reboot the host :(

So two questions arise:
- is there a way to abort a backup cleanly? (I have tried prunepath like in the thread: http://forum.proxmox.com/threads/4454-Backup-of-VM-failed-(with-exit-code-5)?p=25128#post25128)
- is it possible for the backup script to detect that there is no space left and abort the backup process?

thanks!
(Great product by the way)
 
Code:
sshfs#user@server.com:/remotepath /backup/client1 fuse user,auto,noatime 0 0

Just a note: adding the option "reconnect" in the fstab should help... I will try it and give you the results...

Code:
sshfs#user@server.com:/remotepath /backup/client1 fuse user,auto,noatime,reconnect 0 0