Proxmox Remote Vzdump

Jonpaulh

New Member
Nov 16, 2017
I currently have a 4 node cluster (pm1-pm4) which is working great; each node has a 1TB hard drive.
I am looking at how to automate my backups. Unfortunately I only have 100GB of space in the local backup directory, and some of my VMs are just over this, so I am unable to back them up.

Currently on one node, pm4, I have installed a second hard drive (1TB) and added it via the GUI as a directory storage mounted on /backup. This is working and appears as expected; I have added it to fstab etc.
When SSH'd into pm4 I can issue this command:

Code:
root@pm4:/# vzdump 119 --dumpdir /backup --mode snapshot
INFO: starting new backup job: vzdump 119 --mode snapshot --dumpdir /backup
INFO: Starting Backup of VM 119 (qemu)
INFO: status = stopped
INFO: update VM 119: -lock backup
INFO: backup mode: stop

This works great as it is a VM local to node pm4. However, when I try a VM from another node, the command completes instantly with no output.

Code:
root@pm4:/# vzdump 101 --dumpdir /backup --mode snapshot
root@pm4:/#

Ideally I want to be able to just issue the command like this and back up all the necessary VMs onto this drive. I will run the scripts from pm4 and back all VMs up one by one.

I have read that you can create a virtual drive that maps to the mount point on pm4, i.e. the directory I have created. Then, by adding this, I could do the backups from the nodes themselves and push them to the remote directory; I believe this involves creating an NFS share.

I also saw a post where someone is doing a vzdump using stdout and SSH here: https://forum.proxmox.com/threads/vzdump-to-stdin-over-ssh.33039/. They use the following command (the restore has to run as the remote command of ssh):
Code:
vzdump 119 --stdout | ssh root@remote.server.com "pct restore 119 - --rootfs 20"

I am wondering if I can do something similar and back up over SSH with vzdump to the second drive on the pm4 node. Can anyone point me in the right direction, either by using the vzdump command on pm4 to dump VMs from other nodes to the mount point of the second drive, or by using SSH to achieve this?

The main aim is for only pm4 to have the backup drive, and for vzdump not to take up any space on the originating node. I know I could back up to the host node and scp the file across, but that does not help in the current situation where I don't have ample space on the host node. I also want to make minimal changes to the other nodes, i.e. no adding extra drives or devices such as an NFS mount.
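To illustrate, the per-VM loop I have in mind on pm4 would look something like this (a rough sketch only; the node:VMID pairs and filenames are made up, and the echo makes it a dry run):

```shell
#!/bin/sh
# Sketch of the pm4-side loop: pull each VM's dump over SSH onto the
# big drive. The node:VMID pairs below are illustrative, not real.
BACKUP_DIR=/backup

for pair in pm1:101 pm2:105 pm3:110; do
    node=${pair%%:*}   # node currently hosting the VM
    vmid=${pair##*:}   # VM to back up
    # vzdump runs remotely via --stdout; the file lands on pm4.
    cmd="ssh root@$node 'vzdump $vmid --stdout --mode snapshot' > $BACKUP_DIR/vzdump-$vmid.tar"
    echo "$cmd"        # dry run: swap echo for eval to execute
done
```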

If I can provide any other info let me know.
 
If you want to automate your backups using the integrated scheduler (via the GUI), you will have to use a remote mounting solution. NFS is unencrypted, but provides good performance.

Otherwise, there are two simple solutions I see:

  • vzdump with --stdout simply writes out the tar archive that would otherwise be stored. You can redirect it to a file on a remote node:
Code:
vzdump <vmid> --stdout | ssh user@remotehost "cat - > backup.tar"
  • Alternatively, take a look at sshfs. It provides a simple way to mount remote folders via ssh, so you could mount your /backup folder on your remote node to all your other nodes and treat it as a kind of local storage. If you mark your "/backup" storage as shared and make sure it is mounted on all other nodes, you could theoretically even use the Backup GUI this way.
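For reference, marking such a directory storage as shared would end up looking roughly like this in /etc/pve/storage.cfg (a sketch only; the storage ID "backup" and the path are assumptions, use whatever you named it):

```
# /etc/pve/storage.cfg -- illustrative entry; ID and path are assumptions
dir: backup
        path /mnt/backup
        content backup
        shared 1
```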
 
If you want to automate your Backups using the integrated scheduler (via the GUI), you will have to use a remote mounting solution.
<snip>

This is great info, I was getting a bit muddled with all the info I was reading. Thank you for giving me clear direction. I will give it a shot.
 
Using SSHFS worked for me, using the following:

#On pm4 for example
mkdir /mnt/backup

#Mount the drive
mount /dev/sda1 /mnt/backup

#Edit /etc/fstab and put in
/dev/sda1 /mnt/backup ext3 defaults,errors=remount-ro 0 1

#Now on all other nodes

#Copy the sshfs .deb file over and install it
#(or simply apt-get install sshfs, if the node has repository access)
dpkg -i ./sshfs.deb
apt-get install -f

#Make mount point and setup sshfs
mkdir /mnt/backup
sshfs -o allow_other root@pm4:/mnt/backup /mnt/backup
#sudo sshfs -o allow_other,defer_permissions,IdentityFile=~/.ssh/id_rsa root@xxx.xxx.xxx.xxx:/ /mnt/droplet # use if you have identity file

#Make the mount persistent by adding this line to /etc/fstab
root@pm4:/mnt/backup /mnt/backup fuse.sshfs defaults,allow_other,_netdev 0 0

You can then add the drive via the GUI: click Datacenter, add a Directory storage using the path /mnt/backup, and mark it as shared so all nodes can use it.
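One caveat worth guarding against (my own addition, not something that bit me yet): if the sshfs mount drops, vzdump will happily write into the empty /mnt/backup directory on the node's root filesystem. A small wrapper can check the mount first; the path and VMID below are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: refuse to dump if the sshfs mount is not actually present,
# so a dropped mount doesn't fill the node's root filesystem instead.
# The path and VMID are illustrative assumptions.
backup_if_mounted() {
    mnt=$1
    vmid=$2
    if mountpoint -q "$mnt"; then
        # Mount is live: dump straight onto pm4's drive
        vzdump "$vmid" --dumpdir "$mnt" --mode snapshot
    else
        echo "skipping VM $vmid: $mnt is not mounted" >&2
        return 1
    fi
}

backup_if_mounted /mnt/backup 101 || true
```

Something like this could then be called per VM from a cron job on each node.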
 
Using SSHFS worked for me, using the following:
<snip>
You can then add the drive via the gui clicking data center and adding a directory storage using the path /mnt/backup and share this between all nodes

Great suggestion, thank you for sharing - just used this method to deal with a couple of servers in an old cluster I wanted to decommission.
 
