Proxmox 5.1 remote backup

Miguel
Nov 27, 2017
Hi,

I am currently running several VMs on a node with SATA disks under Proxmox 4.4.

I have purchased two servers, one with SSD drives and one with SATA drives, both running 5.1. The idea is to split some VMs between those two nodes.

I am planning to run 2 VMs on the server with SSD drives, and since there is not much room on that server for backups, I was thinking of setting up some sort of synchronization between the two Proxmox nodes for those two VMs and storing the backups on the SATA-drive Proxmox server.

I have searched around and I haven't found many options apart from using ZFS with pve-zsync, or using sshfs.

Those servers have software RAID, and I have read that ZFS doesn't perform well on top of it.

What are my options? The guest OS is CentOS, if that helps.

Regards,

Miguel
 
Hi,

ZFS is a good option if you spend some time learning how to use and administer it. You also need some basic knowledge about storage and Linux. As for your question, I can say that pve-zsync works and performs very well without any problem for me (several months of usage on different PMX clusters).
 
Thanks! I have read that ZFS is not recommended for storing VMs. Which documentation or articles do you recommend to get a grasp of ZFS?

PS: I am not planning to create a cluster, at least for now; I just want a second Proxmox node where I can back up these 2 VMs.
 
I have read that ZFS is not recommended for storing VMs
This is not true. I and many other people are running VMs on ZFS without problems.

Which documentation or articles do you recommend to get a grasp of ZFS
One of the best pieces of documentation is here:

https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/

I am not planning to create a cluster, at least for now; I just want a second Proxmox node where I can back up these 2 VMs.

I think that is not the best solution, even in your case. If you have a 2-node cluster you will have these advantages:

- ZFS synchronization / replication
- you can move your VMs (all of them, or only one) to the second node
- if your primary node goes offline or breaks, it is very simple to restore your VMs on the second node
- maybe after some time you will add a 3rd node ... it is simple to add it to the same cluster
- simple management (passwords, users, ACLs, firewall, and so on)
- maybe you can activate HA in the cluster in the future
- and many other small things
 
Many thanks for your input! So do you recommend using ZFS on top of software RAID on top of SSD drives for the whole system?

Which disk format do you use on top of ZFS? qcow2? raw?
 
Hi Miguel,

Many thanks for your input! So do you recommend using ZFS on top of software RAID on top of SSD drives for the whole system?

Which disk format do you use on top of ZFS? qcow2? raw?

- ZFS must "see" every HDD/SSD directly, without any other layer like software or hardware RAID, for best performance and decent safety
- raw is the best format on ZFS

By the way, the Proxmox wiki has some topics about ZFS. Start reading it, and good luck!
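As a sketch of what "ZFS seeing the disks directly" looks like in practice (the pool name and device paths here are placeholders, not from this thread):

```shell
# A mirrored pool built from the bare disks themselves -- no RAID layer
# underneath. "tank", /dev/sda and /dev/sdb are assumptions; adjust to
# your own device names.
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
zfs set compression=lz4 tank   # cheap compression, usually a net win
zpool status tank              # verify both disks show as ONLINE
```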
 
After reading some information, it seems strongly recommended to have ECC RAM; unfortunately one of the servers doesn't have that kind of RAM. Besides, the documentation says 50% of RAM is devoted to the ARC cache. It seems a good system and I will probably use it in the future, but it seems I should have known all this before purchasing these servers... my fault!
 
Hi Miguel,

ECC RAM is good to use but is not a must-have. I have used ZFS for many years, mostly on non-ECC RAM.
The 50% RAM usage you read about is just the ZFS default, not a requirement. I have some servers with ZFS that use only 1-2 GB of RAM; ZFS can be tuned to use whatever amount of memory you want. At a guess, 2 GB of RAM will be OK for your test case. You can also set a minimum and a maximum amount of RAM for ZFS alone.
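For example, capping the ARC is done through ZFS module options (the 1 GiB / 2 GiB sizes below are just an illustration for a small node, not a recommendation):

```shell
# Sketch: compute ARC bounds in bytes and print the line that belongs
# in /etc/modprobe.d/zfs.conf (sizes here are assumptions; tune to taste).
ARC_MIN=$((1 * 1024 * 1024 * 1024))   # 1 GiB in bytes
ARC_MAX=$((2 * 1024 * 1024 * 1024))   # 2 GiB in bytes
printf 'options zfs zfs_arc_min=%s zfs_arc_max=%s\n' "$ARC_MIN" "$ARC_MAX"
# After writing that line to /etc/modprobe.d/zfs.conf, run
# `update-initramfs -u` and reboot so the module options take effect.
```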

Give ZFS a try - it is the most advanced storage system.

Good luck Miguel!
 
Ok, I have installed ZFS at OVH (they have an installer with Proxmox 5.1). It creates an rpool ZFS pool, and inside it these datasets:

/rpool/data
/rpool/ROOT
/rpool/ROOT/pve-1

but I don't see them when I run zpool list.

It's a little bit confusing coming from a file-based system where all VM images live under /var/lib/vz/images.
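(Note: it turns out `zpool list` only shows the pools themselves; the datasets inside them appear with `zfs list`:)

```shell
zpool list          # shows only the pool, e.g. one line for "rpool"
zfs list -r rpool   # shows the datasets: rpool, rpool/ROOT,
                    # rpool/ROOT/pve-1, rpool/data, ...
```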

I added a Directory entry in the Storage GUI for Disk image and Container pointing at /rpool/data. I copied a VM from a non-ZFS Proxmox 4.4 server; this VM has a qcow2 disk that I placed under /var/lib/vz/images. Apparently this is not the way to do it, and I got an error.

Then, in the VM's Hardware section, I did "Move disk" to the /rpool/data directory; the disk was converted to raw format and I could run the VM.
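(For reference, instead of a Directory entry, Proxmox can use the dataset through its native ZFS storage type, which stores raw disks as zvols. The installer's stock definition in /etc/pve/storage.cfg usually looks roughly like this; the storage name may differ on your setup:)

```
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
```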

I haven't found any guide and I feel a little bit lost; I am not sure if this is the right way to do things. I have only found this one:

https://nerdoncoffee.com/operating-...-server-and-hypervisor-using-proxmox-and-zfs/

The Proxmox wiki is not very detailed for this use case. Do you know any other guide?

Maybe it is because it's the only VM running, but considering I'm testing this on a server with 2 SATA drives, write performance is 3-4x that of the server it was running on before.
 
I have set up ZFS on two nodes. I am googling around for a tutorial to do what I want, which is to perform a daily sync and keep a backup on the second node.

I see people complaining about snapshot cleanup, but I can't find any information on how to do it. If I don't have much space left on node1, how can I avoid those snapshots breaking the sync for lack of space?
 
If you do not have enough free space, you cannot use ZFS snapshots. For Proxmox, you can use pve-zsync, or any other tool that works with zfs send/receive.
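A sketch of what a pve-zsync job could look like (the VM ID, IP and target dataset are assumptions, not from this thread). The --maxsnap option also addresses the snapshot-cleanup worry, since older snapshots are pruned automatically once the limit is exceeded:

```shell
# Create a recurring sync job for VM 100 to the backup node,
# keeping at most 7 snapshots on each side.
pve-zsync create --source 100 --dest 192.168.1.2:rpool/backup \
    --name dailysync --maxsnap 7 --verbose
# The job's schedule is written as a cron entry (/etc/cron.d/pve-zsync),
# which you can edit to run once a day. Check job state with:
pve-zsync list
```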
 