Auto backup with NAS4Free storage

Never mind... I wouldn't waste all that time. ZFS can make any sort of RAID you need, no need to re-install.

https://www.zfsbuild.com/2010/06/03/howto-create-zfs-striped-vdev-pool/

Assuming sda is your main PVE OS drive and sdb-sdd are 3 other drives you have:
Code:
zpool create -f backuptank /dev/sdb /dev/sdc /dev/sdd
### verify the mountpoint:
zfs list

Once you have made your pool, it should auto-mount to a directory, which will be listed when you type zfs list. In the PVE GUI, add that path as a Directory (not ZFS) storage, set it as vzdump content, and set max backups as needed:
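If you prefer the CLI over the GUI for that last step, something along these lines should do it (a sketch: the storage name is made up, and it assumes the pool from above auto-mounted at /backuptank):

```shell
# Register the pool's mountpoint as a Directory storage for vzdump backups,
# keeping up to 3 backups per VM.
pvesm add dir backuptank-store --path /backuptank --content backup --maxfiles 3

# Confirm the storage is registered and see its free space:
pvesm status
```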

[attachment: upload_2017-6-10_22-2-3.png]
 
Thanks. That really worked.

(Ignore that disks 2, 3 and 4 are currently nice ZFS tanks.)
Now I think I need to finish off the redundancy for the Disk1 Proxmox OS drive (sda), which should be in a RAID with disk2 and disk3 in case sda breaks.
I'm thinking of setting up mdadm as RAID for Proxmox, but I'm still researching how mdadm works.
It seems I need Disk1 to install Debian Wheezy on, and then install mdadm onto that.
Then I create a RAID from disk2 and disk3, which combines them into md0.
Finally I install Proxmox onto md0, so it runs on disks 2 and 3 as RAID disks.
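Roughly, the mdadm part of that plan would look like this (just a sketch; the device names are assumptions and the commands destroy whatever is on those disks):

```shell
# Create a RAID1 array md0 from disk2 and disk3.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial resync progress:
cat /proc/mdstat

# Persist the array config so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```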
 
No, if you are thinking of that, then again, save yourself the trouble and re-install with ZFS. The installer can RAID it for you:
[attachment: upload_2017-6-11_9-30-24.png]

[attachment: upload_2017-6-11_9-31-13.png]
 
Ok, that worked, but it only offered RAID1. I would prefer RAID6 with the 4 disks, but maybe I'm missing something.

My understanding is that the Proxmox OS is now running on RAID1 zd0, which is on the four disks sda, sdb, sdc and sdd.

So, I transferred VM.iso to /var/lib/vz/template/iso, however root@proxmox:/var/lib/vz/template/iso doesn't show the VM.iso file?
Code:
root@proxmox:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 111.8G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0 111.8G  0 part
└─sda9   8:9    0     8M  0 part
sdb      8:16   0 111.8G  0 disk
├─sdb1   8:17   0  1007K  0 part
├─sdb2   8:18   0 111.8G  0 part
└─sdb9   8:25   0     8M  0 part
sdc      8:32   0 111.8G  0 disk
├─sdc1   8:33   0  1007K  0 part
├─sdc2   8:34   0 111.8G  0 part
└─sdc9   8:41   0     8M  0 part
sdd      8:48   0 111.8G  0 disk
├─sdd1   8:49   0  1007K  0 part
├─sdd2   8:50   0 111.8G  0 part
└─sdd9   8:57   0     8M  0 part
sde      8:64   0 111.8G  0 disk
└─sde1   8:65   0 111.8G  0 part
zd0    230:0    0     8G  0 disk [SWAP]
 
Ok, fixed with steps:
Proxmox > Server View > Datacenter > proxmox > local > Content > Upload > Content: iso image > Select File… > vm.iso > Upload.
 
Not sure what problem you are referring to. Run "zfs list" to check the zpools and their mount points, and run "zpool status -v" to see the member disks in each pool.

The default PVE install will create two ZFS storages: a portion that can be directly accessed as a filesystem, and another portion that is for block devices only, meaning backups cannot be stored there; the available space is shared between the two. ISOs get stored in /var/lib/vz/template/iso. You can manually put them there with scp, or use the web GUI as you found.
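For example, copying an ISO over with scp from another machine and verifying it (hostname and filename are placeholders):

```shell
# From your workstation: copy the ISO into the PVE ISO directory.
scp vm.iso root@proxmox:/var/lib/vz/template/iso/

# On the PVE host: confirm it landed and shows up as storage content.
ls -lh /var/lib/vz/template/iso/
pvesm list local
```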

Any form of RAID5 (RAIDZ1) or RAID6 (RAIDZ2) will reduce IO for virtual machines, even more so if you don't have enough RAM/CPU (16 GB is suggested by some) or don't have SSDs (I didn't see a mention of the drive type). Running RAIDZ2 does not make sense unless you have a lot of drives: with more drives the chance of a second disk failure rises, so the double parity becomes necessary. With fewer drives you would be better off putting as many of them as possible toward usable space, unless you know for sure you will not need the extra space.
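As a rough back-of-the-envelope for the space trade-off, here are some hypothetical helper functions (parity/mirror overhead only; real pools lose a bit more to metadata and padding):

```shell
# Usable-capacity estimates, in the same unit as the per-disk size.
usable_raidz1() { echo $(( ($1 - 1) * $2 )); }        # n disks, 1 parity disk's worth
usable_raidz2() { echo $(( ($1 - 2) * $2 )); }        # n disks, 2 parity disks' worth
usable_mirror_stripe() { echo $(( $1 / 2 * $2 )); }   # RAID10: half the disks

# 4 disks of 112 GB each:
usable_raidz1 4 112          # 336
usable_raidz2 4 112          # 224
usable_mirror_stripe 4 112   # 224
```

So with only 4 disks, RAIDZ2 and striped mirrors give the same usable space, but the mirrors will be much faster for VM IO.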

If the ZFS pool is just for backups, speed probably will not matter as much, but if you are running Windows VMs on the ZFS array, you may be disappointed with the RAIDZ methods; VMs with no GUI may work OK. You didn't mention the guest types or the drive types: are those SSDs? For VM guests, SSD is definitely best, but for backups you would want spinning disks so you don't burn through your SSD write limit.

If this were my install and Windows guests were on there, I would set up 2 ZFS mirrors and stripe them for a RAID10. If they were just Linux guests, I would do a RAIDZ1 (RAID5) of 5 drives; more disks give more speed and space. Send the backups to your 1 TB drive: there is not much sense in mirroring a backup, and some sort of striped RAID is better for space. If you put both your hosts in a cluster, it is easy to share the storage between machines using the Proxmox GUI; sharing is on the storage page. You could also do snapshot backups between machines using the pve-zsync command: https://pve.proxmox.com/wiki/PVE-zsync The cluster method is probably the simplest to get up and manage using conventional backups in the GUI.
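A pve-zsync job along the lines of that wiki page could look like this (the VMID, target IP and pool name are placeholders):

```shell
# Create a recurring sync job: replicate VM 100's disks to the pool
# "backuptank" on the other node, keeping the last 7 snapshots.
pve-zsync create --source 100 --dest 192.168.15.1:backuptank --verbose --maxsnap 7
```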

You might read a bit more on zfs:
http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html
https://calomel.org/zfs_raid_speed_capacity.html
 
Hi, FYI, Proxmox on Linux SW RAID (mdadm) works just fine, if you do the install in the proper sequence, i.e.,

- first install your bare-metal host with the Debian Jessie installer; the minimal boot ISO is just fine for the install.
- be very attentive to the hints provided in https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie regarding how you want LVM to be laid out on top of the MD devices. Note the Debian Jessie wiki article does NOT talk about SW RAID at all. So to do that install, you must first be able to do such a thing competently as a baseline. If you can't, then maybe Proxmox-on-ZFS with a 2-disk ZFS mirror is your better pick.
- really, Proxmox on Linux SW RAID is very straightforward, so long as Linux SW RAID is not a problem for you to set up.

Don't try to jury-rig mdadm / Linux SW RAID onto a Proxmox install after the fact; this is more fuss and pain than doing it cleanly in the proper sequence.
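As a baseline, the LVM-on-MD layout could be sketched like this (all names and sizes are assumptions, done before installing Proxmox, with md0 already assembled from the two disks):

```shell
# Put LVM on top of the RAID1 array.
pvcreate /dev/md0
vgcreate pve /dev/md0

# Carve out root, swap and data LVs roughly like the PVE installer does.
lvcreate -L 20G -n root pve
lvcreate -L 4G  -n swap pve
lvcreate -l 100%FREE -n data pve
```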

Possibly the simplest config would be:

- 2 disks mirrored for Proxmox; put your LVM on top of this, properly laid out so Proxmox can work nicely as per the wiki hints
- deal with your other disks later (disks 3 and 4, I guess). The clean thing would be a simple RAID volume mounted as a new 'local filesystem' data store: mount it as /data or something and add it to Proxmox. Or, if you really (!) are keen to maximize space at the cost of failure risk, do a RAID0 non-redundant volume for holding your backups.
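The "mount it as /data" step might look like this (the device name and filesystem choice are assumptions):

```shell
# Format the RAID volume and mount it persistently at /data.
mkfs.ext4 /dev/md1
mkdir -p /data
echo '/dev/md1 /data ext4 defaults 0 2' >> /etc/fstab
mount /data

# Register it with Proxmox as a directory store:
pvesm add dir data --path /data --content backup,iso
```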


Tim
 
Ok, I think it's complete now as per my previously mentioned post.
Host Proxmox OS on ZFS RAID1 across 4 SSD disks. Guests are all non-Windows.
1 SSD disk for storage.
1 external 1 TB hard disk; FreeNAS will rsync to pull the storage from the Proxmox storage.

Hmm, Proxmox auto backup failed with the error: TASK ERROR: mkdir /dev/sde: File exists at /usr/share/perl5/PVE/Storage/DirPlugin.pm line 97.
I added storage for Disc5 at /dev/sde, then the automatic backup for the VM failed.
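That error is likely because a Directory storage needs a mounted filesystem path, not the raw device node /dev/sde. A sketch of one way to fix it (partition, mountpoint and storage name are assumptions; mkfs destroys the disk's contents):

```shell
# Format the disk, mount it persistently, then point the storage at
# the mountpoint instead of the device node.
mkfs.ext4 /dev/sde1
mkdir -p /mnt/disc5
echo '/dev/sde1 /mnt/disc5 ext4 defaults 0 2' >> /etc/fstab
mount /mnt/disc5

pvesm add dir disc5 --path /mnt/disc5 --content backup
```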

Also, the 2 previous replies appear to miss the issue: transferring the vm.iso file to /var/lib/vz/template/iso succeeded, but no file actually appeared in /var/lib/vz/template/iso. I think this is somehow due to the ZFS RAID1 on discs 1, 2, 3 and 4?

I think I need to create ZFS storage on disc5, so I can do snapshots and also automatic backups. The wiki says only ZFS can do snapshots.
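Creating a single-disk ZFS pool on disc5 and adding it as snapshot-capable ZFS storage might look like this (the pool name is an assumption):

```shell
# Single-disk pool on sde: no redundancy, which is acceptable for a
# backup target if the data also lives elsewhere.
zpool create -f tank5 /dev/sde

# Add it to Proxmox as a ZFS storage (supports snapshots/linked clones):
pvesm add zfspool tank5 --pool tank5 --content images,rootdir
```

Note that a zfspool storage holds VM disk images, not vzdump backup files; for the backup files themselves you would still add the pool's mountpoint as a Directory storage.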
 
Ok, I had to learn how to add a node. Backups now storing on disc 5, rather than the local Proxmox OS disc.
 
