Just add a new ZFS pool in the Storage tab; I think it is called "pve-1". For easier administration you can also install the system on two SSDs and then add an extra pool with your HDDs; I do it this way on bigger systems. You can't store VMs directly on a ZFS pool, only on a dataset. For example:
Code:
zfs list
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
rpool                                                  10.4G  16.4G   144K  /rpool
rpool/ROOT                                             6.85G  16.4G   144K  /rpool/ROOT
rpool/ROOT/pve-1                                       6.85G  16.4G  2.95G  /
rpool/ROOT/pve-1/vm-108-disk-1                         3.50G  16.4G  3.50G  -
rpool/swap                                             3.59G  20.0G  17.6M  -
v-machines                                             3.09T  2.17T   104K  /v-machines
v-machines/home                                        2.90T  2.17T  2.82T  /v-machines/home
v-machines/subvol-109-disk-1                            321M  7.69G   321M  /v-machines/subvol-109-disk-1
v-machines/vm-100-disk-2                               5.97G  2.17T  5.89G  -
v-machines/vm-101-disk-1                               15.5G  2.17T  14.9G  -
v-machines/vm-102-disk-1                               3.43G  2.17T  3.23G  -
v-machines/vm-102-state-vor_grafischem_Paketinstaller   765M  2.17T   765M  -
v-machines/vm-103-disk-2                               35.1G  2.17T  34.4G  -
v-machines/vm-104-disk-1                               40.3G  2.18T  38.7G  -
v-machines/vm-105-disk-1                               6.46G  2.17T  6.46G  -
v-machines/vm-106-disk-1                               41.3G  2.18T  39.4G  -
v-machines/vm-107-disk-1                               40.3G  2.17T  39.1G  -
v-machines/vm-110-disk-1                               5.00G  2.17T  5.00G  -
The pool created by the Proxmox installer is "rpool", and the dataset you can use to store VMs is "pve-1". In this case, v-machines is an extra pool built from HDDs (a sketch of how such a pool can be created and registered follows the zpool status output below). It looks like this:
Code:
zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0

errors: No known data errors

  pool: v-machines
 state: ONLINE
  scan: resilvered 1.09T in 4h45m with 0 errors on Sat May 23 02:48:52 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        v-machines                                      ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D0KRWP  ONLINE       0     0     0
            ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0343538    ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D688XW  ONLINE       0     0     0
            ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D63WM0  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0381420    ONLINE       0     0     0
            ata-WDC_WD20EURS-63S48Y0_WD-WMAZA9381012    ONLINE       0     0     0

errors: No known data errors
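Roughly, such an extra HDD pool could be created and made available to Proxmox like this. This is only a sketch: the dataset name "vmdata", the storage ID "vm-hdd", and the disk IDs are placeholders; use your own names and the IDs from /dev/disk/by-id/. Adding the storage with pvesm is the CLI equivalent of the Storage tab in the GUI.
Code:
# create a pool of two mirrors, 4k aligned (ashift=12)
zpool create -o ashift=12 v-machines \
  mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# create a dataset for the VMs and register it as ZFS storage in Proxmox
zfs create v-machines/vmdata
pvesm add zfspool vm-hdd --pool v-machines/vmdata --content images,rootdir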
The thing is, with ZFS everything should match as closely as possible: the same disk type, ideally Pro or Enterprise models, SAS is recommended (see the feature list). The default on Proxmox is a 4k physical sector size, and every HDD must have the same sector size. A real SATA/SAS controller is needed: no fake RAID and no SATA controller with its own BIOS. In our tests we also had kernel panics without a real SATA controller; we tested this with BSD/NAS4Free/FreeNAS and Solaris as well.
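To check the sector sizes before building a pool, something like this should work (a sketch; sda and sdb are placeholders for your disks):
Code:
# physical and logical sector size per disk
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda /dev/sdb
cat /sys/block/sda/queue/physical_block_size

# check what ashift an existing pool was created with (12 = 4096 byte sectors)
zdb | grep ashift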
So when you have a problem with the ZFS installation, most of the time it is the wrong hardware. We have a lot of servers running ZFS; we used to run Solaris or NAS4Free, and for some time now we use Proxmox on all our physical servers, with HW RAID or with ZFS, mixed as we need. Some hardware is simply not compatible.
As for backups: what you need is the same as with an installation on a single disk or with software RAID.
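For example, a normal vzdump backup works the same whether the VM disks are on ZFS, a single disk, or software RAID (a sketch; the VM ID 108 and the storage name "backup" are placeholders for your own):
Code:
# snapshot-mode backup of VM 108 to a storage named "backup"
vzdump 108 --mode snapshot --storage backup --compress lzo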