[SOLVED] pvesm add zfspool storage does not support content type 'backup'

proxman4

Hi friends.

I'm struggling with a very basic task and I'm pretty sure someone can give me a hint very quickly:

I need to use a ZFS pool as backup storage to do some... backups.

I created the RAID1 (mirror) pool and the dataset which gets the backups:

Code:
zpool create -f -o ashift=12 opx-hdd mirror /dev/sdx /dev/sdy
zfs create opx-hdd/backups
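
As a quick sanity check (using the pool name from above), the mirror layout and the dataset should show up with something like:

Code:
zpool status opx-hdd
zfs list -r opx-hdd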

I created a mountpoint:

Code:
mkdir /mnt/pve/opx-hdd/backups

Then I tried to set it up:

Code:
zfs set compression=on opx-hdd/backups
zfs set compression=lz4 opx-hdd/backups
zfs set mountpoint=/opx-hdd/backups opx-hdd/backups
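
To double-check those properties took effect (same dataset name as above), something like this should do:

Code:
zfs get compression,mountpoint opx-hdd/backups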

Now I get:

Code:
zfs list
NAME              USED  AVAIL     REFER  MOUNTPOINT
opx-hdd           756K  5.33T       96K  /opx-hdd
opx-hdd/backups    96K  5.33T       96K  /opx-hdd/backups

So I wanted to do something like this:

Code:
pvesm add zfspool opx-hdd -pool opx-hdd/backups -content backup
storage does not support content type 'backup'
storage does not support content type 'none'

I tried to modify /etc/pve/storage.cfg:

Code:
zfspool: opx-hdd
        pool opx-hdd/backups
        mountpoint /opx-hdd/backups
        content backup

Then:

Code:
pvesm scan zfs

Now I get this via the web UI:

(attachment: Screenshot from 2022-07-26 16-55-05.png)

Not super comfortable with ZFS, I think I need a hint here.

Thanks in advance.

EDITED: for clarification
 
You can't use a "zfspool" storage for backups. It only allows storing virtual VM/LXC disks. But what you can do is create a "directory" storage pointing to the mountpoint of your dataset. When selecting vzdump (backup) as the content type for that directory storage, you can store backups inside that dataset.
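
On the CLI that would look something like this (just a sketch, assuming the dataset is mounted at /opx-hdd/backups as shown above):

Code:
pvesm add dir opx-hdd-backups --path /opx-hdd/backups --content backup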
 
Hi!

Thanks @Dunuin for this clarification!

I'm not comfortable with the web UI; I often prefer the CLI.

I did create the ZFS pool and the mountpoint as I mentioned.

Now I created this entry in /etc/pve/storage.cfg:

Code:
dir: opx-hdd-backups
        path /opx-hdd/backups
        content backup
        prune-backups keep-all=1
        shared 1
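
For what it's worth, the storage can also be checked from the CLI (storage ID as defined above):

Code:
pvesm status --storage opx-hdd-backups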

My cluster now detects the storage.

Thank you.
 
I noticed the pool is:

Code:
NAME              USED  AVAIL     REFER  MOUNTPOINT
opx-hdd           920K  5.33T      104K  /opx-hdd
opx-hdd/backups    96K  5.33T       96K  /opx-hdd/backups

though the node where the pool lives detects it as:

Code:
Filesystem       Size  Used Avail Use% Mounted on
opx-hdd/backups  5.4T  128K  5.4T   1% /opx-hdd/backups

But the other nodes of the cluster detect it as:

Code:
/dev/md2          58G   16G   40G  29% /


I don't get it.

Is it a corosync or cluster-management thing to sync this between the nodes?
 
Also don't forget to set "is_mountpoint=yes" for that directory storage (pvesm set YourStorageID --is_mountpoint yes), or your backups might end up filling up the root filesystem in case the dataset isn't mounted correctly.

Did you set up ZFS replication? Setting 'shared 1' won't tell PVE to do any magic to share that storage between nodes. It only tells PVE to handle this storage as a shared storage. Actually making it a shared storage is something you have to do on your own, by pointing it to a shared filesystem like NFS/Ceph or by using ZFS replication (which isn't really a shared storage, it's just two synced local storages).
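
With that option set, the storage.cfg entry from above would end up looking roughly like this (just a sketch of the resulting config):

Code:
dir: opx-hdd-backups
        path /opx-hdd/backups
        content backup
        prune-backups keep-all=1
        is_mountpoint yes
        shared 1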
 
Again, thanks @Dunuin for this very clear explanation.

I added the option to my storage.cfg.

Indeed, I didn't set up ZFS replication because the other nodes don't have the same disk capacity.

I'll manage to share it via NFS.
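
For anyone landing here later, a rough sketch of sharing that dataset over NFS (the export network, server address and storage ID below are placeholders, and an NFS server such as nfs-kernel-server must be installed on the node that owns the pool):

Code:
# /etc/exports on the node that owns the pool (10.0.0.0/24 is a placeholder network)
/opx-hdd/backups 10.0.0.0/24(rw,no_root_squash,no_subtree_check)

# reload the exports
exportfs -ra

# add it as a shared NFS storage on the cluster (10.0.0.1 is a placeholder server address)
pvesm add nfs opx-hdd-nfs --server 10.0.0.1 --export /opx-hdd/backups --content backup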