[SOLVED] pvesm add zfspool storage does not support content type 'backup'


Mar 23, 2020
Hi friends.

I'm struggling with a very basic task; I'm pretty sure someone can give me a hint very quickly:

I need to use a pool as backup storage to do some... backups.

I created the RAID1 (mirror) pool and the dataset which will hold the backups:

zpool create -f -o ashift=12 opx-hdd mirror /dev/sdx /dev/sdy
zfs create opx-hdd/backups

I created a mountpoint:

mkdir /mnt/pve/opx-hdd/backups

Then I tried to set it up:

zfs set compression=on opx-hdd/backups
zfs set compression=lz4 opx-hdd/backups
zfs set mountpoint=/opx-hdd/backups opx-hdd/backups

Now I get:

zfs list
opx-hdd           756K  5.33T       96K  /opx-hdd
opx-hdd/backups    96K  5.33T       96K  /opx-hdd/backups

So I wanted to do something like this:

pvesm add zfspool opx-hdd -pool opx-hdd/backups -content backup
storage does not support content type 'backup'
storage does not support content type 'none'
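For reference, a zfspool storage only accepts the content types 'images' and 'rootdir' (guest disks), which is why 'backup' is rejected. A command that would succeed, but for VM/LXC disks rather than backups, would look like this (a sketch using the names from this thread):

```shell
# zfspool storages can only hold guest disks (content types: images, rootdir).
# This registers the dataset for VM/LXC disk images -- not for backups:
pvesm add zfspool opx-hdd --pool opx-hdd/backups --content images,rootdir
```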

I tried to modify /etc/pve/storage.cfg:

zfspool: opx-hdd
        pool opx-hdd/backups
        mountpoint /opx-hdd/backups
        content backup

Then:

pvesm scan zfs

Now I get this via the web UI:

(screenshot attached: Screenshot from 2022-07-26 16-55-05.png)

Not super comfortable with ZFS, I think I need a hint here.

Thanks in advance.

EDITED: for clarification
You can't use a "zfspool" storage for backups; it only allows storing virtual VM/LXC disks. But what you can do is create a "directory" storage pointing to the mountpoint of your dataset. When you select backups (vzdump) as the content type for that directory storage, you can store backups inside that dataset.
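For the CLI-inclined, the same directory storage can be created with pvesm (a sketch; the storage ID and path here are assumptions based on this thread):

```shell
# Register the dataset's mountpoint as a directory storage for vzdump backups:
pvesm add dir opx-hdd-backups --path /opx-hdd/backups --content backup
```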
Hi !

Thanks @Dunuin for this clarification !

I'm not comfortable with the web UI; I often prefer the CLI.

I created the ZFS pool and the mountpoint as I mentioned.

Now I have created this entry in /etc/pve/storage.cfg:

dir: opx-hdd-backups
        path /opx-hdd/backups
        content backup
        prune-backups keep-all=1
        shared 1

My cluster now detects the storage.

Thank you.
I noticed the pool shows:

NAME              USED  AVAIL  REFER  MOUNTPOINT
opx-hdd           920K  5.33T   104K  /opx-hdd
opx-hdd/backups    96K  5.33T    96K  /opx-hdd/backups

though the node where the pool lives detects it as:

Filesystem       Size  Used Avail Use% Mounted on
opx-hdd/backups  5.4T  128K  5.4T   1% /opx-hdd/backups

But the other nodes of the cluster detect it as:

/dev/md2 58G 16G 40G 29% /

I don't get it.

Is it corosync or some cluster-management thing that updates this between the nodes?
Also don't forget to set "is_mountpoint yes" for that directory storage (pvesm set YourStorageID --is_mountpoint yes), or your backups might end up filling up the root filesystem in case the dataset isn't mounted correctly.
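Sketched with the storage ID and dataset from this thread (both are assumptions), plus a couple of manual checks that the dataset is really mounted:

```shell
# Tell PVE to only use the storage when something is actually mounted there:
pvesm set opx-hdd-backups --is_mountpoint yes

# Manual sanity checks (expected values assume the setup from this thread):
zfs get -H -o value mounted opx-hdd/backups   # should print: yes
findmnt -n -o FSTYPE /opx-hdd/backups         # should print: zfs
```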

Did you set up ZFS replication? Setting 'shared 1' won't tell PVE to do some magic to share that storage between nodes. It only tells PVE to treat this storage as a shared storage. Actually making it shared is something you have to do on your own, by pointing it at a shared filesystem like NFS/Ceph, or by using ZFS replication (which isn't really shared storage, just two synced local storages).
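One way to actually share the dataset is ZFS's built-in NFS export plus an NFS storage entry in PVE. A sketch (the subnet, IP, and storage ID are placeholders, and this assumes the NFS server packages, e.g. nfs-kernel-server on Debian, are installed on the exporting node):

```shell
# On the node that owns the pool: export the dataset over NFS,
# allowing read/write access from the cluster subnet (placeholder subnet):
zfs set sharenfs='rw=@192.168.1.0/24' opx-hdd/backups

# On the cluster: add an NFS storage pointing at that export
# (replace 192.168.1.10 with the exporting node's address):
pvesm add nfs opx-hdd-backups-nfs --server 192.168.1.10 \
    --export /opx-hdd/backups --content backup
```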
Again, thanks @Dunuin for this very clear explanation.

I added the option to my storage.cfg.

Indeed, I didn't set up ZFS replication because the other nodes don't have the same disk capacity.

I'll manage to share it via NFS.

