Target audience: Beginners
There are no secret tips-n-tricks here, just basic knowledge.
Precondition
- you have an intact PVE installation, be it a single computer or a cluster with several nodes; I am using the current 9.x release
- you have HDDs which may contain old data - that data will get erased here completely! The disks may have slightly different sizes
- you have free connectors to attach all new disks
Disclaimer
There is no way I could foresee each and every detail of a user’s system, and there may be errors in my writing. Whatever you do with your system: you are responsible, not me. Make a backup first! (If possible... we are talking about creating a new backup destination here --> a chicken-and-egg problem.)
Notes
- everything here is common knowledge; everything should be documented in the reference documentation (#1) or in the wiki (#2); only the specific “walk-through”-style sequence makes it worth posting
- I will talk about adding two drives to be used as a ZFS mirror. Everything will also work if you have only one drive.
- a PBS built on a separate computer is recommended - however, this FabU is not about that approach
- we are talking about HDDs because that is what the target audience might have recycled from an old computer; everything here works with SSDs too
- (nearly) everything is possible through the CLI or alternatively the GUI - except where noted
Hardware
I use a virtual PVE instance named “pnm” for the following. A cluster (of whatever size and shape) is probably not what a beginner has available at this stage of the journey. This is a fresh machine, without any of the additions I usually add. And it has quirks: I have installed the PVE operating system with ZFS-on-ZFS on a single disk - both are big no-nos for any use case other than teaching/learning. Being virtual is also the reason why my “physically” added disks are named “QEMU harddisk”.
Step-by-step
1 - document which drives you currently have
- write down manufacturer, model name and serial number (a small CLI sketch for reading these follows below)
- check which drives are currently active:
- lsblk -o+FSTYPE,MODEL | grep -v zd | tee ~/lsblk-pre-adding-drives.txt
("grep -v zd" hides the zvols of existing guests; "tee" saves a copy of the output for the comparison in step 3)
Code:
root@pnm:~# lsblk -o+FSTYPE,MODEL | grep -v zd | tee ~/lsblk-pre-adding-drives.txt
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS FSTYPE     MODEL
sda        8:0    0   32G  0 disk                        QEMU HARDDISK
├─sda1     8:1    0 1007K  0 part
├─sda2     8:2    0  512M  0 part             vfat
└─sda3     8:3    0 31.5G  0 part             zfs_member
sr0       11:0    1 1024M  0 rom                         QEMU DVD-ROM
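The serial numbers can also be read via the CLI instead of from the drive labels - a minimal sketch; the SERIAL column may be empty for some (especially virtual) devices:
Code:
root@pnm:~# lsblk -dno NAME,MODEL,SERIAL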
- look at “Server View” --> “<yournodename>“ --> “Disks” and write down what you see (or print it)
2 - connect both drives
Turn off power, connect everything, turn on. The system should boot as usual. If not:
- try to access a “boot menu”
- search for “boot problems”... as I won’t talk about it here
3 - learn which drives you actually added
CLI:
- lsblk -o+FSTYPE,MODEL | grep -v zd | tee ~/lsblk-after-adding-drives.txt
Code:
root@pnm:~# lsblk -o+FSTYPE,MODEL | grep -v zd | tee ~/lsblk-after-adding-drives.txt
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS FSTYPE     MODEL
sda        8:0    0   32G  0 disk                        QEMU HARDDISK
├─sda1     8:1    0 1007K  0 part
├─sda2     8:2    0  512M  0 part             vfat
└─sda3     8:3    0 31.5G  0 part             zfs_member
sdb        8:16   0   32G  0 disk                        QEMU HARDDISK
├─sdb1     8:17   0    1G  0 part
└─sdb2     8:18   0   28G  0 part             ntfs
sdc        8:32   0   32G  0 disk                        QEMU HARDDISK
├─sdc1     8:33   0  512M  0 part
├─sdc2     8:34   0    1K  0 part
└─sdc5     8:37   0   25G  0 part             vfat
sr0       11:0    1 1024M  0 rom                         QEMU DVD-ROM
Code:
root@pnm:~# diff ~/lsblk-pre-adding-drives.txt ~/lsblk-after-adding-drives.txt
5a6,12
> sdb        8:16   0   32G  0 disk                        QEMU HARDDISK
> ├─sdb1     8:17   0    1G  0 part
> └─sdb2     8:18   0   28G  0 part             ntfs
> sdc        8:32   0   32G  0 disk                        QEMU HARDDISK
> ├─sdc1     8:33   0  512M  0 part
> ├─sdc2     8:34   0    1K  0 part
> └─sdc5     8:37   0   25G  0 part             vfat
GUI:
- look at “Dashboard” --> “Storage/Disks” and compare it with the version from above
4 - prepare those disks
CLI:
- sgdisk --zap-all /dev/sdb   # THIS IS DESTRUCTIVE - triple-check the device name! (#3)
Code:
root@pnm:~# sgdisk --zap-all /dev/sdb
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
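Repeat this for every added disk - again: triple-check the device name; in this example the second disk is sdc:
Code:
root@pnm:~# sgdisk --zap-all /dev/sdc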
The result:
Code:
root@pnm:~# lsblk -o+FSTYPE,MODEL | grep -v zd
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS FSTYPE     MODEL
sda        8:0    0   32G  0 disk                        QEMU HARDDISK
├─sda1     8:1    0 1007K  0 part
├─sda2     8:2    0  512M  0 part             vfat
└─sda3     8:3    0 31.5G  0 part             zfs_member
sdb        8:16   0   32G  0 disk                        QEMU HARDDISK
sdc        8:32   0   32G  0 disk                        QEMU HARDDISK
sr0       11:0    1 1024M  0 rom                         QEMU DVD-ROM
GUI:
- on the “Disks” view select one of sdb/sdc and click the button at the top: “Wipe Disk”; repeat for all added disks.
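Recycled HDDs may be close to the end of their life, so a quick health check before trusting them with backups is a good idea. A minimal sketch using smartmontools (usually already installed on PVE); replace the device name with yours:
Code:
root@pnm:~# smartctl -H /dev/sdb   # short health verdict
root@pnm:~# smartctl -a /dev/sdb   # full report: attributes, error counters, self-test log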

5 - create a new ZFS pool
CLI:
We will NOT use “sdb”/“sdc” here. Using those names is considered bad practice, because they may change dynamically, e.g. when adding/removing other PCI cards. Instead we look up a stable identifier like this:
Code:
root@pnm:~# ls -Al /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Aug 27 09:37 ata-QEMU_DVD-ROM_QM00003 -> ../../sr0
lrwxrwxrwx 1 root root 9 Aug 27 09:37 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 27 09:37 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 27 09:37 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 27 09:37 scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Aug 27 09:45 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 -> ../../sdb
lrwxrwxrwx 1 root root 9 Aug 27 09:46 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 -> ../../sdc
Code:
root@pnm:~# zpool create backuppool mirror /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2
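Note: as the error message in step 5.1 shows, the GUI adds “-o ashift=12” to match 4K-sector drives. Passing the same option on the CLI is usually a good idea - a sketch:
Code:
root@pnm:~# zpool create -o ashift=12 backuppool mirror /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2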
GUI:
- “Disks” --> submenu “ZFS”; then the button “Create: ZFS” is visible --> click it and it opens the creation dialog
- You’ll see a table with both disks listed; if there are no “Device” entries in the table you did something wrong ;-)
- Name: “backuppool” - or whatever you want
- RAID Level: “Mirror” - or “Single Disk” if you have only one device ;-)
- Add Storage: [ ] - uncheck it; leave it checked only if you want to store something else here - for OUR goal this is NOT necessary
- Device: select all drives
Code:
root@pnm:~# zpool list backuppool -v
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backuppool                                31.5G   118K  31.5G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                31.5G   118K  31.5G        -         -     0%  0.00%      -    ONLINE
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi1  32.0G      -      -        -         -      -      -      -    ONLINE
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi2  32.0G      -      -        -         -      -      -      -    ONLINE
5.1 - if you have different sized disks...
... the GUI will tell you: “command '/sbin/zpool create -o 'ashift=12' backuppool mirror /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2' failed: exit code 1”.
The CLI fails too, but it tells you a way out:
Code:
root@pnm:~# zpool create backuppool mirror /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2
invalid vdev specification
use '-f' to override the following errors:
mirror contains devices of different sizes
Code:
root@pnm:~# zpool create -f backuppool mirror /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2
root@pnm:~# zpool list backuppool -v
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
backuppool                                31.5G   118K  31.5G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                31.5G   118K  31.5G        -         -     0%  0.00%      -    ONLINE
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi1  32.0G      -      -        -         -      -      -      -    ONLINE
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi2  40.0G      -      -        -         -      -      -      -    ONLINE
Note how the pool SIZE stays at 31.5G: a mirror can only be as large as its smallest member, so the extra space of the larger disk stays unused.
6 - create a dedicated dataset
We could skip this. The previous step already allows us to create a directory inside the root dataset of the new pool, something like “/backuppool/myfolder”. But I really recommend creating a separate dataset, as this allows setting independent parameters. (No, we are not actually doing that for now.) Different use cases inside this single pool may want different settings. To make this possible we separate “things” into datasets (or into ZVOLs if we need a block device --> used for the virtual disks of VMs).
CLI:
- zfs create backuppool/backupspace (#4)
Code:
root@pnm:~# zfs create backuppool/backupspace
root@pnm:~# zfs list -r backuppool
NAME                     USED  AVAIL  REFER  MOUNTPOINT
backuppool               564K  30.5G    96K  /backuppool
backuppool/backupspace    96K  30.5G    96K  /backuppool/backupspace
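Just to illustrate what “independent parameters” means (again: we are not doing this now), ZFS properties can later be set and checked per dataset - a sketch:
Code:
root@pnm:~# zfs set compression=zstd backuppool/backupspace
root@pnm:~# zfs get compression backuppool/backupspace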
GUI:
- not available... as far as I know
7 - tell PVE we have new storage
Until now PVE-the-software does not “know” what we have prepared and what we want to do with it. (At least if you did not check the checkbox from above.) We need to create a new “Directory” storage. We could store different things in it, but we will focus only on “Backup”.
CLI:
- pvesm add dir mybackupstore --path /backuppool/backupspace --content backup
Code:
root@pnm:~# pvesm add dir mybackupstore --path /backuppool/backupspace --content backup
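For reference: this creates a matching entry in /etc/pve/storage.cfg, roughly like this sketch (the exact lines may differ on your system):
Code:
dir: mybackupstore
        path /backuppool/backupspace
        content backup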
GUI:
- “Datacenter” --> “Storage” --> drop-down menu at the top “Add” --> “Directory”
- "ID:" "mybackupstore"
- "Directory:" "/backuppool/backupspace"
- "Content:" "Backup" (DESELECT "Disk Image" for now)
- "Nodes:" "All" - we have only one node, in a cluster choose this local node explicitly as the others do not have access to these local disks!
- Tab "Backup Retentions:" skip for now
Result:
Code:
root@pnm:~# pvesm status
Name                 Type     Status           Total            Used       Available        %
local                 dir     active        29385344          123776        29261568    0.42%
local-zfs         zfspool     active        29571552          309968        29261584    1.05%
mybackupstore         dir     active        31997440             128        31997312    0.00%
8 - empty
Numbering ain't easy...
9 - create a first backup, manually
GUI:
- select any VM or container --> “Backup” --> notice the drop-down menu “Storage: local” - that is the wrong target! Select “mybackupstore”, then click “Backup now”.
Code:
root@pnm:~# pvesm list mybackupstore
Volid                                                             Format   Type      Size  VMID
mybackupstore:backup/vzdump-qemu-101-2025_08_27-14_22_36.vma.zst  vma.zst  backup  342187   101
Code:
root@pnm:~# ls -Al /backuppool/backupspace/dump/
total 340
-rw-r--r-- 1 root root   1129 Aug 27 14:22 vzdump-qemu-101-2025_08_27-14_22_36.log
-rw-r--r-- 1 root root 342187 Aug 27 14:22 vzdump-qemu-101-2025_08_27-14_22_36.vma.zst
-rw-r--r-- 1 root root      6 Aug 27 14:22 vzdump-qemu-101-2025_08_27-14_22_36.vma.zst.notes
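If you prefer the CLI, vzdump can do the same - a sketch; 101 is the VMID from the example above, adjust it to one of your guests:
Code:
root@pnm:~# vzdump 101 --storage mybackupstore --mode snapshot --compress zstd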
10 - setup an automatic schedule
GUI:- "Datacenter" --> "Backup" --> "Add" ...
Addendum
Additional topics, not handled here
- removable storage - those are not supported here
- retention settings: per job and/or per storage (fall-back)
- power consumption / power saving
- external storage via NFS/CIFS/whatever
- Proxmox-Backup-Server - THE recommended tool for backups ;-)
References
- #1 - https://pve.proxmox.com/pve-docs/
- #2 - https://pve.proxmox.com/
- #3 - man sgdisk
- #4 - man zfs-create
Disclaimer
As stated at the beginning: I am not responsible for what happens if you follow this text. This is by far not an exhaustive description. If you see something stupid (or errors), please reply.
Thanks
Thanks for reading
To find other articles like this search for “FabU” with “[x] Search titles only” = https://forum.proxmox.com/search/8543442/?q=FabU&c[title_only]=1
(( "FabU" has basically no meaning - I was just looking for a unique search term - and "Frequently answered by Udo" made sense - for me
