[SOLVED] ZFS send/recv inside same pool

manuelkamp

Hi and merry Christmas to you all. I have a PBS 2.3-1 that has been running for just over a year now. Of course I had to save money, so it is equipped with spinning disks and not SSDs. But recently I bought SSDs to add as a special device. I installed them today and added them as a special device to the existing pool. I read that I need to do a send/recv to "move" all the existing metadata onto the SSDs, but I have no clue how to do that. The pool currently uses 5.67 TB out of 23.83 TB, so I should have enough space left to do a send/recv? What would the command look like? I only found sample commands for doing this between different pools, but I have just one pool in the PBS? Thanks!
 
1.) Enable maintenance mode for your datastore so it won't get read from or written to while you move its contents.
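On the CLI that could look like this (a hedged sketch: "YourDatastore" is a placeholder for your datastore's name, and the exact syntax may differ between PBS versions, so check proxmox-backup-manager datastore update --help first):
proxmox-backup-manager datastore update YourDatastore --maintenance-mode offline
Alternatively you can set it in the GUI in the datastore's options.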

2.) Create a new dataset as a new location for your datastore:
zfs create YourPool/NewDataset

3.) Create a snapshot of the dataset (or of the pool's root in case you didn't use a dataset as your datastore):
zfs snapshot -r YourPool/YourDatastoreDataset@move
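Note that the recursive flag for zfs snapshot is a lowercase -r (uppercase -R only exists for zfs send). If you want to double-check that the snapshot was really created before sending:
zfs list -t snapshot YourPool/YourDatastoreDataset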

4.) Copy the data over to the new dataset:
zfs send -R YourPool/YourDatastoreDataset@move | zfs recv -F YourPool/NewDataset
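The -F on the recv side is needed because the target dataset already exists from step 2; without it, zfs recv will refuse to write into it. And since this moves multiple TBs, a progress indicator is nice. One way (assuming you first install pv via apt install pv) would be to pipe the stream through it:
zfs send -R YourPool/YourDatastoreDataset@move | pv | zfs recv -F YourPool/NewDataset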

5.) Verify that you got a complete copy of your data:
zfs list -o space
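Or limit the output to the two datasets to compare them directly:
zfs list -o space YourPool/YourDatastoreDataset YourPool/NewDataset
The USED and USEDDS values of both should roughly match once the transfer is complete.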

6.) Edit your "/etc/proxmox-backup/datastore.cfg" so that your Datastore points to "/YourPool/NewDataset" instead of the old datastore location.
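The entry in that config looks something like this ("YourDatastore" again being a placeholder); you only need to change the path line:

datastore: YourDatastore
        path /YourPool/NewDataset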

7.) Disable maintenance mode, reboot and check that your backups are working.
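On the CLI, disabling should work by deleting the property again (same hedge as above, verify against the help output of your PBS version):
proxmox-backup-manager datastore update YourDatastore --delete maintenance-mode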

8.) destroy the snapshots:
zfs destroy YourPool/YourDatastoreDataset@move
zfs destroy YourPool/NewDataset@move

9.) Once everything is working, delete the folder of your old datastore (or do a zfs destroy YourPool/YourDatastoreDataset in case you used a dataset just for that datastore).

In case you don't want the datastore to end up at a new location, you could skip the step of editing the datastore.cfg, copy the datastore to a temporary location and later copy everything back. But that's not that great in my opinion, because you would need to move all those TBs of data an additional time.
 