Special Device on Existing PBS

Good Day All

I just need to confirm whether it is possible to add a ZFS special device to an existing PBS deployment. Once the special device is added, will PBS automatically move all metadata across?

I have only found info about adding a special device to a new PBS deployment.

Thanks
Jonathan
 
Hi,

by special device, do you mean the ZFS special device? If so, our manual actually describes how to add it to an existing pool [1].

This part is particularly relevant:

Adding a special device to an existing pool with RAID-1:
Code:
# zpool add <pool> special mirror <device1> <device2>

Please read the entire section carefully though, as removing a special device from a pool isn't possible!

[1]: https://pbs.proxmox.com/docs/sysadmin.html#zfs-special-device
 
Once the special device is added, will PBS automatically move all metadata across?
no, only newly written metadata ends up on the special vdev(s). you can force that by rewriting all the datastore files during a maintenance window if you have the free space and time.
 
no, only newly written metadata ends up on the special vdev(s). you can force that by rewriting all the datastore files during a maintenance window if you have the free space and time.
PBS services shouldn't be running. Then you could create a new datastore, move all data from your datastore to that new datastore, and move it back to your datastore. Each dataset is its own filesystem, so moving data between datasets has to rewrite everything, including the metadata; after doing that, all metadata should be on the SSDs.
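A rough sketch of what that could look like, assuming the datastore lives on a dataset rpool/datastore and rpool/newstore is the temporary one (all names and paths are placeholders, and the PBS services have to stay stopped for the whole window):

Code:
# systemctl stop proxmox-backup proxmox-backup-proxy
# zfs create rpool/newstore
# shopt -s dotglob                           # so the globs below also pick up .chunks and other hidden entries
# mv /rpool/datastore/* /rpool/newstore/     # crossing a dataset boundary copies, i.e. rewrites, everything
# mv /rpool/newstore/* /rpool/datastore/     # moving back rewrites it again and keeps the original path
# zfs destroy rpool/newstore
# systemctl start proxmox-backup proxmox-backup-proxy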
 
Only to make it sure: The new datastore needs to be on a separate/different dataset as the old datastore.

Then you could create a new datastore, move all data from your datastore to that new datastore and move it back to your datastore.

You do not necessarily need to move it back again to the initial datastore, provided that both datasets (and therefore, in this case, both datastores) are on the same zpool, no?
 
The thing is that the datastore is named in every backup job, so if you only move the backups away you'd have to change every one of the jobs in PVE.
But for filling the special devices alone, moving once is sufficient, yes.
 
The thing is that the datastore is named in every backup job, so if you only move the backups away you'd have to change every one of the jobs in PVE.

Good point, thanks. :)
Did not think of this with my not even a handful of datastores/namespaces and backup jobs. :D
 
I think I was mistaken, though. In the backups you only name the storage, but in the storage definition there's the datastore. So only a single place to change, but still.
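For reference, on the PVE side that single place is the PBS entry in /etc/pve/storage.cfg, roughly like the sketch below (all values made up); the datastore line is the one that would need changing:

Code:
pbs: backup-store
        server 192.0.2.10
        datastore old-datastore
        username backup@pbs
        fingerprint <fingerprint of the PBS server>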
 
Yeah, and remote sync jobs would be a topic too, if utilized. So, as is often the case: "It depends". :D
 
rewriting (as in, copying, then moving back) within a dataset should actually be enough, unless you use deduplication on ZFS, which I'd not recommend ;)
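A minimal sketch of that in-place variant, assuming the datastore sits at /rpool/datastore (placeholder path; .chunks holds the bulk of a PBS datastore, the copy temporarily doubles its space usage, and the PBS services should stay stopped meanwhile):

Code:
# cp -a /rpool/datastore/.chunks /rpool/datastore/.chunks.rewrite   # the copy writes new blocks and new metadata
# rm -rf /rpool/datastore/.chunks
# mv /rpool/datastore/.chunks.rewrite /rpool/datastore/.chunks      # a rename within the same dataset, nothing gets copied again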
 
Is it possible to add a 3rd device to an existing RAID-1 special device?
I added 2x NVMe as RAID-1 special devices in a pool, but I want to add another NVMe to the special devices.
Do I have to delete the whole pool to add an extra special device, or can I add it to the current one?
 
Haven't tested it myself yet, but my guess would be that this is possible. At least the manual isn't talking about special vdev limitations for the zpool attach command.
 
Haven't tested it myself yet, but my guess would be that this is possible. At least the manual isn't talking about special vdev limitations for the zpool attach command.
Can you tell me the proper command for that?
 
See the "zpool attach" commands manual: https://openzfs.github.io/openzfs-docs/man/master/8/zpool-attach.8.html

So something like "zpool attach YourExistingPool YourExistingSpecialDevice newSpecialDevice" to turn a 2-disk special mirror into a 3-disk special mirror in case you want to increase reliability.

Or in case you want to turn a special mirror into a special striped mirror you would use the "zpool add" command to add more capacity.
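As a sketch with made-up pool and device names, the two variants would look like this:

Code:
# zpool attach tank /dev/disk/by-id/nvme-special-1 /dev/disk/by-id/nvme-special-3                 # grow the existing 2-disk special mirror to 3 disks
# zpool add tank special mirror /dev/disk/by-id/nvme-special-3 /dev/disk/by-id/nvme-special-4     # or: add a second special mirror vdev (striped, more capacity)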
 
See the "zpool attach" commands manual: https://openzfs.github.io/openzfs-docs/man/master/8/zpool-attach.8.html

So something like "zpool attach YourExistingPool YourExistingSpecialDevice newSpecialDevice" to turn a 2-disk special mirror into a 3-disk special mirror in case you want to increase reliability.

Or in case you want to turn a special mirror into a special striped mirror you would use the "zpool add" command to add more capacity.
[Screenshot: zpool status output for zpool01 showing the special devices under mirror-1]

So the current pool is zpool01 and those are the special devices under mirror-1. Can you tell me the command according to that?
The new NVMe is /dev/nvme0n1.
It has to be added as a 3-disk special mirror.
 
You need to pass in one of the existing special devices, e.g. "zpool attach POOL_NAME /dev/path/to/existing/special/device /dev/path/to/new/special/device". See the man page of "zpool-attach", which explains it:

Code:
 zpool attach [-fsw] [-o property=value] pool device new_device

[...] If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device.  If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on.

Edit: and the usual tip - you can trivially experiment with such things inside a VM! Use this opportunity if you are unsure - it's better to throw away a test VM than to recreate a production pool ;)
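For example, a throwaway pool built from sparse files is enough to try the attach (all names and sizes below are arbitrary):

Code:
# truncate -s 1G /tmp/d1 /tmp/d2 /tmp/s1 /tmp/s2 /tmp/s3
# zpool create testpool mirror /tmp/d1 /tmp/d2 special mirror /tmp/s1 /tmp/s2
# zpool attach testpool /tmp/s1 /tmp/s3      # the 2-way special mirror becomes a 3-way mirror
# zpool status testpool                      # verify the layout
# zpool destroy testpool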
 
I recently added a special vdev to an existing pool, and after monitoring it for a couple of days and seeing ~10 GB of data on the special vdev for a 20 TB datastore, I thought "that's fine, all the data in the datastore will get re-written eventually and all the metadata will be on the special vdev".

I was wrong.

Yesterday I destroyed the datastore and re-created it with the special vdev (I have two servers, so I just synced data from the other one). After all the data had synced over, the special vdev had 81 GB allocated. Before destroying the old datastore, the special vdev had 57.8 GB.

So take it from me: all the advice and documentation saying that you really need to re-write the data to use the special vdev is right!
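(In case anyone wonders how to see those numbers: "zpool list -v" and "zpool iostat -v" show the allocated space per vdev, including the special one - the pool name below is just an example.)

Code:
# zpool list -v rpool
# zpool iostat -v rpool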
 
How can I check whether the special devices are storing the metadata correctly, or at all?
 
I got this information from someone:

When you add a special vdev to an existing pool, the metadata for the existing data isn't migrated automatically to the special vdev. Only new metadata gets written to the special vdev.

Anyone know any solution for this??
 
