Sorry for necroing this thread. But is it true that a newly added vdev will not get old data rebalanced/resilvered onto it? So what's the best practice for growing a ZFS pool that already has multiple vdevs, for example on a PBS? I just want confirmation that the new vdev will really only be used for new data.
Would that also mean the pool's previous vdevs (for example four RAIDZ2 vdevs) could fill up even though there is still enough free space on the newly added vdev(s)? Is there any way to force a redistribution?
Can you give some insight or confirmation on the best approach? The customer already has a really large amount of data on the pool, so recreating it would be quite painful for them. What would happen if we added the vdevs to the existing pool without recreating it? What's the worst that could happen?
Edit 4: I wonder whether PBS garbage collection would redistribute the data over time: old chunks eventually get deleted, and new chunks would then be written across all vdevs. Is this a correct assumption?
In ZFS, new writes are biased toward the vdev with the most free space.
If you add a vdev to an existing pool, ZFS will direct most new writes to the new vdev UNTIL its free space roughly equals that of the other vdevs; existing data is not rebalanced automatically.
The worst that could happen? Concurrent IOPS that would otherwise be spread across multiple vdevs all land on the same (new) vdev, reducing parallelism until free space evens out.
But if you are adding vdevs of the same type and size as the existing ones, I don't see a problem.
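To make the behavior above concrete, here is a toy Python sketch of free-space-biased allocation. It is only an illustration of the principle (a greedy "most free space wins" rule), not ZFS's actual metaslab allocator, and all names and numbers are made up; but it shows why a freshly added empty vdev absorbs almost all new writes until it catches up with the older, fuller vdevs:

```python
# Toy model: each new block goes to the vdev with the most free space.
# This is a deliberate simplification of ZFS's weighted allocation,
# not the real metaslab code; vdev sizes here are arbitrary units.

def allocate(vdevs, blocks):
    """Place each block on the vdev that currently has the most free space."""
    for _ in range(blocks):
        target = max(vdevs, key=lambda v: v["size"] - v["used"])
        target["used"] += 1

# Four existing vdevs at 80% full, plus one newly added empty vdev.
pool = [{"size": 1000, "used": 800} for _ in range(4)]
pool.append({"size": 1000, "used": 0})

allocate(pool, 500)
print([v["used"] for v in pool])  # all 500 new blocks land on the new vdev
```

In this run the new vdev still has more free space than the others even after 500 blocks, so it takes every write; the old vdevs only start receiving new data again once free space has evened out. This also matches the garbage-collection intuition above: as old chunks are deleted and rewritten, data gradually spreads across all vdevs.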