ZFS vDev Expansion Available?

modem

New Member
May 15, 2023
Hopefully no one goes bonkers over this because I've seen it asked before, but I can't find a straight-up answer: is ZFS vdev expansion available in Proxmox yet? Or is it a feature that's 'feature complete' upstream in ZFS but hasn't yet been shipped to the various OSes that use ZFS, meaning it won't land in Proxmox until some later version?

My conundrum that I'm looking for a good solution to: I have a Dell PE R730 16-bay server for my home lab, and I'd like drives no smaller than 4TB SSDs (or even HDDs, if available). But pricing is a challenge, so I'd only be able to buy 4, maybe 5 at a time and create a RAIDZ2. The plan would be to get another 4-5 drives and expand the vdev, then repeat as I save up a bit.

Obviously a ZFS pool can be made up of multiple vdevs, so one option would be to create a new vdev every time I add drives and add it to the pool. But what are the performance penalties/gains of that? And if vdev expansion isn't available, is it easier to just back up every VM, blow away the host/node, and recreate?

I'm looking for input before I pull the trigger on buying drives for this newly planned project.
 
Nope. It might be in the "master" ZFS code already, but you don't want to run a system on that. I've heard horror stories of people compiling master from source and creating a pool, and then being unable to import the pool on a "stable" release due to a feature mismatch.

You can keep adding vdevs of the same RAIDZ level and the same number of disks (same size or larger) to an existing pool, to expand your free space and get a bit of an I/O speedup.

e.g. start with 6x4TB RAIDZ2, run that for a while, then add another 6x4TB (or 6x6TB) RAIDZ2, resulting in a 2-vdev pool. Your data will NOT rebalance automatically between the vdevs unless it is rewritten, and using larger disks will also not result in balanced I/O.
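A minimal sketch of growing a pool this way (the pool name `tank` and the `sd*` device names are placeholders; in practice you'd use stable `/dev/disk/by-id` paths):

```shell
# Existing pool "tank" has one 6-disk RAIDZ2 vdev.
# Add a second 6-disk RAIDZ2 vdev; free space grows immediately,
# but existing data stays on the first vdev until it is rewritten.
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl

# Verify the pool now lists two raidz2 vdevs:
zpool status tank
```

Note that `zpool add` is effectively one-way for RAIDZ vdevs: you cannot remove the new vdev later, so double-check the command before running it.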

Depending on how large your "main" pool is, it's possible to create a ~100+TB backup pool for ~$3k and change these days.

https://www.reddit.com/r/DataHoarde...u_want_a_portable_100tb_backup_solution_with/
 

Thanks, at least I have an answer that it's not in Proxmox yet. Not sure if or when it's coming to Proxmox? My goal would be to expand a single vdev, which seems to be what TrueNAS has implemented on their system with ZFS.

This does open another question. I had read in passing that someone mentioned having a RAIDZ2 vdev of 6 x 10TB. They could then replace one drive in that vdev with a 20TB one and let the data rebuild, essentially replacing each 10TB drive with a 20TB drive over time until the entire vdev was swapped out.
While that isn't the same as expanding the vdev with more drives, I do see how, if 6 drives are purchased up front, the vdev capacity could grow... if that's possible.

Thoughts?
 
> someone mentioned they could have a RAIDZ2 vdev of 6 x 10TB. They could then replace one drive in that vdev with a 20TB one and let the data rebuild. Essentially over time replacing each of the 10TB drives with 20TB drives until the entire vdev was replaced?

With today's really large drives (8TB and up), you'd arguably be better off buying a disk shelf and creating a new pool with the larger disk set, then migrating your data over. You could possibly continue using the old pool as a backup.
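The migrate-to-a-new-pool route can be sketched like this (the pool names `tank`/`newtank` and the snapshot name are placeholders):

```shell
# Take a recursive snapshot of everything on the old pool:
zfs snapshot -r tank@migrate

# Replicate the whole hierarchy to the new pool.
# -R sends the full dataset tree with snapshots and properties;
# -F rolls the target back to match the incoming stream.
zfs send -R tank@migrate | zfs receive -F newtank
```

You can repeat with an incremental send (`zfs send -R -i`) just before cutover to catch changes made during the initial copy.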

Otherwise you have to deal with 6 resilvers as you replace each disk 1:1; that takes a long time (you have to do it sequentially and wait for each resilver to finish before replacing the next disk) and generates a lot of I/O load plus wear and tear.

Back when disks were ~4TB or less, it was more feasible.
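If you do go the 1:1 replacement route anyway, the per-disk sequence looks roughly like this (device names are placeholders; `autoexpand` must be on for the extra capacity to appear once the last disk is swapped):

```shell
# Allow the pool to grow once every disk in the vdev is larger:
zpool set autoexpand=on tank

# Replace one 10TB disk with a 20TB disk and resilver onto it:
zpool replace tank old-10tb-disk new-20tb-disk

# Wait until "resilver completed" shows before touching the next disk:
zpool status tank

# Repeat for the remaining five disks, one at a time.
```

The pool only exposes the new capacity after the final disk in the vdev has been replaced and resilvered.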
 

In that respect ZFS resilvering is no different from a traditional RAID card doing the job: high I/O that can (and will) trigger a failure on any disk with underlying hardware issues.

I'm really just putting a lot of thought into how I want to do a home lab. My goal is to use 3 servers in a clustered setup: a Dell R540 12-bay, a Dell R430 8-bay, and a Dell R730xd 16-bay. However, I would start out with just the R730xd, as I have close to 16 1.2TB Seagate SAS drives now that I can use, then bring the R540 online, and so forth.

I'll be running several game servers, but also a couple of VMs for my small business (DC, accounting, RMM, etc.). Then I'd have two additional servers (currently a Dell R340 4-bay and a Dell R320 8-bay), one purely as a NAS (TrueNAS SCALE) and the other for backups.

I do realize this is total overkill, but I want to learn real-life Proxmox use, as I'm coming out of a VMware/Hyper-V world.

In theory I could use the 3 main servers as my VM cluster, with the TrueNAS box having 4x12TB and the backup box 8x4TB 2.5" SATA. The NAS would handle some backups as well as storage for playing around, with the other server acting as a secondary backup.

Speaking of which, I'm still getting my feet wet with ZFS and am aiming to grow my understanding of backups with tools like sanoid, as well as VM-level tools like Veeam. I even have a VPN tunnel to my parents' house, where I'd have a QNAP I could dump backups to... bandwidth permitting.
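For the sanoid side, a starting point might look like the fragment below (the dataset name and retention counts are assumptions for illustration, not recommendations):

```
# /etc/sanoid/sanoid.conf -- hypothetical example
[tank/vms]
        use_template = production
        recursive = yes

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

syncoid, which ships alongside sanoid, can then replicate those snapshots over the VPN to the backup box, e.g. `syncoid -r tank/vms backuphost:backup/vms`.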