Replacing drive in ZFS

deltamikealpha

New Member
May 19, 2023
Afternoon folks!

In the last few days I've switched away from a pure TrueNAS server to Proxmox with an HBA, TrueNAS virtualised and a couple of other VMs. This is essentially a homelab install.

Because I needed to retain my data, and the TrueNAS box had quite a few drives in it when it was bare metal, I built the new pool from a 1 TB SSD and 2x 1 TB HDDs I had lying around - knowing that once my existing VMs were moved off TrueNAS, I'd be able to free up the other 1 TB SSDs in there and upgrade the pool to 3x 1 TB SSDs.

This is a new install - I'm currently using a couple of hundred gig of data in my ZFS pool.

I've got to the point where I can release the SSDs from TrueNAS, and have done so. But when I come to replace a drive in the pool with zpool replace, I get a "device is too small" error - which, technically, it is .. by 40 GB.
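For anyone following along, the failing step looks roughly like this - pool and device names here are placeholders, not my actual ones:

```shell
# Try to swap the old 1 TB HDD for the 960 GB SSD (example names).
zpool replace tank /dev/sdb /dev/sdd

# ZFS refuses because the replacement is ~40 GB smaller, with an
# error along the lines of:
#   cannot replace /dev/sdb with /dev/sdd: device is too small
```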

The original Samsung SSD reports as 1.0 TB; the SanDisk drives report as 960 GB.

Is there any way to set the volume size smaller - knowing I'm well within the total used space - so I can use the existing drives, or is it just sod's law and I'll have to replace them?
 
You can only replace disks in an existing vdev with same-sized or bigger disks. I would back up all the datasets/zvols on the pool using "zfs send" to some spare space (another disk or a NAS), then destroy that pool, create a new pool (the 1 TB disks are then limited to 960 GB), and restore the datasets/zvols using "zfs receive".
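A rough sketch of that backup-and-rebuild approach; pool, dataset, device, and target paths are examples only, so adapt them and test on non-critical data first:

```shell
# 1. Snapshot the whole pool recursively.
zfs snapshot -r tank@migrate

# 2. Stream everything to spare space - here a file on another disk;
#    "zfs send -R tank@migrate | ssh othernas zfs receive -F backup/tank"
#    would be the remote equivalent.
zfs send -R tank@migrate > /mnt/spare/tank-migrate.zfs

# 3. Destroy the pool and recreate it on the 960 GB SSDs
#    (vdev layout shown is just an example).
zpool destroy tank
zpool create tank raidz /dev/sda /dev/sdc /dev/sdd

# 4. Restore all datasets/zvols with their properties.
zfs receive -F tank < /mnt/spare/tank-migrate.zfs
```

The -R flag on send includes all descendant datasets, snapshots, and properties, and -F on receive lets the stream roll the freshly created pool back to match it.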
 
Balls - I was hoping there'd be some magic fix for something so close. I've ordered a couple of new SSDs and will use the old 1 TB drives as extra storage. I've had some funnies with zfs send and receive, and while it's essentially a homelab, I've put a fair few hours into the migration!

Thanks for the reply.