Hello everyone.
I am running PVE 5.3-11.
It works well, so at this point I'm letting it stay. My last upgrade didn't go well and I had to rebuild the whole thing.
My LXCs didn't transfer either, so I had to rebuild those too. A pain.
That said, my question is about ZFS.
My setup is simple.
The system is installed on 2 SSDs in a ZFS mirror. The whole system. Then I have a ZFS mirror pool on HDDs for all containers.
I also have several ZFS mirror pools for all my data.
The Proxmox server acts as a VM host and a data server via volumes mapped into the LXC machines. Most of my data is media, so it is shared via an Emby or Jellyfin VM.
Last week one of the system SSDs in rpool failed.
The original disks were 120 GB.
I got several replacements that are 240 GB.
I've already swapped the failed one and all seems to be working, but I have a question. Since I now have disks of different sizes in the pool, I want to replace the second disk too. Is it better not to expand the pool and leave the rest of the SSD unused to allow for wear management?
That is, if the space isn't used for anything, will the drive use it for wear leveling and hence prolong its life until failure? Or does it not work that way, in which case I might as well expand the pool to the full size of the disks?
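For what it's worth, here is a sketch of how I'd check and control whether the pool grows into the larger disks, using standard ZFS commands (pool name `rpool` as on a default PVE install; device paths are placeholders you'd adjust):

```shell
# Check whether the pool will auto-grow once all mirror members are larger.
zpool get autoexpand rpool

# Keep the pool at its original size (off is also the default); the
# extra SSD capacity then stays unwritten and available to the
# controller for over-provisioning / wear leveling.
zpool set autoexpand=off rpool

# If you later decide you want the full capacity after all, you can
# expand each mirror member explicitly (example device names):
#   zpool online -e rpool /dev/disk/by-id/ata-NEWDISK1-part2
#   zpool online -e rpool /dev/disk/by-id/ata-NEWDISK2-part2
```

One caveat on the wear-leveling idea: the controller can only treat space as spare if it has never been written to (or has been trimmed/secure-erased), so it helps to secure-erase a replacement SSD before partitioning it.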
Also, what is the best way to test bootability of the setup after a disk swap?
I mean, I swapped one disk and the system rebooted OK, but how do I know it booted from the new drive and not from the old one?
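As I understand it, on PVE 5.x with a ZFS root the bootloader is GRUB, so a replacement disk is only bootable if you reinstall GRUB on it after the resilver. A rough sketch of the usual steps, assuming a legacy-BIOS install where partition 2 holds the ZFS data (device names are placeholders):

```shell
# Replicate the partition table from the surviving old disk onto the
# new disk, then randomize GUIDs so the two tables don't collide.
sgdisk --replicate=/dev/sdNEW /dev/sdOLD
sgdisk -G /dev/sdNEW

# Replace the failed member with the new disk's ZFS partition
# (partition 2 on a PVE legacy-boot layout).
zpool replace rpool <failed-device> /dev/sdNEW2
zpool status rpool   # wait for the resilver to finish

# Reinstall GRUB so the new disk can boot on its own.
grub-install /dev/sdNEW
```

The only test I'd really trust is booting with just the new disk attached: shut down, disconnect the old disk, and power on. If the system comes up (with the pool degraded but running), the new disk boots fine; then reconnect the old disk and let the mirror resync.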
Thanks. Vl.