Split ZFS mirror rpool into 2 single disks

aessing · May 17, 2022
Hi all,

in my homelab I have a small server with two SSDs forming the ZFS rpool (a mirror) for PVE and the guests.
Is there a way to split this pool into two single-disk pools, one with OS/PVE plus guests and one for guests only?
So, I want to completely remove the mirroring.

Cheers

Andre
 
Detach one of the disks from the mirror, wipe it, and create a new zpool on it. Add that zpool under Storage and move the VM disks to the new storage.
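The steps above can be sketched on the shell; the disk id and the new pool/storage names below are placeholders, so check `zpool status rpool` first to see your actual mirror members:

```shell
zpool status rpool                           # identify the two mirror members
zpool detach rpool /dev/disk/by-id/ata-SSD2  # placeholder id; rpool keeps running on the other SSD
wipefs -a /dev/disk/by-id/ata-SSD2           # clear the old ZFS labels from the freed disk
zpool create tank /dev/disk/by-id/ata-SSD2   # "tank" is a placeholder pool name
pvesm add zfspool tank-vm --pool tank        # register the pool as a PVE storage
```

Afterwards each guest disk can be moved via the GUI (VM -> Hardware -> Move disk) or `qm move_disk` on the CLI. Note that the boot partitions may live on the disk you detach, so detach the one you intend to repurpose.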
 
Also keep in mind that a single-disk pool won't be able to repair corrupted data, so there won't be any bit-rot protection anymore.
If you don't need other ZFS features, it might perform better to just use LVM-Thin for the new VM/LXC storage.
 
> Also keep in mind that a single-disk pool won't be able to repair corrupted data, so there won't be any bit-rot protection anymore.
> If you don't need other ZFS features, it might perform better to just use LVM-Thin for the new VM/LXC storage.
Thanks @Dunuin, thanks @leesteken.

When I detach the disk from the mirror... does the pool know it isn't a mirror anymore? Or do I have to reconfigure something?

Would I lose any features using LVM-Thin? Compression? Dedup?

Thanks in advance
Andre
 
> Thanks @Dunuin, thanks @leesteken.
>
> When I detach the disk from the mirror... does the pool know it isn't a mirror anymore? Or do I have to reconfigure something?
Yes. After a `zpool detach` the pool becomes a plain single-disk pool. If you just removed the disk instead, it would still be a ZFS mirror in a degraded state, because ZFS would think a mirror member is missing (it would still continue running fine, though).
> Would I lose any features using LVM-Thin? Compression? Dedup?
Yep: no compression, no dedup, no replication. Snapshots still work.
But the benefit would be less overhead, so your SSDs might wear two to three times less, with better performance.
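If you go the LVM-Thin route, a minimal sketch could look like the following; the device and VG names are placeholders:

```shell
pvcreate /dev/sdb                        # placeholder: the SSD freed from the mirror
vgcreate vmdata /dev/sdb                 # new volume group on it
lvcreate -l 95%FREE -T vmdata/thinpool   # thin pool; leave some extents free for the metadata LV
pvesm add lvmthin vm-thin --vgname vmdata --thinpool thinpool
```

Using `95%FREE` instead of `100%FREE` leaves room for the thin pool's metadata LV and for later metadata growth.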
 
Great, thanks so much @Dunuin, I will switch to LVM-Thin tomorrow. I guess reliability (long life) and better performance count more for me than a bit of dedup and compression. I had a lot of performance issues with ZFS when writing many small files inside a VM.
 
> Great, thanks so much @Dunuin, I will switch to LVM-Thin tomorrow. I guess reliability (long life) and better performance count more for me than a bit of dedup and compression. I had a lot of performance issues with ZFS when writing many small files inside a VM.
Most people also don't use ZFS deduplication because it costs too much. For most workloads deduplication won't save that much space (it is more useful if you have a DB that stores a lot of identical entries), and enabled deduplication needs a lot more RAM: for each TB of deduplicated storage you should have around 4 GB of additional RAM for the deduplication tables. So not using deduplication on an 8 TB HDD saves you 32 GB of RAM. Then you have to decide whether the space saving from deduplication is worth the additional RAM usage.
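The rule-of-thumb arithmetic above as a quick shell check; the 4 GB per TB figure is the estimate used in this thread, not an exact number, since the real dedup-table size depends on record size and dedup ratio:

```shell
POOL_TB=8     # pool size you would enable dedup on
GB_PER_TB=4   # rough RAM estimate for the dedup tables
echo "~$((POOL_TB * GB_PER_TB)) GB of additional RAM needed"
```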
 
> I guess reliability (long life) and better performance count more for me than a bit of dedup and compression.
Well..., I do use ZFS especially because of the reliability.

Maybe there are other definitions like your "long life", but for me reliability is the promise that I will get the same data from the storage system in the future which I had written to it in the past.

Without a checksumming filesystem there is no bit-rot protection. And worse: depending on the specific error, there is no bit-rot detection either. So once you have (unrecognized) damaged files, they stay damaged - in each and every new backup you create in the future.

Yes, the chance of a UCE (unrecoverable error inside a disk) is as low as 10^-15 per bit read (or less), and the chance of unrecognized data corruption is much lower still - but for me it is important to stay out of that territory.

(Note that the above statement does not talk about redundancy and the capability of self-repair.)

Just my 2€¢...
 
> Well..., I do use ZFS especially because of the reliability.
>
> Maybe there are other definitions like your "long life", but for me reliability is the promise that I will get the same data from the storage system in the future which I had written to it in the past.
>
> Without a checksumming filesystem there is no bit-rot protection. And worse: depending on the specific error, there is no bit-rot detection either. So once you have (unrecognized) damaged files, they stay damaged - in each and every new backup you create in the future.
>
> Yes, the chance of a UCE (unrecoverable error inside a disk) is as low as 10^-15 per bit read (or less), and the chance of unrecognized data corruption is much lower still - but for me it is important to stay out of that territory.
>
> (Note that the above statement does not talk about redundancy and the capability of self-repair.)
>
> Just my 2€¢...
Thanks @UdoB

As this is a homelab and not an enterprise, I define long life as: my components keep running for a long time without me throwing money at new hardware. And as I learned above from @Dunuin, without multiple disks in ZFS there is no bit-rot repair. So in my case that is no plus for ZFS.
Also, EXT4 and XFS have been around for years, and of course something could still happen... but ZFS has failed before too, so... Murphy could be everywhere.

Actually I'm now in the phase of deciding between ext4 and xfs for the host. Then I will go with LVM-Thin because of its snapshot support.
 
