Making ZFS mirror from 2x 4TB disks and 1x 8TB disk

restor5
New Member, Aug 14, 2024
Hello, I was wondering whether it is possible to create a ZFS mirror from 2x 4TB disks and 1x 8TB disk.
Is it possible to group the two 4TB disks so that they match the 8TB disk in a mirror? If yes, how?

Thanks in advance.
 
You can partition any disk and build vdevs from those partitions. So, technically, nothing prevents you from partitioning the 8 TByte disk into two separate 4 TByte partitions (say, /dev/sda1 and /dev/sda2) and then using the 4 TByte disks either whole (/dev/sdb and /dev/sdc) or also partitioned (/dev/sdb1 and /dev/sdc1).

So, after creating 4 equally sized partitions, you can:

zpool create testpool mirror /dev/sda1 /dev/sdb1
zpool add testpool mirror /dev/sda2 /dev/sdc1

Obviously, do not create a mirror over /dev/sda1 and /dev/sda2, i.e. on the same disk!

Also, it would be better to reference the partitions by partition UUID (/dev/disk/by-partuuid/*) instead of by logical device name (/dev/sd*). You can find the UUIDs via "lsblk -o NAME,PARTUUID".
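Putting the steps above together, a sketch could look like the following. The device names and the "<uuid-of-...>" placeholders are assumptions for illustration; verify your actual devices with lsblk before running anything, since partitioning is destructive:

```shell
# Split the 8 TB disk into two equal 4 TB partitions (destroys existing data!).
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart zfs1 0% 50%
parted -s /dev/sda mkpart zfs2 50% 100%

# Look up the partition UUIDs so the pool survives device renaming.
lsblk -o NAME,SIZE,PARTUUID

# Two mirror vdevs, each pairing one half of the big disk
# with one of the whole 4 TB disks.
zpool create tank \
  mirror /dev/disk/by-partuuid/<uuid-of-sda1> /dev/sdb \
  mirror /dev/disk/by-partuuid/<uuid-of-sda2> /dev/sdc
```

Note that this is the same layout as the two-step zpool create / zpool add above, just done in one command and referenced by partuuid.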
 
The short answer is NO, don't do that; it's a terrible idea and will put your data at risk. Trying to run ZFS like that is beyond jackleg.

And no I won't go into more detail, bc it's a terrible idea.

Buy a proper 8TB disk to mirror the original, or limit your mirror to 4TB of space on the 8TB.

You could create an additional partition (e.g. XFS) in the remaining space, but you would get I/O contention on simultaneous access, slowing both mounts down. And you would have to delete the XFS partition if/when you replace the 4TB drives with an 8TB one in order to properly resize the pool.
 
Enlighten us by elaborating on why that is a terrible idea, because I am either too dumb to grasp it or you are wrong.

A mirror of 2 disks can tolerate the failure of one disk. With 4 physical disks you would also choose striped mirrors, which gives the same fault tolerance. In the specific 8+4+4 case, even two disks (i.e. sdb and sdc) could fail without any data loss, so I fail to see what you mean. Obviously, if sda fails, the fault tolerance is still one disk, so no worse than with 2 or 4 disks.

The only thing you would not get is a speedup by only one stripe of 4+4, but you would not get that with two 8 TByte disks, either.

So, I challenge you (mostly because I want to learn): Why is that a terrible idea?
 
There isn't any data risk at all. It will just be terribly slow. While the data will be distributed across the two smaller HDDs (which is good and fast), ZFS has to write twice to the same big HDD in different partitions. This will slow down the whole RAID significantly...
 
How would that be even figuratively comparable?

If you have any good reason why the described 4+4+8 configuration is any less failure-resistant than 8+8 (or 4+4+4+4), then please educate us.
 
I don't normally waste my time on fools, but a simple search would give you the answer, so LMGTFY. Someone with more time and patience can ELI5, bc IT'S A TERRIBLE FKG IDEA.

https://www.reddit.com/r/zfs/comments/85nf1y/zfs_with_different_size_disks/
 
Forgive me for not trusting Reddit as the definitive source for technical questions and thinking for myself.

Well, if you had actually read all that, you would have noticed that none of the schemes suggested there is even remotely what I suggested. None of them uses partitioning at all. Those guys talk about fiddling around with whole disks of different sizes and rearranging them into a zpool. That, of course, will not work.

I am still waiting to hear how the failure of any one physical disk in my suggested setup can lead to a failure of the zpool and thus "put your data at risk" more than a standard ZFS mirror would. The burden of proof is on you; until then, I take it as "everyone has a right to their own opinion".

But I guess you will not waste any more of your time on fools like me. :rolleyes:
 
LOL what reality do you live in where "the burden of proof is on me"? I'm trying to warn OP against doing something stupid.

Entitled much? FA&FO - I don't care.
 
#2 is right: not recommended, but if you know what you are doing, why not.
Make ZFS setup stickers and paste them on the disks as a reminder.
 
I could be wrong, but wouldn't having a 4TB + 4TB stripe mirrored to an 8TB be like having a 4TB NVMe drive mirrored to a 4TB hard drive?
 
No, what I suggest is having 2 striped mirrors of 4TB partitions, each mirror consisting of one 4TB physical disk and one half of the 8TB physical disk. Thus, any failing disk affects only one side of each mirror.
So you get 8TB of net storage, just as you would with 4x4 or 2x8 (modulo performance).
 
The only problem, if I have it right, is this: on two of the drives all the data would be roughly contiguous, so reads could alternate between one drive and the other, but on the big drive the head would have to keep switching between the two partitions. So I guess the performance would depend on the HD's cache?
 
Probably. ZFS will try to distribute data over the different vdevs, so the two partitions on the 8 TB disk will be used in turns, causing seeks. This would not matter on SSDs. I think the size of the write blocks would matter, so to minimize the seeks, I would use a larger ashift than usual.
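For reference, ashift is fixed per vdev at creation time and cannot be changed later, so it has to be chosen up front. A sketch of setting it (pool name, device names, and UUID placeholders are made up for illustration):

```shell
# ashift=12 means a 4 KiB minimum block size (2^12 bytes); by default
# OpenZFS auto-detects it from the disk's reported sector size.
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-partuuid/<uuid1> /dev/sdb \
  mirror /dev/disk/by-partuuid/<uuid2> /dev/sdc

# Verify the value actually in use.
zpool get ashift tank
```

The dataset recordsize property (e.g. "zfs set recordsize=1M tank") also influences I/O sizes and, unlike ashift, can be changed at any time.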
 
I would A) get an extra 4TB drive, then you can have a RAIDZ1 with 8TB usable, or B) get an extra 8TB drive so you can have 4TB + 8TB mirrors; but I guess that wasn't the question asked.
 
With Linux you have the freedom and the capability to do a great many things, including weird ones.

Yes, you can (for example) use LVM to create a Volume Group consisting of the two 4 TB disks as Physical Volumes to get an 8 TB Logical Volume. And yes, a "zpool create mypool mirror xxx yyy" would accept that.
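The LVM construction described, which the author explicitly has not tested and does not recommend, would look roughly like this; the names "bigvg" and "biglv" are made up for the sketch, and the commands destroy existing data:

```shell
# Combine the two 4 TB disks into one 8 TB logical volume.
pvcreate /dev/sdb /dev/sdc
vgcreate bigvg /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n biglv bigvg

# Mirror the 8 TB disk against the LVM-backed 8 TB volume.
zpool create mypool mirror /dev/sda /dev/bigvg/biglv
```

The obvious catch: if either 4 TB disk dies, the whole logical volume (one entire side of the mirror) goes with it, and ZFS sees only a single failed device it cannot diagnose further.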

I did not test this, and I never would. But please: do so! And please report back your experience after the first disk failure has occurred. (Not speaking of performance, as that is probably secondary...)

But really... not everything that is possible is a good idea!

To access the internet you can use https://en.wikipedia.org/wiki/IP_over_Avian_Carriers (https://datatracker.ietf.org/doc/html/rfc1149), but I would expect nobody to actually do so :)
 
Trying to think how many years ago it was, maybe 20-25 years, I was into ham packet radio on 70cm / 2m / 4m, which is slightly faster than IP over carrier pigeon, but I think the internet killed packet radio too.
 