Accidentally set up ZFS as just mirrors instead of stripe of mirrors - how best to fix?

nnraymond

Normally when I set up ZFS in Proxmox I do this to create a stripe of mirrors:

Code:
zpool create -f -o ashift=12 r10zpool mirror wwn-0x50000398db881f5d wwn-0x50000398db881f67 mirror wwn-0x50000398db881f5a wwn-0x50000398db881f5e

However, on a recent server I accidentally left out the second "mirror" keyword, which resulted in a single four-way mirror instead, like this:

Code:
# zpool status
  pool: r10zpool
 state: ONLINE
config:

        NAME                        STATE     READ WRITE CKSUM
        r10zpool                    ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x50000398db881f60  ONLINE       0     0     0
            wwn-0x50000398db881f61  ONLINE       0     0     0
            wwn-0x50000398db881f5b  ONLINE       0     0     0
            wwn-0x50000398db881eb2  ONLINE       0     0     0

errors: No known data errors
# df
Filesystem            1K-blocks     Used  Available Use% Mounted on
udev                   32700500        0   32700500   0% /dev
tmpfs                   6546656     2280    6544376   1% /run
/dev/mapper/pve-root   98497780  3003248   90444984   4% /
tmpfs                  32733268    46800   32686468   1% /dev/shm
tmpfs                      5120        0       5120   0% /run/lock
/dev/nvme0n1p2           523248      328     522920   1% /boot/efi
r10zpool             1821019264      128 1821019136   1% /r10zpool
r10zpool/isos        1845985408 24966272 1821019136   2% /r10zpool/isos
r10zpool/vmdata      1821019264      128 1821019136   1% /r10zpool/vmdata
/dev/fuse                131072       16     131056   1% /etc/pve
tmpfs                   6546652        0    6546652   0% /run/user/0

Whereas this is how I usually set things up:

Code:
# zpool status
  pool: r10zpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 01:12:15 with 0 errors on Sun May  8 01:36:18 2022
config:

        NAME                        STATE     READ WRITE CKSUM
        r10zpool                    ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x50000398db881f62  ONLINE       0     0     0
            wwn-0x50000398db881f5c  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            wwn-0x50000398db881f68  ONLINE       0     0     0
            wwn-0x50000398db881f64  ONLINE       0     0     0

errors: No known data errors
# df
Filesystem            1K-blocks     Used  Available Use% Mounted on
udev                   32715592        0   32715592   0% /dev
tmpfs                   6554696   665560    5889136  11% /run
/dev/mapper/pve-root   98559220  2862524   90647148   4% /
tmpfs                  32773464    56160   32717304   1% /dev/shm
tmpfs                      5120        0       5120   0% /run/lock
tmpfs                  32773464        0   32773464   0% /sys/fs/cgroup
/dev/nvme0n1p2           523248      312     522936   1% /boot/efi
r10zpool             3313898368      128 3313898240   1% /r10zpool
r10zpool/isos        3348502272 34604032 3313898240   2% /r10zpool/isos
r10zpool/vmdata      3313898368      128 3313898240   1% /r10zpool/vmdata
/dev/fuse                 30720       20      30700   1% /etc/pve
tmpfs                   6554692        0    6554692   0% /run/user/0

Is there a way to transform this mirrored set of drives into a stripe of mirrors without any data loss? I have already created one VM on the ZFS pool and don't want to re-create it. If the pool can't be converted with the data intact, I assume I can shut down the VM, move everything off the pool to the boot drive (which has enough free space), re-create the pool, and move the files back.
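For reference, the fallback I have in mind would look roughly like this (the staging directory and rsync-based copy are just illustrative, and the child datasets plus the Proxmox storage entries would also need to be re-created):

Code:
# stop the VM, then copy everything off the pool to local storage
mkdir -p /root/r10zpool-backup
rsync -a /r10zpool/ /root/r10zpool-backup/

# destroy the pool and re-create it with the intended layout
zpool destroy r10zpool
zpool create -f -o ashift=12 r10zpool mirror wwn-0x50000398db881f60 wwn-0x50000398db881f61 mirror wwn-0x50000398db881f5b wwn-0x50000398db881eb2

# copy the data back
rsync -a /root/r10zpool-backup/ /r10zpool/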

What is the best/easiest way to get things set up the way I want?
 
You can detach two of the drives from the current mirror, then add one of them back as a new top-level vdev, and finally attach the second drive to it as a mirror.
Detach the last two drives:
zpool detach r10zpool /dev/disk/by-id/wwn-0x50000398db881f5b
zpool detach r10zpool /dev/disk/by-id/wwn-0x50000398db881eb2
Add a vdev as a stripe:
zpool attach r10zpool /dev/disk/by-id/wwn-0x50000398db881f5b
Attach a mirror drive:
zpool attach r10zpool /dev/disk/by-id/wwn-0x50000398db881f5b /dev/disk/by-id/wwn-0x50000398db881eb2
Please double check that I did not make any mistakes. Maybe you need to use wipefs if ZFS thinks the detached drives are still part of a pool but I do not expect so.

EDIT: Note that the existing data will remain on the first mirror vdev and will not be rebalanced across the stripe, but new writes will be.
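If you want to verify how the data is spread across the vdevs afterwards, zpool list -v shows the allocation per vdev; the old data will show up as ALLOC on mirror-0 while the new mirror-1 starts out nearly empty:
zpool list -v r10zpool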
 
Add a vdev as a stripe:
zpool attach r10zpool /dev/disk/by-id/wwn-0x50000398db881f5b

I am pretty sure you meant to write zpool add. :D :)

After detaching [1] two of the four drives as leesteken said, you can add [2] these two back to the pool as an additional mirrored vdev in one go with:
zpool add <YourPool> mirror <YourFreeDisk1> <YourFreeDisk2>

[1] https://docs.oracle.com/en/operatin...ching-and-detaching-devices-storage-pool.html
[2] https://docs.oracle.com/en/operatin...4/manage-zfs/adding-devices-storage-pool.html
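Putting it together for this particular pool, the full sequence would presumably be (please double check the device IDs against your own zpool status first):
zpool detach r10zpool /dev/disk/by-id/wwn-0x50000398db881f5b
zpool detach r10zpool /dev/disk/by-id/wwn-0x50000398db881eb2
zpool add r10zpool mirror /dev/disk/by-id/wwn-0x50000398db881f5b /dev/disk/by-id/wwn-0x50000398db881eb2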
 
Yes, the command is zpool add and it worked great adding both disks back in at the same time using mirror in the same command. Thanks very much to both of you, my ZFS is now set up the way I originally intended, and with no system downtime!
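For anyone who finds this later, the pool layout should now look roughly like this (abbreviated zpool status output):

Code:
        NAME                        STATE     READ WRITE CKSUM
        r10zpool                    ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x50000398db881f60  ONLINE       0     0     0
            wwn-0x50000398db881f61  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            wwn-0x50000398db881f5b  ONLINE       0     0     0
            wwn-0x50000398db881eb2  ONLINE       0     0     0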
 
