Convert ZFS RAID 0 to RAID 1

kamzata

First of all, how can I check what kind of ZFS RAID I'm using?

I just ran:
Bash:
root@srv001:/backups# zpool status -v
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME                                               STATE     READ WRITE CKSUM
    rpool                                              ONLINE       0     0     0
      nvme-eui.e8238fa6bf530001001b448b46ee3a39-part3  ONLINE       0     0     0
      nvme-eui.e8238fa6bf530001001b448b46ee35bf        ONLINE       0     0     0

errors: No known data errors

...but I can't tell what type it is. Is it a RAID0?

Then, how can I convert a ZFS RAID0 to a mirror?
 

Yes, since it doesn't mention any vdev type, it's a RAID0 (otherwise it would show "mirror", "raidz", etc. below the pool name).

You will have to recreate it as a mirror.

Are there already files on it, or can it be destroyed?

If there are already files, snapshot the pool and send it to another pool as a backup.

Any VMs / containers based on that storage need to be shut down.

Then recreate your mirror and receive the files back onto it.
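
In case it helps, here is a minimal sketch of that backup-and-restore cycle (the pool name backup and the snapshot name migrate are hypothetical; adjust names and paths to your setup):
Bash:
# Shut down all VMs/containers first, then take a recursive snapshot:
zfs snapshot -r rpool@migrate

# Replicate everything (all datasets and their snapshots) to a second pool:
zfs send -R rpool@migrate | zfs recv -F backup/rpool-copy

# After destroying and recreating rpool as a mirror, restore it:
zfs send -R backup/rpool-copy@migrate | zfs recv -F rpool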
 
I really don't know how this escaped me. I remember paying attention to select RAIDZ (mirror), but I was clearly wrong. Anyway...

Yeah, there's Proxmox installed on it. Since this is one of my first times using ZFS, could you tell me step by step how to achieve the complete migration?

I suppose I will also need to update the initramfs and GRUB, right?
 

Oh, I missed that it's your Proxmox root.

In this case you will have to do a whole clean install.

Migrating from RAID0 to RAID1 and setting up the EFI partitions so you can boot from either disk would be too much work.

The Proxmox installer does all of that for you; just make sure to select RAID1, not RAID0, at install time.

Proxmox with ZFS on root uses systemd-boot, so you only have to update the initramfs; GRUB is not used at all.
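
For completeness, refreshing the initramfs on a ZFS-root Proxmox system looks roughly like this (on older releases the second tool is called pve-efiboot-tool; this is a sketch, not an official procedure):
Bash:
# Rebuild the initramfs for all installed kernels:
update-initramfs -u -k all

# Copy the kernels and initramfs images to the ESPs used by systemd-boot:
proxmox-boot-tool refresh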
 
FYI, this is what zpool status should look like for a mirror:

Code:
 pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:24:15 with 0 errors on Sun May 10 00:48:19 2020
config:

        NAME            STATE     READ WRITE CKSUM
        rpool           ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            cryptroot1  ONLINE       0     0     0
            cryptroot2  ONLINE       0     0     0
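
The mirror-0 line under the pool name is what marks the vdev type. For reference, here is roughly how a mirror is set up (hypothetical device names; note that zpool attach works per vdev, so a two-disk stripe like yours cannot be converted in place with only those two disks):
Bash:
# Create a new pool as a mirror from the start:
zpool create tank mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2

# Or turn an existing single-disk vdev into a mirror by attaching a second
# disk; ZFS then resilvers the new disk automatically:
zpool attach tank /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2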
 
Since the disks are new and there's no critical service running on them, I think I will leave the stripe. In the worst case of a drive failure, I'll go offline for a day (time to replace the damaged disk and reinstall Proxmox itself). I still ask myself how I could have made this mistake... thanks for your help!
 
Just out of curiosity... do you know how much faster a stripe of 2 NVMe drives is compared to a mirror? Is it a noticeable difference?

This post is a bit old, but I would still like to share my experience.

ZFS RAID0 across two NVMe SSDs, with loads of spare RAM, gives you about 1.5 times the performance of one NVMe, so it definitely does not scale that well. ZFS is not meant to be super fast but rather super resilient.

Also, using multiple NVMe disks in a striping configuration often fails due to PCIe bandwidth restrictions and can get you worse performance than a single NVMe. This is especially the case with consumer-grade hardware (motherboard and CPU) offering a mere 16-20 PCIe lanes. Often one NVMe slot goes through the chipset and one is linked directly to the CPU, meaning you could actually slow down your whole machine if you really stress your RAID0 configuration.
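
If you want to see what link each of your NVMe slots actually negotiated, the PCIe capability and status lines can be inspected like this (0108 is the standard PCI class code for NVMe controllers; run as root so the capability sections are visible):
Bash:
# List NVMe controllers with their PCIe link capability vs. negotiated status:
lspci -vv -d ::0108 | grep -E 'Non-Volatile|LnkCap:|LnkSta:'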

Even with server-grade hardware and no PCIe bottleneck, you can easily choke your CPU due to the nature of ZFS (checksumming and compression cost CPU cycles on every I/O).

tl;dr: do not use ZFS for NVMe / SATA SSD striping if you are looking for improved performance.
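
If you'd rather measure the difference on your own hardware than rely on rules of thumb, here is a quick sketch with fio (hypothetical dataset name; fio must be installed, and primarycache is lowered so the ARC doesn't serve the reads straight from RAM):
Bash:
# Create a throwaway dataset and keep the ARC from caching file data:
zfs create rpool/fio-test
zfs set primarycache=metadata rpool/fio-test

# Sequential read test (4 jobs x 4G); run the same job on a stripe and
# on a mirror to compare throughput:
fio --name=seqread --directory=/rpool/fio-test --size=4G --bs=1M \
    --rw=read --ioengine=psync --numjobs=4 --group_reporting

# Clean up afterwards:
zfs destroy rpool/fio-test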
 