ZFS RAID 1 mirror to RAID 10

chalan

Hello, I have two identical disks in a RAID 1 (mirror) ZFS pool:

Code:
root@pve:~# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 2,21M in 0h0m with 0 errors on Sat Jul 16 20:32:11 2016
config:

    NAME                                                STATE     READ WRITE CKSUM
    rpool                                               ONLINE       0     0     0
      mirror-0                                          ONLINE       0     0     0
        ata-WDC_WD10EFRX-68JCSN0_WD-WMC1U6546808-part2  ONLINE       0     0     0
        ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J2021886-part2  ONLINE       0     0     0

I would like to add two more identical disks and make a RAID 10. Is it possible to do this without reinstalling the whole PVE? Thank you.
 
Code:
zpool add rpool mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2

Should do it.
 
Sorry, I have a RAID 1 mirror, not RAID 0 striped, as I wrote in the header... Is the process the same? Thank you.
 
Sorry, I have a RAID 1 mirror, not RAID 0 striped, as I wrote in the header... Is the process the same? Thank you.
It's apparent what you have since you posted your zpool status output. If you do what I suggested you'll end up with a stripe of mirrors (~RAID10).
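
If I'm not mistaken, after the add the pool should end up with a second mirror vdev alongside the existing one, roughly like this (a sketch only; the by-id names of the two new disks are placeholders, not real serials):

Code:
NAME                                                STATE
rpool                                               ONLINE
  mirror-0                                          ONLINE
    ata-WDC_WD10EFRX-68JCSN0_WD-WMC1U6546808-part2  ONLINE
    ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J2021886-part2  ONLINE
  mirror-1                                          ONLINE
    <by-id of new disk 1>                           ONLINE
    <by-id of new disk 2>                           ONLINE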
 
Thank you, I will try... Is it necessary to write GRUB to the new drives so they will be bootable? Or will the sync copy it over from the original mirrored drives?
 
If you installed using the 2 drives in the old mirror, then no GRUB install is necessary. No rebalancing will happen; your array will be unbalanced, which evens out over the long term.
 
What I need to do is set up a new Proxmox server with just 2 disks in RAID 1 or RAID 0, then join the old and new servers together (as cluster nodes) and migrate all VMs from old to new. After that I want to take the two disks out of the old server, put them in the new server, add them to the existing RAID 1 (or RAID 0), and make a RAID 10 array. Is it possible to do it like this?

And what do you mean by an unbalanced array?
 
You can migrate everything over before adding the 2 disks from the old server, then add the 2 disks to the new server as hinted previously.

Google for the other question.
 
OK, I installed Proxmox 5 on the new server with ZFS RAID 1 on 2 disks. Now I have this:

Code:
root@pve-klenova:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sda2    ONLINE       0     0     0
        sdb2    ONLINE       0     0     0

errors: No known data errors

Isn't it better or safer to have the disks in there by their IDs, like on the old one?
Code:
root@pve:~# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 2,21M in 0h0m with 0 errors on Sat Jul 16 20:32:11 2016
config:

    NAME                                                STATE     READ WRITE CKSUM
    rpool                                               ONLINE       0     0     0
      mirror-0                                          ONLINE       0     0     0
        ata-WDC_WD10EFRX-68JCSN0_WD-WMC1U6546808-part2  ONLINE       0     0     0
        ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J2021886-part2  ONLINE       0     0     0
If so, how can I change this in the zpool? Thank you...
 
You can migrate everything over before adding the 2 disks from the old server, then add the 2 disks to the new server as hinted previously.

Google for the other question.

My last question... Will the array be the same if I first install Proxmox with only 2 disks in RAID 1 (mirror) and afterwards add two more disks for RAID 10, as it would be if I added all 4 disks at the beginning and did the Proxmox install with RAID 10?
 
Technically, AFAIK, yes. The installer uses just 2 disks for boot partitions. The rest of the disks are used only for data in the pool. Practically, you start with no data if you install with a "RAID10" so when adding data and VMs you don't end up with an unbalanced pool. When you add the 2nd mirror vdev later to create the striped set, data will be only on one mirror thus the pool will be unbalanced.
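
If you want to see how unbalanced the pool actually is after adding the second mirror, the per-vdev allocation can be checked; a minimal sketch, assuming the pool is called rpool:

Code:
# SIZE/ALLOC/FREE are listed per vdev, so a freshly added mirror-1 will show
# almost no allocation compared to mirror-0 until new writes even things out
zpool list -v rpool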
 
And which is better, a balanced or an unbalanced pool? My target is to double the read/write speed and stay safe when one or two of the disks fail...
 
OK, I installed Proxmox 5 on the new server with ZFS RAID 1 on 2 disks. Now I have this:
[...]
Isn't it better or safer to have the disks in there by their IDs, like on the old one? If so, how can I change this in the zpool? Thank you...

I was looking at this just the other day.
It is best to do this on a new setup with minimal data, as it is safer and there is less time spent resilvering.

So, you need to know how to get the IDs (by-id names) for sda2 and sdb2. You need the IDs for the partitions, and only these two partitions, not the whole disks.

Once you have that, use:

Code:
zpool detach myPool sda2
zpool attach myPool sdb2 <by-id path of the detached partition>

Wait for the resilvering to finish, then do the second one.

The reason you need to do it this way is that it is an active main pool, so you cannot export / import it by ID.
On a data pool that can be taken offline, you can just export and "zpool import myPool -d /dev/disk/by-id".
 
zpool detach myPool sda2
zpool attach myPool sdb2 <by-id path of the detached partition>

Wait for the resilvering to finish
No. Do not do that. Makes no sense. Instead do it like this here: https://forum.proxmox.com/threads/zfs-raid-disks.34099/#post-167015

On another note, naturally, a balanced pool is "better"; at least a normal pool that is left alone stays as-is over its lifetime and is always balanced. IIRC ZoL already has the patches that enable balanced reads (please correct me if I'm wrong here), so it helps performance too, besides the fact that striping really only works when the records span multiple vdevs, such as in a striped mirror set.
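
If you want to verify that reads and writes are actually being spread over both mirror vdevs, one way (a sketch, assuming the pool is called rpool) is to watch the per-vdev I/O statistics:

Code:
# prints per-vdev read/write operations and bandwidth every 5 seconds
zpool iostat -v rpool 5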
 
So ZFS RAID 10 does not work like normal RAID 10? Normal RAID 10, IMHO, creates mirrors that are striped, am I right? Does ZFS RAID 10 create two mirrored vdevs with no striping, and therefore no performance boost?
 
They are similar but a ZFS pool is much more than "regular" RAID10, though they share the same high-level topology. You do get similar "performance boost" in ZFS, too.
 
Hello, I'm confused :) I put 4 WD Red 1 TB SATA disks in my server, installed Proxmox 5 with ZFS RAID 10, and I have:

Code:
root@pve-klenova:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             9.25G  1.75T    96K  /rpool
rpool/ROOT         764M  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1   763M  1.75T   763M  /
rpool/data          96K  1.75T    96K  /rpool/data
rpool/swap        8.50G  1.76T    64K  -
root@pve-klenova:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  1.81T   765M  1.81T         -     0%     0%  1.00x  ONLINE  -
root@pve-klenova:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdb2    ONLINE       0     0     0
        sdc2    ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        sdf     ONLINE       0     0     0
        sdg     ONLINE       0     0     0

errors: No known data errors

A RAID 10 of these 4 disks should IMHO be 2 TB capacity (2 disks in a mirror, striped with the other 2), am I wrong?
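
For what it's worth, the ~1.81T reported by zpool list looks consistent with 4 x 1 TB disks in a striped mirror; rough arithmetic (my understanding, not from the thread):

Code:
# zpool list reports binary units (TiB), while disks are sold in decimal TB:
#   1 TB = 10^12 bytes ≈ 0.909 TiB
# a striped mirror's usable size is the sum of its mirror vdevs:
#   2 x 0.909 TiB ≈ 1.82 TiB, minus boot/swap partitioning overhead ≈ 1.81T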
 
My bad, sorry... Isn't it possible to use qcow2 images in the new Proxmox 5? I made a backup of /var/lib/vz/images from the old Proxmox 3.x and now I want to use it in the new 5... Do I have to create new VMs and then somehow import/convert the qcow2 images to ZFS? Can you give me a step-by-step on how to do it, please? Thank you.
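
For what it's worth, Proxmox 5 should be able to import existing qcow2 images into a ZFS-backed storage with qm importdisk; a minimal sketch, assuming the image has already been copied to the new host, the target VM ID is 100, and the ZFS storage is named local-zfs (VM ID, path, and storage name are all assumptions here):

Code:
# create or pick a VM first, then import the disk; the qcow2 image is
# converted to a raw zvol on the ZFS storage automatically
qm importdisk 100 /root/backup/images/100/vm-100-disk-1.qcow2 local-zfs
# the imported disk shows up as an "unused disk" on VM 100; attach it via
# the GUI or with qm set, using the volume name printed by importdisk, e.g.:
qm set 100 --scsi0 local-zfs:vm-100-disk-1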
 
