Basic ZFS configuration (I think)

svendsen

Renowned Member
Apr 18, 2012
Hi team,

During a new PVE install I decided to try out ZFS with RAID 10 across my 4 disks.
During install I selected ZFS and all 4 disks, the install completed and the server rebooted. Everything seemed to work, great! :)

However, after playing around with zfs list/status and zpool list/status, my OCD came to the surface, as the 4 disks are not partitioned identically.

Code:
root@proxmox:~# zpool list -v
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool      10.9T  1.27G  10.9T        -         -     0%     0%  1.00x    ONLINE  -
  mirror   5.46T   210M  5.46T        -         -     0%  0.00%      -  ONLINE
    sda3       -      -      -        -         -      -      -      -  ONLINE
    sdb3       -      -      -        -         -      -      -      -  ONLINE
  mirror   5.45T  1.07G  5.45T        -         -     0%  0.01%      -  ONLINE
    sdc        -      -      -        -         -      -      -      -  ONLINE
    sdd        -      -      -        -         -      -      -      -  ONLINE

This shows that sda and sdb are mirrored between partitions, while sdc and sdd are mirrored as whole disks.

I also noticed that sda and sdb have two boot partitions; sdc and sdd do not.

Code:
Device       Start         End     Sectors  Size Type
/dev/sda1       34        2047        2014 1007K BIOS boot
/dev/sda2     2048     1050623     1048576  512M EFI System
/dev/sda3  1050624 11721045134 11719994511  5.5T Solaris /usr & Apple ZFS

Code:
Device           Start         End     Sectors  Size Type
/dev/sdc1         2048 11721027583 11721025536  5.5T Solaris /usr & Apple ZFS
/dev/sdc9  11721027584 11721043967       16384    8M Solaris reserved 1
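
For reference, a quick way to compare all four layouts at once (assuming lsblk is available) would be something like:

Code:
# rough sketch: list partitions and filesystem types for all four disks in one go
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda /dev/sdb /dev/sdc /dev/sdd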

Now for my questions:

1) Is this expected? Or should I recreate the mirror and/or reinstall PVE?
In real life it should be enough to have the boot partitions on sda+sdb, since if I lose both drives I would lose my mirror + data anyway.

2) Would it be better to shrink the rpool and have several zpools? This is just a test/dev server, so no big deal... just curious whether there would be anything special to consider.
 
1) Is this expected? Or should I recreate the mirror and/or reinstall PVE?
In real life it should be enough to have the boot partitions on sda+sdb, since if I lose both drives I would lose my mirror + data anyway.

Expected, and exactly for that reason ;)

2) Would it be better to shrink the rpool and have several zpools? This is just a test/dev server, so no big deal... just curious whether there would be anything special to consider.

Some people would do that (or even not use ZFS on / at all). If you have the disks (and slots), it makes sense. If you're constrained, putting it on the same disks is fine as well.
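
For example, with two spare disks a separate data pool could look roughly like this (the pool name "tank" and the device paths are placeholders, not from this setup; using /dev/disk/by-id/ names is generally preferable to sdX):

Code:
# rough sketch: create a separate mirrored data pool on two spare disks
# "tank" and the device paths are placeholders for illustration only
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zfs set compression=lz4 tank

Keeping the OS pool and a data pool separate also makes it easier to reinstall or rebuild the root later without touching the data.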