
[SOLVED] ZFS RAID 10 with different hard drive layout

Discussion in 'Proxmox VE: Installation and configuration' started by maxprox, Dec 7, 2017.

  1. maxprox

    maxprox Member

    Hi,
    yesterday I installed a new Proxmox 5.1 system with 4 hard drives in ZFS RAID 10,
    4x SATA disks connected directly to the mainboard.
    Now I get a different disk layout:
    2 disks with two partitions, 2 disks with three partitions.

    These four disks were already used in a ZFS system before, but I tried to clean them with labelclear and wipefs.
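    The cleanup was roughly along these lines (just a sketch with example device names; depending on the old layout, labelclear may need to target the former ZFS partition, e.g. /dev/sda2, instead of the whole disk):

    Code:
    # repeated for each of the four old disks (sda used as the example)
    zpool labelclear -f /dev/sda    # clear the old ZFS vdev labels
    wipefs -a /dev/sda              # wipe remaining filesystem/partition signatures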


    Code:
    root@kovaprox:~# fdisk -l    
    Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: D9D2F816-5566-4BEE-9A86-E93CE38CD055
    
    Device          Start        End    Sectors   Size Type
    /dev/sda1          34       2047       2014  1007K BIOS boot
    /dev/sda2        2048 1953508749 1953506702 931.5G Solaris /usr & Apple ZFS
    /dev/sda9  1953508750 1953525134      16385     8M Solaris reserved 1
    
    
    Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: E4BB4FB9-D0F2-40C2-9E2A-BC36A1E4D936
    
    Device          Start        End    Sectors   Size Type
    /dev/sdb1          34       2047       2014  1007K BIOS boot
    /dev/sdb2        2048 1953508749 1953506702 931.5G Solaris /usr & Apple ZFS
    /dev/sdb9  1953508750 1953525134      16385     8M Solaris reserved 1
    
    
    Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 8CF97A5F-F9EA-5042-B48A-9F17F25DF3BB
    
    Device          Start        End    Sectors   Size Type
    /dev/sdc1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
    /dev/sdc9  1953507328 1953523711      16384     8M Solaris reserved 1
    
    
    Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: F00B293B-D780-F34F-A0D3-C1F39F3CE8DD
    
    Device          Start        End    Sectors   Size Type
    /dev/sdd1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
    /dev/sdd9  1953507328 1953523711      16384     8M Solaris reserved 1
    zpool status shows a difference as well:

    Code:
    root@kovaprox:~# zpool status
      pool: rpool
     state: ONLINE
      scan: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              mirror-0  ONLINE       0     0     0
                sda2    ONLINE       0     0     0
                sdb2    ONLINE       0     0     0
              mirror-1  ONLINE       0     0     0
                sdc     ONLINE       0     0     0
                sdd     ONLINE       0     0     0
    
    errors: No known data errors
    My question is: is that a correct layout?
    Or did my cleanup of the old ZFS metadata
    on the disks not work?

    regards,
    maxprox
     
  2. fabian

    fabian Proxmox Staff Member

    Looks good - the PVE installer only configures the first redundant vdev as bootable if you do RAID 10, so only those disks get the BIOS boot partition. For RAID 0, RAID 1 or RAIDZ, all disks are partitioned to be bootable (because they are all part of the first vdev, or are top-level vdevs themselves).
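
    A quick way to double-check which disks actually carry the BIOS boot partition and a GRUB boot record (a sketch, using the device names from the fdisk output above):

    Code:
    # only sda and sdb should report a BIOS boot partition in this layout
    for d in /dev/sd{a,b,c,d}; do echo "== $d"; fdisk -l "$d" | grep -i 'BIOS boot' || echo "  no BIOS boot partition"; done
    # rough check whether a GRUB boot record sits in the disk's first sector
    dd if=/dev/sda bs=512 count=1 2>/dev/null | grep -ac GRUB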
     
  3. maxprox

    maxprox Member

    Hello fabian,

    thank you for the answer,
    that's good to know!
    Today I installed a root server at Hetzner (a German hosting provider) via its remote console called LARA, also with ZFS RAID 10,
    with the same result. Also good to know: don't use fdisk but parted, for example for an align-check (see the sketch below) ...
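
    A minimal example (partition 2 on sda as shown above; "minimal" can be used instead of "optimal"):

    Code:
    # check whether partition 2 on /dev/sda is optimally aligned
    parted /dev/sda align-check optimal 2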

    best regards,
    maxprox
     
  4. chalan

    chalan Member

    maxprox, can you please post your pveperf output? I also have 4 disks connected directly to the mainboard SATA2 ports, but I have totally bad performance. I have 4x WD RED (5400-7200 RPM) 1TB disks and this is my performance, so I am curious about yours...

    root@pve-klenova:~# pveperf
    CPU BOGOMIPS: 38401.52
    REGEX/SECOND: 454639
    HD SIZE: 681.16 GB (rpool/ROOT/pve-1)
    FSYNCS/SECOND: 47.82
    DNS EXT: 26.17 ms
    DNS INT: 20.14 ms (elson.sk)

    it's a shame - and that is with NO VM started...

    root@pve-klenova:~# zpool status
      pool: rpool
     state: ONLINE
      scan: scrub repaired 0 in 16h41m with 0 errors on Sun Nov 12 17:05:48 2017
    config:

            NAME                                                STATE     READ WRITE CKSUM
            rpool                                               ONLINE       0     0     0
              mirror-0                                          ONLINE       0     0     0
                ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J2021886-part2  ONLINE       0     0     0
                ata-WDC_WD10EFRX-68JCSN0_WD-WMC1U6546808-part2  ONLINE       0     0     0
              mirror-1                                          ONLINE       0     0     0
                ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J2AK75T9        ONLINE       0     0     0
                ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J1JE0SFR        ONLINE       0     0     0

    errors: No known data errors

    32GB RAM

    with VMs started :(

    root@pve-klenova:~# pveperf
    CPU BOGOMIPS: 38401.52
    REGEX/SECOND: 457523
    HD SIZE: 681.16 GB (rpool/ROOT/pve-1)
    FSYNCS/SECOND: 14.49
    DNS EXT: 62.92 ms
    DNS INT: 56.78 ms (elson.sk)
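
    Those FSYNCS/SECOND values look very low. One thing worth checking (just a sketch, not a definitive diagnosis) is whether the pool was created with an ashift matching the drives' 4K sectors:

    Code:
    # show the ashift recorded for the pool (12 = 4K sectors, 9 = 512-byte sectors)
    zdb -C rpool | grep ashift
    # if rpool is not in the cache file, a member vdev's label shows it as well
    zdb -l /dev/disk/by-id/ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J2021886-part2 | grep ashift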
     
    #4 chalan, Dec 9, 2017
    Last edited: Dec 9, 2017
