quick question about ZFS disks in a pool?

Discussion in 'Proxmox VE: Installation and configuration' started by killmasta93, May 30, 2019.

  1. killmasta93

    killmasta93 Member

    Joined:
    Aug 13, 2017
    Messages:
    438
    Likes Received:
    16
    Hi,
    I was wondering if someone could explain something I noticed that differs between RAID 10 and RAID 1 in ZFS.
    When I check the disks with lsblk I see this:

    Code:
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda      8:0    0 931.5G  0 disk
    |-sda1   8:1    0  1007K  0 part
    |-sda2   8:2    0   512M  0 part
    `-sda3   8:3    0   931G  0 part
    sdb      8:16   0 931.5G  0 disk
    |-sdb1   8:17   0  1007K  0 part
    |-sdb2   8:18   0   512M  0 part
    `-sdb3   8:19   0   931G  0 part
    sdc      8:32   0 931.5G  0 disk
    |-sdc1   8:33   0 931.5G  0 part
    `-sdc9   8:41   0     8M  0 part
    sdd      8:48   0 931.5G  0 disk
    |-sdd1   8:49   0 931.5G  0 part
    `-sdd9   8:57   0     8M  0 part
    
    With RAID 1 both disks show 3 partitions, but my question is: how come sda and sdb are partitioned differently from the others?
    So if sdc gets damaged and I need to replace it, would I still need to install GRUB, or would it just resilver from sdd?
     
  2. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,271
    Likes Received:
    118
    Please post the output of `zpool status` - that usually shows a better picture of what is going on.

    My guess is that sda and sdb are the first mirror of your RAID 10 - those contain a partition for EFI and a BIOS-boot partition.
    The others have 2 partitions because that is what ZFS does when you provide it a whole disk.
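
    If you want to double-check, comparing the partition tables should make the difference visible (a quick sketch using the device names from your lsblk output - adjust if yours differ):

    Code:
    # installer-prepared disk: expect a ~1007K BIOS-boot, a 512M EFI and one large ZFS partition
    sgdisk --print /dev/sda
    # whole-disk vdev member: expect one large ZFS data partition plus a small ~8M reserved partition
    sgdisk --print /dev/sdc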

    hope this helps!
     
  3. killmasta93

    killmasta93 Member

    Joined:
    Aug 13, 2017
    Messages:
    438
    Likes Received:
    16
    Thanks for the reply, this is my zpool
    Code:
    root@prometheus4:~# zpool status -v
      pool: rpool
     state: ONLINE
      scan: scrub repaired 0B in 4h37m with 0 errors on Fri May 31 23:47:39 2019
    config:
    
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
    
    errors: No known data errors
    
    So let's say the sdc disk dies and I need to replace it - would I follow the same procedure to replace the disk, including installing GRUB?
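    I'm guessing something along these lines (the replacement device name is only an example)?

    Code:
    zpool replace rpool sdc /dev/sdX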
     
  4. Nemesiz

    Nemesiz Active Member

    Joined:
    Jan 16, 2009
    Messages:
    673
    Likes Received:
    42
    OMG

    sda2 - 512M
    sdb2 - 512M

    sdc - 931.5G
    sdd - 931.5G

    Is the pool really laid out like this?
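
    A quick way to confirm the per-vdev sizes (a sketch, assuming your ZFS version supports the verbose listing):

    Code:
    zpool list -v rpool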
     
  5. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,271
    Likes Received:
    118
    How did you create the pool?
    I just tested setting up a RAID 10 with ZFS from the pve-5.4-1 ISO - there the first mirror vdev contains 'sda3' and 'sdb3' (both being the large partitions).
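
    For comparison, a manually created RAID 10 pool (two striped mirror vdevs) would look roughly like this - the pool name and device names are only an example, and this is not the exact command the installer runs:

    Code:
    zpool create tank mirror /dev/sda3 /dev/sdb3 mirror /dev/sdc3 /dev/sdd3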
     