Question about expanding a ZFS mirror (RAID 10)

killmasta93

Renowned Member
Aug 13, 2017
Hi,
I was wondering if someone could shed some light. I currently have a RAID 10 pool with 8 disks (4 mirror vdevs) and want to expand it to 16 disks. My question: is it as simple as this?

Code:
zpool add rpool mirror sdi sdj mirror sdk sdl mirror sdm sdn mirror sdo sdp

Or would I need to replicate each mirror to each disk?

When I tried it, I noticed the new disks are partitioned differently:

Code:
sda      8:0    0   30G  0 disk
|-sda1   8:1    0 1007K  0 part
|-sda2   8:2    0  512M  0 part
`-sda3   8:3    0 29.5G  0 part
sdb      8:16   0   30G  0 disk
|-sdb1   8:17   0 1007K  0 part
|-sdb2   8:18   0  512M  0 part
`-sdb3   8:19   0 29.5G  0 part
sdc      8:32   0   30G  0 disk
|-sdc1   8:33   0 1007K  0 part
|-sdc2   8:34   0  512M  0 part
`-sdc3   8:35   0 29.5G  0 part
sdd      8:48   0   30G  0 disk
|-sdd1   8:49   0 1007K  0 part
|-sdd2   8:50   0  512M  0 part
`-sdd3   8:51   0 29.5G  0 part
sde      8:64   0   30G  0 disk
|-sde1   8:65   0 1007K  0 part
|-sde2   8:66   0  512M  0 part
`-sde3   8:67   0 29.5G  0 part
sdf      8:80   0   30G  0 disk
|-sdf1   8:81   0 1007K  0 part
|-sdf2   8:82   0  512M  0 part
`-sdf3   8:83   0 29.5G  0 part
sdg      8:96   0   30G  0 disk
|-sdg1   8:97   0 1007K  0 part
|-sdg2   8:98   0  512M  0 part
`-sdg3   8:99   0 29.5G  0 part
sdh      8:112  0   30G  0 disk
|-sdh1   8:113  0 1007K  0 part
|-sdh2   8:114  0  512M  0 part
`-sdh3   8:115  0 29.5G  0 part
sdi      8:128  0   30G  0 disk
|-sdi1   8:129  0   30G  0 part
`-sdi9   8:137  0    8M  0 part
sdj      8:144  0   30G  0 disk
|-sdj1   8:145  0   30G  0 part
`-sdj9   8:153  0    8M  0 part
sdk      8:160  0   30G  0 disk
|-sdk1   8:161  0   30G  0 part
`-sdk9   8:169  0    8M  0 part
sdl      8:176  0   30G  0 disk
|-sdl1   8:177  0   30G  0 part
`-sdl9   8:185  0    8M  0 part
sdm      8:192  0   30G  0 disk
|-sdm1   8:193  0   30G  0 part
`-sdm9   8:201  0    8M  0 part
sdn      8:208  0   30G  0 disk
|-sdn1   8:209  0   30G  0 part
`-sdn9   8:217  0    8M  0 part
sdo      8:224  0   30G  0 disk
|-sdo1   8:225  0   30G  0 part
`-sdo9   8:233  0    8M  0 part
sdp      8:240  0   30G  0 disk
|-sdp1   8:241  0   30G  0 part
`-sdp9   8:249  0    8M  0 part

Thank you
 
Thanks for the reply, but is it normal that the partitions are laid out differently? It seems the new disks only have 2 partitions, so I assume GRUB is not installed on them?
 
I don't know why there are 3 partitions. Can you post the disk mount points or more info?
 
Hi,
PVE creates an additional 512M EFI partition on each disk to hold the kernel and initramfs. Also, I would recommend using /dev/disk/by-id symlinks instead of plain disk names, as disk names are not guaranteed to stay the same after hardware changes; the kernel simply enumerates them.
So you probably want to partition your disks first, then attach the partitions to the zpool by their by-id names. Best would probably be to recreate the pool via the web UI, as you cannot remove the vdevs once they have been added to the pool.

Edit: I overlooked that this is your rpool; in that case you cannot recreate it without reinstalling. Also note that you only have redundancy within each vdev, not across the whole pool. So if any single vdev fails (both disks of the same mirror), all your data is gone.
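In practice the suggested workflow might look like the sketch below. This is only an illustration: the by-id names are placeholders for your actual disks, the part3 suffix assumes a PVE-style layout where the third partition holds ZFS, and DRY_RUN=1 prints each command instead of executing it.

```shell
#!/bin/sh
# Sketch of the suggested workflow (placeholder disk names, dry-run only):
# 1. clone the partition layout from an existing pool disk onto each new disk,
# 2. randomize the clone's GUIDs so they are unique,
# 3. add the ZFS partitions (part3) as a new mirror vdev, by-id.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

SRC=/dev/disk/by-id/ata-EXISTING_DISK          # placeholder: a disk already in the pool
NEW1=/dev/disk/by-id/ata-NEW_DISK_1            # placeholder: first new disk
NEW2=/dev/disk/by-id/ata-NEW_DISK_2            # placeholder: second new disk

for NEW in "$NEW1" "$NEW2"; do
    run sgdisk "$SRC" -R "$NEW"                # copy partition table to the new disk
    run sgdisk -G "$NEW"                       # give the copy unique GUIDs
done

# attach the pair as an additional mirror vdev, using the ZFS partitions
run zpool add rpool mirror "${NEW1}-part3" "${NEW2}-part3"
```

With DRY_RUN=0 the same script would actually run the commands; reviewing the printed commands first is a cheap sanity check before touching the pool.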
 
Thanks for the reply, this is what I did.
I copied the partition table from one of the existing disks to each new disk:

Code:
sgdisk /dev/disk/by-id/ata-VBOX_HARDDISK_VBe59e42f0-c63cf959 -R /dev/disk/by-id/ata-VBOX_HARDDISK_VBfcf0fe1c-84d4034e
Code:
sgdisk -G /dev/disk/by-id/ata-VBOX_HARDDISK_VBfcf0fe1c-84d4034e

Then I ran this on each disk (this is one example):
Code:
proxmox-boot-tool format /dev/sdj2
proxmox-boot-tool init /dev/sdj2
proxmox-boot-tool refresh
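Repeating those per-disk steps for all eight new disks could be scripted roughly as below. This is only a sketch: the sdi..sdp names match the lsblk output above but should be verified (or replaced with /dev/disk/by-id paths) before use, and DRY_RUN=1 prints each command instead of executing it.

```shell
#!/bin/sh
# Sketch: run the proxmox-boot-tool steps for every new disk (dry-run only).
# Partition 2 is assumed to be the 512M ESP created by the cloned layout.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

for d in sdi sdj sdk sdl sdm sdn sdo sdp; do
    run proxmox-boot-tool format "/dev/${d}2"  # create the ESP filesystem
    run proxmox-boot-tool init "/dev/${d}2"    # register it and install the bootloader
done
run proxmox-boot-tool refresh                  # sync kernels/initramfs to all registered ESPs
```

One refresh at the end is enough, since it updates every ESP that init has registered.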

Code:
root@pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
13D5-4153 is configured with: grub
159E-5778 is configured with: grub
159F-35A5 is configured with: grub
15A0-BCA0 is configured with: grub
15A1-9C82 is configured with: grub
15A2-558C is configured with: grub
15A3-2D1F is configured with: grub
15B3-6668 is configured with: grub
252D-A303 is configured with: grub
252E-3E3D is configured with: grub
252E-9693 is configured with: grub
2530-001E is configured with: grub
2530-92C1 is configured with: grub
2531-02DF is configured with: grub
2531-8384 is configured with: grub
2532-1F6B is configured with: grub


Code:
root@pve:~# zpool status
  pool: rpool
 state: ONLINE
config:

    NAME                                             STATE     READ WRITE CKSUM
    rpool                                            ONLINE       0     0     0
      mirror-0                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB5bdc558e-8f32fbcb-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBd892e6d2-fbb158ba-part3  ONLINE       0     0     0
      mirror-1                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBf6642885-b379f439-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBc84f3d51-fec82765-part3  ONLINE       0     0     0
      mirror-2                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBc759ea03-5338743b-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB529e210c-6ffc52e2-part3  ONLINE       0     0     0
      mirror-3                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB63ab82eb-52481fe3-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBe59e42f0-c63cf959-part3  ONLINE       0     0     0
      mirror-4                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBaa3dd845-054fea19-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBfcf0fe1c-84d4034e-part3  ONLINE       0     0     0
      mirror-5                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB7b8e54ab-4705de66-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB204efe88-0ddd0941-part3  ONLINE       0     0     0
      mirror-6                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB9427cb64-7a382d63-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB7bda2cd9-f63dcf43-part3  ONLINE       0     0     0
      mirror-7                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB4922769f-f106e384-part3  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBcce63854-ed33688b-part3  ONLINE       0     0     0



Code:
root@pve:~# zfs list
NAME               USED  AVAIL     REFER  MOUNTPOINT
rpool             1.03G   224G       96K  /rpool
rpool/ROOT        1.03G   224G       96K  /rpool/ROOT
rpool/ROOT/pve-1  1.03G   224G     1.03G  /
rpool/data          96K   224G       96K  /rpool/data

And that seems to have worked, but I'm not sure if I missed something?
 
