Strange partition scheme at install, zfs raid10 on Proxmox 5.0

tsarya

Active Member
Sep 15, 2017
Hello,

I am experiencing an odd issue trying to install Proxmox 5.0 on an HP ProLiant DL360p Gen8 server with an additional LSI 9207-8i HBA (in AHCI mode) and 4x 1TB 2.5" WD Black drives.
Media is: proxmox-ve_5.0-5ab26bc-5.iso

During installation, I select ZFS RAID10 in order to get a striped mirror. Normally, all 4 disks should be partitioned in the same way.

After install, running 'zpool status' gives me this:
Code:
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors

Also, I see this:
Code:
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0 931.5G  0 part
└─sda9   8:9    0     8M  0 part
sdb      8:16   0 931.5G  0 disk
├─sdb1   8:17   0  1007K  0 part
├─sdb2   8:18   0 931.5G  0 part
└─sdb9   8:25   0     8M  0 part
sdc      8:32   0 931.5G  0 disk
├─sdc1   8:33   0 931.5G  0 part
└─sdc9   8:41   0     8M  0 part
sdd      8:48   0 931.5G  0 disk
├─sdd1   8:49   0 931.5G  0 part
└─sdd9   8:57   0     8M  0 part
sr0     11:0    1  1024M  0 rom
zd0    230:0    0     8G  0 disk [SWAP]

Any ideas as to why that happens?

BTW, in a 2-disk ZFS mirror configuration this does not happen; the disks are partitioned in the same way.
 
What exactly do you think is strange? The first 2 (bootable) disks contain an additional partition for the GRUB boot loader. Partition number 9 is a special/reserved ZFS partition. Partition 2 is the ZFS data partition.
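If you want to see that layout yourself, 'sgdisk -p' on one of the bootable disks will list the partitions and their types (illustrative command only, the exact output depends on your disks):
Code:
# print the GPT of the first bootable pool member; you should see the
# small BIOS boot partition used by GRUB (1), the ZFS data partition (2)
# and the 8M reserved partition that ZFS creates (9)
sgdisk -p /dev/sda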
 
I believe tsarya is referring to how sda and sdb are partitioned into 3 (a bootloader partition, the ZFS data partition, and the reserved partition), while sdc and sdd only have two partitions, missing what looks like the bootloader partition, and is asking why this occurs.
 
Hi,

I can only guess/speculate here:

- when you create the ZFS storage, the installer/kernel may ask the BIOS which disks are bootable (and your BIOS reports that only 2 of them are capable of booting)
- so if your BIOS only exposes the first and second disk as boot devices, ZFS/kernel/whatever will use only those 2 disks

Another idea is to install Proxmox as a mirror only, then add the second pair of disks: copy the GPT layout from the first 2 disks (which are in the ZFS mirror - see the wiki), then extend/convert your ZFS mirror into a striped mirror (RAID 10). The last step is to install GRUB on the last 2 disks.
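Roughly something like this (just a sketch with example device names, not a tested recipe - check the wiki before running it, and keep in mind that adding a vdev with 'zpool add' cannot easily be undone):
Code:
# copy the GPT layout from a disk that is already in the ZFS mirror
# (here /dev/sda) onto the two new disks, then give them fresh GUIDs
sgdisk --replicate=/dev/sdc /dev/sda
sgdisk --randomize-guids /dev/sdc
sgdisk --replicate=/dev/sdd /dev/sda
sgdisk --randomize-guids /dev/sdd

# add the new pair as a second mirror vdev -> striped mirror (RAID 10)
zpool add rpool mirror /dev/sdc2 /dev/sdd2

# install GRUB on the new disks so they are bootable as well
grub-install /dev/sdc
grub-install /dev/sdd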
 
adgenet, you are correct.

The reason I thought it was strange is that I am comparing it with the default FreeBSD installation, where all drives are partitioned in the same way, boot code is installed on all drives, and the 2 vdevs look exactly the same.
Anyway, I assumed a similar partitioning scheme would be used in Proxmox.

guletz, I do not want to do that, because I would end up with 2 vdevs populated with unequal amounts of data, which theoretically hurts performance.

Well, if this is how it is set up on Proxmox, I am perfectly fine with it as long as it works as expected. :)

Thanks to all for the prompt reaction!
 
At the beginning that is true, and it affects only the Proxmox installation. But any new write operation will be striped across both mirrors, and any VM created afterwards will be striped as well. What you describe affects only the OS part, which will have a low impact on the ZFS pool anyway.
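You can verify this yourself: 'zpool list -v' shows how much data is allocated on each mirror vdev, and 'zpool iostat -v' shows new writes being spread over both of them (generic commands, nothing specific to this box):
Code:
# allocated/free space per vdev (the newer mirror starts nearly empty)
zpool list -v rpool

# live per-vdev I/O statistics, refreshed every 5 seconds
zpool iostat -v rpool 5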
 
I'm still new to the world of ZFS, but like tsarya mentioned, I'm confused as to why you would not want a bootable partition on all the drives in every vdev?
 
