Hi,
Yesterday I installed a new Proxmox 5.1 system with four hard drives in a ZFS RAID 10, 4x SATA disks connected directly to the mainboard.
Now I get a different disk layout: two disks with two partitions and two disks with three partitions.
These four disks were already part of an old ZFS system, but I tried to clean them beforehand with labelclear and wipefs.
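For completeness, this is roughly how I cleaned each disk before the installation (from memory, and /dev/sdX stands for each of the four disks in turn):
Code:
# wipe old filesystem / partition-table signatures
wipefs -a /dev/sdX
# clear the old ZFS label from the whole disk
zpool labelclear -f /dev/sdX
The current partition layout looks like this: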
Code:
root@kovaprox:~# fdisk -l
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D9D2F816-5566-4BEE-9A86-E93CE38CD055

Device          Start        End    Sectors   Size Type
/dev/sda1          34       2047       2014  1007K BIOS boot
/dev/sda2        2048 1953508749 1953506702 931.5G Solaris /usr & Apple ZFS
/dev/sda9  1953508750 1953525134      16385     8M Solaris reserved 1


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E4BB4FB9-D0F2-40C2-9E2A-BC36A1E4D936

Device          Start        End    Sectors   Size Type
/dev/sdb1          34       2047       2014  1007K BIOS boot
/dev/sdb2        2048 1953508749 1953506702 931.5G Solaris /usr & Apple ZFS
/dev/sdb9  1953508750 1953525134      16385     8M Solaris reserved 1


Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8CF97A5F-F9EA-5042-B48A-9F17F25DF3BB

Device          Start        End    Sectors   Size Type
/dev/sdc1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdc9  1953507328 1953523711      16384     8M Solaris reserved 1


Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F00B293B-D780-F34F-A0D3-C1F39F3CE8DD

Device          Start        End    Sectors   Size Type
/dev/sdd1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdd9  1953507328 1953523711      16384     8M Solaris reserved 1
The zpool status output reflects the difference as well:
Code:
root@kovaprox:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
My question is: is this a correct layout?
Or did my attempt to clean the old ZFS metadata off the disks not work?
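If it helps, I can also post the raw ZFS labels that are (or are not) still on the devices. I would dump them roughly like this (just a sketch, the device path is only an example):
Code:
# print any ZFS label(s) still present on a device or partition
zdb -l /dev/sdc1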
regards,
maxprox