The different names for disks may lead to confusion, so I show some different ways to specify the "right" device.
Why is this important? Well, the obvious thing is that “zpool create” will destroy the old content - you will lose any data stored on it. And the “classic” names like “sda” may change after adding (or removing) hardware to your setup - leading to boot problems. These names are not static; they do not point to the same device under all circumstances. “By-id” names do.
ZFS can consume any block storage device. You may specify an already existing partition on a disk, for example. If you give it a brand-new disk it will create a GPT with some small headroom at the end of the disk and use the first partition.
Demo
I have a test system that uses only the single disk "sda" for the OS. Now I add four brand-new disks:
Code:
root@pnm:~# lsblk /dev/sd[b-e]
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb     8:16  0  12G  0 disk
sdc     8:32  0  13G  0 disk
sdd     8:48  0  14G  0 disk
sde     8:64  0  15G  0 disk
I assigned different sizes just because... I can - and it makes the disks easily recognizable.
The commonly recommended way to specify a disk is "by-id":
Code:
root@pnm:~# ls -Al /dev/disk/by-id | grep sd[b-e]
lrwxrwxrwx 1 root root 9 Apr 14 08:30 scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 -> ../../sdb
lrwxrwxrwx 1 root root 9 Apr 14 08:30 scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 14 08:30 scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 -> ../../sdd
lrwxrwxrwx 1 root root 9 Apr 14 08:30 scsi-0QEMU_QEMU_HARDDISK_drive-scsi4 -> ../../sde
For demonstration I create a GPT and some irrelevant partitions on “sde”. The result looks like this:
Code:
root@pnm:~# lsblk /dev/sd[b-e]
NAME     MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb        8:16   0  12G  0 disk
sdc        8:32   0  13G  0 disk
sdd        8:48   0  14G  0 disk
sde        8:64   0  15G  0 disk
├─sde1     8:65   0   1G  0 part
├─sde2     8:66   0   1G  0 part
├─sde3     8:67   0   1G  0 part
├─sde4     8:68   0   1G  0 part
├─sde5     8:69   0   1G  0 part
└─sde6     8:70   0  10G  0 part
Only now, with a GPT, the disk is visible (as the only one) here:
Code:
root@pnm:~# ls -Al /dev/disk/by-partuuid/ | grep sd[b-e]
lrwxrwxrwx 1 root root 10 Apr 14 08:48 28549ac7-f4a2-45c6-8bb9-8faa8087aee2 -> ../../sde5
lrwxrwxrwx 1 root root 10 Apr 14 08:48 4ba108e3-198a-4d60-8990-c429ced7a828 -> ../../sde1
lrwxrwxrwx 1 root root 10 Apr 14 08:48 6009e315-b451-4d84-bd33-a5f3eefefab7 -> ../../sde4
lrwxrwxrwx 1 root root 10 Apr 14 08:48 88031de9-ad88-4412-96cb-e43c538d0dd1 -> ../../sde3
lrwxrwxrwx 1 root root 10 Apr 14 08:48 aade1e50-248e-47a7-88f5-1196bc98846b -> ../../sde6
lrwxrwxrwx 1 root root 10 Apr 14 08:48 b6e34b25-65a4-4c13-939f-3b716f9604ab -> ../../sde2
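All these by-id and by-partuuid names are just symlinks maintained by udev, as the listings above show. As a minimal sketch (the function name and the optional directory argument are my own invention for illustration; on a real system the default /dev/disk/by-id applies), you can resolve which stable name points at a given kernel device with readlink:

```shell
# find_stable_names DEV [DIR] - print every symlink in DIR (default:
# /dev/disk/by-id) that resolves to the same device as DEV.
# A sketch only; udev populates /dev/disk/by-id on a real system.
find_stable_names() {
    dev=$(readlink -f "$1")
    dir=${2:-/dev/disk/by-id}
    for link in "$dir"/*; do
        [ -e "$link" ] || continue
        [ "$(readlink -f "$link")" = "$dev" ] && printf '%s\n' "$link"
    done
}
```

On the demo system above, `find_stable_names /dev/sdb` would print the scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 link.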
Create a new pool
Now to the interesting part: I can create a new pool (or generally: a new vdev) with these "differently" specified devices:
- sdb = directly addressed
- sdc = addressed as "scsi-0QEMU_QEMU_HARDDISK_drive-scsi2"
- sdd = addressed by "path" as "pci-0000:01:04.0-scsi-0:0:0:3"
- sde6 = a specific, pre-existing partition
Code:
root@pnm:~# zpool create -f demopool mirror /dev/sdb /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/disk/by-partuuid/aade1e50-248e-47a7-88f5-1196bc98846b
root@pnm:~# zpool list -v demopool
NAME                                         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
demopool                                  9.50G   130K  9.50G        -         -    0%     0%  1.00x  ONLINE  -
  mirror-0                                9.50G   130K  9.50G        -         -    0%  0.00%      -  ONLINE  -
    sdb                                   12.0G      -      -        -         -     -      -      -  ONLINE  -
    scsi-0QEMU_QEMU_HARDDISK_drive-scsi2  13.0G      -      -        -         -     -      -      -  ONLINE  -
    pci-0000:01:04.0-scsi-0:0:0:3         14.0G      -      -        -         -     -      -      -  ONLINE  -
    aade1e50-248e-47a7-88f5-1196bc98846b  10.0G      -      -        -         -     -      -      -  ONLINE  -
The other effect is that there are partition tables now:
Code:
root@pnm:~# lsblk /dev/sd[b-e]
NAME     MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdb        8:16   0  12G  0 disk
├─sdb1     8:17   0  12G  0 part
└─sdb9     8:25   0   8M  0 part
sdc        8:32   0  13G  0 disk
├─sdc1     8:33   0  13G  0 part
└─sdc9     8:41   0   8M  0 part
sdd        8:48   0  14G  0 disk
├─sdd1     8:49   0  14G  0 part
└─sdd9     8:57   0   8M  0 part
sde        8:64   0  15G  0 disk
├─sde1     8:65   0   1G  0 part
├─sde2     8:66   0   1G  0 part
├─sde3     8:67   0   1G  0 part
├─sde4     8:68   0   1G  0 part
├─sde5     8:69   0   1G  0 part
└─sde6     8:70   0  10G  0 part
For “sdb” you may note that the pool lists it as “sdb” while it obviously uses “sdb1”.
Just for completeness: this is a single four-way mirror, and its usable capacity is - of course - below that of the smallest device:
Code:
root@pnm:~# zfs list demopool
NAME      USED  AVAIL  REFER  MOUNTPOINT
demopool  130K  9.20G    24K  /demopool
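These numbers follow the rule that a mirror can only be as large as its smallest member, with ZFS then subtracting a little for labels and partitioning. A trivial sketch with the demo sizes (sde6 is the 10G partition used in the pool):

```shell
# A mirror's capacity is bounded by its smallest member.
# Demo member sizes in GiB: sdb=12, sdc=13, sdd=14, sde6=10.
min=$(printf '%s\n' 12 13 14 10 | sort -n | head -n 1)
echo "upper bound: ${min}G"
```

The reported 9.50G SIZE and 9.20G AVAIL stay just below that bound.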
End of demo
Addendum
Other usual layouts

If you use a specific installer to create your setup, there are more variants. For example, the PVE installer creates one more partition (to be able to boot from that disk) and prepares partition 3 to be used for the ZFS pool:
Code:
root@pnm:~# zpool list -v rpool
NAME                                           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool                                           31G  2.90G  28.1G        -         -    6%     9%  1.00x  ONLINE  -
  scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part3  31.5G  2.90G  28.1G        -         -    6%  9.34%      -  ONLINE  -
root@pnm:~# lsblk -f /dev/sda
NAME   FSTYPE     FSVER LABEL UUID                FSAVAIL FSUSE% MOUNTPOINTS
sda
├─sda1
├─sda2 vfat       FAT32       1AFA-C799
└─sda3 zfs_member 5000  rpool 3297002212863483091
Fresh start with used disks
I recommend erasing everything from a used disk first. (In this demo I run “zpool destroy demopool” first.) My “sde” has six partitions, see above. The simplest approach is to just remove all partitioning information:
Code:
root@pnm:~# sgdisk -Z /dev/sde
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Note that there is no “Enter y to confirm” or similar. This command is dangerous!
Code:
root@pnm:~# lsblk /dev/sde
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sde     8:64  0  15G  0 disk
Note that this removed only the partition table. The previously stored data is still there! If you want to sell a disk you really, really want to overwrite the actual data, not only the partition table. There are dedicated tools for this, but for me a simple “dd if=/dev/zero of=/dev/sde bs=1M status=progress” is sufficient.
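That only the partition table is gone - not the data - can be demonstrated safely on a throwaway image file instead of a real disk (a sketch; the dd calls emulate the GPT data structures at the start and end of the device that “sgdisk -Z” clears):

```shell
# Work on a throwaway image file, NOT a real disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none
# Plant a marker in the middle, where user data would live:
printf 'SECRET-DATA' | dd of="$img" bs=1M seek=4 conv=notrunc status=none
# Emulate "sgdisk -Z": zero only the GPT areas at the start and
# the end (34 sectors each) of the 8 MiB "disk":
dd if=/dev/zero of="$img" bs=512 count=34 conv=notrunc status=none
dd if=/dev/zero of="$img" bs=512 seek=$((8 * 2048 - 34)) count=34 conv=notrunc status=none
# The marker survives the "wipe":
grep -aq 'SECRET-DATA' "$img" && result='payload still present'
echo "$result"
rm -f "$img"
```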
Tampering with partition tables is always dangerous. Make sure that data loss is just not possible by having tested(!) backups.
Have fun!
I put this under my “FabU”-label although it was not really “frequently answered” - just because... I like it this way ;-)
- some more FabU: https://forum.proxmox.com/search/9687650/?q=fabu&c[title_only]=1