Hi everyone,
Has anybody bumped into this before? I have two unused 2TB SSDs that I'd like to set up as a ZFS mirror (RAID1). The problem is that I can't create the pool with the name I want; zpool create refuses with the error shown below. I've also tried swapping the disks for new ones, but it makes no difference.
Code:
root@pve-node-18:/etc/pve# wipefs -a /dev/sde
/dev/sde: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sde: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sde: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sde: calling ioctl to re-read partition table: Success
root@pve-node-18:/etc/pve# wipefs -a /dev/sdf
/dev/sdf: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdf: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdf: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdf: calling ioctl to re-read partition table: Success
root@pve-node-18:/etc/pve# zpool create -f -o 'ashift=12' backup_node_18 mirror sde sdf
mountpoint '/backup_node_18' exists and is not empty
use '-m' option to provide a different default
root@pve-node-18:/etc/pve# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
VM_NODE_18  1.81T  53.0G  1.76T        -         -     0%     2%  1.00x  ONLINE  -
rpool        110G  4.18G   106G        -         -     4%     3%  1.00x  ONLINE  -
root@pve-node-18:/etc/pve# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
VM_NODE_18                   206G  1.55T    96K  /VM_NODE_18
VM_NODE_18/vm-121-disk-0    51.6G  1.59T  14.3G  -
VM_NODE_18/vm-30823-disk-0  51.6G  1.58T  26.7G  -
VM_NODE_18/vm-30823-disk-1  20.6G  1.57T  6.99G  -
VM_NODE_18/vm-30823-disk-2  20.6G  1.57T   285M  -
VM_NODE_18/vm-30823-disk-3  20.6G  1.57T  2.95G  -
VM_NODE_18/vm-30823-disk-4  20.6G  1.57T  1.26G  -
VM_NODE_18/vm-30823-disk-5  20.6G  1.57T   490M  -
rpool                       4.17G   102G   104K  /rpool
rpool/ROOT                  4.12G   102G    96K  /rpool/ROOT
rpool/ROOT/pve-1            4.12G   102G  4.12G  /
rpool/data                    96K   102G    96K  /rpool/data
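From the error it looks like zpool create is refusing because a non-empty /backup_node_18 directory already exists on the root filesystem, maybe left over from an earlier attempt. What I was planning to try next is roughly the following; the /mnt location is just my own guess and I haven't run any of it yet:

Code:
# check whether anything is mounted at, or left behind in, /backup_node_18
findmnt /backup_node_18
ls -la /backup_node_18

# if it is only a stale directory with nothing important in it, remove it and retry
rm -r /backup_node_18
zpool create -f -o ashift=12 backup_node_18 mirror sde sdf

# or keep the directory and give the pool a different default mountpoint via -m
zpool create -f -o ashift=12 -m /mnt/backup_node_18 backup_node_18 mirror sde sdf

Is that a sane way to handle it, or does the leftover directory mean something else is still holding on to that path?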
Thanks in advance,
Bogdan M.