That's odd; whether by-id or by-partlabel, those are symlinks after all. I don't remember this behaviour with symlinks from before: either the device is there or it is not. I will admit I never use /dev/sd* because those names can change; in fact you sometimes see someone asking here how to "fix" that after the fact, once they find out.
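Just to illustrate the point, the by-partlabel (and by-id) entries are plain udev-managed symlinks back to the kernel device names, so you can always see exactly what they resolve to (illustrative only, the sdX letters will differ per boot; zfs-disk1 taken from this thread):
Code:
# list the symlinks udev created from partition labels
ls -l /dev/disk/by-partlabel/
# resolve one label back to its current kernel device node
readlink -f /dev/disk/by-partlabel/zfs-disk1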
Yeah. I do feel the Proxmox pros could elaborate on the ZFS wiki, which just says <device>.
And the fact is the default ZFS install rpool is also built using /dev/sda, /dev/sdb.
So a noob like me would also go for using /dev/sd...
Yeah, if you get normal behaviour with by-partlabel, then it must be something with ZFS on local Proxmox?
Do you use TrueNAS for ZFS + iSCSI or NFS shared storage for Proxmox?
Upon re-attaching the disk4 partition it did immediately show up in /dev/disk/by-partlabel/zfs-disk4.
Got it back online via: zpool online zfs-raid10 /dev/disk/by-partlabel/zfs-disk4
(Even when the pool was built via /dev/sd{letter} I observed ZFS would NOT auto-online, at least not by default.)
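From what I gather, auto-onlining re-inserted devices is the job of the ZFS event daemon (zed) rather than the pool itself, so that side is worth checking too. A rough sketch of what I'd look at, using the zfs-raid10 pool from above; note autoreplace is documented for same-slot replacement disks, so it may not cover a plain re-online of the same device:
Code:
# zed is what reacts to udev add/remove events for pool members
systemctl status zfs-zed
# pool property that lets ZFS act on a device appearing in a previously used slot
zpool get autoreplace zfs-raid10
zpool set autoreplace=on zfs-raid10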
But for a disk to disappear yet ZFS to still say the pool is healthy doesn't give me confidence that local Proxmox is happy using by-partlabel.
Tried it again, this time removing both the disk4 + disk5 partitions. You can see they are gone here, yet the pool reports a false "healthy":
Code:
root@LAB-SMPM-GRUB:/dev/disk/by-partlabel# lsblk -o +PARTLABEL
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS PARTLABEL
sda        8:0    0   20G  0 disk
├─sda1     8:1    0 1007K  0 part
├─sda2     8:2    0  512M  0 part
└─sda3     8:3    0 19.5G  0 part
sdb        8:16   0   20G  0 disk
├─sdb1     8:17   0 1007K  0 part
├─sdb2     8:18   0  512M  0 part
└─sdb3     8:19   0 19.5G  0 part
sdc        8:32   0   10G  0 disk
├─sdc1     8:33   0   10G  0 part             zfs-disk1
└─sdc9     8:41   0    8M  0 part
sdd        8:48   0   10G  0 disk
├─sdd1     8:49   0   10G  0 part             zfs-disk2
└─sdd9     8:57   0    8M  0 part
sde        8:64   0   10G  0 disk
├─sde1     8:65   0   10G  0 part             zfs-disk3
└─sde9     8:73   0    8M  0 part
sr0       11:0    1 1024M  0 rom
root@LAB-SMPM-GRUB:/dev/disk/by-partlabel# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 6.18M in 00:00:00 with 0 errors on Sun Jul 21 01:26:19 2024
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0

errors: No known data errors

  pool: zfs-raid10
 state: ONLINE
  scan: resilvered 120K in 00:00:00 with 0 errors on Tue Sep 24 13:09:18 2024
remove: Removal of vdev 3 copied 76K in 0h0m, completed on Mon Sep 23 15:50:37 2024
        792 memory used for removed device mappings
config:

        NAME           STATE     READ WRITE CKSUM
        zfs-raid10     ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            zfs-disk1  ONLINE       0     0     0
            zfs-disk2  ONLINE       0     0     0
          mirror-4     ONLINE       0     0     0
            zfs-disk3  ONLINE       0     0     0
            zfs-disk4  ONLINE       0     0     0
        spares
          zfs-disk5    AVAIL

errors: No known data errors
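As far as I understand it, ZFS only notices a vanished vdev when it actually tries to do I/O against it (or when zed feeds it a removal event), so an idle pool can keep reporting ONLINE for a while after the device is gone. Forcing I/O with a scrub should flip the false "healthy"; a rough sketch of what I'd run next, reusing the pool name from above:
Code:
# force reads across every vdev so the missing partitions actually get touched
zpool scrub zfs-raid10
# -x only lists pools with problems; the pulled devices should now show up as faulted/unavailable
zpool status -x
zpool status zfs-raid10
# optionally follow the event stream while repeating the pull-the-disk test
zpool events -f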