The PVE ZFS GUI could definitely benefit from some work in this area.
Which one? He asked about replacing a vdev, cloning the partition table, regenerating UUIDs, the boot tool, and now logging.
I'm sure stuff like this is already filed
Yes: https://bugzilla.proxmox.com/show_bug.cgi?id=3289
I also think this should be added to the webUI. I've stopped counting the cases exactly like this one where I needed to explain to people how to revert a wrong "zpool replace" and do it the proper way according to the wiki, by cloning the partitions and the bootloader.
And they often even fail to follow the wiki article, using the wrong disks or partitions for the placeholders.
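For reference, the wiki procedure being discussed can be sketched roughly like this. All device paths below are hypothetical placeholders (use your own /dev/disk/by-id paths and check "zpool status" for the failed member); the script only prints the commands instead of executing them, since every step here is destructive:

```shell
#!/bin/sh
# Sketch of the rpool mirror-disk replacement sequence, assuming a standard
# PVE layout (part2 = ESP, part3 = ZFS). Placeholder device names throughout.
HEALTHY=/dev/disk/by-id/ata-HEALTHY_DISK   # surviving, bootable mirror member
NEW=/dev/disk/by-id/ata-NEW_DISK           # blank replacement disk
OLD=old-failed-disk-part3                  # failed vdev name from 'zpool status'

# Dry-run wrapper: print the plan instead of running it. Remove the echo
# (call the commands directly) once the device names are verified.
plan() { echo "+ $*"; }

plan sgdisk "$HEALTHY" -R "$NEW"              # clone partition table onto new disk
plan sgdisk -G "$NEW"                         # randomize GUIDs on the clone
plan zpool replace -f rpool "$OLD" "$NEW-part3"   # replace the ZFS data partition
plan proxmox-boot-tool format "$NEW-part2"    # (re)create the ESP on the new disk
plan proxmox-boot-tool init "$NEW-part2"      # make the new disk bootable
```

The dry-run wrapper is exactly the kind of guard rail a GUI would provide automatically: it forces you to review the full sequence before any partition table is touched.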
I'm sure stuff like this is already filed. I have another feature request with tons of +1's going on 5 years old.
It took less than a day to implement in my feasibility testing. It's never getting done, so I'm not wasting any more time on obvious feature enrichment. Everyone knows the ZFS GUI is incomplete.
But are they PVE subscribers?
Sometimes yes... paying for a subscription doesn't guarantee that the person is experienced in working with partitions, ZFS, or the Linux CLI in general.
Don't ask people to file feature requests then. They aren't getting done.
Those who can just make their own.
Then it's a cost/benefit question. I suspect that the few times this reaches support, the load is so small (it's also easy to dispense a link to the wiki) that it's literally cheaper than implementing a whole new interface for one particular storage type, in what is not a storage appliance at all.

I mean, last time I pointed out how the HA stack was lacking with ZFS replication, Dietmar told me I was using it wrong (as in, not shared storage), while Aaron said it's all good actually. Sometimes I wonder what the workflow is, i.e. who picks up the forum / bugzilla items, decides priority, and puts things onto the roadmap... it feels like nothing like that is in place; it's more driven by support cases, I imagine. The best illustration is when one sees kernel modules that were added due to "user request", strange things like JFS support, etc. That's where the time goes then. Not homelabs.
You might want to have a look at:
Code:
journalctl -u zfs-zed
# it's not like they know what your /dev/... was though, so maybe you want to show it with
lsblk -o+SERIAL
# better yet, have a look at the SMART output
smartctl -a /dev/...
# or for NVMes, even better yet
apt install nvme-cli
nvme error-log -e 255 /dev/nvme...
Code:
ZFS has detected that a device was removed.
impact: Fault tolerance of the pool may be compromised.
eid: 6
class: statechange
state: REMOVED
host: bs
time: 2024-02-17 07:51:44+0500
vpath: /dev/disk/by-id/nvme-nvme.126f-4d4e32333039353132473030383333-51323030304d4e2035313247-00000001-part3
vguid: 0x65FD66A420471F69
pool: rpool (0x4010E6931FD3CBB7)