ZFS Pool Detail - Result Verification Failed (400)

sgtpepperaut

Member
Aug 30, 2020
This is on a brand-new dedicated root server with new NVMes and a fresh pool. I have changed the ARC max size (zfs_arc_max), but other than that I haven't made any changes afaik.
This screen loads forever. Any ideas? thx


root@srv3:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.9-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-6
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-3
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-7
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2

root@srv3:~# pvesm status
Name Type Status Total Used Available %
box2 dir active 104050153 24562998 79487155 23.61%
local dir active 455538816 52290560 403248256 11.48%
local-zfs zfspool active 429143832 25895568 403248264 6.03%
mrplex dir active 1100710663792 1199036016 1099511627776 0.11%
root@srv3:~#
 
Run/start a scrub and afterwards reload the page:

> zpool scrub rpool
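In case it helps, a minimal sketch of starting the scrub and then checking on it (this assumes the pool is named rpool, as on a default PVE root-on-ZFS install):

```shell
# Start a scrub of the root pool; it runs in the background
zpool scrub rpool

# Check pool health and scrub progress; the "scan:" line shows
# whether the scrub is still running or when it finished
zpool status rpool
```

Once the scrub has completed, reloading the pool detail page in the GUI should pick up the fresh status.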
 
Is this safe to run?
Running zpool upgrade enables all features available in the current ZFS version (2.0.3) on the pool. This means you will no longer be able to import the pool with an older ZFS version (kernel module and userspace). For PVE 6.x, that means you will not be able to boot any kernel older than pve-kernel-5.4.86-1.

Additionally, enabling and using certain features will render the rpool not importable by GRUB. (Root-on-ZFS installs on EFI systems have been booted with systemd-boot since PVE 5.4, so if the system is booted with UEFI and was freshly set up, you should be fine.)


Check the reference documentation on how to find out how your system is booted:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_determine_bootloader_used
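As a rough sketch of that check (the /sys/firmware/efi test is standard on Linux; efibootmgr may need to be installed separately):

```shell
# If /sys/firmware/efi exists, the kernel was started via UEFI;
# otherwise the system came up via legacy BIOS (and boots with grub)
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
    # On UEFI systems, list the boot entries; systemd-boot shows up
    # with \EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI in the entry path
    command -v efibootmgr >/dev/null && efibootmgr -v
else
    echo "booted via legacy BIOS (grub)"
fi
```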
 

Is it safe to run zpool upgrade when the boot entries look like this?

Code:
Boot0000* Linux Boot Manager    HD(2,GPT,d241997c-0e62-4ee3-bf6f-b2494461a31e,0x800,0x100000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)
Boot0001* Linux Boot Manager    HD(2,GPT,3b733969-d5d5-4550-a012-023c4796850c,0x800,0x100000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)
Boot000A* UEFI OS    HD(2,GPT,d241997c-0e62-4ee3-bf6f-b2494461a31e,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
Boot000B* UEFI OS    HD(2,GPT,3b733969-d5d5-4550-a012-023c4796850c,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)..BO

Would I need to run anything else after upgrading to fix the boot entries?
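For reference, on a systemd-boot EFI setup like the entries above, the sequence discussed earlier would look roughly like this. (Hedged sketch: on PVE 6.x the ESP sync tool is pve-efiboot-tool; later releases renamed it proxmox-boot-tool. Confirm your bootloader first per the linked docs.)

```shell
# Upgrade the pool to enable all features of the running ZFS version.
# This is one-way: older ZFS versions can no longer import the pool.
zpool upgrade rpool

# Re-copy the kernels and systemd-boot onto every registered EFI
# system partition so both mirror halves remain bootable
pve-efiboot-tool refresh
```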