[SOLVED] error when opening status page of zfs: "got zpool status without config key"

Feb 20, 2021
When I go to Administration -> Storage / Disks -> ZFS, select the pool, and click on Detail, I get this error message.


How can this be fixed without recreating the pool? I guess it fails because I created the ZFS pool manually and something is missing.

Proxmox Backup Server 1.0-8 is used.
 
Hi,

This was a bug in our API schema, where an optional value was marked as always required. It's fixed in git, but the fix is not yet packaged.

You can run a ZFS scrub to make the error go away (normally this happens automatically on the second weekend of each month):
Bash:
zpool scrub POOLNAME
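
If you want to check that the scrub actually ran, or when the automatic one is scheduled, something like this works (the cron file path assumes a Debian-based install, which PBS is):
Bash:
# The "scan:" line shows scrub progress or the last completion time
zpool status POOLNAME | grep 'scan:'
# Debian's zfsutils-linux normally installs the monthly scrub job here
cat /etc/cron.d/zfsutils-linux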
 
zpool scrub backup didn't help; I still get that error. Output of zpool status on that node:
Code:
zpool status backup
  pool: backup
 state: ONLINE
  scan: scrub repaired 0B in 00:06:34 with 0 errors on Sun Feb 28 18:38:41 2021
remove: Removal of vdev 0 copied 282G in 0h9m, completed on Fri Feb  5 17:00:21 2021
    776K memory used for removed device mappings
config:

    NAME          STATE     READ WRITE CKSUM
    backup        ONLINE       0     0     0
      nvme1n1     ONLINE       0     0     0
      nvme0n1     ONLINE       0     0     0

errors: No known data errors
 
Oh, sorry, I mistook this for Proxmox VE, not Proxmox Backup Server.

This is a parser bug there; it seems to be related to the long, wrapped
remove: Removal of vdev 0 copied 282G in 0h9m, completed on Fri Feb 5 17:00:21 2021
776K memory used for removed device mappings

line, which is not that common.
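
Roughly, the failure mode looks like this (just a sketch of the idea, not our actual parser): a strict line-based parser expects every section to start with a "key:" header, and the wrapped continuation line has none to offer:
Bash:
# Sketch only -- not the actual Proxmox parser. Classify each
# zpool status line as a "key:" section header or a continuation.
# The wrapped "776K memory used ..." line carries no key, which is
# the kind of input a strict parser can trip over.
zpool status backup | while IFS= read -r line; do
    if [[ $line =~ ^[[:space:]]*([a-z]+): ]]; then
        echo "HEADER ${BASH_REMATCH[1]}"
    else
        echo "CONT   $line"
    fi
done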

Anyway, there's nothing wrong with the pool, so there's no need to recreate it. We'll look into this in the upcoming week.
 
Hi again!

Could you please post the output of the following command, as I need to be sure which specific characters your command's output contains:
Bash:
zpool status -p -P backup | base64
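
(base64 keeps the output copy-paste safe; if you'd like to inspect the whitespace yourself, something like this shows it directly:)
Bash:
# cat -A prints tabs as ^I and marks line ends with $
zpool status -p -P backup | cat -A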
 
Code:
zpool status -p -P backup | base64
ICBwb29sOiBiYWNrdXAKIHN0YXRlOiBPTkxJTkUKICBzY2FuOiBzY3J1YiByZXBhaXJlZCAwQiBp
biAwMDowNjozNCB3aXRoIDAgZXJyb3JzIG9uIFN1biBGZWIgMjggMTg6Mzg6NDEgMjAyMQpyZW1v
dmU6IFJlbW92YWwgb2YgdmRldiAwIGNvcGllZCAyODJHIGluIDBoOW0sIGNvbXBsZXRlZCBvbiBG
cmkgRmViICA1IDE3OjAwOjIxIDIwMjEKICAgIDc3NksgbWVtb3J5IHVzZWQgZm9yIHJlbW92ZWQg
ZGV2aWNlIG1hcHBpbmdzCmNvbmZpZzoKCglOQU1FICAgICAgICAgICAgICBTVEFURSAgICAgUkVB
RCBXUklURSBDS1NVTQoJYmFja3VwICAgICAgICAgICAgT05MSU5FICAgICAgIDAgICAgIDAgICAg
IDAKCSAgL2Rldi9udm1lMW4xcDEgIE9OTElORSAgICAgICAwICAgICAwICAgICAwCgkgIC9kZXYv
bnZtZTBuMXAxICBPTkxJTkUgICAgICAgMCAgICAgMCAgICAgMAoKZXJyb3JzOiBObyBrbm93biBk
YXRhIGVycm9ycwo=
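
For reference, the paste decodes back to plain text with base64 -d; status.b64 below is just a placeholder for wherever the block above was saved:
Bash:
# Decode, then make tabs (^I) and line ends ($) visible
base64 -d status.b64 | cat -A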
 
FYI, this was actually a "bug" (in the widest sense) in the ZFS zpool status printing code:
https://github.com/openzfs/zfs/pull/11674

We'll address this by backporting that patch to our ZFS packages with the next release.
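
Once the updated packages are out, a quick sanity check on the host would be something like this (a sketch, assuming a Debian-based PBS install):
Bash:
# Show the userland/kernel OpenZFS version
zfs version
# List the installed ZFS packages
dpkg -l 'zfs*' | grep '^ii'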
 