[SOLVED] Failed to initialize libzfs library

jtgamble

Member
Jan 25, 2022
Hello,

Last night I had to shut down my server during a power outage. Prior to shutting down, everything was working as normal, and I was able to do a safe shutdown before the UPS ran out of juice. This morning I turned everything back on, and since then I have been receiving a "Failed to initialize libzfs library" error. My 2 zpools will not initialize, and any ZFS-related command (zpool status, zfs version, ...) spits out that same error message.

Proxmox VE version: 8.3.4 - this is an install that started in Version 5.something years ago.
I have already removed and reinstalled zfsutils-linux.

lsblk output

Code:
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                             8:0    0 558.9G  0 disk
├─sda1                          8:1    0  1007K  0 part
├─sda2                          8:2    0   512M  0 part /boot/efi
└─sda3                          8:3    0 558.4G  0 part
  ├─pve-swap                  252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                  252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta            252:4    0   4.4G  0 lvm 
  │ └─pve-data-tpool          252:6    0 429.6G  0 lvm 
  │   ├─pve-data              252:7    0 429.6G  1 lvm 
  │   ├─pve-vm--101--disk--0  252:8    0     4M  0 lvm 
  │   ├─pve-vm--102--disk--0  252:9    0     4M  0 lvm 
  │   ├─pve-vm--103--disk--0  252:10   0     4M  0 lvm 
  │   ├─pve-vm--103--disk--1  252:11   0    32G  0 lvm 
  │   ├─pve-vm--100--disk--0  252:12   0    64G  0 lvm 
  │   ├─pve-vm--104--disk--0  252:13   0   100G  0 lvm 
  │   ├─pve-vm--101--disk--1  252:14   0   120G  0 lvm 
  │   └─pve-vm--105--disk--1  252:15   0    24G  0 lvm 
  └─pve-data_tdata            252:5    0 429.6G  0 lvm 
    └─pve-data-tpool          252:6    0 429.6G  0 lvm 
      ├─pve-data              252:7    0 429.6G  1 lvm 
      ├─pve-vm--101--disk--0  252:8    0     4M  0 lvm 
      ├─pve-vm--102--disk--0  252:9    0     4M  0 lvm 
      ├─pve-vm--103--disk--0  252:10   0     4M  0 lvm 
      ├─pve-vm--103--disk--1  252:11   0    32G  0 lvm 
      ├─pve-vm--100--disk--0  252:12   0    64G  0 lvm 
      ├─pve-vm--104--disk--0  252:13   0   100G  0 lvm 
      ├─pve-vm--101--disk--1  252:14   0   120G  0 lvm 
      └─pve-vm--105--disk--1  252:15   0    24G  0 lvm 
sdb                             8:16   0 558.9G  0 disk
└─sdb1                          8:17   0 558.9G  0 part
  ├─newdrive-vm--101--disk--0 252:2    0    51G  0 lvm 
  └─newdrive-vm--102--disk--0 252:3    0   500G  0 lvm 
sdc                             8:32   0   9.1T  0 disk
├─sdc1                          8:33   0   9.1T  0 part
└─sdc9                          8:41   0     8M  0 part
sdd                             8:48   0   9.1T  0 disk
├─sdd1                          8:49   0   9.1T  0 part
└─sdd9                          8:57   0     8M  0 part
sde                             8:64   0   9.1T  0 disk
├─sde1                          8:65   0   9.1T  0 part
└─sde9                          8:73   0     8M  0 part
sdf                             8:80   0   9.1T  0 disk
├─sdf1                          8:81   0   9.1T  0 part
└─sdf9                          8:89   0     8M  0 part
sdg                             8:96   0  14.6T  0 disk
├─sdg1                          8:97   0  14.6T  0 part
└─sdg9                          8:105  0     8M  0 part
sdh                             8:112  0  14.6T  0 disk
├─sdh1                          8:113  0  14.6T  0 part
└─sdh9                          8:121  0     8M  0 part
sdi                             8:128  0  14.6T  0 disk
├─sdi1                          8:129  0  14.6T  0 part
└─sdi9                          8:137  0     8M  0 part
sdj                             8:144  0  14.6T  0 disk
├─sdj1                          8:145  0  14.6T  0 part
└─sdj9                          8:153  0     8M  0 part
sdk                             8:160  0  14.6T  0 disk
├─sdk1                          8:161  0  14.6T  0 part
└─sdk9                          8:169  0     8M  0 part
sdl                             8:176  0   9.1T  0 disk
├─sdl1                          8:177  0   9.1T  0 part
└─sdl9                          8:185  0     8M  0 part

Any help or suggestions are welcome.
 
I'm... well, I'm not smart sometimes.

I actually clicked into the system logs and saw this: '31,457,280' invalid for parameter 'zfs_arc_max'

I remember (LONG ago) messing with that setting because of some low-memory errors. I do not remember entering a number with commas. I edited it to the equivalent of 8 GB in /etc/modprobe.d/zfs.conf, saved the file, and everything immediately came back online.
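For anyone hitting the same thing, a sketch of what the fixed file looks like. The value must be a plain byte count with no commas or unit suffixes (8589934592 is 8 GiB; adjust for your RAM, this is just the value I used):

```shell
# /etc/modprobe.d/zfs.conf
# zfs_arc_max must be a plain integer number of bytes -- no commas.
# "zfs_arc_max=31,457,280" is what broke module loading for me.
options zfs zfs_arc_max=8589934592
```

The file is read the next time the zfs module loads. If your setup bakes module options into the initramfs (Proxmox does for ZFS-on-root installs), you may also need to run `update-initramfs -u` for the change to stick across reboots; on a running system you can apparently change it live via `/sys/module/zfs/parameters/zfs_arc_max` as root.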