[SOLVED] proxmox not mounting zfs correctly at boot

rakali

Active Member
Jan 2, 2020
I ran apt upgrade -y this morning. Since then my system has been broken.

My ZFS storage does not mount on boot, and when I mount it manually with zfs mount tank, my subvols are empty. If I mount an individual dataset, for example zfs mount tank/subvol-103-disk-0, then my data appears correctly.
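For anyone else hitting this, a quick way to see which datasets actually got mounted (a diagnostic sketch; tank is my pool name, yours may differ):

```shell
# Show every dataset in the pool and whether it is currently mounted
zfs list -r -o name,mounted,mountpoint tank

# Check how the boot-time import and mount services fared
systemctl status zfs-import-cache.service zfs-import-scan.service zfs-mount.service
```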

I found this thread which offered the following advice:

Code:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache-
systemctl enable zfs-import-scan.service
init 6

Upon reboot, my pool is mounted; however, my subvols are not.

After rebooting again, it does not mount on boot at all.

How can I fix this?

Code:
# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-1-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-5.3: 6.1-4
pve-kernel-helper: 6.1-4
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.14-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-12
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-19
pve-docs: 6.1-4
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-10
pve-firmware: 3.0-5
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
Sorry for the double post. I think the spam filter deleted my post and then a staff member reinstated it? Not sure.

This ended up being fixed by forcing a new zpool.cache, so I guess it was a "corrupted" cache, as some people suggest?

Anyway, I did get this working again through a number of steps. After moving the zpool cache and looking at the systemd logs, I ultimately did the following, though I am unsure whether all of them were necessary.

Code:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache-
systemctl enable zfs-import-scan.service
systemctl enable zfs-import.target
init 6
zfs mount tank
rm -rf /tank/subvol*
zpool set cachefile=/etc/zfs/zpool.cache tank
update-initramfs -u -k all
init 6
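To confirm the fix survived the reboot, I checked the following (a sketch, with tank as my pool name):

```shell
# The pool should report the regenerated cachefile, not "none" or "-"
zpool get cachefile tank

# After boot, every dataset should show mounted=yes
zfs list -r -o name,mounted tank
```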
 
rm -rf /tank/subvol*

Don't do this! (Should the pool be mounted correctly, this would delete all containers on it!)

Usually only two commands (and a reboot) are necessary:

Code:
zpool set cachefile=/etc/zfs/zpool.cache tank
update-initramfs -u -k all
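For context, my understanding of why both commands matter (my reading, not from the staff post): the initramfs embeds a copy of zpool.cache, so regenerating the cache without rebuilding the initramfs leaves early boot importing from a stale copy. You can check that the fresh cache made it into the rebuilt initramfs (the lsinitramfs invocation assumes Debian's initramfs-tools and the zfs-initramfs hook):

```shell
# List the contents of the current initramfs and look for the embedded cachefile
lsinitramfs /boot/initrd.img-$(uname -r) | grep zpool.cache
```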
 