[SOLVED] zfs pool will not mount at reboot (since upgrade to 6)

Tommmii

Hello all,

I really would love to get to the bottom of this.
I had posted in another thread, but the symptoms no longer match, so creating a separate thread seemed like the proper way to ask for help troubleshooting.

This is my detailed post in the other thread:
https://forum.proxmox.com/threads/zfs-mounting-problems.23680/#post-261750

Basically, the ZFS mount point directory is empty, yet the pool does not get mounted at reboot.
After a reboot, manually running "zfs mount -a" does mount it.
Why would the pool not mount at reboot?
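For reference, this is the kind of check I mean: the dataset properties plus a look at the (supposedly empty) mount point directory. Pool name zfs-pool, as elsewhere in this thread.

Code:
# is the dataset mounted, is it allowed to mount, and where should it go?
zfs get mounted,canmount,mountpoint zfs-pool

# an empty directory here rules out the usual "mountpoint not empty" overlay problem
ls -A /zfs-pool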

Any input is greatly appreciated.
Thx!
 
Hi,

After an upgrade and reboot I am facing the same problem.

Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.18-1-pve: 5.0.18-2
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-6
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

Regards
 
We ran into something similar a couple of years ago; I forget the solution offhand - we have it printed out next to the servers at work.

I will be going in to work in a few hours and will reply with that info.

In the meantime, send a picture of what the console displays when the boot fails.

PS: I see now that you mentioned in your first post that the directory is empty...
 
Thank you very much for helping out.
Some people are saying I should just hose the server and reinstall Proxmox from scratch.
I really hate doing that; it would not teach me anything about what is actually going wrong with this machine.
 
see if you can post a pic of the console at the point of the failed boot
I'm not sure I understand what you mean...
The host boots fully.
PVE starts up properly.
Containers/VMs do not start, because the ZFS pool does not mount automatically.
The ZFS pool is not the rpool; root is not mounted on the ZFS pool.

The console just displays the usual welcome banner, telling me the connection URL for the Proxmox web interface.
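If it helps, the boot-time logs of the ZFS import/mount units should show whether they ran and what, if anything, they complained about. A minimal sketch, assuming the stock zfsutils-linux 0.8 systemd units on PVE 6:

Code:
# everything the ZFS units logged during the current boot
journalctl -b -u zfs-import-cache.service -u zfs-import-scan.service -u zfs-mount.service

# current state of the units themselves
systemctl status zfs-import-cache.service zfs-mount.service zfs.target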
 
Do this (substituting the name of your zpool):
Code:
zpool export <name of the pool>
zpool import <name of the pool>
Then send the dmesg output.

If the import fails, try:

Code:
zpool import -F <name of the pool>
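(For reference: lowercase -f forces the import of a pool that looks like it is still in use, while uppercase -F is recovery mode and may roll back the last few transactions; see 'man zpool' before using either on a pool you care about.)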
 
Code:
root@pve:/etc/default# zpool export zfs-pool
umount: /zfs-pool/iso: target is busy.
cannot unmount '/zfs-pool/iso': umount failed
root@pve:/etc/default# lsof +D /zfs-pool/iso
COMMAND  PID USER   FD   TYPE DEVICE  SIZE/OFF NODE NAME
kvm     3669 root   15r   REG   0,56 350224384   11 /zfs-pool/iso/template/iso/debian-10.0.0-amd64-netinst.iso
root@pve:/etc/default#

Hmm... wonder why that ISO file is being accessed.
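If it is not obvious which guest that kvm process belongs to, its command line (which on Proxmox includes the VM's name and ID) can be checked with something like the following; PID 3669 is taken from the lsof output above:

Code:
# show the full command line of the kvm process holding the ISO open
ps -o args= -p 3669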
 
Try:
Code:
zpool import -f zfs-pool

If that does not work, do this:

On each of your KVM guests, set the CD-ROM drives to use no media, then reboot and try again.
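The same thing can also be done from the CLI. A sketch, assuming a VM with ID 100 and the CD-ROM on ide2 (both placeholders, adjust to what 'qm config' shows for your guest):

Code:
# find which VM configs reference that ISO
grep -l 'debian-10.0.0-amd64-netinst.iso' /etc/pve/qemu-server/*.conf

# set that VM's CD-ROM drive to "no media" so it no longer holds the file open
qm set 100 --ide2 none,media=cdrom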
 
Indeed, there was one VM that had a CD-ROM device with that ISO attached. I "ejected" the CD-ROM.

Code:
root@pve:/etc/default# zpool export zfs-pool
root@pve:/etc/default# zpool import zfs-pool
cannot import 'zfs-pool': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
root@pve:/etc/default# zfs mount -a
root@pve:/etc/default#
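To double-check that everything really landed where it should after the "zfs mount -a", something like this helps:

Code:
# per-dataset mount status straight from ZFS
zfs list -o name,mounted,mountpoint -r zfs-pool

# cross-check against what the kernel actually has mounted
findmnt -t zfs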
 
For more ideas, see 'man zpool'.
You could try:

Code:
zpool export -f zfs-pool

Also, this line from what you sent would concern me: "cannot import 'zfs-pool': a pool with that name already exists".
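One way to see what might be grabbing the pool again in the background is to check whether a cachefile is set and which ZFS units/daemons are active. A sketch only, nothing here is confirmed as the cause:

Code:
# is the pool registered in a cachefile, and does that file exist?
zpool get cachefile zfs-pool
ls -l /etc/zfs/zpool.cache

# which ZFS systemd units (import, mount, zed, ...) are present and running?
systemctl list-units --all 'zfs*'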
 
From the output below, I would conclude that "zpool import zfs-pool" is trying to import the pool twice?

Code:
root@pve:/etc/default# zpool export zfs-pool
root@pve:/etc/default#
root@pve:/etc/default#
root@pve:/etc/default# zpool status
  pool: zfs-pool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0 days 00:43:00 with 0 errors on Sun Aug 11 01:07:02 2019
config:
        NAME                                            STATE     READ WRITE CKSUM
        zfs-pool                                        ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-WDC_WD5000AAKX-22ERMA0_WD-WCC2EM444978  ONLINE       0     0     0
            ata-WDC_WD20EARS-00S8B1_WD-WCAVY2578012     ONLINE       0     0     0
            wwn-0x50014ee104b00526                      ONLINE       0     0     0
            wwn-0x50014ee25c49524b                      ONLINE       0     0     0
        logs
          wwn-0x50026b778216c91b-part4                  ONLINE       0     0     0
        cache
          wwn-0x50026b778216c91b-part5                  ONLINE       0     0     0
errors: No known data errors
root@pve:/etc/default# zpool export zfs-pool
root@pve:/etc/default# zpool status
no pools available
root@pve:/etc/default#
root@pve:/etc/default#
root@pve:/etc/default#
root@pve:/etc/default# zpool import zfs-pool
cannot import 'zfs-pool': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
root@pve:/etc/default# zpool status
  pool: zfs-pool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0 days 00:43:00 with 0 errors on Sun Aug 11 01:07:02 2019
config:
        NAME                                            STATE     READ WRITE CKSUM
        zfs-pool                                        ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-WDC_WD5000AAKX-22ERMA0_WD-WCC2EM444978  ONLINE       0     0     0
            ata-WDC_WD20EARS-00S8B1_WD-WCAVY2578012     ONLINE       0     0     0
            wwn-0x50014ee104b00526                      ONLINE       0     0     0
            wwn-0x50014ee25c49524b                      ONLINE       0     0     0
        logs
          wwn-0x50026b778216c91b-part4                  ONLINE       0     0     0
        cache
          wwn-0x50026b778216c91b-part5                  ONLINE       0     0     0
errors: No known data errors
root@pve:/etc/default#
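For completeness, the usual remedy when a pool imports fine by hand but never at boot is to re-register it in the cachefile that zfs-import-cache.service reads, make sure the import/mount units are enabled, and refresh the initramfs so no stale copy of the cachefile wins at early boot. A sketch, assuming the pool name from this thread:

Code:
# write the pool's config into the standard cachefile
zpool set cachefile=/etc/zfs/zpool.cache zfs-pool

# make sure the import/mount units are enabled
systemctl enable zfs-import-cache.service zfs-mount.service zfs.target

# rebuild the initramfs so it does not carry an outdated cachefile
update-initramfs -u -k all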
 
