From the journalctl output (https://paste.ubuntu.com/p/QGZzdt7PQb/) I did see this:
-- The job identifier is 49.
Aug 11 15:48:52 pve kernel: EXT4-fs (sde1): mounted filesystem with ordered data mode. Opts: (null)
Aug 11 15:48:53 pve zpool[822]: invalid or corrupt cache file contents: invalid...
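Since that message complains about a corrupt cache file, would refreshing /etc/zfs/zpool.cache be the right approach here? I was thinking of something along these lines, run while the pool is imported (just my guess at the fix, not something I've confirmed):
root@pve:~# zpool set cachefile=/etc/zfs/zpool.cache zfs-pool
root@pve:~# update-initramfs -u -k all
(the initramfs rebuild is probably only strictly needed for root-on-ZFS, but I assume it doesn't hurt)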
After I run "zfs mount -a" there is no output from the start commands, and the VMs/LXCs start normally.
After a reboot, and prior to "zfs mount -a", nothing starts and I end up with errors like:
Job for pve-container@100.service failed because the control process exited with error code.
See "systemctl...
root@pve:/# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zfs-pool 928G 638G 290G - - 33% 68% 1.00x ONLINE -
root@pve:/# zfs list
NAME USED AVAIL REFER...
One final question: the mount point for the pool is "/zfs-pool". Should that directory actually exist, or is "zfs mount -a" supposed to create it?
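In case it matters, these are the commands I would use to double-check the current mountpoint settings (property names as documented in zfs(8)):
root@pve:~# zfs get mountpoint,mounted,canmount zfs-pool
root@pve:~# ls -ld /zfs-pool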
Oops, our messages crossed each other:
https://paste.ubuntu.com/p/Gz9wgkcP9x/
From the output below, I would conclude that "zpool import zfs-pool" is trying to import the pool twice?
root@pve:/etc/default# zpool export zfs-pool
root@pve:/etc/default#
root@pve:/etc/default#
root@pve:/etc/default# zpool status
pool: zfs-pool
state: ONLINE
status: Some supported...
Indeed, there was one VM that had a CD-ROM device with that ISO attached. I "ejected" the CD-ROM.
root@pve:/etc/default# zpool export zfs-pool
root@pve:/etc/default# zpool import zfs-pool
cannot import 'zfs-pool': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give...
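If my understanding is right, "zpool import" with no arguments only lists pools that are currently exported and available for import, while "zpool status" shows the ones already imported, so these two together should reveal whether the export really took effect:
root@pve:/etc/default# zpool import
root@pve:/etc/default# zpool status zfs-pool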
All the pool details are specified in the link to my original post (https://forum.proxmox.com/threads/zfs-mounting-problems.23680/#post-261750).
dmesg:
https://paste.ubuntu.com/p/dPC85Yp5QR/
I'm not sure I understand what you mean...
The host boots fully.
PVE starts up properly.
Containers/VMs do not start, because the ZFS pool does not mount automatically.
The ZFS pool is not an rpool; root is not mounted on the ZFS pool.
The console just displays the usual welcome banner, telling...
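If it helps, this is what I plan to capture right after the next reboot, before touching anything ("pvesm" being the standard PVE storage CLI):
root@pve:~# pvesm status
root@pve:~# zfs list -o name,mounted,mountpoint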
Thank you very much for helping out.
Some people are saying I should just hose the server and reinstall Proxmox from scratch.
I really hate doing that; it would not deliver any knowledge about what is actually going wrong with this machine.
Hello all,
I really would love to get to the bottom of this.
I had posted in another thread, but the symptoms no longer match, so creating a separate thread seems the proper way to ask for help troubleshooting.
This is my detailed post in the other thread ...
No, I didn't... but my issue has evolved. I cleaned up the ZFS mount point, and it is now empty. Unfortunately and strangely, the pool _still_ won't mount at reboot.
However, it now _does_ mount when I manually run "zfs mount -a" (without the overlay parameter).
...go figure...
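One conclusion I would draw from this: since a plain "zfs mount -a" works right after boot without any manual "zpool import", the pool is apparently being imported fine at boot, and only the mounting step is being skipped or failing. So next I want to verify that the mount-related units are even enabled (assuming the usual ZFS-on-Linux unit names):
root@pve:~# systemctl is-enabled zfs-import-cache.service zfs-mount.service zfs.target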
Until someone more knowledgeable answers: that particular file is not of much importance for your install. I believe it is responsible for displaying the welcome message on the console after reboot (something like: you can now connect to the web interface at https://192.168.0.1:8006).
Fine to keep...
Yikes... that's above my capabilities.
Has this already been pushed out in updates?
And... why would the pool not mount at reboot? I know there's a fix for when the pool is on root, but that isn't the case here.
Indeed, at first I thought it didn't mount because there was a directory structure in the mountpoint (/zfs-pool), which I tried to overlay with "zfs mount -O -a".
But now I have emptied the mountpoint, and it still won't auto-mount, although it does mount with a simple "zfs mount -a" from the console after...
nope...
the directory structure still gets created before the ZFS pool gets mounted.
Back to the drawing board for me, I guess...
Doesn't work either:
dir: zfs-iso
        path /zfs-pool/iso
        content iso,vztmpl
        shared 0
        is_mountpoint 1
        mkdir 0
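If I understand the storage documentation correctly, the same two options could also be set via pvesm instead of editing /etc/pve/storage.cfg by hand, e.g. (option names as I read them from the docs; I assume the result is identical to the snippet above):
root@pve:~# pvesm set zfs-iso --is_mountpoint 1 --mkdir 0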