Fresh test machine with PVE 8.0.3
Zpool name = zfs1, data pool, not the boot pool.
While Proxmox was booting up, it paused for a few seconds and showed a warning message:
[FAILED] Failed to start zfs-import@zfs1.service - Import ZFS pool zfs1.
But I can see the zfs1 pool under Storage and it can be used as usual. Even after detaching the drive and destroying the zpool, nothing changes: it still fails during boot-up and then works normally.
In the CLI, everything looks normal:
zpool status

  pool: zfs1
 state: ONLINE
config:

        NAME                                                 STATE     READ WRITE CKSUM
        zfs1                                                 ONLINE       0     0     0
          nvme-MTFDHBA256TCK-1AS1AABHA_______UHPVN01N4CFVDI  ONLINE       0     0     0

errors: No known data errors
systemctl status zfs-import@zfs1

× zfs-import@zfs1.service - Import ZFS pool zfs1
     Loaded: loaded (/lib/systemd/system/zfs-import@.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Wed 2023-09-20 17:28:26 NZST; 43s ago
       Docs: man:zpool(8)
    Process: 517 ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none zfs1 (code=exited, status=1/FAI>
   Main PID: 517 (code=exited, status=1/FAILURE)
        CPU: 14ms

Sep 20 17:28:25 pve2 systemd[1]: Starting zfs-import@zfs1.service - Import ZFS pool zfs1...
Sep 20 17:28:26 pve2 zpool[517]: cannot import 'zfs1': no such pool available
Sep 20 17:28:26 pve2 systemd[1]: zfs-import@zfs1.service: Main process exited, code=exited, status=1/FAILURE
Sep 20 17:28:26 pve2 systemd[1]: zfs-import@zfs1.service: Failed with result 'exit-code'.
Sep 20 17:28:26 pve2 systemd[1]: Failed to start zfs-import@zfs1.service - Import ZFS pool zfs1.
But it shows up as ONLINE in zpool status?

I ended up disabling the unit with:
systemctl disable zfs-import@zfs1.service
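Before (or instead of) disabling it, a few diagnostic commands may show whether another unit already imported the pool earlier in boot, which would explain why the per-pool import then reports "no such pool available". This is only a sketch; the service and pool names below are taken from the output above:

```shell
# Did the cache-based import unit run at boot and import the pools itself?
systemctl status zfs-import-cache.service

# Which cachefile (if any) is the pool configured to use?
zpool get cachefile zfs1

# Everything the per-pool unit logged during the current boot:
journalctl -b -u zfs-import@zfs1.service
```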
I'm not sure whether the workaround above is just masking a problem, or whether this is a bug in the latest v8.0.3. Please help.