Hi All,
I have a fresh install of Proxmox 7.1 on which I created a zpool called "foo" on a separate NVMe drive. After some more initial config I noticed that on boot I get this message:
Feb 19 15:14:43 pve systemd[1]: Starting Import ZFS pools by cache file...
Feb 19 15:14:43 pve systemd[1]: Condition check resulted in Import ZFS pools by device scanning being skipped.
Feb 19 15:14:43 pve systemd[1]: Starting Import ZFS pool foo...
Feb 19 15:14:43 pve systemd[1]: Finished Helper to synchronize boot up for ifupdown.
Feb 19 15:14:43 pve zpool[887]: no pools available to import
Feb 19 15:14:43 pve systemd[1]: Finished Import ZFS pools by cache file.
Feb 19 15:14:43 pve zpool[888]: cannot import 'foo': no such pool available
Feb 19 15:14:43 pve systemd[1]: zfs-import@nvmesmall.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 15:14:43 pve systemd[1]: zfs-import@nvmesmall.service: Failed with result 'exit-code'.
Feb 19 15:14:43 pve systemd[1]: Failed to start Import ZFS pool foo.
Feb 19 15:14:43 pve systemd[1]: Reached target ZFS pool import target.
Feb 19 15:14:43 pve systemd[1]: Starting Mount ZFS filesystems...
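For reference, the failing unit is an instance of the zfs-import@ template that ships with zfsutils, as far as I can tell; the instance name below is just what appears in my journal. This is roughly how I've been inspecting it:

# show the state of any per-pool import instances
systemctl list-units --all 'zfs-import@*'
systemctl status zfs-import@nvmesmall.service

# show what the instance actually runs
systemctl cat zfs-import@nvmesmall.service

# if it really is a leftover from the deleted pool, this should stop it firing at boot
systemctl disable zfs-import@nvmesmall.service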
I went down the rabbit hole of setting the root delay to 10 seconds, with no effect.
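In case it matters, this is roughly how I set the delay. I'm on systemd-boot via proxmox-boot-tool, so the kernel command line lives in /etc/kernel/cmdline; the root= value below is the stock one for a ZFS root and may not match yours:

# /etc/kernel/cmdline (single line; rootdelay=10 appended to what was already there)
root=ZFS=rpool/ROOT/pve-1 boot=zfs rootdelay=10

# then re-sync the ESPs
proxmox-boot-tool refresh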
Finally I decided to just make a new pool. After destroying the pool in the node-level GUI I rebooted and the message was still there. I then found the pool still listed in the datacenter-level storage view, removed it there too, and the message still appeared after a reboot.
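The CLI equivalent of what I did, as far as I understand what the GUI maps to ("foo" being both the pool and the storage name in my case), plus the checks I ran afterwards:

# destroy the pool itself (what I assume the node-level Disks > ZFS "Destroy" does)
zpool destroy foo

# remove the storage definition (the entry under Datacenter > Storage)
pvesm remove foo

# verify nothing is left
zpool list
zpool import     # scans devices for importable pools; should show none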
I then tried dd'ing over the start and end of the disk that had the pool; the message still appeared.
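Roughly what I ran (device name is an example, not necessarily mine). ZFS keeps two vdev labels at the front of the device and two in the last 512 KiB, which is why I hit both ends:

# zero the first 100 MiB (covers the two front labels)
dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=100

# zero the tail end; this runs until dd hits the end of the device and complains, which is expected
dd if=/dev/zero of=/dev/nvme1n1 bs=1M seek=$(( $(blockdev --getsz /dev/nvme1n1) / 2048 - 100 ))

# (zpool labelclear -f /dev/nvme1n1 is probably the tidier way to do the same thing)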
I then deleted the ZFS cache file in /etc/zfs/, rebooted, and rebuilt the cache. The message STILL appeared.
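What I mean by deleting and rebuilding the cache, in case I did it wrong (this assumes the stock cachefile path that zfs-import-cache.service reads):

# remove the cache file
rm /etc/zfs/zpool.cache

# after rebooting, regenerate it from the pools that are actually imported
zpool set cachefile=/etc/zfs/zpool.cache rpool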
Output of zdb -C:
rpool:
    version: 5000
    name: 'rpool'
    state: 0
    txg: 1462
    pool_guid: 11969062794918774439
    errata: 0
    hostid: 889101361
    hostname: '(none)'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 11969062794918774439
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 2283660474341630277
            path: '/dev/disk/by-id/ata-INTEL_SSDSC2KW256G8_BTLA80510EYT256CGN-part3'
            whole_disk: 0
            metaslab_array: 256
            metaslab_shift: 31
            ashift: 12
            asize: 255517786112
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_leaf: 129
            com.delphix:vdev_zap_top: 130
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
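So zdb -C only shows rpool, which suggests the cache file itself is clean. The other check I've been using is dumping labels straight off the old disk (device name is an example):

# dump any ZFS labels still present on the device the old pool lived on
zdb -l /dev/nvme1n1

# also worth checking the partition, since labels live on the vdev itself
zdb -l /dev/nvme1n1p1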
After each of the steps above I also ran "proxmox-boot-tool refresh" and "update-initramfs -c -k all" (I'm definitely not using a legacy BIOS bootloader).
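One thing I'm not sure how to rule out is whether an old zpool.cache is still baked into the initramfs that proxmox-boot-tool copies to the ESP. This is how I'd expect to check it (lsinitramfs comes from initramfs-tools; kernel version is just whatever is currently running):

# see whether the current initramfs carries a zpool.cache or stale zfs bits
lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'zpool.cache|zfs'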
Anyone have any ideas? I'm at the point where I'm now dd'ing over the entire disk that had the pool, as shown below.
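For completeness, the full-disk wipe currently running is just this (again, the device name is an example):

dd if=/dev/zero of=/dev/nvme1n1 bs=1M status=progress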
Cheers