Hi
When I start up, I receive the message: "[FAILED] Failed to start zfs-import-scan.service - Import ZFS pools by device scanning."
I can check the service with the command:
systemctl --failed
The result is:
Bash:
  UNIT                    LOAD   ACTIVE SUB    DESCRIPTION
● zfs-import-scan.service loaded failed failed Import ZFS pools by device scanning

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
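I guess the journal of that unit would show more details about why it failed; this is just my assumption of where to look next:
Bash:
systemctl status zfs-import-scan.service
journalctl -b -u zfs-import-scan.service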
When I check with
zpool status
I see this result:
Bash:
╰─○ zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:01:04 with 0 errors on Sun Sep 8 00:25:05 2024
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          nvme-eui.0025385511b22493-part3  ONLINE       0     0     0

errors: No known data errors
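In case it matters, I assume I could also check which of the ZFS import services are enabled and whether rpool uses a cache file, roughly like this:
Bash:
systemctl list-unit-files | grep zfs-import
zpool get cachefile rpool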
I'm not sure what this exactly means.
My Proxmox VE seems to run fine despite this message.
At least, I haven't noticed any failures or errors.
But I would still like to understand the problem and, if possible, resolve it.
Here are the details of my machine:
Proxmox VE
Version 8.2.7
Installed on a Minisforum MS-01 machine.
I have one NVMe Samsung SSD 970 EVO Plus 1TB as the system disk, on which Proxmox is installed, formatted as ZFS.
There is also a second SATA Samsung SSD 870 QVO 8TB, which is attached to a Delock SATA III PCIe storage controller.
The 8TB SSD is reserved for my TrueNAS Scale VM. I passed this SSD through to the VM with the command
qm set 120 -scsi2 /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5SSNF0WA00303W,serial=S5SSNF0WA00303W
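In case it is relevant, I assume the mapping can be double-checked in the VM config like this (120 is the ID of my TrueNAS VM):
Bash:
qm config 120 | grep scsi2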
Could it be that this passthrough is related to my issue? Just guessing.
This is the output from
lsblk -o NAME,MODEL,SIZE,TRAN
Bash:
NAME         MODEL                         SIZE TRAN
sda          Samsung SSD 870 QVO 8TB        7.3T sata
├─sda1                                        2G
└─sda2                                      7.3T
zd0                                          32G
├─zd0p1                                     512M
├─zd0p2                                     488M
└─zd0p3                                      31G
zd16                                         32G
├─zd16p1                                    512M
└─zd16p2                                   31.5G
zd32                                        128G
├─zd32p1                                    100M
├─zd32p2                                     16M
└─zd32p3                                  127.9G
zd48                                          4M
zd64                                         32G
├─zd64p1                                    512M
├─zd64p2                                    488M
└─zd64p3                                     31G
zd80                                         32G
├─zd80p1                                      1M
├─zd80p2                                    512M
└─zd80p3                                   31.5G
zd96                                         16G
├─zd96p1                                    512M
├─zd96p2                                    488M
└─zd96p3                                     15G
zd112                                         1M
zd128                                         1M
zd144                                         1M
zd160                                         1M
zd176                                         1M
zd192                                         1M
nvme0n1      Samsung SSD 970 EVO Plus 1TB 931.5G nvme
├─nvme0n1p1                                1007K nvme
├─nvme0n1p2                                   1G nvme
└─nvme0n1p3                               930.5G nvme
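If it helps with the diagnosis, I assume I could also trigger a manual scan (without actually importing anything) to see whether scanning finds any other pool:
Bash:
zpool import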