Failed to start Import ZFS pools by cache file

fpausp

After a reboot my ZFS pool is gone...

This error message appears during boot:
Code:
Failed to start Import ZFS pools by cache file

Code:
root@pvestor01:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-02-05 22:13:06 CET; 4min 29s ago
     Docs: man:zpool(8)
  Process: 2153 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
 Main PID: 2153 (code=exited, status=1/FAILURE)

Feb 05 22:13:04 pvestor01 systemd[1]: Starting Import ZFS pools by cache file...
Feb 05 22:13:06 pvestor01 zpool[2153]: cannot import 'data': no such pool or dataset
Feb 05 22:13:06 pvestor01 zpool[2153]:         Destroy and re-create the pool from
Feb 05 22:13:06 pvestor01 zpool[2153]:         a backup source.
Feb 05 22:13:06 pvestor01 systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Feb 05 22:13:06 pvestor01 systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Feb 05 22:13:06 pvestor01 systemd[1]: Failed to start Import ZFS pools by cache file.

Does anyone have an idea how I can get my data back?
 
Hi,

what do you get when you import the pool manually?

Code:
zpool import data
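
If the manual import succeeds, the cache file is most likely just stale. A sketch of how to refresh it (assuming the pool really is named data, as the log above suggests) so that zfs-import-cache.service succeeds again on the next boot:

Code:
# import the pool by name, then point the cachefile property
# back at the file the systemd unit reads at boot
zpool import data
zpool set cachefile=/etc/zfs/zpool.cache data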
 
What do these two commands tell you?

Code:
lsblk -if
zpool import
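
For context: lsblk -f shows which block devices and filesystem signatures the kernel sees, and a bare zpool import only scans for importable pools without actually importing anything. If the pool does not show up in the default scan, it can help to point the scan at the stable by-id links instead (a sketch):

Code:
# search a specific directory for pool labels instead of the default /dev
zpool import -d /dev/disk/by-id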
 
I think I've found the cause. It all started with this error message during boot:

Code:
kfd kfd: error getting iommu info. is the iommu enabled?

After that I set IOMMU to enabled in the BIOS... Why the disks then suddenly disappeared is a mystery to me...

I've now disabled it again and, lo and behold, the disks are visible again:

Code:
root@pvestor01:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 111.8G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part /boot/efi
└─sda3               8:3    0 111.3G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  27.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0  59.7G  0 lvm
  └─pve-data_tdata 253:3    0  59.7G  0 lvm
    └─pve-data     253:4    0  59.7G  0 lvm
sdb                  8:16   0   2.7T  0 disk
├─sdb1               8:17   0   2.7T  0 part
└─sdb9               8:25   0     8M  0 part
sdc                  8:32   0   2.7T  0 disk
├─sdc1               8:33   0   2.7T  0 part
└─sdc9               8:41   0     8M  0 part
sdd                  8:48   0   2.7T  0 disk
├─sdd1               8:49   0   2.7T  0 part
└─sdd9               8:57   0     8M  0 part
sde                  8:64   0   2.7T  0 disk
├─sde1               8:65   0   2.7T  0 part
└─sde9               8:73   0     8M  0 part
zd0                230:0    0    50G  0 disk
└─zd0p1            230:1    0    50G  0 part
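
With the disks back, the pool should import again; a quick way to verify (a sketch, using the pool name data from the logs above):

Code:
# import the pool and confirm all vdevs are ONLINE
zpool import data
zpool status data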

I'd be interested to know how these two things are connected.
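
One way to poke at that connection, sketched here without claiming it explains your case: with the IOMMU enabled, the kernel groups PCI devices (including the SATA controller the disks hang off) for passthrough, and any IOMMU faults show up in the kernel log:

Code:
# list IOMMU groups and the PCI devices assigned to them
# (only populated while the IOMMU is actually enabled)
find /sys/kernel/iommu_groups/ -type l
# look for IOMMU-related errors around the disk controller
dmesg | grep -iE 'iommu|amd-vi'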
 
