Hi
After a power failure I cannot access my Proxmox WebGUI anymore.

I could reinstall Proxmox, but there are a couple of VMs (pfSense, TVHeadend, etc.) whose configurations and setup steps I no longer remember, so I really need to recover my current setup!
The problem seems similar to the one described here: at the beginning of the boot I initially got a lot of "Can't process LV pve/vm-...: thin target support missing from kernel?" messages, but with the solution proposed there I now get this instead:
Code:
-- The start-up result is done.
Sep 20 06:28:33 pve dmeventd[1190]: dmeventd ready for processing.
Sep 20 06:28:33 pve lvm[1190]: Monitoring thin pool pve-data-tpool.
Sep 20 06:28:33 pve lvm[1181]:   20 logical volume(s) in volume group "pve" now active

But I still cannot access the Proxmox WebGUI.
Can someone please help me?
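(For reference, as far as I can reconstruct it, the proposed solution was about enabling thin target support: loading the dm_thin_pool kernel module and reactivating the volume group. Roughly the following, though I may be misremembering the exact commands:)
Code:
# make the device-mapper thin-provisioning target available
modprobe dm_thin_pool

# reactivate all logical volumes in the "pve" volume group
vgchange -ay pve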
Here is some info that could help:
Code:
root@pve:~# pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-6 (running version: 5.4-6/aa7856c5)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-10
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-52
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-43
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-39
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-21
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-51
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
Code:
root@pve:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-09-20 05:24:57 WEST; 37min ago
     Docs: man:zpool(8)
Main PID: 1204 (code=exited, status=1/FAILURE)
      CPU: 2ms
Sep 20 05:24:57 pve systemd[1]: Starting Import ZFS pools by cache file...
Sep 20 05:24:57 pve zpool[1204]: invalid or corrupt cache file contents: invalid or missing cache file
Sep 20 05:24:57 pve systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Sep 20 05:24:57 pve systemd[1]: Failed to start Import ZFS pools by cache file.
Sep 20 05:24:57 pve systemd[1]: zfs-import-cache.service: Unit entered failed state.
Sep 20 05:24:57 pve systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.

So the problem seems to be "invalid or corrupt cache file contents: invalid or missing cache file".
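If the cache file is simply missing or stale, I am guessing it could be regenerated by importing the pool and resetting its cachefile property (based on the zpool(8) man page; "tank" below is a placeholder, since I do not have the pool name in front of me):
Code:
# list pools that are visible on disk but not yet imported
zpool import

# import the pool by name ("tank" is a placeholder for the real pool name)
zpool import tank

# rewrite /etc/zfs/zpool.cache so zfs-import-cache.service succeeds at boot
zpool set cachefile=/etc/zfs/zpool.cache tank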
All the data still seems to be present on the NVMe disk:
Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   1.8T  0 disk
├─sda1                         8:1    0 186.3G  0 part /mnt/hdd/iot
└─sda2                         8:2    0 186.3G  0 part /mnt/hdd/nas
sdb                            8:16   0 111.8G  0 disk
└─sdb1                         8:17   0 111.8G  0 part
sdc                            8:32   0   3.7T  0 disk
├─sdc1                         8:33   0     2G  0 part
└─sdc2                         8:34   0   3.7T  0 part
sdd                            8:48   0   3.7T  0 disk
├─sdd1                         8:49   0     2G  0 part
└─sdd2                         8:50   0   3.7T  0 part
sde                            8:64   0 232.9G  0 disk
├─sde1                         8:65   0  1007K  0 part
├─sde2                         8:66   0   512M  0 part
└─sde3                         8:67   0 232.4G  0 part
sdf                            8:80   0   3.7T  0 disk
├─sdf1                         8:81   0     2G  0 part
└─sdf2                         8:82   0   3.7T  0 part
nvme0n1                      259:0    0 465.8G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0   512M  0 part
└─nvme0n1p3                  259:3    0 465.3G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   3.5G  0 lvm
  │ └─pve-data-tpool         253:4    0 338.4G  0 lvm
  │   ├─pve-data             253:5    0 338.4G  0 lvm
  │   ├─pve-vm--121--disk--0 253:6    0     8G  0 lvm
  │   ├─pve-vm--121--disk--1 253:7    0     8G  0 lvm
  │   ├─pve-vm--121--disk--2 253:8    0     8G  0 lvm
  │   ├─pve-vm--123--disk--0 253:9    0    16G  0 lvm
  │   ├─pve-vm--100--disk--0 253:10   0    32G  0 lvm
  │   ├─pve-vm--105--disk--0 253:11   0    32G  0 lvm
  │   ├─pve-vm--106--disk--0 253:12   0     4M  0 lvm
  │   ├─pve-vm--106--disk--1 253:13   0    32G  0 lvm
  │   ├─pve-vm--101--disk--0 253:14   0    32G  0 lvm
  │   ├─pve-vm--103--disk--0 253:15   0     4M  0 lvm
  │   ├─pve-vm--103--disk--1 253:16   0    32G  0 lvm
  │   ├─pve-vm--102--disk--0 253:17   0    32G  0 lvm
  │   ├─pve-vm--113--disk--0 253:18   0     4M  0 lvm
  │   ├─pve-vm--113--disk--1 253:19   0    32G  0 lvm
  │   ├─pve-vm--104--disk--0 253:20   0    32G  0 lvm
  │   ├─pve-vm--112--disk--0 253:21   0    32G  0 lvm
  │   └─pve-vm--122--disk--0 253:22   0     8G  0 lvm
  └─pve-data_tdata           253:3    0 338.4G  0 lvm
    └─pve-data-tpool         253:4    0 338.4G  0 lvm
      ├─pve-data             253:5    0 338.4G  0 lvm
      ├─pve-vm--121--disk--0 253:6    0     8G  0 lvm
      ├─pve-vm--121--disk--1 253:7    0     8G  0 lvm
      ├─pve-vm--121--disk--2 253:8    0     8G  0 lvm
      ├─pve-vm--123--disk--0 253:9    0    16G  0 lvm
      ├─pve-vm--100--disk--0 253:10   0    32G  0 lvm
      ├─pve-vm--105--disk--0 253:11   0    32G  0 lvm
      ├─pve-vm--106--disk--0 253:12   0     4M  0 lvm
      ├─pve-vm--106--disk--1 253:13   0    32G  0 lvm
      ├─pve-vm--101--disk--0 253:14   0    32G  0 lvm
      ├─pve-vm--103--disk--0 253:15   0     4M  0 lvm
      ├─pve-vm--103--disk--1 253:16   0    32G  0 lvm
      ├─pve-vm--102--disk--0 253:17   0    32G  0 lvm
      ├─pve-vm--113--disk--0 253:18   0     4M  0 lvm
      ├─pve-vm--113--disk--1 253:19   0    32G  0 lvm
      ├─pve-vm--104--disk--0 253:20   0    32G  0 lvm
      ├─pve-vm--112--disk--0 253:21   0    32G  0 lvm
      └─pve-vm--122--disk--0 253:22   0     8G  0 lvm

As you can see, I still have access to the server over SSH despite Proxmox being in "emergency mode" (I just need to start the sshd service again)!
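I can run any further diagnostics over SSH if that helps. For example, I assume checking the PVE GUI services, the failed units, and the LVM data would be the next step (these commands are just my guess at what is useful):
Code:
# check the services behind the WebGUI
systemctl status pveproxy pvedaemon pve-cluster

# see which units failed and why boot dropped to emergency mode
systemctl --failed
journalctl -b -p err

# confirm the VM disks are still listed in the thin pool
lvs pve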
Any help would be welcome.