[SOLVED] Failed to start import ZFS pools by cache file

Helio Mendonça

Hi
After a power failure I cannot access my Proxmox WebGUI anymore. :(
I could reinstall Proxmox, but there are a couple of VMs (pfSense, TVHeadend, etc.) whose configurations and/or setup steps I do not remember, so I really need to recover my current setup!
The problem seems similar to the one described here: initially, at the beginning of the boot, I got a lot of "Can't process LV pve/vm-...: thin target support missing from kernel?" messages, but with the solution proposed there I now get this instead:

Code:
-- The start-up result is done.
Sep 20 06:28:33 pve dmeventd[1190]: dmeventd ready for processing.
Sep 20 06:28:33 pve lvm[1190]: Monitoring thin pool pve-data-tpool.
Sep 20 06:28:33 pve lvm[1181]:   20 logical volume(s) in volume group "pve" now

But I still cannot access the Proxmox WebGUI.
Can someone please help me?

Here is some info that might help:

Code:
root@pve:~# pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-6 (running version: 5.4-6/aa7856c5)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-10
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-52
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-43
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-39
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-21
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-51
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

Code:
root@pve:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-09-20 05:24:57 WEST; 37min ago
     Docs: man:zpool(8)
Main PID: 1204 (code=exited, status=1/FAILURE)
      CPU: 2ms

Sep 20 05:24:57 pve systemd[1]: Starting Import ZFS pools by cache file...
Sep 20 05:24:57 pve zpool[1204]: invalid or corrupt cache file contents: invalid or missing cache file
Sep 20 05:24:57 pve systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Sep 20 05:24:57 pve systemd[1]: Failed to start Import ZFS pools by cache file.
Sep 20 05:24:57 pve systemd[1]: zfs-import-cache.service: Unit entered failed state.
Sep 20 05:24:57 pve systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.

So the problem seems to be "invalid or corrupt cache file contents: invalid or missing cache file"

All the data still seems to be present on the NVMe disk:
Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   1.8T  0 disk
├─sda1                         8:1    0 186.3G  0 part /mnt/hdd/iot
└─sda2                         8:2    0 186.3G  0 part /mnt/hdd/nas
sdb                            8:16   0 111.8G  0 disk
└─sdb1                         8:17   0 111.8G  0 part
sdc                            8:32   0   3.7T  0 disk
├─sdc1                         8:33   0     2G  0 part
└─sdc2                         8:34   0   3.7T  0 part
sdd                            8:48   0   3.7T  0 disk
├─sdd1                         8:49   0     2G  0 part
└─sdd2                         8:50   0   3.7T  0 part
sde                            8:64   0 232.9G  0 disk
├─sde1                         8:65   0  1007K  0 part
├─sde2                         8:66   0   512M  0 part
└─sde3                         8:67   0 232.4G  0 part
sdf                            8:80   0   3.7T  0 disk
├─sdf1                         8:81   0     2G  0 part
└─sdf2                         8:82   0   3.7T  0 part
nvme0n1                      259:0    0 465.8G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0   512M  0 part
└─nvme0n1p3                  259:3    0 465.3G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   3.5G  0 lvm
  │ └─pve-data-tpool         253:4    0 338.4G  0 lvm
  │   ├─pve-data             253:5    0 338.4G  0 lvm
  │   ├─pve-vm--121--disk--0 253:6    0     8G  0 lvm
  │   ├─pve-vm--121--disk--1 253:7    0     8G  0 lvm
  │   ├─pve-vm--121--disk--2 253:8    0     8G  0 lvm
  │   ├─pve-vm--123--disk--0 253:9    0    16G  0 lvm
  │   ├─pve-vm--100--disk--0 253:10   0    32G  0 lvm
  │   ├─pve-vm--105--disk--0 253:11   0    32G  0 lvm
  │   ├─pve-vm--106--disk--0 253:12   0     4M  0 lvm
  │   ├─pve-vm--106--disk--1 253:13   0    32G  0 lvm
  │   ├─pve-vm--101--disk--0 253:14   0    32G  0 lvm
  │   ├─pve-vm--103--disk--0 253:15   0     4M  0 lvm
  │   ├─pve-vm--103--disk--1 253:16   0    32G  0 lvm
  │   ├─pve-vm--102--disk--0 253:17   0    32G  0 lvm
  │   ├─pve-vm--113--disk--0 253:18   0     4M  0 lvm
  │   ├─pve-vm--113--disk--1 253:19   0    32G  0 lvm
  │   ├─pve-vm--104--disk--0 253:20   0    32G  0 lvm
  │   ├─pve-vm--112--disk--0 253:21   0    32G  0 lvm
  │   └─pve-vm--122--disk--0 253:22   0     8G  0 lvm
  └─pve-data_tdata           253:3    0 338.4G  0 lvm
    └─pve-data-tpool         253:4    0 338.4G  0 lvm
      ├─pve-data             253:5    0 338.4G  0 lvm
      ├─pve-vm--121--disk--0 253:6    0     8G  0 lvm
      ├─pve-vm--121--disk--1 253:7    0     8G  0 lvm
      ├─pve-vm--121--disk--2 253:8    0     8G  0 lvm
      ├─pve-vm--123--disk--0 253:9    0    16G  0 lvm
      ├─pve-vm--100--disk--0 253:10   0    32G  0 lvm
      ├─pve-vm--105--disk--0 253:11   0    32G  0 lvm
      ├─pve-vm--106--disk--0 253:12   0     4M  0 lvm
      ├─pve-vm--106--disk--1 253:13   0    32G  0 lvm
      ├─pve-vm--101--disk--0 253:14   0    32G  0 lvm
      ├─pve-vm--103--disk--0 253:15   0     4M  0 lvm
      ├─pve-vm--103--disk--1 253:16   0    32G  0 lvm
      ├─pve-vm--102--disk--0 253:17   0    32G  0 lvm
      ├─pve-vm--113--disk--0 253:18   0     4M  0 lvm
      ├─pve-vm--113--disk--1 253:19   0    32G  0 lvm
      ├─pve-vm--104--disk--0 253:20   0    32G  0 lvm
      ├─pve-vm--112--disk--0 253:21   0    32G  0 lvm
      └─pve-vm--122--disk--0 253:22   0     8G  0 lvm

As you can see, I still have access to the server using SSH despite Proxmox being in "emergency mode" (I just needed to start the sshd service again)!

Any help would be welcome.
 
Please, do not be too hard on me! :(
I know I should have backups!!!
It's funny: in these last few days I was precisely setting up a Duplicati docker container (running in one of my Proxmox VMs) to help me with exactly that.
But "Fate" is a B*tch and punished me.
Can someone help me regain access to the Proxmox WebGUI (even temporarily) so I can make the VM backups and install a new Proxmox server?
Or maybe help me make the backups from the CLI, so I can save them and use them in a new Proxmox installation?
Any help would be welcome.
Thanks
 
Relax, nothing to worry about. Are you using ZFS for the media drives? Because your root is ext4 on LVM.

Please post the output of "zpool import"; it should list all your pools.

After that, for each of your ZFS pools, do the following (replace mypool with the pool name):
zpool import -f mypool
zpool set cachefile=none mypool
zpool set cachefile=/etc/zfs/zpool.cache mypool

Reboot the server after all pools have been imported.
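
If you have more than one ZFS pool, a quick way to run that for every pool it finds would be something like this (just a sketch, assuming all the pools import cleanly):

Code:
# sketch: import every pool "zpool import" can see, then refresh its cache file entry
for pool in $(zpool import 2>/dev/null | awk '/pool:/ {print $2}'); do
    zpool import -f "$pool"
    zpool set cachefile=none "$pool"
    zpool set cachefile=/etc/zfs/zpool.cache "$pool"
done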
 

First of all, MANY THANKS for your help!

Here it is:

Code:
root@pve:~# zpool import
   pool: myraidz
     id: 9795843411844402075
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://zfsonlinux.org/msg/ZFS-8000-EY
 config:

        myraidz     UNAVAIL  unsupported feature(s)
          raidz1-0  ONLINE
            sdf2    ONLINE
            sdd2    ONLINE
            sdc2    ONLINE
        cache
          sdb1

Code:
root@pve:~# zpool import -f myraidz
This pool uses the following feature(s) not supported by this system:
        com.delphix:spacemap_v2 (Space maps representing large segments are more efficient.)
All unsupported features are only required for writing to the pool.
The pool can be imported using '-o readonly=on'.
cannot import 'myraidz': unsupported version or feature
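
Based on that message, I suppose a read-only import would be done with something like this (I did not try it yet):

Code:
zpool import -o readonly=on -f myraidz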

Let me explain all my disks:

Code:
root@pve:~# lsblk --output NAME,FSTYPE,LABEL,SIZE
NAME                         FSTYPE      LABEL     SIZE
sda                                                1.8T
├─sda1                       ext4        data    186.3G
└─sda2                       ext4        mynas   186.3G
sdb                          zfs_member          111.8G
└─sdb1                       zfs_member          111.8G
sdc                          zfs_member            3.7T
├─sdc1                       zfs_member              2G
└─sdc2                       zfs_member  myraidz   3.7T
sdd                          zfs_member            3.7T
├─sdd1                       zfs_member              2G
└─sdd2                       zfs_member  myraidz   3.7T
sde                                              232.9G
├─sde1                                            1007K
├─sde2                                             512M
└─sde3                                           232.4G
sdf                          zfs_member            3.7T
├─sdf1                       zfs_member              2G
└─sdf2                       zfs_member  myraidz   3.7T
nvme0n1                                          465.8G
├─nvme0n1p1                                       1007K
├─nvme0n1p2                  vfat                  512M
└─nvme0n1p3                  LVM2_member         465.3G
  ├─pve-swap                 swap                    8G
  ├─pve-root                 ext4                   96G
  ├─pve-data_tmeta                                 3.5G
  │ └─pve-data-tpool                             338.4G
  │   ├─pve-data                                 338.4G
  │   ├─pve-vm--121--disk--0 ext4                    8G
  │   ├─pve-vm--121--disk--1 ext4                    8G
  │   ├─pve-vm--121--disk--2 ext4                    8G
  │   ├─pve-vm--123--disk--0 ext4                   16G
  │   ├─pve-vm--100--disk--0                        32G
  │   ├─pve-vm--105--disk--0                        32G
  │   ├─pve-vm--106--disk--0                         4M
  │   ├─pve-vm--106--disk--1                        32G
  │   ├─pve-vm--101--disk--0                        32G
  │   ├─pve-vm--103--disk--0                         4M
  │   ├─pve-vm--103--disk--1                        32G
  │   ├─pve-vm--102--disk--0                        32G
  │   ├─pve-vm--113--disk--0                         4M
  │   ├─pve-vm--113--disk--1                        32G
  │   ├─pve-vm--104--disk--0                        32G
  │   ├─pve-vm--112--disk--0                        32G
  │   └─pve-vm--122--disk--0 ext4                    8G
  └─pve-data_tdata                               338.4G
    └─pve-data-tpool                             338.4G
      ├─pve-data                                 338.4G
      ├─pve-vm--121--disk--0 ext4                    8G
      ├─pve-vm--121--disk--1 ext4                    8G
      ├─pve-vm--121--disk--2 ext4                    8G
      ├─pve-vm--123--disk--0 ext4                   16G
      ├─pve-vm--100--disk--0                        32G
      ├─pve-vm--105--disk--0                        32G
      ├─pve-vm--106--disk--0                         4M
      ├─pve-vm--106--disk--1                        32G
      ├─pve-vm--101--disk--0                        32G
      ├─pve-vm--103--disk--0                         4M
      ├─pve-vm--103--disk--1                        32G
      ├─pve-vm--102--disk--0                        32G
      ├─pve-vm--113--disk--0                         4M
      ├─pve-vm--113--disk--1                        32G
      ├─pve-vm--104--disk--0                        32G
      ├─pve-vm--112--disk--0                        32G
      └─pve-vm--122--disk--0 ext4                    8G

Note that myraidz consists of three 4TB HDDs (sdc, sdd, sdf) plus a cache SSD (sdb) that I am using in a FreeNAS VM I had in Proxmox.

I believe the VMs are all on the NVMe:

Code:
root@pve:~# lvs -a
  LV                              VG  Attr       LSize   Pool Origin                  Data%  Meta%  Move Log Cpy%Sync Convert
  data                            pve twi-aotz-- 338.36g                              43.51  2.10
  [data_tdata]                    pve Twi-ao---- 338.36g                       
  [data_tmeta]                    pve ewi-ao----   3.45g                       
  [lvol0_pmspare]                 pve ewi-------   3.45g                       
  root                            pve -wi-ao----  96.00g                       
  snap_vm-121-disk-2_preTvHeadend pve Vri---tz-k   8.00g data vm-121-disk-2     
  snap_vm-122-disk-0_Asterisk_OK  pve Vri---tz-k   8.00g data                   
  snap_vm-122-disk-0_FreePBX_OK   pve Vri---tz-k   8.00g data                   
  snap_vm-122-disk-0_test         pve Vri---tz-k   8.00g data                   
  snap_vm-123-disk-0_clean        pve Vri---tz-k  16.00g data vm-123-disk-0     
  snap_vm-123-disk-0_posData      pve Vri---tz-k  16.00g data vm-123-disk-0     
  snap_vm-123-disk-0_preData      pve Vri---tz-k  16.00g data vm-123-disk-0     
  swap                            pve -wi-ao----   8.00g                       
  vm-100-disk-0                   pve Vwi-a-tz--  32.00g data                         100.00
  vm-101-disk-0                   pve Vwi-a-tz--  32.00g data                         12.90
  vm-102-disk-0                   pve Vwi-a-tz--  32.00g data                         36.09
  vm-103-disk-0                   pve Vwi-a-tz--   4.00m data                         0.00
  vm-103-disk-1                   pve Vwi-a-tz--  32.00g data                         23.51
  vm-104-disk-0                   pve Vwi-a-tz--  32.00g data                         28.21
  vm-105-disk-0                   pve Vwi-a-tz--  32.00g data                         88.68
  vm-106-disk-0                   pve Vwi-a-tz--   4.00m data                         0.00
  vm-106-disk-1                   pve Vwi-a-tz--  32.00g data                         25.82
  vm-112-disk-0                   pve Vwi-a-tz--  32.00g data                         34.14
  vm-113-disk-0                   pve Vwi-a-tz--   4.00m data                         0.00
  vm-113-disk-1                   pve Vwi-a-tz--  32.00g data                         10.60
  vm-121-disk-0                   pve Vwi-a-tz--   8.00g data                         10.21
  vm-121-disk-1                   pve Vwi-a-tz--   8.00g data                         10.21
  vm-121-disk-2                   pve Vwi-a-tz--   8.00g data                         70.76
  vm-122-disk-0                   pve Vwi-a-tz--   8.00g data snap_vm-122-disk-0_test 74.47
  vm-123-disk-0                   pve Vwi-a-tz--  16.00g data                         99.51
 
Okay, that's a totally different matter.

"in a FreeNAS VM I had in Proxmox": I guess you still have that FreeNAS VM?

This looks like the passthrough broke; the disks should be passed through to the FreeNAS VM and mounted there.

Somehow the ZFS cache file on Proxmox got corrupted.

You can try the following:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.backup
reboot

The cache file should not matter for Proxmox, as I don't see any other ZFS pool that is used directly by Proxmox.

If this does not fix it, I would disable the ZFS import scan and cache services on Proxmox next.

When was the last time you rebooted the node? AFAIK there were multiple threads on here about breaking changes in combination with passthrough.

Please post the output of "systemctl status zfs-import-scan.service" and "systemctl status zfs-import-cache.service" after reboot.
 
Okay, that's a totally different matter.

"in a FreeNAS VM I had in Proxmox": I guess you still have that FreeNAS VM?

Yes, the FreeNAS VM is one of the Proxmox VMs, but I cannot access it now!

mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.backup
reboot

Proxmox still boots into "emergency mode"! :(

When was the last time you rebooted the node? AFAIK there were multiple threads on here about breaking changes in combination with passthrough.

As I explained in the first post, this problem happened after a reboot caused by a power failure.
I thought it might have been caused by making several snapshots of a VM: I read in this forum that, in early versions of Proxmox, this procedure can go beyond the limits of the disk (the NVMe in my case) and prevent a correct boot the next time the machine restarts (in my case, after the power failure).

Please post the output of "systemctl status zfs-import-scan.service" and "systemctl status zfs-import-cache.service" after reboot.

Code:
root@pve:~# systemctl status zfs-import-scan.service
● zfs-import-scan.service - Import ZFS pools by device scanning
   Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; disabled; vendor
   Active: inactive (dead)
     Docs: man:zpool(8)

root@pve:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor
   Active: inactive (dead)
Condition: start condition failed at Sun 2020-09-20 14:13:23 WEST; 3min 8s ago
           └─ ConditionPathExists=/etc/zfs/zpool.cache was not met
     Docs: man:zpool(8)
 
Okay can you try the following:
touch /etc/zfs/zpool.cache
reboot

If it still boots into emergency you can do the following:
systemctl disable zfs-import-cache.service
reboot

However, with this, the next time you want to use ZFS on the hypervisor you will have to create a ZFS pool and enable the cache service again.
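
For later reference, re-enabling should be roughly this (just a sketch, assuming the new pool is then pointed at the standard cache file):

Code:
systemctl enable zfs-import-cache.service
zpool set cachefile=/etc/zfs/zpool.cache <newpool>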
 
Okay can you try the following:
touch /etc/zfs/zpool.cache
reboot

During the boot I saw the message:
[FAILED] Failed to start Import ZFS pools by cache file.
... and it booted once again into emergency mode!

If it still boots into emergency you can do the following:
systemctl disable zfs-import-cache.service
reboot

... and it booted once again into emergency mode! :(

Here are the current results of the systemctl commands you mentioned earlier:

Code:
root@pve:~# systemctl status zfs-import-scan.service
● zfs-import-scan.service - Import ZFS pools by device scanning
   Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; disabled; vendor
   Active: inactive (dead)
     Docs: man:zpool(8)

root@pve:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; disabled; vendo
   Active: inactive (dead)
     Docs: man:zpool(8)
 
Hmm, that's annoying. Which services does it complain about now?

systemctl list-units --failed
 
Hmm, that's annoying. Which services does it complain about now?

systemctl list-units --failed

Code:
root@pve:~# systemctl list-units --failed
0 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

In the attached file I also tried...

Code:
root@pve:~# systemctl list-units --all

And the result of "journalctl -xb".
 


Sorry to insist, but... is it not possible to create backups of the VMs from the CLI in my current Proxmox state (in emergency mode, but with SSH access and apparently full file system access)?
I would not mind creating the backups of my VMs, then installing a new Proxmox from scratch, and finally restoring the VMs.
 
On another PC I already did the following sequence on a new Proxmox server that I just installed:
- created a VM (100)
- created its backup file (from the Proxmox WebGUI)
- copied the backup file (/var/lib/vz/dump/vzdump-qemu-100-2020_09_20-15_27_57.vma.lzo) to a disk outside that PC
- deleted the backup file
- deleted the VM
(at this point I had an empty Proxmox)
- copied the backup file from the external disk back to the Proxmox file system (/var/lib/vz/dump)
- restored the VM from the CLI: qmrestore /var/lib/vz/dump/vzdump-qemu-100-2020_09_20-15_27_57.vma.lzo 100
- started the VM
(the VM is working again!!!)

So, now I just need to know if the second step is possible, but this time from the CLI and in the emergency mode that I have.
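
I suppose the CLI equivalent of that second step would be something like the following (using the default dump directory), but I am not sure:

Code:
vzdump 105 --compress lzo --dumpdir /var/lib/vz/dump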
 
So, now I just need to know if the second step is possible, but this time from the CLI and in the emergency mode that I have.

Apparently not, because I got the following error when I tried to back up VM 105 on my "emergency state" Proxmox:

Code:
root@pve:~# vzdump 105
INFO: starting new backup job: vzdump 105
INFO: Starting Backup of VM 105 (qemu)
INFO: Backup started at 2020-09-20 16:43:17
INFO: status = stopped
INFO: update VM 105: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: ubuntu19
INFO: include disk 'scsi0' 'local-lvm:vm-105-disk-0' 32G
INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-105-2020_09_20-16_43_17.vma'
INFO: starting kvm to execute backup task
ERROR: Backup of VM 105 failed - start failed: org.freedesktop.DBus.Error.FileNotFound: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
INFO: Failed at 2020-09-20 16:43:18
INFO: Backup job finished with errors
job errors

HEEEELP!!!
 
Just to say that my Proxmox seems to be working again! :D
I have access to the WebGUI, where I am now creating backups of my VMs.
Later I will write here what I did that seems to have fixed the problem.
 
In my list-units.txt the only device that was "dead" was this:

Code:
dev-disk-by\x2duuid-f75290bb\x2d6f20\x2d4d04\x2d8ec2\x2d221c5cb5571a.device                                    loaded    inactive   dead      dev-disk-by\x2duuid-f75290bb\x2d6f20\x2d4d04\x2d8ec2\x2d221c5cb5571a.device

Then in the journalctl.txt I noticed that device was sda1:

Code:
--
-- The start-up result is done.
Sep 20 14:49:37 pve kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
Sep 20 14:50:40 pve systemd[1]: dev-disk-by\x2duuid-f75290bb\x2d6f20\x2d4d04\x2d8ec2\x2d221c5cb5571a.device: Job dev-disk-by\x2duuid-f75290bb\x2d6f20\x2d4d04\x2d8ec2\x2d221c5cb5571a.device/start timed out.
Sep 20 14:50:40 pve systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-f75290bb\x2d6f20\x2d4d04\x2d8ec2\x2d221c5cb5571a.device.
-- Subject: Unit dev-disk-by\x2duuid-f75290bb\x2d6f20\x2d4d04\x2d8ec2\x2d221c5cb5571a.device has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit dev-disk-by\x2duuid-f75290bb\x2d6f20\x2d4d04\x2d8ec2\x2d221c5cb5571a.device has failed.
--

Then I tried unplugging that HDD from the PC, but the boot errors remained because some mounts need it to be present.

So I went to /etc/fstab and commented out all those mounts. That HDD and those mounts are used by one LXC container that I can give up.
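
An alternative I found afterwards (not what I did, and the UUID/mount point below are just an assumption for illustration) would be to keep the lines but mark them with "nofail", so a missing disk does not block the boot:

Code:
# hypothetical fstab line: "nofail" lets the boot continue even if the disk is absent
UUID=f75290bb-6f20-4d04-8ec2-221c5cb5571a  /mnt/hdd/iot  ext4  defaults,nofail,x-systemd.device-timeout=30s  0  2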

Meanwhile I performed all the backups (just in case :D) and all went well. Well, in fact I had to cancel the backup of my FreeNAS VM because it was including the 4TB HDDs that Proxmox passes through to that VM. Now I have to find out how I can exclude those disks from the backup and back up the FreeNAS VM.
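
If I understood the docs correctly, excluding a disk should just be a matter of setting its backup flag to 0 in the VM config, roughly like this (the disk slot and device path below are only placeholders for my setup):

Code:
# in /etc/pve/qemu-server/<vmid>.conf, append ",backup=0" to each passed-through disk line, e.g.:
scsi1: /dev/disk/by-id/ata-XXXXXXXX,backup=0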
 
In conclusion, although the final solution was a simple "editing of the fstab", I think this was only the final step of several things I had tried before (most of them suggested by @H4R0, whom I would like to thank once again).

Now I will mark this topic as... SOLVED!
 
