Failed to start Import ZFS pool ...

You could boot from the PVE ISO and enter rescue mode to see what's wrong with your pool. I would guess one of your HDDs may have died, and because it was set up as a raid0 without any redundancy, your pool is now lost. So lsblk, smartctl -t long /dev/yourHDD, smartctl -a /dev/yourHDD, zpool status -v and tail -1000 /var/log/syslog would be a good point to start.
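For example, roughly like this (just a sketch; replace /dev/yourHDD with the device names lsblk shows, and note that behind a HW RAID controller smartctl may additionally need its -d option for that controller):

    lsblk                              # list all block devices and partitions
    smartctl -t long /dev/yourHDD      # start a long SMART self-test (results take a while)
    smartctl -a /dev/yourHDD           # SMART health attributes and self-test results
    zpool status -v                    # pool state plus any read/write/checksum errors
    tail -1000 /var/log/syslog         # recent kernel/ZFS messages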
 
Thanks for the answer.

Maybe I should explain my topology first.
I have connected several hard disks via a HW RAID controller as a single RAID 0 virtual disk, and then added this virtual disk to Proxmox as a single-disk ZFS pool.

I was able to boot the system normally via rescue boot. There it also reported a ZFS error, but then booted normally. The pools are now displayed normally again and they work.

(screenshots of the pools attached)
Maybe these logs from "/var/log/syslog" will help.

(screenshots of the syslog output attached)
Edit: I tried to boot the server without rescue boot. It still shows the error "Failed to start ZFS pool ZFS_HDD_RAID0", but after a second it starts normally and I can use it in Proxmox too.
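I guess I could also check why the import unit reports a failure at boot even though the pool comes up afterwards, e.g. like this (the unit names are only an assumption based on what OpenZFS normally ships; I'd adjust them to whatever systemctl actually lists):

    systemctl --failed                                  # which units actually failed during boot
    systemctl status zfs-import@ZFS_HDD_RAID0.service   # per-pool import unit, if it exists here
    journalctl -b -u zfs-import-cache.service           # log of the cachefile-based import for this boot
    zpool status -v ZFS_HDD_RAID0                       # confirm the pool itself is healthy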
 

Also keep in mind that you shouldn't use ZFS on top of HW RAID. See here from the ZFS documentation:

Hardware RAID controllers

Hardware RAID controllers should not be used with ZFS. While ZFS will likely be more reliable than other filesystems on Hardware RAID, it will not be as reliable as it would be on its own.

  • Hardware RAID will limit opportunities for ZFS to perform self healing on checksum failures. When ZFS does RAID-Z or mirroring, a checksum failure on one disk can be corrected by treating the disk containing the sector as bad for the purpose of reconstructing the original information. This cannot be done when a RAID controller handles the redundancy, unless a duplicate copy is stored by ZFS (which is the case when the corruption involves metadata, the copies flag is set, or the RAID array is part of a mirror/raid-z vdev within ZFS).
  • Sector size information is not necessarily passed correctly by hardware RAID on RAID 1 and cannot be passed correctly on RAID 5/6. Hardware RAID 1 is more likely to experience read-modify-write overhead from partial sector writes, and Hardware RAID 5/6 will almost certainly suffer from partial stripe writes (i.e. the RAID write hole). Using ZFS with the disks directly will allow it to obtain the sector size information reported by the disks to avoid read-modify-write on sectors, while ZFS avoids partial stripe writes on RAID-Z by design through its use of copy-on-write.
    • There can be sector alignment problems on ZFS when a drive misreports its sector size. Such drives are typically NAND-flash based solid state drives and older SATA drives from the advanced format (4K sector size) transition before Windows XP EoL occurred. This can be manually corrected at vdev creation.
    • It is possible for the RAID header to cause misalignment of sector writes on RAID 1 by starting the array within a sector on an actual drive, such that manual correction of sector alignment at vdev creation does not solve the problem.
  • Controller failures can require that the controller be replaced with the same model, or in less extreme cases, a model from the same manufacturer. Using ZFS by itself allows any controller to be used.
  • If a hardware RAID controller’s write cache is used, an additional failure point is introduced that can only be partially mitigated by additional complexity from adding flash to save data in power loss events. The data can still be lost if the battery fails when it is required to survive a power loss event or there is no flash and power is not restored in a timely manner. The loss of the data in the write cache can severely damage anything stored on a RAID array when many outstanding writes are cached. In addition, all writes are stored in the cache rather than just synchronous writes that require a write cache, which is inefficient, and the write cache is relatively small. ZFS allows synchronous writes to be written directly to flash, which should provide similar acceleration to hardware RAID and the ability to accelerate many more in-flight operations.
  • Behavior during RAID reconstruction when silent corruption damages data is undefined. There are reports of RAID 5 and 6 arrays being lost during reconstruction when the controller encounters silent corruption. ZFS’ checksums allow it to avoid this situation by determining if not enough information exists to reconstruct data. In which case, the file is listed as damaged in zpool status and the system administrator has the opportunity to restore it from a backup.
  • IO response times will suffer whenever the OS blocks on IO operations, because the system CPU has to wait on the much weaker embedded CPU used in the RAID controller. This lowers IOPS relative to what ZFS could have achieved.
  • The controller’s firmware is an additional layer of complexity that cannot be inspected by arbitrary third parties. The ZFS source code is open source and can be inspected by anyone.
  • If multiple RAID arrays are formed by the same controller and one fails, the identifiers provided by the arrays exposed to the OS might become inconsistent. Giving the drives directly to the OS allows this to be avoided via naming that maps to a unique port or unique drive identifier.
    • e.g. If you have arrays A, B, C and D; array B dies, the interaction between the hardware RAID controller and the OS might rename arrays C and D to look like arrays B and C respectively. This can fault pools verbatim imported from the cachefile.
    • Not all RAID controllers behave this way. However, this issue has been observed on both Linux and FreeBSD when system administrators used single drive RAID 0 arrays. It has also been observed with controllers from different vendors.
One might be inclined to try using single-drive RAID 0 arrays to try to use a RAID controller like an HBA, but this is not recommended for many of the reasons listed for other hardware RAID types. It is best to use an HBA instead of a RAID controller, for both performance and reliability.
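If you can get the disks passed through directly (a real HBA, or a controller that can be flashed to IT mode), you could then create a pool with real redundancy on the raw disks. Roughly like this, just as a sketch with made-up disk IDs and pool name, and raidz1 only as one possible layout:

    # use the stable /dev/disk/by-id paths so the pool survives device re-enumeration
    zpool create -o ashift=12 tank raidz1 \
        /dev/disk/by-id/ata-EXAMPLE_SERIAL_1 \
        /dev/disk/by-id/ata-EXAMPLE_SERIAL_2 \
        /dev/disk/by-id/ata-EXAMPLE_SERIAL_3
    zpool status -v tank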
 
