I have a striped mirror pool consisting of 4 SATA disks that is not mounted during boot. Syslog gives the error message in the subject above. The cause appears to be that, for some reason, ZFS takes a long time to read/import/... the pools: once the system has booted and I open a terminal, I can mount the dataset manually with
Code:
zfs mount pool_sata/netstore
The problem started after I had to make changes to this pool: because of issues with one kind of disk (Samsung 860 Pro SSDs producing strange data integrity errors), I had to replace one mirror with another consisting of disks from a different vendor. For this purpose, I removed the vdev with the Samsung drives and added a new mirror with the other vendor's drives. Since then, the pool has not produced any data integrity errors, but I have to mount it manually (or run a script after boot to mount it).
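For reference, the swap was done with ZFS's top-level vdev removal; the commands were roughly along the following lines (device paths illustrative, not copied from my shell history):
Code:
# evacuate and drop the old Samsung mirror (it was vdev 1 of the pool)
zpool remove pool_sata mirror-1
# after "zpool status" showed the removal as completed, add the new mirror
zpool add pool_sata mirror /dev/disk/by-id/scsi-SATA_KINGSTON_SEDC500_50026B7683976F0C /dev/disk/by-id/ata-KINGSTON_SEDC500M1920G_50026B7683976E32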
Pool status:
Code:
  pool: pool_sata
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:10:51 with 0 errors on Sun Dec 13 00:34:53 2020
remove: Removal of vdev 1 copied 129G in 0h4m, completed on Thu Dec 10 17:43:00 2020
        1.12M memory used for removed device mappings
config:

        NAME                                             STATE     READ WRITE CKSUM
        pool_sata                                        ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-KINGSTON_SEDC500M1920G_50026B76830C8417  ONLINE       0     0     0
            scsi-SATA_KINGSTON_SEDC500_50026B768355CADF  ONLINE       0     0     0
          mirror-6                                       ONLINE       0     0     0
            scsi-SATA_KINGSTON_SEDC500_50026B7683976F0C  ONLINE       0     0     0
            ata-KINGSTON_SEDC500M1920G_50026B7683976E32  ONLINE       0     0     0

errors: No known data errors
No errors in dmesg:
Code:
# dmesg | grep ZFS
[ 0.000000] Command line: initrd=\EFI\proxmox\5.4.78-2-pve\initrd.img-5.4.78-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on mitigations=off
[ 0.712114] Kernel command line: initrd=\EFI\proxmox\5.4.78-2-pve\initrd.img-5.4.78-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on mitigations=off
[ 13.598597] ZFS: Loaded module v0.8.5-pve1, ZFS pool version 5000, ZFS filesystem version 5
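So the module itself only gets loaded about 13.6 s into boot. If it would help with diagnosing, I could also time the ZFS boot units; I assume something like this would show where the time goes:
Code:
systemd-analyze blame | grep -i zfs
systemd-analyze critical-chain zfs-mount.service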
No helpful error messages in syslog:
Code:
Dec 27 10:23:54 zeus kernel: [ 0.000000] Command line: initrd=\EFI\proxmox\5.4.78-2-pve\initrd.img-5.4.78-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on mitigations=off
Dec 27 10:23:54 zeus kernel: [ 0.712114] Kernel command line: initrd=\EFI\proxmox\5.4.78-2-pve\initrd.img-5.4.78-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs amd_iommu=on mitigations=off
Dec 27 10:24:09 zeus pvestatd[4848]: zfs error: cannot open 'pool_sata': no such pool#012
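The pvestatd message at 10:24:09 only confirms that the pool was not imported yet at that point. If it helps, I can post the journal of the ZFS import/mount units as well; I'd collect it with something like this (assuming the standard ZoL unit names that Proxmox ships):
Code:
journalctl -b -u zfs-import-cache.service -u zfs-import-scan.service -u zfs-mount.service
systemctl status zfs-import-cache.service zfs-mount.service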
The reason for this appears to be that ZFS is slow to import/mount datasets during boot, because immediately after boot, a script running "zfs list" shows:
Code:
NAME               USED  AVAIL  REFER  MOUNTPOINT
pool_opt           209G   465G   208K  /mnt/zfs_opt
pool_opt/VMs       168G   465G   163G  /mnt/zfs_opt/VMs
pool_opt/mail     41.2G   465G  41.2G  /mnt/zfs_opt/mail
rpool             7.94G   422G   104K  /rpool
rpool/ROOT        7.94G   422G    96K  /rpool/ROOT
rpool/ROOT/pve-1  7.94G   422G  7.94G  /
rpool/data          96K   422G    96K  /rpool/data
About 10 seconds later, zfs list shows all datasets:
Code:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
pool_opt                   209G   465G   208K  /mnt/zfs_opt
pool_opt/VMs               168G   465G   163G  /mnt/zfs_opt/VMs
pool_opt/mail             41.2G   465G  41.2G  /mnt/zfs_opt/mail
pool_sata                  244G  3.12T   192K  /mnt/zfs_sata
pool_sata/netstore         244G  3.12T   244G  /mnt/zfs_sata/netstore
pool_storage              14.2T  19.7T  18.9G  /mnt/zfs_storage
pool_storage/backup       1.48T  19.7T  1.48T  /mnt/zfs_storage/backup
pool_storage/jail         14.6G  19.7T  14.6G  /mnt/zfs_storage/jail
pool_storage/media        12.5T  19.7T  12.5T  /mnt/zfs_storage/media
pool_storage/server_data  46.5G  19.7T  46.5G  /mnt/zfs_storage/server64_data
rpool                     7.94G   422G   104K  /rpool
rpool/ROOT                7.94G   422G    96K  /rpool/ROOT
rpool/ROOT/pve-1          7.94G   422G  7.94G  /
rpool/data                  96K   422G    96K  /rpool/data
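(For anyone who wants to reproduce the timing, a script along these lines shows when the missing pool appears; this is only a sketch, and the log path is just an example:)
Code:
#!/bin/sh
# log "zfs list" once per second for the first minute after boot,
# with timestamps, to see when the missing pool shows up
for i in $(seq 1 60); do
    date '+%T' >> /root/zfs-boot.log
    zfs list >> /root/zfs-boot.log 2>&1
    sleep 1
done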
Other than running a script after boot to mount this pool manually, how can I fix this? And how can I find out what ZFS is doing for so long after boot?
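In case the cache file plays a role here (the pool layout did change), these are the checks I could run and post the output of (a sketch, assuming the default ZoL cachefile location /etc/zfs/zpool.cache):
Code:
# is the cachefile property still at its default?
zpool get cachefile pool_sata
# does the cache file the boot units import from still know the pool?
zdb -C -U /etc/zfs/zpool.cache | grep -w name
# timestamped (internal) history of the pool, including imports
zpool history -il pool_sata | tail -n 20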
Thanks!