I've got a system set up with ZFS configured in RAID1 mode across two drives. All has been working fine, until today.
I added an SSD drive to the system to use for log and cache as described here: https://pve.proxmox.com/wiki/ZFS_on_Linux#_zfs_administration
The SSD was formatted with a GPT partition table: a 16GB partition (the system has 32GB of RAM, so the log should be half that) formatted as EXT4 for use as the log, and a 150GB partition, also EXT4, for the cache. I added the cache and log to the pool as follows:
Code:
zpool add -f rpool log /dev/sdc1 cache /dev/sdc2
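For what it's worth, the 16GB figure above came from halving the installed RAM. A quick sanity check along those lines on any Linux box (nothing ZFS-specific; it only assumes /proc/meminfo is present):

```shell
# Read total RAM in kB from /proc/meminfo and halve it,
# converting to whole GiB, to get a suggested SLOG size.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
slog_gb=$(( ram_kb / 2 / 1024 / 1024 ))
echo "Suggested SLOG size: ${slog_gb} GiB"
```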
Checking the status ("zpool status") showed the log and cache as online, as they should be. However, upon reboot the system failed to boot after showing the GRUB boot screen. It reported that it was running initramfs, then dropped me into a console saying that the /dev/sdc1 device was not available (which it should be - it is a SATA drive that should be available at boot unless I've missed a configuration step somewhere along the way).
I've now removed the cache and log, using the following:
Code:
zpool remove rpool /dev/sdc1 /dev/sdc2
"zpool status" now shows a simple pool with only the two mirrored drives, but during boot the systemd "zfs-mount.service" service fails, with the following error (screenshot included below):
Code:
cannot mount '/rpool': directory is not empty
cannot mount '/rpool/data': directory is not empty
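My assumption is that this error means something was left behind under those mountpoints while the pool was not mounted. A rough sketch of the check I had in mind (directory names taken from the error above):

```shell
# List any leftover entries under a ZFS mountpoint; a non-empty
# listing would explain the "directory is not empty" mount failure.
check_mountpoint() {
  dir="$1"
  echo "== $dir =="
  ls -lA "$dir" 2>/dev/null || echo "(not present on this machine)"
}

check_mountpoint /rpool
check_mountpoint /rpool/data
```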
Though this service fails, it looks like everything else is working - my VMs have booted and seem to be working fine, and "zpool status" looks fine.
Is there any way to fix this service startup error? Also, any ideas what I missed while configuring the cache/log for the pool?
Thanks,
Euan