Upgrade PVE 5.1-41 to 5.2-1 Failed to start Mount ZFS

Borut

During the upgrade I found:

zfs-import-scan.service is a disabled or a static unit, not starting it.
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
zfs-mount.service couldn't start.


root@starspot:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2018-05-25 08:33:17 CEST; 4min 47s ago
     Docs: man:zfs(8)
 Main PID: 125987 (code=exited, status=1/FAILURE)
      CPU: 13ms

May 25 08:33:17 starspot systemd[1]: Starting Mount ZFS filesystems...
May 25 08:33:17 starspot zfs[125987]: cannot mount '/rpool': directory is not empty
May 25 08:33:17 starspot systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
May 25 08:33:17 starspot systemd[1]: Failed to start Mount ZFS filesystems.
May 25 08:33:17 starspot systemd[1]: zfs-mount.service: Unit entered failed state.
May 25 08:33:17 starspot systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
root@starspot:~#

The same zfs-mount.service status after reboot...

Everything looks good; however, this makes me nervous for production use.
 
May 25 08:33:17 starspot systemd[1]: Starting Mount ZFS filesystems...
May 25 08:33:17 starspot zfs[125987]: cannot mount '/rpool': directory is not empty
...

The /rpool directory must be empty; you should look there and move any content out of the way, for example as sketched below.
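A minimal sketch of what that could look like (the leftover names are just examples; what is actually sitting in /rpool on your system may differ, and nothing belonging to a currently mounted dataset should be moved):

Code:
# which datasets should mount where, and whether they are mounted
zfs list -r -o name,mountpoint,mounted rpool

# what is actually in the plain /rpool directory (including hidden files)
ls -la /rpool

# move the stale content aside, then retry the mount
mkdir /root/rpool-leftovers
mv /rpool/* /root/rpool-leftovers/
systemctl restart zfs-mount.service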

I hope this helps.
 
I am wondering whether nobody else installed PVE on root ZFS. I didn't make any changes to PVE... I just installed and upgraded it. Maybe the installation process created the /rpool mount point, and later ZFS tried to mount over it again.

I don't see how I can empty /rpool...
 
Not to be of much help, but since you asked: I did install a bunch of PM 5.0 servers in a cluster, with ZFS also for root, and I upgraded them a few days ago to 5.2-1 without any downtime for the VMs, which I live-migrated via the CLI. Everything works.

So this is not a global problem.

Code:
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2018-05-25 12:36:19 CEST; 4 days ago
     Docs: man:zfs(8)
  Process: 1115 ExecStart=/sbin/zfs mount -a (code=exited, status=0/SUCCESS)
 Main PID: 1115 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   Memory: 0B
      CPU: 0
   CGroup: /system.slice/zfs-mount.service

May 25 12:36:19 p27 systemd[1]: Starting Mount ZFS filesystems...
May 25 12:36:19 p27 systemd[1]: Started Mount ZFS filesystems.
root@p27:~#
 
Thank you! I set the mountpoint to none for rpool and rebooted:
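Roughly these commands (the zfs set also shows up at the end of the pool history further down):

Code:
zfs set mountpoint=none rpool
reboot

The service status after the reboot: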

root@starspot:~# systemctl status zfs-mount.service

● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2018-05-30 07:38:56 CEST; 35s ago
     Docs: man:zfs(8)
  Process: 3938 ExecStart=/sbin/zfs mount -a (code=exited, status=0/SUCCESS)
 Main PID: 3938 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 37478)
   Memory: 0B
      CPU: 0
   CGroup: /system.slice/zfs-mount.service

May 30 07:38:56 starspot systemd[1]: Starting Mount ZFS filesystems...
May 30 07:38:56 starspot systemd[1]: Started Mount ZFS filesystems.
root@starspot:~#

I couldn't find what went wrong, at least not from the zpool history:

root@starspot:~# zpool history rpool
History for 'rpool':
2018-05-09.18:01:18 zpool create -f -o cachefile=none -o ashift=12 rpool mirror /dev/sda2 /dev/sdb2
2018-05-09.18:01:18 zfs create rpool/ROOT
2018-05-09.18:01:18 zfs create rpool/data
2018-05-09.18:01:18 zfs create rpool/ROOT/pve-1
2018-05-09.18:01:18 zfs set atime=off rpool
2018-05-09.18:01:18 zfs set compression=on rpool
2018-05-09.18:01:18 zfs create -V 8388608K -b 4K -o com.sun:auto-snapshot=false -o copies=1 -o sync=always -o compression=zle -o logbias=throughput -o primarycache=metadata -o secondarycache=none rpool/swap
2018-05-09.18:01:18 zfs set sync=disabled rpool
2018-05-09.18:02:29 zfs set sync=standard rpool
2018-05-09.18:02:29 zfs set mountpoint=/ rpool/ROOT/pve-1
2018-05-09.18:02:29 zpool set bootfs=rpool/ROOT/pve-1 rpool
2018-05-09.18:02:29 zpool export rpool
2018-05-09.18:06:10 zpool import -N rpool
2018-05-14.07:37:20 zpool import -N rpool
2018-05-14.09:26:05 zfs set mountpoint=/data rpool/data
2018-05-14.10:14:11 zfs create -s -V 100663296k rpool/data/vm-101-disk-1
2018-05-14.10:23:40 zfs destroy -r rpool/data/vm-101-disk-1
2018-05-14.10:35:52 zpool import -N rpool
2018-05-14.11:03:37 zpool import -N rpool
2018-05-14.11:08:43 zpool import -N rpool
2018-05-14.13:56:31 zpool import -N rpool
2018-05-14.16:16:39 zpool import -N rpool
2018-05-15.11:08:03 zpool import -N rpool
2018-05-15.15:17:00 zpool import -N rpool
2018-05-16.08:39:34 zpool import -N rpool
2018-05-16.16:28:04 zpool import -N rpool
2018-05-17.11:11:44 zpool import -N rpool
2018-05-17.13:55:51 zpool import -N rpool
2018-05-18.07:54:14 zpool import -N rpool
2018-05-23.14:43:27 zfs create -o acltype=posixacl -o xattr=sa -o refquota=100663296k rpool/data/subvol-103-disk-1
2018-05-23.14:43:32 zfs destroy -r rpool/data/subvol-103-disk-1
2018-05-23.14:55:53 zfs set mountpoint=/rpool/data rpool/data
2018-05-23.14:57:35 zfs create -o acltype=posixacl -o xattr=sa -o refquota=100663296k rpool/data/subvol-103-disk-1
2018-05-23.14:57:45 zfs destroy -r rpool/data/subvol-103-disk-1
2018-05-24.11:40:49 zfs create -o acltype=posixacl -o xattr=sa -o refquota=67108864k rpool/data/subvol-103-disk-1
2018-05-24.12:25:02 zfs destroy -r rpool/data/subvol-103-disk-1
2018-05-24.12:44:39 zfs create -o acltype=posixacl -o xattr=sa -o refquota=67108864k rpool/data/subvol-103-disk-1
2018-05-24.15:55:55 zfs destroy -r rpool/data/subvol-103-disk-1
2018-05-24.15:59:22 zfs create -o acltype=posixacl -o xattr=sa -o refquota=33554432k rpool/data/subvol-103-disk-1
2018-05-24.16:11:09 zfs destroy -r rpool/data/subvol-103-disk-1
2018-05-25.08:49:40 zpool import -N rpool
2018-05-25.09:03:04 zfs create -o acltype=posixacl -o xattr=sa -o refquota=33554432k rpool/data/subvol-103-disk-1
2018-05-25.09:08:15 zfs create -o acltype=posixacl -o xattr=sa -o refquota=33554432k rpool/data/subvol-104-disk-1
2018-05-25.12:07:06 zfs destroy -r rpool/data/subvol-104-disk-1
2018-05-25.12:07:21 zfs destroy -r rpool/data/subvol-103-disk-1
2018-05-29.15:08:40 zpool import -N rpool
2018-05-30.07:33:11 zpool import -N rpool
2018-05-30.07:34:17 zfs set mountpoint=none rpool
2018-05-30.07:38:52 zpool import -N rpool

root@starspot:~#

There was no mountpoint=/rpool set!

Again, thank you for the suggestions and help.
Best regards,
Borut
 
root@starspot:~# zpool history rpool
History for 'rpool':
2018-05-09.18:01:18 zpool create -f -o cachefile=none -o ashift=12 rpool mirror /dev/sda2 /dev/sdb2
.......
There was no mountpoint=/rpool set!

This is what did it automatically: creating the pool gives the root dataset the default mountpoint /rpool, so it never shows up as an explicit "zfs set" in the history.
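You can see where the value comes from with zfs get: before the fix it would have shown /rpool with SOURCE "default" (never explicitly set, hence nothing in zpool history), and after "zfs set mountpoint=none rpool" it shows "local". A sketch:

Code:
# the SOURCE column shows whether the mountpoint was set explicitly
# ('local') or is just the built-in default ('default')
zfs get mountpoint rpool
zfs get mountpoint rpool/ROOT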
 
Creating rpool is the default installer action, so not running "zfs set mountpoint=none rpool" afterwards is a bug, with this result:

May 25 08:33:17 starspot zfs[125987]: cannot mount '/rpool': directory is not empty
May 25 08:33:17 starspot systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
 
It is not a bug. When you create a pool "MY_POOL", the mount point of its root dataset is set to /MY_POOL by default. You can set the mountpoint when creating the pool with -m:

-m mountpoint
Sets the mount point for the root dataset. The default mount point is /pool or altroot/pool if altroot is specified. The mount point must be an absolute path, legacy, or none. For more information on dataset mount points, see zfs(8).

ZFS will only mount a filesystem onto an empty directory.
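For illustration (hypothetical pool name and devices, not what the installer actually runs):

Code:
# default: the pool's root dataset gets mountpoint /tank
zpool create tank mirror /dev/sdc /dev/sdd

# same pool, but the root dataset is never mounted anywhere
zpool create -m none tank mirror /dev/sdc /dev/sdd

# if /tank already exists and is not empty, mounting fails with
#   cannot mount '/tank': directory is not empty
zfs mount -a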

The Proxmox install script is missing:

# zfs set mountpoint=none rpool/ROOT
# zfs set mountpoint=none rpool
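
On an affected system, applying those two settings and checking the result might look like this (a sketch, not taken from the machine above):

Code:
zfs set mountpoint=none rpool
zfs set mountpoint=none rpool/ROOT

# verify: only the datasets that should be mounted keep a mountpoint
zfs get -r -o name,value,source mountpoint rpool

systemctl restart zfs-mount.service
systemctl status zfs-mount.service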
 
