ZFS pools not mounting correctly

Murphi

New Member
Dec 14, 2019
Hello,

after changing the physical case of my Proxmox VE server (due to upgrade reasons), my ZFS pools aren't mounted properly. I have two pools, rpool and data, as seen here:
Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data   3.62T   211G  3.42T        -         -     0%     5%  1.00x    ONLINE  -
rpool   111G  1.02G   110G        -         -     2%     0%  1.00x    ONLINE  -
However the "data" pool does not get mounted to its previos mount points, which was under /data/isos and /data/disks. Those paths are now mounted with "rpool". The File /etc/pve/storage.cfg can be seen here:
Code:
dir: local
  path /var/lib/vz
  content snippets,backup,vztmpl
  maxfiles 1
  shared 0

dir: isos
  path /data/isos
  content iso
  shared 0

dir: disks
  path /data/disks
  content images,rootdir
  shared 0

I already tried the solution from "https://forum.proxmox.com/threads/update-broke-lxc.59776/#post-277303" without success.

Help is appreciated, thank you in advance.

Kind regards
 
Hi,

your links take a long time to load and are full of advertisements/spam. Please rather use the forum's
[code]content here[/code]
or similar tags (also available as buttons in the WYSIWYG editor).

Anyways, why do you have the ZFS pool added as directory storage and not as ZFS Pool storage?

Check the currently set mountpoint with:
Bash:
zfs get mountpoint data
zfs get mountpoint data/disks

(you may want to adapt "data" or "data/disks" to the respective ZFS dataset you're using)
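For reference, a pool can also be added as a ZFS-type storage from the CLI - roughly like this (just a sketch; 'data-zfs' is an example storage ID, not something already on your system):
Bash:
pvesm add zfspool data-zfs --pool data --content images,rootdir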
 
Thanks for your swift reply. I edited my previous post with the code tags. The mountpoint information states:
Code:
zfs get mountpoint data
NAME  PROPERTY    VALUE       SOURCE
data  mountpoint  /data       default

zfs get mountpoint data/disks
cannot open 'data/disks': dataset does not exist

I created the ZFS pools as well as their storage via the web interface, so I unfortunately don't know how it is supposed to look.
 
I edited my previous post with the code tags.
many thanks!

I created the ZFS pools as well as their storage via the web interface.

That's weird, that should normally add an entry looking like:
Code:
zfspool: test
        pool test
        content rootdir,images
        mountpoint /test
        nodes dev6

(just re-tested to be sure).

Are you sure that neither you nor someone else edited that configuration file?

It's a bit hard to give clear directions for fixing this without knowing what happened.

Listing all ZFS datasets, volumes and all mounts could at least add a little bit of information:
Bash:
zfs list
findmnt
 
no, nobody else changed the file. As stated in my first post, I switched the case of the server (unplug everything, plug everything back in to the same connectors, etc.). I had some trouble with the SATA connectors of the rpool disks though. Maybe the server booted once without some disks, but I doubt that could have caused this.

zfs list
Code:
NAME               USED  AVAIL     REFER  MOUNTPOINT
data               211G  3.31T      211G  /data
rpool             1.02G   107G      104K  /rpool
rpool/ROOT        1.01G   107G       96K  /rpool/ROOT
rpool/ROOT/pve-1  1.01G   107G     1.01G  /
rpool/data          96K   107G       96K  /rpool/data
findmnt
Code:
TARGET                                SOURCE           FSTYPE     OPTIONS
/                                     rpool/ROOT/pve-1 zfs        rw,relatime,xattr,noacl
├─/sys                                sysfs            sysfs      rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security              securityfs       securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup                    tmpfs            tmpfs      ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/unified          cgroup2          cgroup2    rw,nosuid,nodev,noexec,relatime
│ │ ├─/sys/fs/cgroup/systemd          cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
│ │ ├─/sys/fs/cgroup/rdma             cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,rdma
│ │ ├─/sys/fs/cgroup/memory           cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,memory
│ │ ├─/sys/fs/cgroup/perf_event       cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,perf_event
│ │ ├─/sys/fs/cgroup/devices          cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,devices
│ │ ├─/sys/fs/cgroup/hugetlb          cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/pids             cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,pids
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/blkio            cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/cpu,cpuacct      cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/freezer          cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,freezer
│ │ └─/sys/fs/cgroup/cpuset           cgroup           cgroup     rw,nosuid,nodev,noexec,relatime,cpuset
│ ├─/sys/fs/pstore                    pstore           pstore     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/firmware/efi/efivars         efivarfs         efivarfs   rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf                       bpf              bpf        rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug                 debugfs          debugfs    rw,relatime
│ ├─/sys/fs/fuse/connections          fusectl          fusectl    rw,relatime
│ └─/sys/kernel/config                configfs         configfs   rw,relatime
├─/proc                               proc             proc       rw,relatime
│ └─/proc/sys/fs/binfmt_misc          systemd-1        autofs     rw,relatime,fd=42,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=32889
├─/dev                                udev             devtmpfs   rw,nosuid,relatime,size=132012820k,nr_inodes=33003205,mode=755
│ ├─/dev/pts                          devpts           devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm                          tmpfs            tmpfs      rw,nosuid,nodev
│ ├─/dev/mqueue                       mqueue           mqueue     rw,relatime
│ └─/dev/hugepages                    hugetlbfs        hugetlbfs  rw,relatime,pagesize=2M
├─/run                                tmpfs            tmpfs      rw,nosuid,noexec,relatime,size=26414004k,mode=755
│ ├─/run/lock                         tmpfs            tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/rpc_pipefs                   sunrpc           rpc_pipefs rw,relatime
│ └─/run/user/0                       tmpfs            tmpfs      rw,nosuid,nodev,relatime,size=26414000k,mode=700
├─/rpool                              rpool            zfs        rw,noatime,xattr,noacl
│ ├─/rpool/ROOT                       rpool/ROOT       zfs        rw,noatime,xattr,noacl
│ └─/rpool/data                       rpool/data       zfs        rw,noatime,xattr,noacl
├─/var/lib/lxcfs                      lxcfs            fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
└─/etc/pve                            /dev/fuse        fuse       rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other

could I write the "right" configuration to storage.cfg myself?

Kind regards
 
if you add a ZFS dataset as directory storage you should set the 'is_mountpoint' property to 1:
Code:
dir: isos
  path /data/isos
  content iso
  shared 0
  is_mountpoint 1

additionally you would need to make sure that the mountpoint of the zpool is empty (PVE has created the directories dump, images, private, snippets, template there), otherwise it cannot be mounted there the next time:
Code:
zfs get mounted data #make sure that data is not mounted currently
zpool export data 
find /data/ -type d -delete #deletes all directories - make sure there is no relevant data below /data!!!
zpool import data
That should work.
As usual when deleting directories - make sure you have a working backup!
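Afterwards, a quick check that the pool really got mounted where expected (adapt the pool name/path if yours differ):
Bash:
zfs get mounted,mountpoint data
findmnt /data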

I hope this helps!
 
how can the "data"-pool be mounted on "/data" when in the Webinterface the capacity from rpool gets displayed?
 

Attachments

  • zfs.png
  • zfs2.png

I did your steps and got the following error message:
Code:
zpool import data
cannot import 'data': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name

If I run zpool list (after zpool export data) I get:
Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data   3.62T   211G  3.42T        -         -     0%     5%  1.00x    ONLINE  -
rpool   111G  1.02G   110G        -         -     2%     0%  1.00x    ONLINE  -
 
I did your steps and got the following error message:
hmm - most likely the pool got imported automatically by pvestatd

post the output of:
Code:
zfs get all data
find /data
 
zfs get all data:
Code:
NAME  PROPERTY              VALUE                  SOURCE
data  type                  filesystem             -
data  creation              Tue Jan 14 21:17 2020  -
data  used                  211G                   -
data  available             3.31T                  -
data  referenced            211G                   -
data  compressratio         1.07x                  -
data  mounted               no                     -
data  quota                 none                   default
data  reservation           none                   default
data  recordsize            128K                   default
data  mountpoint            /data                  default
data  sharenfs              off                    default
data  checksum              on                     default
data  compression           on                     local
data  atime                 on                     default
data  devices               on                     default
data  exec                  on                     default
data  setuid                on                     default
data  readonly              off                    default
data  zoned                 off                    default
data  snapdir               hidden                 default
data  aclinherit            restricted             default
data  createtxg             1                      -
data  canmount              on                     default
data  xattr                 on                     default
data  copies                1                      default
data  version               5                      -
data  utf8only              off                    -
data  normalization         none                   -
data  casesensitivity       sensitive              -
data  vscan                 off                    default
data  nbmand                off                    default
data  sharesmb              off                    default
data  refquota              none                   default
data  refreservation        none                   default
data  guid                  15467859141635062467   -
data  primarycache          all                    default
data  secondarycache        all                    default
data  usedbysnapshots       0B                     -
data  usedbydataset         211G                   -
data  usedbychildren        16.0M                  -
data  usedbyrefreservation  0B                     -
data  logbias               latency                default
data  objsetid              54                     -
data  dedup                 off                    default
data  mlslabel              none                   default
data  sync                  standard               default
data  dnodesize             legacy                 default
data  refcompressratio      1.07x                  -
data  written               211G                   -
data  logicalused           226G                   -
data  logicalreferenced     226G                   -
data  volmode               default                default
data  filesystem_limit      none                   default
data  snapshot_limit        none                   default
data  filesystem_count      none                   default
data  snapshot_count        none                   default
data  snapdev               hidden                 default
data  acltype               off                    default
data  context               none                   default
data  fscontext             none                   default
data  defcontext            none                   default
data  rootcontext           none                   default
data  relatime              off                    default
data  redundant_metadata    all                    default
data  overlay               off                    default
data  encryption            off                    default
data  keylocation           none                   default
data  keyformat             none                   default
data  pbkdf2iters           0                      default
data  special_small_blocks  0                      default
find /data:
Code:
/data
/data/disks
/data/disks/dump
/data/disks/images
/data/disks/private
/data/isos
/data/isos/template
/data/isos/template/iso

Do you know how I could read the data from the "data" pool independently?
 
data mounted no -
the dataset did not get mounted - 2 things:
* stop pvestatd
* add mkdir 0 to the storage definition

* then export the pool, clear the directories, import the pool again, start pvestatd

I hope this helps!
 

could you give me the commands for steps 2 and 3?
 
* make sure you have a backup!
Code:
systemctl stop pvestatd
* change the storage definition:
Code:
dir: isos
  path /data/isos
  content iso
  shared 0
  is_mountpoint 1
  mkdir 0

other commands as above:
Code:
zpool export data 
find /data/ -type d -delete 
zpool import data
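and afterwards start pvestatd again (the last step from the list above):
Bash:
systemctl start pvestatd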
 
thank you. It definitely looks better now. If I do an ls on "/data" it shows me my drives and ISOs. However, the web interface tells me: "unable to activate storage 'isos' - directory is expected to be a mount point but is not mounted: '/data/isos' (500)" and consequently, my VMs won't boot.
 
ahh - sorry, my mistake - I assumed /data/isos was a dataset by itself (created with `zfs create data/isos`), but it seems it's a directory...

if you want to keep it as a regular directory - remove the is_mountpoint definition from storage.cfg (and keep the mkdir 0)
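i.e. the entry would then look roughly like this (a sketch based on the 'isos' entry above):
Code:
dir: isos
  path /data/isos
  content iso
  shared 0
  mkdir 0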

two possible workarounds:
* create the datasets
Code:
mv /data/isos /data/isos.bck #make backup
zfs create data/isos
cp -a /data/isos.bck/. /data/isos/

* recreate the cache-file for all pools in your system (that should make sure that they get imported early enough during boot):
Code:
zpool set cachefile=/etc/zfs/zpool.cache data
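# optional sanity check afterwards - verify that the cachefile property got set (pool 'data' as above)
zpool get cachefile data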

I hope this helps
 
since I could read the directories from the CLI, I figured I can just add the directory storage again via the web interface. Now everything works again. Do I still need to change any of the config files?
 
Do I still need to change any of the config files?
depends on how you added it - probably the best test would be to reboot the system one more time and see if everything stays stable and reachable.
 
depends on how you added it - probably the best test would be to reboot the system one more time and see if everything stays stable and reachable.
That was the first thing I did, and everything booted back up correctly. I added the directories via the web interface again.
 
