[SOLVED] dataset unmounted?

killmasta93

Renowned Member
Aug 13, 2017
Hi,
I just rebooted the server and I'm not sure what happened, but the dataset seems to have been unmounted. I tried the following:


Code:
root@prometheus4:~# zfs mount rpool/data/vm-109-disk-0

cannot open 'rpool/data/vm-109-disk-0': operation not applicable to datasets of this type
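
Side note for anyone searching later: this error is expected for zvols. A volume is a block device, not a filesystem, so zfs mount does not apply to it. A minimal sanity check, assuming the dataset still exists, is to look for its device node under /dev/zvol:

Code:
# zvols are exposed as block devices, not mountable filesystems;
# if the volume exists, a device node should be listed here:
root@prometheus4:~# ls -l /dev/zvol/rpool/data/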

Code:
root@prometheus4:~# zfs get all rpool/data
NAME        PROPERTY              VALUE                  SOURCE
rpool/data  type                  volume                 -
rpool/data  creation              Fri Jul 24 10:09 2020  -
rpool/data  used                  1.31T                  -
rpool/data  available             2.43T                  -
rpool/data  referenced            7.85G                  -
rpool/data  compressratio         1.20x                  -
rpool/data  reservation           none                   default
rpool/data  volsize               128G                   local
rpool/data  volblocksize          8K                     default
rpool/data  checksum              on                     default
rpool/data  compression           on                     inherited from rpool
rpool/data  readonly              off                    default
rpool/data  createtxg             9                      -
rpool/data  copies                1                      default
rpool/data  refreservation        none                   default
rpool/data  guid                  8037406478648268761    -
rpool/data  primarycache          all                    default
rpool/data  secondarycache        all                    default
rpool/data  usedbysnapshots       12.9M                  -
rpool/data  usedbydataset         7.85G                  -
rpool/data  usedbychildren        1.30T                  -
rpool/data  usedbyrefreservation  0B                     -
rpool/data  logbias               latency                default
rpool/data  dedup                 off                    default
rpool/data  mlslabel              none                   default
rpool/data  sync                  disabled               inherited from rpool
rpool/data  refcompressratio      1.24x                  -
rpool/data  written               0                      -
rpool/data  logicalused           1.57T                  -
rpool/data  logicalreferenced     9.71G                  -
rpool/data  volmode               default                default
rpool/data  snapshot_limit        none                   default
rpool/data  snapshot_count        none                   default
rpool/data  snapdev               hidden                 default
rpool/data  context               none                   default
rpool/data  fscontext             none                   default
rpool/data  defcontext            none                   default
rpool/data  rootcontext           none                   default
rpool/data  redundant_metadata    all                    default


Code:
root@prometheus4:~# zfs get all rpool/data/vm-109-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-109-disk-0  type                  volume                 -
rpool/data/vm-109-disk-0  creation              Sat Jul 25 11:22 2020  -
rpool/data/vm-109-disk-0  used                  7.90G                  -
rpool/data/vm-109-disk-0  available             2.43T                  -
rpool/data/vm-109-disk-0  referenced            7.85G                  -
rpool/data/vm-109-disk-0  compressratio         1.24x                  -
rpool/data/vm-109-disk-0  reservation           none                   default
rpool/data/vm-109-disk-0  volsize               128G                   local
rpool/data/vm-109-disk-0  volblocksize          8K                     default
rpool/data/vm-109-disk-0  checksum              on                     default
rpool/data/vm-109-disk-0  compression           on                     inherited from rpool
rpool/data/vm-109-disk-0  readonly              off                    default
rpool/data/vm-109-disk-0  createtxg             18095                  -
rpool/data/vm-109-disk-0  copies                1                      default
rpool/data/vm-109-disk-0  refreservation        none                   default
rpool/data/vm-109-disk-0  guid                  13967327577356581275   -
rpool/data/vm-109-disk-0  primarycache          all                    default
rpool/data/vm-109-disk-0  secondarycache        all                    default
rpool/data/vm-109-disk-0  usedbysnapshots       51.5M                  -
rpool/data/vm-109-disk-0  usedbydataset         7.85G                  -
rpool/data/vm-109-disk-0  usedbychildren        0B                     -
rpool/data/vm-109-disk-0  usedbyrefreservation  0B                     -
rpool/data/vm-109-disk-0  logbias               latency                default
rpool/data/vm-109-disk-0  dedup                 off                    default
rpool/data/vm-109-disk-0  mlslabel              none                   default
rpool/data/vm-109-disk-0  sync                  disabled               inherited from rpool
rpool/data/vm-109-disk-0  refcompressratio      1.24x                  -
rpool/data/vm-109-disk-0  written               27.5M                  -
rpool/data/vm-109-disk-0  logicalused           9.80G                  -
rpool/data/vm-109-disk-0  logicalreferenced     9.71G                  -
rpool/data/vm-109-disk-0  volmode               default                default
rpool/data/vm-109-disk-0  snapshot_limit        none                   default
rpool/data/vm-109-disk-0  snapshot_count        none                   default
rpool/data/vm-109-disk-0  snapdev               hidden                 default
rpool/data/vm-109-disk-0  context               none                   default
rpool/data/vm-109-disk-0  fscontext             none                   default
rpool/data/vm-109-disk-0  defcontext            none                   default
rpool/data/vm-109-disk-0  rootcontext           none                   default
rpool/data/vm-109-disk-0  redundant_metadata    all                    default




Code:
root@prometheus4:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.77T  2.43T   104K  /rpool
rpool/ROOT         479G  2.43T    96K  /rpool/ROOT
rpool/ROOT/pve-1   479G  2.43T   479G  /
rpool/data        1.31T  2.43T  7.85G  -


Code:
root@prometheus4:~# zfs get mounted,mountpoint rpool/data
NAME        PROPERTY    VALUE       SOURCE
rpool/data  mounted     -           -
rpool/data  mountpoint  -           -
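
Side note: mounted and mountpoint come back as "-" here because those properties only apply to filesystems. For a volume, the relevant properties are type and volmode, e.g.:

Code:
# mount-related properties don't exist on volumes; check these instead:
root@prometheus4:~# zfs get type,volmode rpool/data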

Thank you
 
Hi,

I don't really understand why you want to mount it.
Its type is volume, and normally volumes are attached to VMs as disks.
Are you sure that this dataset was mounted before?
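
A quick way to confirm is to check whether the VM configuration still references the disk (assuming VM ID 109, as the dataset name suggests):

Code:
# the zvol should appear as a disk line in the VM config:
root@prometheus4:~# qm config 109 | grep disk-0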
 
Thanks for the reply. I think it might have been pyznap that unmounted the volume, but I'm not sure what happened: when I run zfs list I don't see my VMs, and I'm not sure how to revert it to the way it was.
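
One diagnostic sketch: zfs list should show volumes by default, so asking for them explicitly tells you whether the child zvols still exist at all:

Code:
# recursively list every volume under rpool/data:
root@prometheus4:~# zfs list -t volume -r rpool/data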
 
Yes, correct, I reinstalled. I think what messed it up is that I tried using pyznap to send the snapshots over the network; I normally use that software to back up the snapshots to a USB disk. But I guess I'll stick with pve-zsync.
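
For reference, a pve-zsync job for a single VM looks roughly like the sketch below; the target host and dataset are placeholders, not values from this thread:

Code:
# replicate VM 109's disks to a remote pool over SSH (placeholder target):
root@prometheus4:~# pve-zsync create --source 109 --dest 192.168.1.50:rpool/backup --name vm109 --maxsnap 7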
 
