ZFS: Empty dataset after upgrade proxmox 5x to 6x

Jan 15, 2019
Madinina
Hello everybody

I hate this kind of experience! Losing my data, especially with ZFS!

I dedicated a RAIDZ2 pool to my Windows VMs and my ISOs.

After the upgrade from 5.x to 6.x, which completed without any errors, the zvols were still there (phew), but my ISOs in their dedicated dataset were not.
The disk space is still occupied, but the ISOs are not visible!

I have based my backup and server systems on this file system, which I find excellent in theory and in practice too, but if I lose my data whenever I do an upgrade, that is very, very serious!


root@srvzfs-lenovo-sr650:~# pveversion
pve-manager/6.0-7/28984024 (running kernel: 5.0.21-1-pve)
root@srvzfs-lenovo-sr650:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-7 (running version: 6.0-7/28984024)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-4.15: 5.4-8
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.11-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-8
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2


My zfs list:

root@srvzfs-lenovo-sr650:~# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       1.69G  26.6G   104K  /rpool
rpool/ROOT                  1.68G  26.6G    96K  /rpool/ROOT
rpool/ROOT/pve-1            1.68G  26.6G  1.68G  /
rpool/data                    96K  26.6G    96K  /rpool/data
zmarina                      343G  2.78T   170K  /zmarina
zmarina/data                 333G  2.78T   170K  /zmarina/data
zmarina/data/vm-100-disk-0   245G  2.78T   245G  -
zmarina/data/vm-101-disk-0  47.8G  2.78T  47.8G  -
zmarina/data/vm-102-disk-0  40.1G  2.78T  40.1G  -
zmarina/isos                9.16G  2.78T   185K  /zmarina/isos
zmarina/isos/template       9.16G  2.78T   170K  /zmarina/isos/template
zmarina/isos/template/iso   9.16G  2.78T  9.16G  /zmarina/isos/template/iso


I have 9.16G of ISOs in the dataset zmarina/isos/template/iso that are no longer visible:
root@srvzfs-lenovo-sr650:~# ls /zmarina/isos/template/iso
root@srvzfs-lenovo-sr650:~#


How do I fix this problem, and is ZFS really reliable?

Thank you for your suggestions

Steeve
 
Check whether the datasets are actually mounted; they probably failed to mount. If you have /zmarina/isos configured as a directory storage, make sure to set "is_mountpoint" and "mkdir" accordingly, otherwise PVE will create the directories itself even if ZFS has not yet mounted the datasets.
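
For example, to see the mount state of the datasets from the zfs list output above (adapt the names to your setup):

zfs get -r mounted,mountpoint zmarina/isos
zfs mount | grep zmarina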
 
Hi Fabian

Thank you very much!! For the moment I have manually mounted the dataset and its children with the zfs mount -O zmarina/isos command, and I have found my data again. I configured the storage in the storage.cfg file with the options:
is_mountpoint yes
mkdir 0
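
For anyone hitting the same issue, the relevant storage.cfg entry now looks roughly like this (the storage ID "isos" and the content line are just examples; the important parts are is_mountpoint and mkdir):

dir: isos
        path /zmarina/isos
        content iso
        is_mountpoint yes
        mkdir 0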


This relates to the following thread:
(Bug?) Proxmox VE 6.0-4 - Backup Storage on ZFS

I will reboot the server tonight to validate the storage.cfg changes.

That was quite a scare...
Thanks a lot
 
You also need to clean up the empty directories that get in the way of mounting before it will work properly again.
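
Roughly something like this, as a sketch (with the zmarina/isos datasets unmounted first; rmdir only removes directories that are really empty):

find /zmarina/isos -depth -type d -empty -exec rmdir {} \;   # remove the leftover empty directories
zfs mount -a                                                 # mount all datasets again
zfs get -r mounted zmarina/isos                              # verify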