Missing VM images after reboot

trikker

New Member
Mar 10, 2020
Hello.

I've got Proxmox 6.1.7 (later upgraded to 6.1.8 in the midst of this mess) running on an HDD ZFS mirror, with a separate NVMe mirror to store VMs on.

My VM images are qcow2 files stored in a directory-based storage I made through the Proxmox UI. I first created a directory myself on the NVMe pool and then pointed to that directory when creating the storage in the Proxmox UI. All normal as far as I know, and everything seemed to be working as expected. That is, until I restarted.

I also created a ZFS-based storage on the NVMe pool (zvol). However, I haven't used that option when creating my VMs, only qcow2 in my directory-based storage. The VMs were working as they should until I restarted the host. When the host came back online my VMs didn't start up, and when I went looking for the VM images in /nvmepool/pve-data/images, the folder was empty. All my qcow2 images were gone.

What's odd is that the ZFS storage I set up on the NVMe pool is showing "Usage 4.50% (20.77 GiB of 461.12 GiB)" despite me never having stored anything on it. I've only used my directory-based storage, which is showing "Usage 0.25% (8.92 GiB of 3.51 TiB)". That is absolutely incorrect: it's showing the size of my rpool running on HDDs, which is also not the pool I stored my qcow2 images on.


Any help would be appreciated.
 
hi,

it's possible something was misconfigured, and you stored into the rpool instead.

maybe post:
-> /etc/pve/storage.cfg
-> pveversion -v
-> /etc/fstab
-> lsblk
-> df -h

going through this output should help find it.
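for convenience, the requested info can be gathered in one go (standard commands, run as root on the PVE host):

```shell
# dump the storage config, package versions, fstab, and disk layout
cat /etc/pve/storage.cfg
pveversion -v
cat /etc/fstab
lsblk
df -h
```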
 
/etc/pve/storage.cfg
dir: local
path /var/lib/vz
content rootdir,backup,iso,vztmpl,images
maxfiles 10
shared 0

zfspool: local-zfs
pool rpool/data
content rootdir,images
sparse 1

dir: nvme-data
path /nvmepool/pve-data
content images,rootdir
shared 0

zfspool: nvme-zfs
pool nvmepool
content rootdir,images
mountpoint /nvmepool
sparse 0


pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-22
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

/etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.7T 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 3.7T 0 part
sdb 8:16 0 3.7T 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 512M 0 part
└─sdb3 8:19 0 3.7T 0 part
nvme0n1 259:0 0 477G 0 disk
├─nvme0n1p1 259:2 0 477G 0 part
└─nvme0n1p9 259:3 0 8M 0 part
nvme1n1 259:1 0 477G 0 disk
├─nvme1n1p1 259:4 0 477G 0 part
└─nvme1n1p9 259:5 0 8M 0 part

df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 8.8M 6.3G 1% /run
rpool/ROOT/pve-1 3.6T 9.0G 3.6T 1% /
tmpfs 32G 43M 32G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
rpool 3.6T 128K 3.6T 1% /rpool
rpool/ROOT 3.6T 128K 3.6T 1% /rpool/ROOT
rpool/data 3.6T 128K 3.6T 1% /rpool/data
/dev/fuse 30M 20K 30M 1% /etc/pve
tmpfs 6.3G 0 6.3G 0% /run/user/0
 
ok it looks like you configured the nvme-data directory storage on top of the nvmepool zpool?

probably the nvmepool wasn't mounted at boot. try zfs mount nvmepool.
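a minimal sketch of the check, assuming the pool name nvmepool from the storage.cfg above (needs root on the host):

```shell
# is the pool's root dataset mounted?
zfs get mounted nvmepool

# if it reports "mounted  no", mount it by hand
zfs mount nvmepool

# the images should then reappear under the directory storage path
ls /nvmepool/pve-data/images
```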
 
what is inside? ls -arilh /nvmepool

if there's nothing important inside, clear it out and remove the dir: rm -rf /nvmepool

then retry mount command.
 
Hmm. Seems like nvmepool is not mounted after boot.

root@pve:/# zfs get mountpoint
NAME PROPERTY VALUE SOURCE
nvmepool mountpoint /nvmepool default
rpool mountpoint /rpool default
rpool/ROOT mountpoint /rpool/ROOT default
rpool/ROOT/pve-1 mountpoint / local
rpool/data mountpoint /rpool/data default

root@pve:/# zfs get mounted
NAME PROPERTY VALUE SOURCE
nvmepool mounted no -
rpool mounted yes -
rpool/ROOT mounted yes -
rpool/ROOT/pve-1 mounted yes -
rpool/data mounted yes -
 
ls -arilh /nvmepool
root@pve:/# ls -arilh /nvmepool
total 10K
98825 drwxr-xr-x 5 root root 5 Mar 16 11:35 pve-data
98443 drwxr-xr-x 2 root root 2 Mar 17 16:08 images
34 drwxr-xr-x 19 root root 25 Mar 14 21:09 ..
98855 drwxr-xr-x 4 root root 4 Mar 17 16:08 .
 
when you create a directory storage on top of a zpool mountpoint, the directory hierarchy can get created before the zpool is mounted; ZFS then refuses to mount over the non-empty directory, resulting in exactly this.

use the find /nvmepool command to list all files and directories in there. if there's nothing but empty directories, just remove them, mount the zpool, and reactivate the directory storage.
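the whole recovery might look like this (assuming, as in this thread, that find shows only empty directories; rm -rf is destructive, so verify the output first):

```shell
# confirm only the bare directory skeleton is present, no files
find /nvmepool

# remove the stale skeleton that is blocking the mount
rm -rf /nvmepool

# mount the pool; ZFS recreates the mountpoint directory itself
zfs mount nvmepool

# check that the directory storage shows up as active again
pvesm status
```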
 
I tried deleting the nvme-zfs storage and rebooting the machine, to no avail.

find /nvmepool
root@pve:~# find /nvmepool
/nvmepool
/nvmepool/images
/nvmepool/pve-data
/nvmepool/pve-data/private
/nvmepool/pve-data/images
/nvmepool/pve-data/dump
 
don't delete the zfs dataset. it likely holds your data.

the find output indicates this is just the directory structure, so please read my previous posts about this; there's enough direction there.
 
Thanks so much for the help.

Deleting the directory and then mounting the pool did the trick. So, if I've understood this correctly: Proxmox creates the directory structure before ZFS gets a chance to mount the pool, and because that structure sits exactly where the mountpoint is supposed to be, ZFS won't mount over it. Am I understanding this correctly?
 
yes indeed.
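to help prevent this on future reboots, the directory storage can be told not to pre-create its path. a sketch of the relevant /etc/pve/storage.cfg entry; the mkdir option is part of the directory storage plugin, though whether it fits every setup is an assumption:

```
dir: nvme-data
	path /nvmepool/pve-data
	content images,rootdir
	mkdir 0
	shared 0
```

with mkdir 0, PVE won't recreate /nvmepool/pve-data on activation, so an unmounted pool can no longer be shadowed by a freshly created directory skeleton. if the storage path is itself a dataset mountpoint, the related is_mountpoint option can additionally gate activation on the path actually being mounted.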
 
