Suddenly unable to mount previously working zfs disk image for VM

phildapunk

New Member
Apr 25, 2022
Hi everyone, hoping someone can point me in the right direction

After using a ZFS-based disk image for roughly the last two years, I upgraded to Proxmox v7.1, rebooted once or twice, and was then greeted with a boot error in a VM complaining that it cannot mount the data disk referenced in fstab.

The ZFS pool is based on a single 500GB HD, so no RAID config is used.
Unfortunately, whatever web page / tutorial I used to set up the HD and ZFS pool has evaded me, which is making this scenario hard for me to diagnose.

I've commented out the fstab entry for the data disk to allow the VM to boot. Unfortunately I have a few docker containers all pointing to that mount point as their data location.

My concerns are that the data has been lost, or that any attempt to restore the working config will require overwriting existing data. As such, I have been hesitant to do much more than run read-only commands while trying to find the root cause of my problem.

For reference, the affected partition is /dev/sdb1, the ZFS pool is ZFS1, and the disk image is referenced by the VM as vm-101-disk-0.
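(For anyone following along: a few purely read-only checks on the Proxmox host can confirm whether the pool and the zvol itself are still intact before anything is touched. This is only a sketch; the /dev/zvol/ZFS1/vm-101-disk-0 path is the device node ZFS normally exposes for that zvol and may differ on other setups.)

Bash:
# all read-only – nothing here writes to the pool or the zvol
zpool status -v ZFS1                          # pool health and any read/write/checksum errors
zfs list -t all -r ZFS1                       # confirm the zvol still exists and how much it refers
zfs get volsize,volmode ZFS1/vm-101-disk-0    # volmode=none would hide the device node
ls -l /dev/zvol/ZFS1/                         # device nodes for the zvol and any partitions on it
fdisk -l /dev/zvol/ZFS1/vm-101-disk-0         # does the host still see a partition table?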

From my single Proxmox node, I see the following information

[Screenshot: slamdance-gui-disks.PNG]
[Screenshot: slamdance-zfs-pool.PNG]
[Screenshot: slamdance-by-uuid.PNG]
[Screenshot: slamdance-gui-cddisk-sdb-fullscreen.PNG]

And finally from within the VM, this is what I see
[Screenshot: docker1-by-uuid.PNG]

Any comments regarding the health of the disk image, or suggestions for commands to try to discover / mount / dump the disk image, would be greatly appreciated. Thanks for reading this far...
 
Just to understand:

You have a zpool on the host, in which there was a VM disk (a zvol), on which there was a filesystem (inside the guest). Now the filesystem inside the guest does not seem to work anymore?
Can you show us the output of the following (on the host):
Bash:
zfs list -t all
qm config ID
(replace ID with the VM ID)
as well as the content of /etc/pve/storage.cfg
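For example (the file sits at the same fixed path on every PVE node, so a plain cat is enough):

Bash:
cat /etc/pve/storage.cfg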
 
Hi Dominik, thank you for your response. See my responses below.

zfs list -t all
root@slamdance:~# zfs list -t all
NAME                   USED  AVAIL  REFER  MOUNTPOINT
ZFS1                   403G  47.0G    96K  /ZFS1
ZFS1/vm-101-disk-0     403G  47.0G   403G  -
zfs2                  1.39T  2.13T   330G  /zfs2
zfs2/backups           288K  2.13T    96K  /zfs2/backups
zfs2/backups/docker1    96K  2.13T    96K  /mnt/promox_backups/docker1
zfs2/backups/ha         96K  2.13T    96K  /mnt/promox_backups/ha
zfs2/vm-100-disk-0    33.0G  2.16T    56K  -
zfs2/vm-101-disk-0    1.03T  3.16T    56K  -

qm config 101
root@slamdance:~# qm config 101
agent: 1
balloon: 4096
boot: order=sata0
cores: 4
localtime: 1
memory: 21504
name: Ubuntu-Docker1
net0: virtio=66:4F:56:34:0B:3C,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
sata0: local-lvm:vm-101-disk-0,size=85G
sata1: ZFSStorage2:vm-101-disk-0,size=400G
sata2: zfs2:vm-101-disk-0,backup=0,size=1T
scsihw: virtio-scsi-pci
smbios1: uuid=cc283895-644a-4551-ad50-2bc888ad19a4
sockets: 1
vcpus: 4
vmgenid: 06918103-38a8-4217-8d7f-cdd85fb5ec5d
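Worth noting from that output: the VM has three different disks that are all named vm-101-disk-0, just on different storages (local-lvm, ZFSStorage2 and zfs2). A read-only way to double-check which host device each entry actually resolves to would be something along these lines (pvesm path only prints the mapped path and changes nothing):

Bash:
pvesm path local-lvm:vm-101-disk-0      # sata0, the 85G boot disk
pvesm path ZFSStorage2:vm-101-disk-0    # sata1, the 400G data disk on pool ZFS1
pvesm path zfs2:vm-101-disk-0           # sata2, the 1T disk on pool zfs2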

nano /etc/pve/storage.cfg
lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

nfs: Drobo
    export /mnt/DroboFS/Shares/Proxmox
    path /mnt/pve/Drobo
    server 192.168.1.105
    content backup,images
    options vers=3
    prune-backups keep-last=1

dir: local
    path /var/lib/vz
    content images,vztmpl,iso,rootdir,snippets
    prune-backups keep-all=1

zfspool: ZFSStorage1
    pool ZFS1
    content rootdir
    mountpoint /ZFS1
    sparse 1

zfspool: ZFSStorage2
    pool ZFS1
    content images
    mountpoint /ZFS1
    sparse 1

zfspool: zfs2
    pool zfs2
    content rootdir,images
    mountpoint /zfs2
    nodes slamdance
    sparse 1

dir: proxmox_backup_docker1
    path /zfs2/proxmox_backups/docker1
    content backup
    prune-backups keep-last=5,keep-weekly=2
    shared 1

dir: proxmox_backup_ha
    path /zfs2/proxmox_backups/ha
    content backup
    prune-backups keep-last=5,keep-weekly=2
    shared 1
 
nano /etc/pve/storage.cfg
zfspool: ZFSStorage1
    pool ZFS1
    content rootdir
    mountpoint /ZFS1
    sparse 1

zfspool: ZFSStorage2
    pool ZFS1
    content images
    mountpoint /ZFS1
    sparse 1
You've got two storages pointing to the same ZFS pool. I would only use one storage and add the content types "rootdir,images", so you can use the same storage for both LXCs and VMs, like you did with your other ZFS pool:
zfspool: zfs2
    pool zfs2
    content rootdir,images
    mountpoint /zfs2
    nodes slamdance
    sparse 1
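If it is easier, the same change can also be made from the CLI. Just a sketch, and it assumes ZFSStorage1 is the storage you no longer need; pvesm remove only deletes the entry from storage.cfg, it never touches the pool or any data on it:

Bash:
pvesm remove ZFSStorage1                         # drops only the storage definition, no data is deleted
pvesm set ZFSStorage2 --content rootdir,images   # one storage for both LXC volumes and VM disks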
 
You've got two storages pointing to the same ZFS pool. I would only use one storage and add the content types "rootdir,images", so you can use the same storage for both LXCs and VMs, like you did with your other ZFS pool:
Thanks Dunuin - one of the storages was never utilised, as it was a misstep when first setting up my ZFS pool. Thanks to your friendly suggestion, I've removed the unused storage and expanded the content types of the remaining one ;-)

I'm still struggling with my missing data partition, so if you can spot anything in my config screenshots, please respond.
 
For anyone reading, I've completely run through all the config steps on a third zvol to set up a vda partition, which is successfully read/writable by this VM.

Going through the configuration steps again, it just seems the partition table is no longer seen by the VM, which is in contrast to the first screenshot (in my initial post) that clearly shows the missing partition sdb1.

Should I be looking to restore the partition table from inside the VM / a LiveCD using something like TestDisk, or should I be trying something from the Proxmox shell?
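(Whichever route this ends up taking, a snapshot of the zvol first would make any repair attempt reversible, and the partition table can also be inspected from the host without writing anything. A rough sketch, assuming gdisk and testdisk are installed on the Proxmox host:)

Bash:
zfs snapshot ZFS1/vm-101-disk-0@before-repair   # instant rollback point, costs almost nothing
gdisk -l /dev/zvol/ZFS1/vm-101-disk-0           # read-only: shows the GPT/MBR state as the host sees it
# only if a repair is really needed, and only after the snapshot:
testdisk /dev/zvol/ZFS1/vm-101-disk-0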
 
