[SOLVED] Virtual Machines in ZFS no longer mounted?

cshill

Member
May 8, 2024
Hi Proxmox community,
I am still new and not quite sure why this is not working. I have been researching online for what feels like 2 hours learning about ZFS. I shut down my two test nodes for the weekend and the VM mounts disappeared. I remounted the drives, though it felt odd that a few had unmounted at all. Now most VMs are back up, except for the ones I'm testing on the ZFS pool. The drives set up in ZFS are online and the mountpoints show up in the shell; it's just that the VMs in /ZFS01/VMs are gone, and in Proxmox that directory is gone too. If I run zfs list, it shows ZFS01/VMs/vm-101-disk-0 and the other VM disks with no mountpoint, while ZFS01/VMs itself points to the mountpoint /ZFS01/VMs. When I re-add /ZFS01/VMs as a directory storage in Proxmox, it doesn't find the VMs, as they are just not in that directory.


If I run lsblk -f I can see the ZFS devices: zd0p3 is probably the first missing disk and zd32p2 is the second, as I set the disks up as EXT4.
[Screenshot: lsblk -f output]

Below is the output after I run zfs list; it shows the disks are there, just missing a mountpoint. However, ZFS01/VMs itself does have a mountpoint.
[Screenshot: zfs list output]

The challenge is I'm not quite sure how to mount these VMs.
 
Hi,
VM images on ZFS are virtual block devices called zvols and cannot have a mountpoint. They can contain arbitrary data. If they do contain a filesystem (or a partition with a filesystem), that filesystem can of course be mounted. But when used by a VM, the whole virtual block device is passed to the VM, so there is no need to mount it first.
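A quick way to see this from the shell (assuming the pool is called ZFS01 as in your screenshots; adjust the name if needed):

# list only the zvols (type "volume") - these never have a mountpoint
zfs list -t volume -o name,volsize,used

# show the type of everything in the pool: "filesystem" entries can be mounted,
# "volume" entries are block devices that get passed to VMs as a whole
zfs list -r -o name,type,mountpoint ZFS01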

Please share your /etc/pve/storage.cfg. Do you have a storage of type ZFS with pool set to ZFS01/VMs?
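For reference, the configured storages and their types can also be checked from the shell:

# overview of all configured storages, their type and whether they are active
pvesm status

# the raw storage configuration
cat /etc/pve/storage.cfg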
 
Hi Fiona,
I believe this is the pertinent information you are asking for. There are some IP addresses and other naming conventions I'd rather not disclose in the storage.cfg. I think this may have been user error, but it would still be interesting to find out how to resolve it. As a quick answer to your ZFS pool question: I was testing the creation of ZFS on a disk, and I may have used a ZFS pool storage at first for creating the VMs, but then noticed it did not allow ISO files and other content, so I destroyed it in favour of the current setup.

The other week I started the project of learning about Proxmox and seeing if it will be effective for what we need, so I am testing a lot of the software/hardware/operating systems/etc. I try to find ways to break it and fix it, so that the more I learn about it, the easier the transition will be. I was testing the cluster feature and would shut down one node, bring it back up, shut down the other, bring it back up, migrate VMs, create and destroy disks, etc. I could have destroyed the original ZFSpool storage. Once I understood the ZFS pool better and found how some people use ZFS with a directory storage, I probably deleted the ZFSpool one and created ZFS01.VMs, ZFS01.ISO, and ZFS01.Backups. I think I even deleted ZFS01.VMs after that to change the name to what it is now.

As it currently stands, I now have a ZFSpool storage known as ZFS01 as well as the directory storages ZFS01.VMs, ZFS01.ISO, and ZFS01.Backups.

It might be easier to explain this per VM: where each disk is located and what is missing.
VM100 - Has multiple disks I'm swapping in and out. For ZFS I have one disk on the ZFS01.VMs directory storage and another on ZFSpool.
VM101 - I removed the VM and remade it in the hope that it would find the unused disk under the hardware options. Currently no disk is attached to the VM; it seems to be in limbo.
VM103 - Same situation as VM101, the disk seems to be in limbo.
VM106 - Disk is located inside /ZFS01/VMs/images along with the others (100, 106, 108). These were created within the ZFS01.VMs directory storage.
VM108 - Same as VM106.
VM109 - Disk located inside ZFSpool.
The ZFSpool disks show up in zfs list, whereas the directory-based disks are under the /ZFS01/VMs/images path.
[Screenshot: zfs list output]
This makes me think the lsblk -f output above is actually showing the 3 directories I have made for VMs, ISO, and Backup at the bottom: zd0, zd32, and zd48.
I know that sda1 is the boot disk and the cordoned-off LVM section of Disk 1. Disk 2 is DIR01 as EXT4, Disk 3 is DIR02 as EXT4, and Disk 4 is sdd with the split-off section labelled as zfs_member; this is what creates my ZFS section, zd0.
My hypothesis is that zd16 is the VM100 ZFSpool disk and zd64 is the VM109 ZFSpool disk, which would mean the disks of VMs 101 and 103 were in a directory.
[Screenshot: lsblk -f output]
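One way to double-check that hypothesis (just a sketch; ZFS01 and the disk name are taken from my zfs list output above): each zvol is exposed under /dev/zvol/<pool>/<dataset> as a symlink to its zdN device, so the mapping can be listed directly.

# list the zvol symlinks and the zd devices they point to
ls -lR /dev/zvol/ZFS01

# resolve a single zvol to its zd device
readlink -f /dev/zvol/ZFS01/VMs/vm-101-disk-0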


dir: ZFS01.VMs
    path /ZFS01/VMs
    content rootdir,images,iso,backup,vztmpl,snippets
    nodes proxmoxtest
    prune-backups keep-all=1
    shared 0

dir: ZFS01.ISO
    path /ZFS01/isos
    content iso
    nodes proxmoxtest
    prune-backups keep-all=1
    shared 0

dir: ZFS01.Backups
    path /ZFS01/backups
    content snippets,vztmpl,backup,iso,images,rootdir
    nodes proxmoxtest
    prune-backups keep-all=1
    shared 0
 

To make Proxmox VE see the zvols, you need a ZFS type storage, not a Directory type storage, because the virtual block devices are not files that can be found in a directory; they only exist in the ZFS hierarchy. I.e., you can go to Datacenter > Storage > Add > ZFS and select the pool there.
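From the shell the equivalent would be roughly the following (the storage ID ZFS01-zvols is just an example name, and this assumes the zvols live under ZFS01/VMs as in the zfs list output above):

# add a ZFS-type storage pointing at the dataset that holds the zvols
pvesm add zfspool ZFS01-zvols --pool ZFS01/VMs --content images,rootdir

# which results in an entry like this in /etc/pve/storage.cfg:
# zfspool: ZFS01-zvols
#     pool ZFS01/VMs
#     content images,rootdir

Afterwards a qm rescan can help, so that existing disk images are picked up again as unused disks on the VMs.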
 
It may be that I created the ZFS pool directly on the machine but never added it as a storage option under Datacenter when I made ZFS01?

Thank you for your time Fiona.
 
