ZFS: Proxmox disk files don't show up using standard file commands like ls

Pyromancer

I have a Proxmox cluster running a mix of V7 and V6 (I'm upgrading one machine at a time).

On all the machines, regardless of version, the VM data disk files do not show up at all using the normal Linux file commands like ls and df. Is this expected behaviour or do we have a weird fault?

Example:

Using df - note that the nominally 10TB ZFS pool mounted at /tank1 appears just 1% used, and also shows the wrong size:

Code:
root@phoenix:/tank1# df -h
Filesystem                              Size  Used Avail Use% Mounted on
udev                                     48G     0   48G   0% /dev
tmpfs                                   9.5G   50M  9.4G   1% /run
/dev/mapper/pve-root                     57G  2.1G   52G   4% /
tmpfs                                    48G   66M   48G   1% /dev/shm
tmpfs                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                    48G     0   48G   0% /sys/fs/cgroup
tank1                                   3.9T  128K  3.9T   1% /tank1
/dev/fuse                                30M   44K   30M   1% /etc/pve
tmpfs                                   9.5G     0  9.5G   0% /run/user/0

Using ls:

Code:
root@phoenix:/tank1# ls -la /tank1
total 5
drwxr-xr-x  2 root root    2 Mar  9 16:28 .
drwxr-xr-x 19 root root 4096 Mar  9 16:28 ..

But using zfs list:

Code:
root@phoenix:/tank1# zfs list
NAME                  USED  AVAIL     REFER  MOUNTPOINT
tank1                4.98T  3.83T       96K  /tank1
tank1/vm-265-disk-0   203G  3.93T      100G  -
tank1/vm-280-disk-0  61.9G  3.86T     28.8G  -
tank1/vm-455-disk-0   516G  4.30T     35.5G  -
tank1/vm-503-disk-0  3.80T  7.06T      588G  -
tank1/vm-661-disk-0   124G  3.95T     8.74G  -
tank1/vm-662-disk-0   124G  3.95T     8.21G  -
tank1/vm-981-disk-0   179G  3.95T     55.4G  -


tank1 in this case is a ZFS mirror of 2 x 10TB hard disks, but we see the same thing on RAIDZ pools on the other machines.
 
No, this is expected. ZFS has two types of datasets: filesystem datasets, which are the ones usually used, and volume datasets (zvols), which provide block devices. On a ZFS storage, Proxmox VE uses filesystem datasets for containers, but for VMs it uses zvols to provide a block device to the guest.

As you can see in the zfs list output, they don't have a mountpoint, unlike filesystem datasets such as the root dataset of your tank1 pool.
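
You can also check the dataset type directly with zfs get. A quick illustration using the names from your pool (output trimmed, exact formatting may differ slightly):

Code:
root@phoenix:~# zfs get -o name,property,value type tank1 tank1/vm-265-disk-0
NAME                 PROPERTY  VALUE
tank1                type      filesystem
tank1/vm-265-disk-0  type      volume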

These datasets are exposed in the /dev directory as /dev/zdX block devices. Since it is hard to figure out which /dev/zdX maps to which dataset, they are also exposed as /dev/<pool>/<dataset> and /dev/zvol/<pool>/<dataset>. If you check those paths with ls -l, you can see that they are symlinks to the /dev/zdX devices.
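
For example, something like this (the zdX numbers are illustrative and will differ on your system):

Code:
root@phoenix:~# ls -l /dev/zvol/tank1/
total 0
lrwxrwxrwx 1 root root 10 Mar  9 16:28 vm-265-disk-0 -> ../../zd16
lrwxrwxrwx 1 root root 10 Mar  9 16:28 vm-280-disk-0 -> ../../zd32
...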
 
Thank you for the detailed answer, and yes, I can see the symlinks when I look in /dev/tank1.

The reason I was looking for the files was that while upgrading node-2 (of a 3-node cluster) from 6 to 7, node-3 suddenly fenced (for reasons we're still not sure of) and one of its VMs then appeared to attempt to migrate to node-2. The result was that the VM's conf file /etc/pve/qemu-server/265.conf ended up on node-2 while its disk files were still on node-3. Initially I was unsure whether it would be safe to just recreate the conf file on node-3, so I considered moving the disk files from node-3 to node-2 to match, only to find I couldn't list or manipulate them.

I then resolved the issue by renaming the file on node-2 to a VM ID in a number range we don't use (to avoid any conflict risk) and then copying its contents into a new 265.conf back on node-3, which was immediately picked up by Proxmox, and I was able to boot the VM (a cPanel machine) again normally.
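
In case it helps anyone else hitting the same thing: since /etc/pve is shared cluster-wide and each node's /etc/pve/qemu-server is really /etc/pve/nodes/<nodename>/qemu-server, what I did boils down to roughly the following (node names and the parking VM ID are placeholders for our setup, and the VM was not running at the time):

Code:
# park the stray conf on node-2 under an unused VM ID (9265 is just a placeholder)
mv /etc/pve/nodes/node2/qemu-server/265.conf /etc/pve/nodes/node2/qemu-server/9265.conf
# put a copy back as 265.conf on node-3, where the disks actually live
cp /etc/pve/nodes/node2/qemu-server/9265.conf /etc/pve/nodes/node3/qemu-server/265.conf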

I'm guessing this also means that if I want a backup to protect against boot disk failure, all I'd need to do is back up the files from /etc/pve/qemu-server/, and recovery would then be a case of: replace the boot disk, reinstall Proxmox, remount the zpools, and repopulate /etc/pve/qemu-server/, and everything would come back as-was? We take comprehensive backups to an external system (and cPanel on those VMs does its own internal backup to AWS), but being able to recover "to the exact moment of failure" will always be the ideal solution.
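
To make that concrete, this is roughly what I have in mind - only a sketch, with the backup location and file list as my own assumptions (I imagine the storage definitions in /etc/pve/storage.cfg would need saving as well):

Code:
# keep a copy of the VM configs (and storage definitions) off the boot disk
mkdir -p /tank1/pve-config-backup
cp /etc/pve/qemu-server/*.conf /tank1/pve-config-backup/
cp /etc/pve/storage.cfg /tank1/pve-config-backup/

# rough recovery after replacing the boot disk and reinstalling Proxmox:
zpool import tank1                                          # bring the data pool back
cp /tank1/pve-config-backup/*.conf /etc/pve/qemu-server/    # repopulate the VM configs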
 
If you want to use HA, you should make sure that the VMs can access their disks on all nodes, or use HA groups to limit the nodes on which certain VMs are allowed to run.

If you don't have any shared storage, but ZFS on each node, you can use the replication feature to keep a recent copy of a VM's disks on the other nodes. If a node fails, you will of course lose any data written since the last successful replication run.
It is possible to set up a three-way replication between all the nodes in your 3-node cluster.

The ZFS storage must be named the same on all nodes for the replication to work.
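
For illustration only (VM ID, target node and schedule are example values), a replication job can be set up in the GUI or on the command line with pvesr:

Code:
# replicate VM 265's disks to node3 every 15 minutes (example values)
pvesr create-local-job 265-0 node3 --schedule "*/15"
# check the state of the replication jobs on this node
pvesr status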
 
