I have a Proxmox cluster running a mix of v7 and v6 (I am upgrading one machine at a time).
On all the machines, regardless of version, the VM data disk files do not show up at all using the normal Linux file commands like ls and df. Is this expected behaviour or do we have a weird fault?
Example:
Using df - note that the nominally 10TB ZFS pool /tank1 shows as just 1% used, and also the wrong size:
Code:
root@phoenix:/tank1# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   48G     0   48G   0% /dev
tmpfs                 9.5G   50M  9.4G   1% /run
/dev/mapper/pve-root   57G  2.1G   52G   4% /
tmpfs                  48G   66M   48G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  48G     0   48G   0% /sys/fs/cgroup
tank1                 3.9T  128K  3.9T   1% /tank1
/dev/fuse              30M   44K   30M   1% /etc/pve
tmpfs                 9.5G     0  9.5G   0% /run/user/0
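A possible explanation for the size figure (an assumption on my part, based on how df accounts for ZFS): for a ZFS filesystem, df reports Size as that dataset's used + available space, not the pool's raw capacity, and space already taken by other datasets has been deducted from Avail. Plugging in tank1's numbers from the zfs list output below:

```shell
# Sketch (assumption): df Size for a ZFS filesystem = Used + Avail of that
# dataset, not the raw pool capacity. tank1 itself refers to only 96K of
# file data, and 3.83T is what remains after the VM disks are accounted for.
awk 'BEGIN {
    refer_tib = 96 / 2^30    # REFER: 96K, expressed in TiB
    avail_tib = 3.83         # AVAIL for tank1 from zfs list
    printf "%.1fT\n", refer_tib + avail_tib
}'
# prints: 3.8T  (df -h rounds up, hence the 3.9T it displays)
```

So the 3.9T would be the space this one dataset can still address, not the 10TB of raw disk.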
Using ls:
Code:
root@phoenix:/tank1# ls -la /tank1
total 5
drwxr-xr-x  2 root root    2 Mar  9 16:28 .
drwxr-xr-x 19 root root 4096 Mar  9 16:28 ..
But using zfs list:
Code:
root@phoenix:/tank1# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
tank1                4.98T  3.83T    96K  /tank1
tank1/vm-265-disk-0   203G  3.93T   100G  -
tank1/vm-280-disk-0  61.9G  3.86T  28.8G  -
tank1/vm-455-disk-0   516G  4.30T  35.5G  -
tank1/vm-503-disk-0  3.80T  7.06T   588G  -
tank1/vm-661-disk-0   124G  3.95T  8.74G  -
tank1/vm-662-disk-0   124G  3.95T  8.21G  -
tank1/vm-981-disk-0   179G  3.95T  55.4G  -
tank1 in this case is a ZFS mirror of 2 x 10TB hard disks, but we see the same thing on the RAIDZ pools on the other machines.
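If I am reading the zfs list output right, the MOUNTPOINT column may be the tell: tank1 itself is a filesystem dataset (mounted at /tank1), while every vm-*-disk-* entry shows "-" because it is a zvol, i.e. a block device (exposed under /dev/zvol/ rather than as a file inside the mountpoint), which would explain why ls and df on /tank1 never see them. A quick sanity check, just reparsing the pasted output:

```shell
# The vm-* datasets have no mountpoint ("-" in column 5) because they are
# zvols (block devices), not filesystems - so they hold no files under /tank1.
zfs_output='NAME                  USED  AVAIL  REFER  MOUNTPOINT
tank1                4.98T  3.83T    96K  /tank1
tank1/vm-265-disk-0   203G  3.93T   100G  -
tank1/vm-280-disk-0  61.9G  3.86T  28.8G  -
tank1/vm-455-disk-0   516G  4.30T  35.5G  -
tank1/vm-503-disk-0  3.80T  7.06T   588G  -
tank1/vm-661-disk-0   124G  3.95T  8.74G  -
tank1/vm-662-disk-0   124G  3.95T  8.21G  -
tank1/vm-981-disk-0   179G  3.95T  55.4G  -'

# Count the datasets with no mountpoint (the zvols):
echo "$zfs_output" | awk '$5 == "-" { n++ } END { print n " zvols" }'
# prints: 7 zvols
```

On the machines themselves, `zfs list -t volume` and `ls /dev/zvol/tank1` should show the same seven VM disks as block devices.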