3 Directories are reporting the same size and usage

Squeaky

New Member
May 9, 2024
I have set up Proxmox 8.2-1 with 3 disks. The first disk is 250GB and has Proxmox installed on it. The second (BigRaid5) and third (SmallRaid5) disks are 10TB and 1.7TB, and both have been set up as Directory storage with an ext4 file system. I have installed 2 VMs on "BigRaid5" along with one ISO. When I look at the summary of "BigRaid5", it is the same as "Local" and "SmallRaid5", but when I run "du" on "BigRaid5" and "SmallRaid5" it shows that they are not the same size. Any ideas on what might be wrong?
 
Any ideas on what might be wrong?
Hi @Squeaky, you are going to need to provide some command-line output to better illustrate your situation. Please use text copy/paste and CODE tags to keep your post readable:
Code:
pvesm status
mount
df -h
cat /etc/pve/storage.cfg

If you feel that a screenshot is necessary, attach it along with the above information.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Code:
root@HomeESXi-2:~# pvesm status
Name              Type     Status           Total            Used       Available        %
BigRaid5           dir     active        69440768        33176452        32691172   47.78%
HomeESXi-1        esxi     active               0               0               0    0.00%
SmallRaid5         dir     active        69440768        33176452        32691172   47.78%
local              dir     active        69440768        33176452        32691172   47.78%
local-lvm      lvmthin     active       143380480               0       143380480    0.00%

root@HomeESXi-2:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=65939184k,nr_inodes=16484796,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=13194600k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=4591)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
/dev/fuse on /run/pve/import/esxi/HomeESXi-1/mnt type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=13194596k,nr_inodes=3298649,mode=700,inode64)

root@HomeESXi-2:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   63G     0   63G   0% /dev
tmpfs                  13G  1.6M   13G   1% /run
/dev/mapper/pve-root   67G   32G   32G  51% /
tmpfs                  63G   43M   63G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/fuse             128M   16K  128M   1% /etc/pve
tmpfs                  13G     0   13G   0% /run/user/0

root@HomeESXi-2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: BigRaid5
        path /mnt/pve/BigRaid5
        content images,vztmpl,backup,iso,rootdir,snippets
        shared 0

dir: SmallRaid5
        path /mnt/pve/SmallRaid5
        content snippets,iso,rootdir,backup,vztmpl,images
        shared 0

esxi: HomeESXi-1
        server 172.16.1.0
        username root
        skip-cert-verification 1
 
BigRaid5 dir active 69440768 33176452 32691172 47.78%
SmallRaid5 dir active 69440768 33176452 32691172 47.78%
local dir active 69440768 33176452 32691172 47.78%
As you can see above, all three directory-type storage pools that you have defined show the same capacity.
That means they are backed by the same filesystem, just as if you had created three directories in your home folder.

Your "mount" output gives no indication that anything other than the root disk is mounted.
Finally, your "df" output shows that the root filesystem has the same capacity metrics, further confirming my first statement:
/dev/mapper/pve-root 67G 32G 32G 51% /

All of this means that you have nothing mounted except the root disk.
Run "lsscsi" and "lsblk" - do you see the physical disks that should be backing your "raid" storage?
If you do, you need to add them to /etc/fstab (man 5 fstab).
You should also add "is_mountpoint yes" to each of your "raid" pools. That will prevent those pools from becoming active unless PVE detects that the path is an actual mount point. Without that attribute, you are simply dumping your data onto the root disk and will soon run out of space.
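
Something along these lines would do it - the UUIDs below are placeholders, so substitute the values that "blkid" prints for your RAID filesystems:
Code:
# /etc/fstab - example entries only, replace the UUIDs with the ones blkid reports
UUID=aaaaaaaa-0000-0000-0000-000000000001 /mnt/pve/BigRaid5   ext4 defaults 0 2
UUID=bbbbbbbb-0000-0000-0000-000000000002 /mnt/pve/SmallRaid5 ext4 defaults 0 2

and in /etc/pve/storage.cfg:
Code:
dir: BigRaid5
        path /mnt/pve/BigRaid5
        content images,vztmpl,backup,iso,rootdir,snippets
        shared 0
        is_mountpoint yes

After a "mount -a" (or a reboot), "df -h" should list /mnt/pve/BigRaid5 and /mnt/pve/SmallRaid5 as separate filesystems.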

Before mounting your external disks, remove all data from /mnt/pve/*raid*. If you don't, that data will be "hidden" underneath the new mounts and you will be back here asking why your used capacity does not match your expectations.
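
To see what is currently sitting under those paths before you mount over them, a quick check such as this is enough (the paths are just your pool mount points):
Code:
du -sh /mnt/pve/BigRaid5 /mnt/pve/SmallRaid5
ls -la /mnt/pve/BigRaid5
# anything listed here currently lives on the root disk and will be hidden
# once a real filesystem is mounted on top of the directory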

Good luck.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I created a Proxmox VM on my ESXi server with 2 disks. I then used the GUI to set up the 2nd disk as a Directory of type ext4 and uploaded an ISO to its ISO images folder. Both disks report the proper amount of storage. I looked at /etc/fstab and there was no entry for the second disk. I looked at /etc/pve/storage.cfg and found an entry for the second disk with one line that differs from the config on HomeESXi-2: "is_mountpoint 1". When I run "mount", it does show the second disk.

However, when I put that line into the config on HomeESXi-2, it fails with "unable to activate storage 'BigRaid5' - : directory is expected to be a mount point but is not mounted: '/mnt/pve/BigRaid5' (500)". Without that line, the storage is immediately activated and I can see and run the VMs on it.

They do not appear to be backed by the same storage. There must be one more place that tells the status and usage routines where to look when calculating usage.
 
Without that line, the Storage is immediately activated and I can see and run the VMs on it.
Again, if you have physical disks that you want to back that storage, you need to add them to /etc/fstab and mount them.
Right now a name like "BigRaid5" is just an arbitrary pool pointing to a local directory on the root filesystem.
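
A quick way to verify which filesystem a pool directory really lives on (the path is just your BigRaid5 mount point):
Code:
df -h /mnt/pve/BigRaid5
# if this reports /dev/mapper/pve-root, the "pool" is just a folder on the root disk
findmnt /mnt/pve/BigRaid5
# findmnt prints nothing when the path is not a mount point of its own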


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you.

You were right: both drives were being ignored by the system and it was using the 'local' directory. I removed the two VMs, then removed the directories, then removed the drives. After a reboot, I created two directories on those now-clean drives, and they are now being used instead of the 'local' directory.

By the way, fstab does not have anything to do with drives set up by the "Create Directory" program. At least not in Proxmox 8.2-1.

The only things I found that it affects are "/etc/pve/storage.cfg" and "/etc/systemd/system/*.mount". I am sure there has to be at least one more thing that causes it to appear when you execute "mount", but I have no clue where that would be.
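
Something along these lines shows those pieces - the unit name follows the mount path, so it may differ on another setup:
Code:
ls /etc/systemd/system/*.mount
systemctl cat mnt-pve-BigRaid5.mount       # unit name assumed from the /mnt/pve/BigRaid5 path
systemctl status mnt-pve-BigRaid5.mount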

Anyway, it is now working like it should.
 
By the way, fstab does not have anything to do with drives set up by the "Create Directory" program. At least not in Proxmox 8.2-1.
Correct, there are two places where the term "Directory" is used:
- Datacenter add storage "directory"
- Node > Disks > "directory"

The first one relies on you doing the mount manually. Whether you do it via fstab or a systemd mount unit is up to you; the traditional and easier way is fstab.
The second one formats the disk, creates a systemd unit file to mount it, then sets up the "directory" storage pool for you and sets the "is_mountpoint" flag, which prevents you from accidentally writing data to the root filesystem.
It also mounts the disk and activates the storage pool immediately.
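
For reference, the unit file that the GUI generates looks roughly like this (the UUID is a placeholder); once the unit is started, the disk shows up in "mount" simply because "mount" lists whatever is currently in the kernel's mount table - there is no additional configuration file involved:
Code:
# /etc/systemd/system/mnt-pve-BigRaid5.mount - illustrative only
[Unit]
Description=Mount storage 'BigRaid5' under /mnt/pve

[Mount]
What=/dev/disk/by-uuid/aaaaaaaa-0000-0000-0000-000000000001
Where=/mnt/pve/BigRaid5
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target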

Since neither of your storage pools was properly set up or activated, I assumed that you had not used the second method when you opened the thread.

Happy that you got it all working. Good job.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
