Different disk space usage in container mount point

digidax

Renowned Member
Mar 23, 2009
104
1
83
Hello,
I have added a mount point, based on a hardware RAID-0 (stripe), to a CT:
[screenshot: mount point configuration]

Inside the LXC container, I get:
Code:
# df -h
Filesystem                   Size  Used  Avail Use% Mounted on
rpool/data/subvol-211-disk-0  200G    133G   68G   67% /
/dev/loop0                    6.8T    1.3T  5.3T   19% /backup_spool
none                          492K    4.0K  488K    1% /dev
tmpfs                         2.9G     16K  2.9G    1% /dev/shm
tmpfs                         2.9G     89M  2.9G    3% /run
tmpfs                         2.9G       0  2.9G    0% /sys/fs/cgroup
tmpfs                         593M       0  593M    0% /run/user/0

The mount point /backup_spool on the /dev/loop0 filesystem shows 1.3T of 6.8T in use.

But if I check on the node:
Code:
root@pve4:~# df -h | grep backup_spool
/dev/sda1                     7.3T  6.4T  539G  93% /mnt/pve/backup_spool
6.4T are in use, produced by the disk image:
Code:
root@pve4:/mnt/pve/backup_spool/images/211# du -sch *
6.4T    vm-211-disk-0.raw
6.4T    total
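If I understand it correctly, a raw image file only ever grows: blocks freed inside the container are not released in the backing file on the node unless something punches holes into it. The gap between the logical file size and the blocks actually allocated can be checked with du's --apparent-size option. A minimal sketch on the node (the demo file name is made up, not my real image):

```shell
# Create a 1 GiB sparse demo file: large apparent size, almost no allocated blocks.
truncate -s 1G sparse-demo.raw

# Apparent size in bytes (the logical size that ls reports)
apparent=$(du -B1 --apparent-size sparse-demo.raw | cut -f1)

# Allocated size in bytes (the real disk blocks, what plain du reports)
allocated=$(du -B1 sparse-demo.raw | cut -f1)

echo "apparent=$apparent allocated=$allocated"

# Clean up the demo file
rm sparse-demo.raw
```

Comparing the two numbers for vm-211-disk-0.raw would show how much of the 6.4T is actually allocated versus merely the logical image size.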

Why is that, and what can I do to shrink the disk image on the node?

Kernel version: Linux 5.4.128-1-pve #1 SMP PVE 5.4.128-1 (Wed, 21 Jul 2021 18:32:02 +0200)
PVE Manager Version: 6.4-13

Thanks and all the best, Frank
 
Bringing this question up again: I'm planning to upgrade to 7.0 but want to be certain that this problem will not break anything.
Is any additional information needed?
Thanks.