Problems understanding HDD sizes

Sergi Cabre

New Member
Aug 24, 2016
Hi to everyone,

I have to admit that I am quite new to Proxmox. So far I've configured it and it's been running for a while. In fact, I have 2 servers running Proxmox separately.

In the first one I use ZFS as the filesystem, with 2 x HDD (750GB each) in RAID-1. The thing that shocks me is that, even though a GUEST has a maximum of 30GB of storage assigned, there are folders inside it larger than that, e.g. 75GB. How is that possible? That issue is driving me crazy!

In the second one I use LVM as the filesystem, and there I don't have such issues.

Could anyone give me a hand on that?

Thank you so much!
 
The thing that shocks me is that, even though a GUEST has a maximum of 30GB of storage assigned, there are folders inside it larger than that, e.g. 75GB. How is that possible? That issue is driving me crazy!

A file with a size of 75GB does not necessarily use 75GB of storage. On Unix, files can be 'sparse'. So the question is: how do you measure folder usage?
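For example, you can reproduce the effect yourself with something like this (the file name and size are arbitrary; the commands assume GNU coreutils, which Debian-based containers have):

Code:
# create an empty 1GB "hole" - the apparent size is 1GB, but no blocks are allocated
truncate -s 1G /tmp/sparse.img
# apparent size (what 'ls -l' and 'du --apparent-size' report)
du -h --apparent-size /tmp/sparse.img
# blocks actually allocated on disk (what plain 'du' and 'df' account for)
du -h /tmp/sparse.img
rm /tmp/sparse.img

The first du will report roughly 1G, the second close to zero, because nothing was ever written into the file.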
 
A file with a size of 75GB does not necessarily use 75GB of storage. On Unix, files can be 'sparse'. So the question is: how do you measure folder usage?

Hi Dietmar. First of all, thanks for the prompt reply.

In Proxmox I set the CT (OpenVZ) disk size to 30.00GB under the Resources tab.

If I run the df command on the HOST and the GUEST, I see the following:

Code:
root@proxmox1:# df -h
Filesystem  Size  Used Avail Use% Mounted on
udev  10M  0  10M  0% /dev
tmpfs  3.2G  424K  3.2G  1% /run
rpool/ROOT/pve-1  653G  621G  32G  96% /
tmpfs  5.0M  0  5.0M  0% /run/lock
tmpfs  13G  31M  13G  1% /run/shm
/dev/fuse  30M  16K  30M  1% /etc/pve
/var/lib/vz/private/801  30G  1.4G  29G  5% /var/lib/vz/root/801
none  3.0G  4.0K  3.0G  1% /var/lib/vz/root/801/dev
/var/lib/vz/private/802  30G  612M  30G  2% /var/lib/vz/root/802
none  610M  84K  610M  1% /var/lib/vz/root/801/run
none  5.0M  0  5.0M  0% /var/lib/vz/root/801/run/lock
none  3.0G  0  3.0G  0% /var/lib/vz/root/801/run/shm
none  100M  0  100M  0% /var/lib/vz/root/801/run/user
none  4.0G  4.0K  4.0G  1% /var/lib/vz/root/802/dev
none  810M  76K  810M  1% /var/lib/vz/root/802/run
none  5.0M  0  5.0M  0% /var/lib/vz/root/802/run/lock
none  3.4G  0  3.4G  0% /var/lib/vz/root/802/run/shm
none  100M  0  100M  0% /var/lib/vz/root/802/run/user

Code:
root@guest1:# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/simfs  30G  1.4G  29G  5% /
none  3.0G  4.0K  3.0G  1% /dev
none  610M  84K  610M  1% /run
none  5.0M  0  5.0M  0% /run/lock
none  3.0G  0  3.0G  0% /run/shm
none  100M  0  100M  0% /run/user

As can be seen, guest1 (ID: 801) has 30GB of space. But if I run the du command in a folder located under /home:

Code:
root@guest1:/home/user/results# du -sh
80G   .

It really shocks me... I don't know if the folder's stat output gives any extra information:

Code:
root@guest1:/home/user/results# stat .
File: `.'
Size: 1293     Blocks: 65  IO Block: 87040  directory
Device: 19h/25d   Inode: 2138959  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (  0/  root)  Gid: (  0/  root)
Access: 2016-07-03 00:50:10.709328356 +0000
Modify: 2016-08-24 09:29:52.752892828 +0000
Change: 2016-08-24 09:29:52.752892828 +0000
Birth: -

There is nothing else mounted either:

Code:
root@guest1:/home/user/results# mount
/var/lib/vz/private/801 on / type simfs (rw,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
none on /dev type devtmpfs (rw,nosuid,noexec,relatime,mode=755)
none on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
none on /run type tmpfs (rw,nosuid,noexec,relatime,size=624232k,mode=755)
none on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
none on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3091660k)
none on /run/user type tmpfs (rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)

How could that be possible?

Thank you so much for your time.
 
It really shocks me... I don't know if the folder's stat output gives any extra information:

Oh, it seems you use OpenVZ simfs on ZFS - AFAIK this is simply not supported by OpenVZ. Besides, ZFS uses compression by default and only accounts for the compressed space. This explains why you can use more space inside a guest than the assigned disk size.
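If you want to verify that, you can check the compression setting and the achieved ratio on the dataset - something like this on the host (using the rpool/ROOT/pve-1 dataset from your df output; adjust the name if your container data lives on a different dataset):

Code:
# is compression enabled, and how well does the data compress?
zfs get compression,compressratio rpool/ROOT/pve-1
# 'used' is the compressed on-disk space, 'logicalused' the uncompressed size
zfs get used,logicalused rpool/ROOT/pve-1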
 
