Disk usage keeps growing over time

peiman

Member
Sep 21, 2016
Hello,
I installed Ubuntu 18.04.3 in a VM.

This is the disk usage on the host (master) machine:

Code:
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8713797632 Oct 27 18:02 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8715894784 Oct 27 18:02 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8717991936 Oct 27 18:02 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8720089088 Oct 27 18:02 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8722186240 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8722186240 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8724283392 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8726380544 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8728477696 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8730574848 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8730574848 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r--r-- 1 root root 5096931328 Oct 27 17:55 1.vmdk
-rw-r----- 1 root root 8734769152 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8735555584 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101# ls -l
-rw-r----- 1 root root 8738963456 Oct 27 18:03 vm-101-disk-0.vmdk
root@m3784:/var/lib/vz/images/101#
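For scale, the listings above span roughly one minute (18:02 to 18:03) and show the image growing by about 24 MiB. A quick sketch of that arithmetic, using the first and last sizes copied from the output:

```python
# Sizes of vm-101-disk-0.vmdk taken from the first and last ls -l lines above
first_bytes = 8713797632
last_bytes = 8738963456

growth = last_bytes - first_bytes
print(growth)            # 25165824 bytes
print(growth / 2**20)    # 24.0 MiB
```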


This is the disk usage from inside the VM:

Code:
peiman@digidoc:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  956K  797M   1% /run
/dev/sda2       629G  7.7G  590G   2% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop0       89M   89M     0 100% /snap/core/7270
/dev/loop1       90M   90M     0 100% /snap/core/7917
tmpfs           798M     0  798M   0% /run/user/1000

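Note that the guest reports only 7.7G used on /, while the host-side image file is already larger than that. A small illustrative comparison, with values copied from the two outputs above (df -h reports GiB):

```python
# Guest-reported usage on / (df -h "Used" column, in GiB)
guest_used_gib = 7.7
# Last host-side size of vm-101-disk-0.vmdk (bytes, from ls -l)
image_bytes = 8738963456

image_gib = image_bytes / 2**30
print(round(image_gib, 1))                    # 8.1 GiB allocated in the image
print(round(image_gib - guest_used_gib, 1))   # gap: blocks once written, since freed in the guest
```

The gap between allocated image space and guest-reported usage is consistent with blocks being written and then freed inside the VM without ever being discarded.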

What can cause this?
 
When discard (TRIM) is not run inside the VM to release freed blocks, the image file will keep growing until it reaches its full provisioned capacity.
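A sketch of the usual remedy, assuming the disk is attached via the discard-capable VirtIO SCSI controller. The disk key `scsi0` and the storage name `local` here are assumptions; check `qm config 101` for the actual values:

```shell
# On the Proxmox host: enable the discard option on the VM's disk
# (disk key and volume spec assumed; verify with 'qm config 101')
qm set 101 --scsi0 local:101/vm-101-disk-0.vmdk,discard=on

# Inside the guest (Ubuntu 18.04): trim all mounted filesystems so
# freed blocks are released back to the host image
sudo fstrim -av

# Ubuntu ships a periodic trim timer; make sure it is enabled
sudo systemctl enable --now fstrim.timer
```

Note that whether trimmed blocks actually shrink the file depends on the image format and underlying storage; with a VMDK on directory storage it may be necessary to convert the image to qcow2 or raw first.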
 
