Disk space usage query

CelticWebs
Member · Mar 14, 2023
Hi all,

I'm fairly new here. I've been using Proxmox for a while now and have pretty much got it running as I'd like, with multiple VMs over 2 nodes. I have one query which I can't seem to get an answer to. When I set up a VM, I may allocate it, say, 500GB, of which it will perhaps only actually be using 100GB, but when I look in Proxmox it says the full 500GB is used. Then when I migrate it to another server, it takes a very long time because it's transferring the full 500GB. I've enabled discard and I've even run fstrim in the VMs, but it still shows as occupying the full allocated space.

The disks are showing as being RAW, and I've tried storing them on LVM-thin as well as a ZFS storage pool. What am I doing wrong here? Am I expecting it to do something it's not capable of, or have I just got something set up wrong?

Thanks in advance for your help and advice!
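One way to see how much space a thin-provisioned disk actually consumes, as opposed to its allocated size, is to query the storage layer on the Proxmox host directly. A minimal sketch; the dataset name below is an assumption based on the storage names used later in this thread:

```
# ZFS-backed storage: USED is real consumption, VOLSIZE is the allocation
zfs list -o name,used,refer,volsize ZFS-Storage/vm-104-disk-1

# LVM-thin storage: the Data% column shows how full each thin volume really is
lvs -o lv_name,lv_size,data_percent
```

If USED (or Data%) stays near the full allocation even after trimming inside the guest, the discards are not reaching the storage layer.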
 
Thanks for your response, there are a couple of VMs, all set up similarly. Here's the config for two of them.


Code:
root@prox630:~# qm config 104
agent: 1,fstrim_cloned_disks=1
balloon: 0
boot: order=scsi0;net0
cores: 8
cpu: host
description: 1
hotplug: disk,network,usb
memory: 99328
meta: creation-qemu=8.0.2,ctime=1699350298
name: Stinger-114
net0: virtio=E6:24:24:79:23:92,bridge=vmbr0,firewall=1
numa: 1
onboot: 1
ostype: l26
parent: pre-backup
protection: 1
scsi0: ZFS-Storage:vm-104-disk-1,aio=threads,cache=writeback,discard=on,format=raw,iothread=1,size=250G,ssd=1
scsi1: Data-Store:vm-104-disk-0,aio=threads,cache=writeback,discard=on,iothread=1,size=500G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=73f52abb-e488-4e6c-bb10-ebf8f0b4d19d
sockets: 2
vmgenid: 3b597387-bb49-48e6-9b16-5734da13ba01

Code:
root@prox630:~# qm config 100
agent: 1,fstrim_cloned_disks=1
balloon: 16384
boot: order=scsi0;net0
cores: 8
cpu: x86-64-v2-AES,flags=+aes
description: 2
hotplug: disk,network,usb
machine: q35
memory: 49152
meta: creation-qemu=7.2.0,ctime=1697807079
name: Phantom-118
net0: virtio=6A:B8:18:A1:92:62,bridge=vmbr0,firewall=1
numa: 1
onboot: 1
ostype: l26
protection: 1
scsi0: Data-Store:vm-100-disk-0,aio=threads,cache=writeback,discard=on,format=raw,iothread=1,size=150G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=8ce872cf-a60e-4153-a7a2-0203d60c3d00
sockets: 2
vmgenid: 2f6830c2-0a8f-4d0c-afcb-d2aba5595bb6

All of the disks actually occupy the full allocated amount of space on the Proxmox machine, which, as mentioned, makes them take quite a long time if I want to move them from one node to the other for any reason.
 
Thanks for your help so far.

In some of the VMs the fstrim command does show that it's doing something. I've used fstrim previously; admittedly I didn't use the -v option, which does at least show it's doing something. The file size in the list of images on a Proxmox disk remains the same. For VM 100 there is no output from the fstrim -v --all command; could this be related to there being some NFS mounts?

Am I correct in saying that the size shown under the storage in Proxmox is the currently used size and not just the designated disk size? You can see in the attached images that the 500GB disk attached to VM 104 is showing as 536.87GB and the 250GB disk attached to VM 104 is 268.44GB. Am I just getting confused here? I do know that when I migrate, it does transfer the whole amount and takes a very long time because of it.
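On the NFS question: fstrim --all only operates on mounted filesystems that support discard, and it silently skips those that don't, which would typically include NFS mounts. It may be worth trimming the local filesystems explicitly inside the guest. A small sketch (the mountpoint is an assumption):

```
# Trim only the local root filesystem, verbosely
fstrim -v /

# Show which block devices support discard at all (non-zero DISC-GRAN/DISC-MAX)
lsblk --discard
```

If fstrim -v --all prints nothing at all, it usually means no mounted filesystem reported discard support, which is a separate problem from NFS being present.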



[Attached images: 29-11-2023 at 12.18.jpeg, 29-11-2023 at 12.19.jpeg]
 
As far as I can tell from your screenshot, it only shows what size the virtual hard drive is, but that doesn't say how much is actually used on the storage. With fstrim you ensure that free areas are marked as such. With LVM-thin, only the space that is actually written is allocated. It is important that the discard flag is set and that a trim occurs regularly.

Show us more about your storage, your migration log, etc., so that we can see it for ourselves.
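Regular trimming is easiest via the systemd timer that ships with util-linux on most modern Linux guests; whether it is already enabled depends on the distribution, so this is a sketch rather than a guarantee:

```
# Inside the guest: enable weekly trimming of all discard-capable filesystems
systemctl enable --now fstrim.timer

# Check when the timer last ran and when it will run next
systemctl list-timers fstrim.timer
```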
 
then when I migrate it to another server
Have you set up replication on the VM that you want to migrate? If not, this will make a difference.

Just migrating from one node to another is a copy of the whole VM (including a lot of zeros) in a shared-nothing architecture. The best way to mitigate the problem is using shared storage (e.g. an external NAS, Ceph or a SAN); then you'll only have to migrate the memory of the VM.
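Storage replication can be set up from the CLI as well as the GUI using Proxmox's pvesr tool; note that built-in replication requires ZFS-backed storage on both nodes. A sketch in which the VM ID, job number, target node name and schedule are all assumptions:

```
# Replicate VM 104 to node "prox630b" every 15 minutes
pvesr create-local-job 104-0 prox630b --schedule '*/15'

# Inspect the state of all replication jobs
pvesr status
```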
 
Have you set up replication on the VM that you want to migrate? If not, this will make a difference.

Just migrating from one node to another is a copy of the whole VM (including a lot of zeros) in a shared-nothing architecture. The best way to mitigate the problem is using shared storage (e.g. an external NAS, Ceph or a SAN); then you'll only have to migrate the memory of the VM.
I think this answers my problem: it's not shared storage. Both servers have storage with the same names, but they aren't shared. Your point about it transferring a lot of zeros explains why it takes so long: if a disk is set as 500GB, it has to transfer all 500GB, even if it's mostly zeros. I was under the impression that it would only transfer actual data, which, from what you say, isn't the case?

No, I hadn't set up replication. I did have it set up at one point but removed it for a reason that escapes me now; I think it was slowing / freezing the VM while it was running.

If it was already replicated there, would it then migrate just what's changed?
 
As far as I can tell from your screenshot, it only shows what size the virtual hard drive is, but that doesn't say how much is actually used on the storage. With fstrim you ensure that free areas are marked as such. With LVM-thin, only the space that is actually written is allocated. It is important that the discard flag is set and that a trim occurs regularly.

Show us more about your storage, your migration log, etc., so that we can see it for ourselves.
I'll see if I can dig out a migration log, though from what LnxBill has said, the migration is normal when it's not using centralised storage.
 
If it was already replicated there, would it then migrate just whats changed?
Yes, it transfers only the changes since the last replication step, so it is very fast, yet it still needs to transfer the disk difference and the RAM contents.
In a shared storage environment, the data is already available on all nodes, so you don't need to transfer the disk difference, only the RAM contents.
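With replication already in place, the migration itself is the same command; only the delta since the last replication run plus the RAM is shipped. A sketch in which the VM ID and target node name are assumptions:

```
# Live-migrate VM 104 to the target node; with replication set up,
# only the disk changes since the last sync plus RAM are transferred
qm migrate 104 prox630b --online
```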
 
Yes, it transfers only the changes since the last replication step, so it is very fast, yet it still needs to transfer the disk difference and the RAM contents.
In a shared storage environment, the data is already available on all nodes, so you don't need to transfer the disk difference, only the RAM contents.
Thanks for the clarity.

Looks like I'd better set up replication then. To confirm: when I set this up, the first run would transfer everything, and after that it would only be the differences?
 