Provisioned VM disk size vs. what the VM shows in df -h

AndyRed

I created an Ubuntu 24.x VM in PVE 8.3.1.

Proxmox View:
Hard Disk (scsi0): backup-drive:vm-106-disk-0,discard=on,iothread=1,size=50G,ssd=1
Bootdisk size: 50.00 GiB

VM View:
aredman@cloud:[/]: df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              795M  2.2M  793M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   24G   17G  6.3G  73% /
tmpfs                              3.9G     0  3.9G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sda2                          2.0G   95M  1.7G   6% /boot
tmpfs                              795M   12K  795M   1% /run/user/1001


Why does the VM report 24G disk size when the VM has been provisioned for 50G?

Thanks so much,

Andy
 
I just installed a fresh Ubuntu 24.10 desktop VM on a 16G disk and df shows the same thing inside the VM ... but I didn't install on LVM as you did.
Take a look at the output of vgs, lvs and lvdisplay; I think the rest of the space is allocated elsewhere, or not even allocated at all.
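If vgs shows free space (VFree) in the volume group, the usual fix is to grow the root LV into it and then grow the filesystem. A minimal sketch, assuming the default Ubuntu installer names ubuntu-vg/ubuntu-lv and an ext4 root:

sudo vgs                                              # VFree > 0 means extents were never handed to any LV
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # grow the LV into all remaining free extents
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow ext4 online to match (use xfs_growfs for XFS)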
 
Interesting. I looked at the VM disk and this is what I see:

[screenshot: raw view of vm-106-disk-0 showing ~50G]
So the actual disk is ~50G. How should I interpret this, and what will happen when I fill the 24G that df reports?

aredman@cloud:[/]: sudo vgs

  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   1   0 wz--n- <48.00g 24.00g


aredman@cloud:[/]: sudo lvs

  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- <24.00g


aredman@cloud:[/]: sudo lvdisplay

--- Logical volume ---
LV Path /dev/ubuntu-vg/ubuntu-lv
LV Name ubuntu-lv
VG Name ubuntu-vg
LV UUID bp0U4h-8Wtf-naER-t8Bj-aZl3-0SYF-FSeGr2
LV Write Access read/write
LV Creation host, time ubuntu-server, 2025-01-28 19:33:13 +0000
LV Status available
# open 1
LV Size <24.00 GiB
Current LE 6143
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
 
Thanks for the clarity ;) I can certainly be slow ...

root@pve:[~]: vgs
  VG           #PV #LV #SN Attr   VSize    VFree
  backup-drive   1  20   0 wz--n- <447.13g 120.00m
  pve            1   4   0 wz--n- <446.63g <16.00g
root@pve:[~]: lvs
  LV                           VG           Attr       LSize    Pool         Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  backup-drive                 backup-drive twi-aotz-- <438.07g                                        21.97  1.16
  snap_vm-106-disk-0_baseline  backup-drive Vri---tz-k   50.00g backup-drive
  snap_vm-106-disk-0_docker    backup-drive Vri---tz-k   50.00g backup-drive
  snap_vm-106-disk-0_nextcloud backup-drive Vri---tz-k   50.00g backup-drive
  vm-106-disk-0                backup-drive Vwi-aotz--   50.00g backup-drive snap_vm-106-disk-0_docker 29.08
  vm-106-state-baseline        backup-drive Vwi-a-tz--  <16.50g backup-drive                            8.97
  vm-106-state-docker          backup-drive Vwi-a-tz--  <16.50g backup-drive                           10.30
  vm-106-state-nextcloud       backup-drive Vwi-a-tz--  <16.50g backup-drive                           46.93
  data                         pve          twi-aotz--  320.09g                                         0.13   0.53
  root                         pve          -wi-ao----   96.00g
  swap                         pve          -wi-ao----    8.00g

root@pve:[~]: lvdisplay
--- Logical volume ---
LV Name backup-drive
VG Name backup-drive
LV UUID 6NPqYi-7NM2-DOCy-z4TC-5eGh-eCTv-0CYyet
LV Write Access read/write (activated read only)
LV Creation host, time pve, 2023-08-04 13:45:05 -0400
LV Pool metadata backup-drive_tmeta
LV Pool data backup-drive_tdata
LV Status available
# open 0
LV Size <438.07 GiB
Allocated pool data 21.97%
Allocated metadata 1.16%
Current LE 112145
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:9



--- Logical volume ---
LV Path /dev/backup-drive/vm-106-state-baseline
LV Name vm-106-state-baseline
VG Name backup-drive
LV UUID s916eb-58Yw-YQHB-Bh4x-Mdka-xgFX-BxyxVP
LV Write Access read/write
LV Creation host, time pve, 2025-01-28 14:45:30 -0500
LV Pool name backup-drive
LV Status available
# open 0
LV Size <16.50 GiB
Mapped size 8.97%
Current LE 4223
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:17

--- Logical volume ---
LV Path /dev/backup-drive/snap_vm-106-disk-0_baseline
LV Name snap_vm-106-disk-0_baseline
VG Name backup-drive
LV UUID IXfdqy-sWrl-5Ifm-34lT-8k4k-4AGe-19FnFZ
LV Write Access read only
LV Creation host, time pve, 2025-01-28 14:45:48 -0500
LV Pool name backup-drive
LV Status NOT available
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Path /dev/backup-drive/vm-106-state-docker
LV Name vm-106-state-docker
VG Name backup-drive
LV UUID yvSJJs-Wj0b-YRhI-nFlZ-Xow9-NjYi-fTcUP4
LV Write Access read/write
LV Creation host, time pve, 2025-01-28 14:56:12 -0500
LV Pool name backup-drive
LV Status available
# open 0
LV Size <16.50 GiB
Mapped size 10.30%
Current LE 4223
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:18

--- Logical volume ---
LV Path /dev/backup-drive/snap_vm-106-disk-0_docker
LV Name snap_vm-106-disk-0_docker
VG Name backup-drive
LV UUID 5T33W0-w1N6-08T6-a5uk-JPWp-9q7A-7QNGPm
LV Write Access read only
LV Creation host, time pve, 2025-01-28 14:56:32 -0500
LV Pool name backup-drive
LV Status NOT available
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Path /dev/backup-drive/vm-106-state-nextcloud
LV Name vm-106-state-nextcloud
VG Name backup-drive
LV UUID CVtLnf-89nu-2dZD-Qm6G-lzPO-Q5n2-b3VKE8
LV Write Access read/write
LV Creation host, time pve, 2025-01-28 16:20:23 -0500
LV Pool name backup-drive
LV Status available
# open 0
LV Size <16.50 GiB
Mapped size 46.93%
Current LE 4223
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:19

--- Logical volume ---
LV Path /dev/backup-drive/snap_vm-106-disk-0_nextcloud
LV Name snap_vm-106-disk-0_nextcloud
VG Name backup-drive
LV UUID nQByih-BAbw-fO2x-V2Xt-27bQ-WP6z-ebPnyd
LV Write Access read only
LV Creation host, time pve, 2025-01-28 16:21:39 -0500
LV Pool name backup-drive
LV Status NOT available
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Path /dev/backup-drive/vm-106-disk-0
LV Name vm-106-disk-0
VG Name backup-drive
LV UUID 1xPMfS-QIYv-tewQ-HlqA-O425-802L-iJPE7e
LV Write Access read/write
LV Creation host, time pve, 2025-01-29 10:45:12 -0500
LV Pool name backup-drive
LV Thin origin name snap_vm-106-disk-0_docker
LV Status available
# open 1
LV Size 50.00 GiB
Mapped size 29.08%
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:16

--- Logical volume ---
LV Name data
VG Name pve
LV UUID hvsIOO-o9h4-cbop-2Qid-sQk6-clO1-L2HApa
LV Write Access read/write (activated read only)
LV Creation host, time proxmox, 2023-02-06 07:35:52 -0500
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size 320.09 GiB
Allocated pool data 0.13%
Allocated metadata 0.53%
Current LE 81944
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:7

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID mXompc-S2Et-IsVt-KKTU-uTYr-UTHu-pf832D
LV Write Access read/write
LV Creation host, time proxmox, 2023-02-06 07:35:45 -0500
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID QFsimP-IGNX-hyXS-3RP0-FYCM-bcP0-q6D8aP
LV Write Access read/write
LV Creation host, time proxmox, 2023-02-06 07:35:45 -0500
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:3

... and ...

aredman@cloud:[/]: sudo fdisk -l
Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9F34D4B6-6C79-492D-B87A-7411D29075E9

Device       Start       End   Sectors Size Type
/dev/sda1     2048      4095      2048   1M BIOS boot
/dev/sda2     4096   4198399   4194304   2G Linux filesystem
/dev/sda3  4198400 104855551 100657152  48G Linux filesystem


Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 24 GiB, 25765609472 bytes, 50323456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


... or ...

root@pve:[~]: fdisk -l
Disk /dev/sda: 447.13 GiB, 480103981056 bytes, 937703088 sectors
Disk model: HFS480G3H2X069N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5B63A8EB-B25B-44D8-A7B5-40A8C92FCD2D

Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   1050623   1048576   512M EFI System
/dev/sda3  1050624 937703054 936652431 446.6G Linux LVM


Disk /dev/sdb: 447.13 GiB, 480103981056 bytes, 937703088 sectors
Disk model: HFS480G3H2X069N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/backup--drive-vm--100--disk--0: 42 GiB, 45097156608 bytes, 88080384 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
GPT PMBR size mismatch (2165747 != 60063743) will be corrected by write.
The backup GPT table is not on the end of the device.


Disk /dev/sdc: 28.64 GiB, 30752636928 bytes, 60063744 sectors
Disk model: Ultra
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 88365C9B-F9CD-426F-85BE-F6F44EBDD0DA

Device       Start     End Sectors  Size Type
/dev/sdc1       64     511     448  224K Microsoft basic data
/dev/sdc2      512    6271    5760  2.8M EFI System
/dev/sdc3     6272 2165099 2158828    1G Apple HFS/HFS+
/dev/sdc4  2165100 2165699     600  300K Microsoft basic data


Disk /dev/mapper/backup--drive-vm--106--state--baseline: 16.5 GiB, 17712545792 bytes, 34594816 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/backup--drive-vm--106--state--docker: 16.5 GiB, 17712545792 bytes, 34594816 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/backup--drive-vm--106--state--nextcloud: 16.5 GiB, 17712545792 bytes, 34594816 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/backup--drive-vm--106--disk--0: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: gpt
Disk identifier: 9F34D4B6-6C79-492D-B87A-7411D29075E9

Device                                              Start       End   Sectors Size Type
/dev/mapper/backup--drive-vm--106--disk--0-part1     2048      4095      2048   1M BIOS boot
/dev/mapper/backup--drive-vm--106--disk--0-part2     4096   4198399   4194304   2G Linux filesystem
/dev/mapper/backup--drive-vm--106--disk--0-part3  4198400 104855551 100657152  48G Linux filesystem
 
vm-106-disk-0 backup-drive Vwi-aotz-- 50.00g backup-drive snap_vm-106-disk-0_docker 29.08
So it's a 50G volume ... maybe your snapshots and state volumes are still holding the other ~26G, if only 24G is visible inside the VM now ... and your backup-drive storage is purely volume-based too?
I'm still wondering why anybody likes all this confusing LVM gabble, in its output, its concepts and its handling ...
We do everything purely file-based (completely without LVM), since that's user friendly: reverting to an older state is just a reflink cp taking a few msec (!!), and it can easily be replicated remotely, e.g.
[root@srv1 images]# ll 158
total 16787736
-rw-r----- 1 root root 17182752768 Jan 30 18:09 vm-158-disk-0.qcow2
-rw-r----- 1 root root 34365243392 Nov 20 00:09 vm-158-disk-1.qcow2
[root@srv1 images]# time find ../../.xfssnaps/ -name "vm-158*" -ls # look for available older versions under 2s
80198541 16780644 -rw-r----- 1 root root 17182752768 Dec 15 00:01 ../../.xfssnaps/weekly.2024-12-15_0001/srv/data/images/158/vm-158-disk-0.qcow2
80198542 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2024-12-15_0001/srv/data/images/158/vm-158-disk-1.qcow2
36507251105 16780736 -rw-r----- 1 root root 17182752768 Dec 22 00:01 ../../.xfssnaps/weekly.2024-12-22_0001/srv/data/images/158/vm-158-disk-0.qcow2
36507251106 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2024-12-22_0001/srv/data/images/158/vm-158-disk-1.qcow2
36507233543 16780828 -rw-r----- 1 root root 17182752768 Dec 29 00:01 ../../.xfssnaps/weekly.2024-12-29_0001/srv/data/images/158/vm-158-disk-0.qcow2
36507233544 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2024-12-29_0001/srv/data/images/158/vm-158-disk-1.qcow2
45097243182 16780872 -rw-r----- 1 root root 17182752768 Jan 1 00:01 ../../.xfssnaps/monthly.2025-01-01_0001/srv/data/images/158/vm-158-disk-0.qcow2
45097243183 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/monthly.2025-01-01_0001/srv/data/images/158/vm-158-disk-1.qcow2
51539617185 16781028 -rw-r----- 1 root root 17182752768 Jan 12 00:01 ../../.xfssnaps/weekly.2025-01-12_0001/srv/data/images/158/vm-158-disk-0.qcow2
51539617186 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2025-01-12_0001/srv/data/images/158/vm-158-disk-1.qcow2
12889668664 16781092 -rw-r----- 1 root root 17182752768 Jan 16 00:01 ../../.xfssnaps/daily.2025-01-16_0001/srv/data/images/158/vm-158-disk-0.qcow2
12889668665 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-16_0001/srv/data/images/158/vm-158-disk-1.qcow2
57986972344 16781128 -rw-r----- 1 root root 17182752768 Jan 19 00:01 ../../.xfssnaps/weekly.2025-01-19_0001/srv/data/images/158/vm-158-disk-0.qcow2
57986972345 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2025-01-19_0001/srv/data/images/158/vm-158-disk-1.qcow2
25769966847 16781152 -rw-r----- 1 root root 17182752768 Jan 21 00:01 ../../.xfssnaps/daily.2025-01-21_0001/srv/data/images/158/vm-158-disk-0.qcow2
25769974107 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-21_0001/srv/data/images/158/vm-158-disk-1.qcow2
4313250683 16781172 -rw-r----- 1 root root 17182752768 Jan 22 00:01 ../../.xfssnaps/daily.2025-01-22_0001/srv/data/images/158/vm-158-disk-0.qcow2
4313250684 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-22_0001/srv/data/images/158/vm-158-disk-1.qcow2
40802190024 16781212 -rw-r----- 1 root root 17182752768 Jan 25 00:01 ../../.xfssnaps/daily.2025-01-25_0001/srv/data/images/158/vm-158-disk-0.qcow2
40802190025 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-25_0001/srv/data/images/158/vm-158-disk-1.qcow2
10743807179 16781228 -rw-r----- 1 root root 17182752768 Jan 26 00:00 ../../.xfssnaps/weekly.2025-01-26_0001/srv/data/images/158/vm-158-disk-0.qcow2
10743807180 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2025-01-26_0001/srv/data/images/158/vm-158-disk-1.qcow2
51539634367 16781264 -rw-r----- 1 root root 17182752768 Jan 29 00:01 ../../.xfssnaps/daily.2025-01-29_0001/srv/data/images/158/vm-158-disk-0.qcow2
51539653474 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-29_0001/srv/data/images/158/vm-158-disk-1.qcow2
23626639150 16781276 -rw-r----- 1 root root 17182752768 Jan 30 00:01 ../../.xfssnaps/daily.2025-01-30_0001/srv/data/images/158/vm-158-disk-0.qcow2
23626639151 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-30_0001/srv/data/images/158/vm-158-disk-1.qcow2
8599215162 16780548 -rw-r----- 1 root root 17182752768 Dec 8 00:01 ../../.xfssnaps/weekly.2024-12-08_0001/srv/data/images/158/vm-158-disk-0.qcow2
8599215163 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2024-12-08_0001/srv/data/images/158/vm-158-disk-1.qcow2
23627024731 16780932 -rw-r----- 1 root root 17182752768 Jan 5 00:01 ../../.xfssnaps/weekly.2025-01-05_0001/srv/data/images/158/vm-158-disk-0.qcow2
23627024732 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/weekly.2025-01-05_0001/srv/data/images/158/vm-158-disk-1.qcow2
40802216371 16781108 -rw-r----- 1 root root 17182752768 Jan 17 00:01 ../../.xfssnaps/daily.2025-01-17_0001/srv/data/images/158/vm-158-disk-0.qcow2
40802216372 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-17_0001/srv/data/images/158/vm-158-disk-1.qcow2
17180484214 16781116 -rw-r----- 1 root root 17182752768 Jan 18 00:01 ../../.xfssnaps/daily.2025-01-18_0001/srv/data/images/158/vm-158-disk-0.qcow2
17180484215 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-18_0001/srv/data/images/158/vm-158-disk-1.qcow2
45097165729 16781140 -rw-r----- 1 root root 17182752768 Jan 20 00:01 ../../.xfssnaps/daily.2025-01-20_0001/srv/data/images/158/vm-158-disk-0.qcow2
45097165730 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-20_0001/srv/data/images/158/vm-158-disk-1.qcow2
47244642739 16781180 -rw-r----- 1 root root 17182752768 Jan 23 00:01 ../../.xfssnaps/daily.2025-01-23_0001/srv/data/images/158/vm-158-disk-0.qcow2
47244642740 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-23_0001/srv/data/images/158/vm-158-disk-1.qcow2
25769994814 16781192 -rw-r----- 1 root root 17182752768 Jan 24 00:01 ../../.xfssnaps/daily.2025-01-24_0001/srv/data/images/158/vm-158-disk-0.qcow2
25769994815 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-24_0001/srv/data/images/158/vm-158-disk-1.qcow2
45097348065 16781240 -rw-r----- 1 root root 17182752768 Jan 27 00:01 ../../.xfssnaps/daily.2025-01-27_0001/srv/data/images/158/vm-158-disk-0.qcow2
45097348066 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-27_0001/srv/data/images/158/vm-158-disk-1.qcow2
17180554000 16781248 -rw-r----- 1 root root 17182752768 Jan 28 00:01 ../../.xfssnaps/daily.2025-01-28_0001/srv/data/images/158/vm-158-disk-0.qcow2
17180554001 5320 -rw-r----- 1 root root 34365243392 Nov 20 00:09 ../../.xfssnaps/daily.2025-01-28_0001/srv/data/images/158/vm-158-disk-1.qcow2
real 0m1.754s
user 0m0.762s
sys 0m0.937s
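For reference, the revert itself is just a reflink copy back out of the snapshot tree. A sketch using the paths from the listing above, run from the images directory (the VM must be stopped first, and the filesystem must support reflinks, e.g. XFS created with reflink=1):

qm stop 158
cp --reflink=always ../../.xfssnaps/daily.2025-01-28_0001/srv/data/images/158/vm-158-disk-0.qcow2 158/vm-158-disk-0.qcow2   # instant copy-on-write clone, no data moved
qm start 158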
 
I really don't understand what's happening here. I have deployed numerous flavors of Ubuntu (20, 22 and 24) and see the same behavior when issuing df -h, yet when I deploy a container I get the expected size for whatever disk I created.
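Putting the pieces from the outputs above together, lsblk inside the VM should show something like this (a sketch reconstructed from the fdisk, vgs and df output earlier in the thread, not a captured output):

aredman@cloud:[/]: lsblk /dev/sda
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0  50G  0 disk
├─sda1                      8:1    0   1M  0 part
├─sda2                      8:2    0   2G  0 part /boot
└─sda3                      8:3    0  48G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0  24G  0 lvm  /

df only reports mounted filesystems, so it can only ever show the 24G root LV, never the 50G disk. The remaining 24G sits inside sda3 as unallocated VG space (the 24.00g VFree in the vgs output above): the Ubuntu Server guided LVM install only allocates part of the volume group by default, while a container gets a filesystem sized to the full requested disk, which is why containers look "right". The lvextend/resize2fs sketch earlier in the thread closes the gap.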