[SOLVED] lvm disk usage 100% but only 30% usage

cdsJerry

Renowned Member
Sep 12, 2011
I'm confused by something. I was going to clone a VM today but aborted it until I figure out what I'm looking at.

I have my local lvm showing 30.25% usage of 8TB however when I click on the pve->Disks-LVM it shows 100% usage of 8TB. Which is it, 30% or 100%? Why do the numbers not match? Now I'm scared that I have something wrong.

I'm running almost all Windows KVMs if that makes any difference.
 
That 100% is the total amount allocated out of the volume group, not the amount you actually have left. The thin pool itself takes up nearly the whole VG, so Disks -> LVM shows it as full; the 30.25% on local-lvm is what's actually written into the pool.

Mine shows the same:

Screenshot 2022-05-04 125603.jpg
 
If I knew what any of that alphabet soup meant I might be able to provide that information. Sorry.
Right-click on your Proxmox node and open the Shell.

Screenshot 2022-05-04 132411.jpg

Screenshot 2022-05-04 132440.jpg

Screenshot 2022-05-04 132641.jpg


Then type those in one at a time and press Enter.

So type lsblk, press Enter, and copy that output into this thread.

Then type pvs, press Enter, and copy that output here.

And so on ;)
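For convenience, here are the same diagnostic steps collected into one block (run on the node's shell; these commands only read state, nothing is modified):

```shell
# Read-only diagnostics for LVM sizing questions on a Proxmox node.
lsblk                      # block devices and the LVs carved out of them
pvs                        # physical volumes: size and free extents
vgs                        # volume groups: total vs. unallocated space
lvs                        # logical volumes: size plus thin-pool Data%/Meta%
cat /etc/pve/storage.cfg   # how Proxmox maps storages onto those volumes
pvesm status               # per-storage totals as the Proxmox GUI sees them
```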
 
Thank you for the instructions. I AM learning.. but not fast enough.
Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                            8:0    0  7.3T  0 disk
├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0  512M  0 part
└─sda3                         8:3    0  7.3T  0 part
  ├─pve-swap                 253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0 15.8G  0 lvm 
  │ └─pve-data-tpool         253:4    0  7.1T  0 lvm 
  │   ├─pve-data             253:5    0  7.1T  1 lvm 
  │   ├─pve-vm--101--disk--0 253:6    0  500G  0 lvm 
  │   ├─pve-vm--102--disk--0 253:7    0  700G  0 lvm 
  │   ├─pve-vm--102--disk--1 253:8    0  700G  0 lvm 
  │   ├─pve-vm--100--disk--0 253:9    0  700G  0 lvm 
  │   ├─pve-vm--102--disk--2 253:10   0  700G  0 lvm 
  │   ├─pve-vm--105--disk--0 253:12   0  700G  0 lvm 
  │   ├─pve-vm--107--disk--0 253:14   0  550G  0 lvm 
  │   ├─pve-vm--103--disk--0 253:15   0  550G  0 lvm 
  │   ├─pve-vm--108--disk--0 253:16   0  150G  0 lvm 
  │   ├─pve-vm--109--disk--0 253:17   0  300G  0 lvm 
  │   └─pve-vm--110--disk--0 253:18   0   32G  0 lvm 
  └─pve-data_tdata           253:3    0  7.1T  0 lvm 
    └─pve-data-tpool         253:4    0  7.1T  0 lvm 
      ├─pve-data             253:5    0  7.1T  1 lvm 
      ├─pve-vm--101--disk--0 253:6    0  500G  0 lvm 
      ├─pve-vm--102--disk--0 253:7    0  700G  0 lvm 
      ├─pve-vm--102--disk--1 253:8    0  700G  0 lvm 
      ├─pve-vm--100--disk--0 253:9    0  700G  0 lvm 
      ├─pve-vm--102--disk--2 253:10   0  700G  0 lvm 
      ├─pve-vm--105--disk--0 253:12   0  700G  0 lvm 
      ├─pve-vm--107--disk--0 253:14   0  550G  0 lvm 
      ├─pve-vm--103--disk--0 253:15   0  550G  0 lvm 
      ├─pve-vm--108--disk--0 253:16   0  150G  0 lvm 
      ├─pve-vm--109--disk--0 253:17   0  300G  0 lvm 
      └─pve-vm--110--disk--0 253:18   0   32G  0 lvm 
sdb                            8:16   0  1.8T  0 disk
└─sdb1                         8:17   0  1.8T  0 part
sr0                           11:0    1 1024M  0 rom 
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize  PFree 
  /dev/sda3  pve lvm2 a--  <7.28t <16.38g
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree 
  pve   1  14   0 wz--n- <7.28t <16.38g
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--  <7.13t             28.36  6.75                           
  root          pve -wi-ao----  96.00g                                                   
  swap          pve -wi-ao----   8.00g                                                   
  vm-100-disk-0 pve Vwi-aotz-- 700.00g data        13.99                                 
  vm-101-disk-0 pve Vwi-aotz-- 500.00g data        84.29                                 
  vm-102-disk-0 pve Vwi-a-tz-- 700.00g data        0.50                                   
  vm-102-disk-1 pve Vwi-a-tz-- 700.00g data        0.51                                   
  vm-102-disk-2 pve Vwi-a-tz-- 700.00g data        4.49                                   
  vm-103-disk-0 pve Vwi-aotz-- 550.00g data        100.00                                 
  vm-105-disk-0 pve Vwi-aotz-- 700.00g data        32.28                                 
  vm-107-disk-0 pve Vwi-a-tz-- 550.00g data        53.56                                 
  vm-108-disk-0 pve Vwi-aotz-- 150.00g data        90.65                                 
  vm-109-disk-0 pve Vwi-a-tz-- 300.00g data        99.94                                 
  vm-110-disk-0 pve Vwi-a-tz--  32.00g data        18.51                                 
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,rootdir,images,iso,vztmpl
        maxfiles 2
        shared 1

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: Backups-IOM
        export /mnt/pools/A/A0/Backups
        path /mnt/pve/Backups-IOM
        server 192.xxx.xx.xxx
        content images,backup
        prune-backups keep-last=1

nfs: ProxNas3
        export /mnt/HD/HD_a2/ProxNAS3
        path /mnt/pve/ProxNas3
        server 192.xxx.xx.xxx
        content backup,images,iso,vztmpl
        prune-backups keep-last=1

nfs: 01nas2
        export /nfs/ProxmoxStore
        path /mnt/pve/01nas2
        server 192.xxx.xx.xxx
        content backup,images,iso,vztmpl
        prune-backups keep-last=1

root@pve:~# pvesm status
Name                  Type     Status           Total            Used       Available        %
Backups-IOM         nfs     active      1932398592      1612331168       320067424   83.44%
ProxNas3               nfs     active      2925144896      1468002240      1427882112   50.19%
01nas2                 nfs     active      2914247232      1630249024      1283998208   55.94%
local                  dir     active        98559220        51111160        42398512   51.86%
local-lvm          lvmthin     active      7653027840      2170398695      5482629144   28.36%
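A quick sanity check on those numbers (a sketch; the figures are taken from the lvs and pvesm output above). The VG shows as ~100% allocated because the thin pool occupies almost all of it, but the pool itself is only 28.36% written, and the thin volumes are over-provisioned relative to their contents:

```shell
# Provisioned sizes (GiB) of the thin volumes from the `lvs` output above.
assigned=$((700 + 500 + 700 + 700 + 700 + 550 + 700 + 550 + 150 + 300 + 32))
echo "provisioned to VMs: ${assigned} GiB"           # 5582 GiB promised to guests

# Actual data written: 28.36% of the ~7301 GiB (<7.13t) pool.
echo "actually written:  $((7301 * 2836 / 10000)) GiB"   # ~2070 GiB, matching pvesm
```

So roughly 5.5 TiB is promised to guests, while only ~2 TiB of real data exists in the pool.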
 
You can also look at the LVM-Thin storage to see usage and what is left, as a total across all VMs. If you click on Backups you will see your snapshots there, as long as this is where you create your VMs.

Screenshot 2022-05-04 140203.jpg




When you create a snapshot it is counted here as well; you can see your snapshots under VM Disks.

Screenshot 2022-05-04 140628.jpg

I am not a pro either; I just read the posts, read the docs, and watch videos on YouTube to learn. Work in progress ;)
 
Does this look correct to you? It seems like VM 103 is using 100% of its data, and VM 109 is at 99.94%.

Does that seem right to you?
VM 103 is using 380GB of data with 30.0GB free on a 411GB drive. The other 138GB is unallocated HDD space. This VM was created from a physical Windows machine: a Windows backup that was then restored into a Proxmox VM. It's been a while, but my _guess_ is that I allocated a 500GB HDD to the VM to make sure it wouldn't run out of space during the restore. I don't know how to change the drive size without screwing up Windows. Windows doesn't usually play nice when its components change, so I've been afraid to even try. Linux is much nicer about that. I doubt I'll ever need that space for that particular Windows machine; it's a specific-task machine that won't see much data growth ever.

VM 109 is a bit more confusing. Proxmox says it's a 300GB drive. Windows also says it's a 300GB drive, with 61.5GB used and 237GB free. The only way it could be 99.94% is if it's counting _all_ the drive space and not using it as a thin drive? When I click on the hard disk in the Hardware section I notice there's no "cache=writeback,size=" listed. Maybe I screwed up when I created the VM. When I click the hard disk's Edit tab it shows "Cache: Write back" selected in the drop-down. That's a VM I created to do some video editing with. It's off 99.999% of the time and only used occasionally.
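One likely explanation for VM 109, offered as a guess from the numbers above rather than anything confirmed in this thread: on thin-provisioned storage, blocks that Windows has deleted stay allocated in the pool until the guest sends TRIM/discard, so a volume can sit near 100% Data% while Windows reports most of it free. A sketch of enabling discard, assuming the disk is attached as scsi0 (check what `qm config` actually shows first, and adjust):

```shell
# Inspect how VM 109's disk is attached (bus type and options).
qm config 109

# Re-attach the disk with discard enabled, so space freed inside the
# guest can be returned to the thin pool. Assumes scsi0 on local-lvm;
# adjust to match the real bus/slot from `qm config 109`.
qm set 109 --scsi0 local-lvm:vm-109-disk-0,discard=on

# Inside the Windows guest afterwards (PowerShell), force a retrim:
#   Optimize-Volume -DriveLetter C -ReTrim
```

Note this only works when the guest's disk controller and drivers support discard (e.g. VirtIO SCSI with the VirtIO drivers installed in Windows).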
 
That's actually part of what I looked at that started this thread. Thank you.
 
Great. Well, so far it seems you're good.
 
