LVM 100% usage

Discussion in 'Proxmox VE: Installation and configuration' started by maaksel, Mar 31, 2019.

  1. maaksel

    maaksel New Member

    Joined:
    Mar 31, 2019
    Messages:
    3
    Likes Received:
    0
    Updated to 5.3, and noticed that my disk usage is at 100% for the LVM. Any idea why or how I can correct this? I only have 2 VMs up, 160GB and a 16GB, neither of which are over ~70% full.

    [attached screenshot: upload_2019-3-30_16-5-43.png — storage usage summary]
     
  2. JBB

    JBB Member

    Joined:
    Jan 23, 2015
    Messages:
    94
    Likes Received:
    1
    Me too - no clues yet. What, if any, ill effects are you seeing? Also, I assume, like me, your node reports lots of free storage?
     
  3. maaksel

    maaksel New Member

    Joined:
    Mar 31, 2019
    Messages:
    3
    Likes Received:
    0
    So, my friend kind of explained it to me, and showed me the math using the lvs command.

    Here is mine: [attached screenshot: upload_2019-3-31_8-24-59.png — lvs output]

    So, I do have 5 volumes, even though I thought I only had 2. He said he doesn't know how EXACTLY the math works, but that 'data' is basically a close sum of the VM disks I have.

    I made another server, 101, with 20GB, and my volume count moved up to 6, but the % stayed the same. I made another, 102, with 20GB, and as expected it moved up to 7, and the % still stayed the same.

    He said it's not really a big deal until my Data% starts getting used up. I am not a strong Linux guy, but he reminded me that there is PLENTY of stuff we can clean up/prune/trim from the system, since I haven't done anything to it other than upgrades and such. Hopefully this at least puts us down the path of learning; it did for me, but I plan on doing a bit more research on how the math works.
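    The "close sum" can be checked from the lvs output itself: add up the LSize of the thin volumes carved out of the data pool. A minimal sketch, using sample values standing in for real output (the second disk's name here is assumed for illustration, not taken from the screenshot):

    Code:
    ```shell
    # Sum declared thin-volume sizes; the two lines below stand in for
    # real `lvs --units g` output (volume names assumed for illustration).
    printf '%s\n' \
      "vm-100-disk-1 160.00" \
      "vm-101-disk-1 16.00" |
    awk '{ total += $2; print $1 ": " $2 "G" }
         END { print "total declared: " total "G" }'
    ```

    On a real host, something like `lvs --noheadings --units g -o lv_name,lv_size` should produce similar name/size columns to feed this.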
     
  4. JBB

    JBB Member

    Joined:
    Jan 23, 2015
    Messages:
    94
    Likes Received:
    1
    Interesting, but I'm not sure I completely understand. Are you saying that it's not a problem for the LVM data percentage to be at or near 100%?

    BTW, I see that the output from your lvs command is rather different from mine. You seem to have lvm-thin storage, while I have plain LVM. But again, I don't know what the implications of this are.

    Code:
    root@host:~# lvs
      LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      data pve -wi-ao---- 321.75g                                                   
      root pve -wi-ao----  96.00g                                                   
      swap pve -wi-ao----  31.00g 
    
     
  5. BobhWasatch

    BobhWasatch Member

    Joined:
    Mar 16, 2019
    Messages:
    44
    Likes Received:
    6
    The output of the lvs command shows that you have five logical volumes allocated from the physical volume group "pve". The root and swap volumes are regular volumes that are used for your root filesystem and swap.

    Data is an lvm-thin pool (the left-hand 't' in the Attr column means it is a thin pool rather than a regular volume). Your VM disks are allocated from that. So the whole physical volume group is allocated, but a big chunk of it is allocated to the lvm-thin pool from which thin volumes can be sub-allocated.

    The thing with lvm-thin is that you can allocate VM disks with some declared maximum size, but they will only take blocks from the pool as they are used. So, for example, your vm-100-disk-1 has a maximum size of 160G but is only using 67% of that right now. The 16G disk is only using 8% of its allocation, and overall the data pool is about 65% used. Note that this means you can over-commit storage: you have 160+16=176G allocated out of 165G available.
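    That over-commit arithmetic can be sketched directly (the numbers are the ones from this thread; the 165G pool size is as stated above):

    Code:
    ```shell
    # Thin provisioning allocates blocks lazily, so the sum of the
    # declared disk sizes may legitimately exceed the pool size.
    declared=$((160 + 16))   # G: declared sizes of the two VM disks
    pool=165                 # G: size of the lvm-thin data pool
    echo "declared ${declared}G against a ${pool}G pool"
    [ "$declared" -gt "$pool" ] && echo "over-committed by $((declared - pool))G"
    ```

    The pool only runs into trouble when the blocks actually *used* by the thin volumes approach the pool size, which is why Data% is the number to watch.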

    The difference with @JBB is that his data volume is not thin. It is probably mounted as a filesystem under /var/lib/vz, and the VM disk images are files (either raw or qcow2) rather than LVM block devices. The whole size of the volume is considered allocated as far as the volume manager is concerned, and you can't over-commit.
     
    JBB likes this.
  6. JBB

    JBB Member

    Joined:
    Jan 23, 2015
    Messages:
    94
    Likes Received:
    1
    That's indeed true in my case.

    OK - so I don't need to be worried about running out?
     
  7. BobhWasatch

    BobhWasatch Member

    Joined:
    Mar 16, 2019
    Messages:
    44
    Likes Received:
    6
    Um, maybe. I mis-spoke a bit. If you are using qcow2-format VM disks, they behave similarly to lvm-thin in that space isn't acquired until it is used. In that case you can declare a VM disk, or a group of disks, whose total size is bigger than the actual storage.
     
  8. JBB

    JBB Member

    Joined:
    Jan 23, 2015
    Messages:
    94
    Likes Received:
    1
    OK, so what I'm trying to understand is why Proxmox tells me I have about 50% free on /var/lib/vz (where my LVM filesystem is mounted):

    Code:
    Filesystem            Size  Used Avail Use% Mounted on
    udev                   16G     0   16G   0% /dev
    tmpfs                 3.2G  306M  2.9G  10% /run
    /dev/mapper/pve-root   95G  6.3G   84G   7% /
    tmpfs                  16G   37M   16G   1% /dev/shm
    tmpfs                 5.0M     0  5.0M   0% /run/lock
    tmpfs                  16G     0   16G   0% /sys/fs/cgroup
    /dev/sda2             486M  191M  271M  42% /boot
    /dev/mapper/pve-data  317G  160G  157G  51% /var/lib/vz
    /dev/fuse              30M   36K   30M   1% /etc/pve
    tmpfs                 3.2G     0  3.2G   0% /run/user/1000
    
    .... but then says I've used 97% of /dev/sda3, which is the device dealing with that volume (as far as I know):

    Code:
    root@host:~$ pvs
      PV         VG  Fmt  Attr PSize   PFree 
      /dev/sda3  pve lvm2 a--  464.75g 16.00g
    
    BTW my disk files (qcow2) all have discard turned on and fstrim running weekly.
     
    #8 JBB, Mar 31, 2019
    Last edited: Mar 31, 2019
  9. BobhWasatch

    BobhWasatch Member

    Joined:
    Mar 16, 2019
    Messages:
    44
    Likes Received:
    6
    You guys have two different setups. It is likely that you installed Proxmox a while ago and upgraded, while his install is newer. Thin provisioning is a relatively new feature.

    Since you aren't using thin provisioning, the space for logical volume "data" is pre-allocated. Even if the filesystem on /dev/mapper/pve-data is empty the space is committed at the volume manager layer. That doesn't mean you're out of space for files, it just means that you can't allocate any more block devices or grow an existing one. In your case the relevant free space is the filesystem free space.
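    This pre-allocation explains the pvs numbers above exactly: the three LSize values from lvs sum to the portion of the PV handed out to logical volumes, leaving the 16G PFree. A quick check with the figures from this thread:

    Code:
    ```shell
    # Plain (non-thin) LVM: PV usage is just the sum of the
    # pre-allocated LV sizes, regardless of filesystem fullness.
    awk 'BEGIN {
      data = 321.75; root = 96.00; swap = 31.00   # LSize values from lvs, in G
      psize = 464.75                              # PSize from pvs, in G
      allocated = data + root + swap
      printf "allocated: %.2fG\n", allocated      # ~97% of the PV
      printf "PFree:     %.2fG\n", psize - allocated
    }'
    ```

    So the "97% used" on /dev/sda3 is volume-manager allocation, while df's 51% is how full the filesystem inside pve-data actually is.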

    With thin provisioning, VM disks are block devices allocated from a thin pool. They are not stored as files in /var/lib/vz; rather, the VMs work directly with the logical block device. So in that case the percentage free of the thin pool is what's relevant to how much space you have for VMs.

    Two different ways of handling space allocation. Using fstrim is a good idea in either case.
     
    JBB likes this.
  10. JBB

    JBB Member

    Joined:
    Jan 23, 2015
    Messages:
    94
    Likes Received:
    1
    Ah OK. That makes sense - I suspected some kind of pre-allocation was going on.
     