[SOLVED] local-lvm getting full.

sherbmeister

Hi, my local-lvm is almost full and I don't know what to do to fix this. I still don't fully understand what local-lvm is or how it works compared to "local" (which I can clear without issues).

I tried fstrim on all my VMs and containers. That only freed about 10 GB; usage is still at 96.42% (462.39 GB of 479.56 GB).

I've also cleared log files, deleted old backups and snapshots, and enabled the discard option on my disks.
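
For reference, here's roughly what I ran (a sketch; the container ID below is just an example):

Code:
# inside each VM, trim all mounted filesystems that support discard:
fstrim -av

# on the host, trim a container's filesystem (example CT ID):
pct fstrim 105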

Any other suggestions? Thanks.
 
Hello

Can you please share your /etc/pve/storage.cfg so I can verify? Assuming you haven't done anything special at installation:

The local storage is basically your Linux root filesystem, where the Proxmox system itself lives. Data like backups and ISO files also go there.

The local-lvm storage is where the virtual disks of your VMs are stored.
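
A quick way to see the two side by side (a sketch; this assumes the default volume group name pve):

Code:
# "local" is a directory on the root filesystem:
df -h /var/lib/vz

# "local-lvm" is the pve/data thin pool; list its volumes and fill level:
lvs pve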
 
The local-lvm is where the hard drives of your VMs are stored.
Thought so, but when I add everything up, it doesn't come close to what's being used.


Code:
root@yautja:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

dir: storage2
        path /mnt/pve/storage2
        content snippets,backup,images,rootdir,iso,vztmpl
        is_mountpoint 1
        nodes yautja
        shared 0

dir: NVME
        path /mnt/pve/NVME
        content rootdir,images,iso,vztmpl,snippets,backup
        is_mountpoint 1
        nodes yautja
        shared 0

dir: storage3
        path /mnt/pve/storage3
        content vztmpl,iso,images,rootdir,backup,snippets
        is_mountpoint 1
        nodes yautja
        shared 0

lvm: auriga-storage
        vgname auriga-storage
        content rootdir,images
        nodes auriga
        shared 0

zfspool: local-zfs
        pool rpool/ROOT/pve-1
        content images,rootdir
        nodes auriga
        sparse 0

zfspool: local-zfs-newt
        pool rpool/ROOT/pve-1
        content images,rootdir
        nodes newton
        sparse 0

lvm: local-lvm-yautja
        vgname pve
        content images,rootdir
        nodes yautja
        shared 0

cifs: ftp-yautja
        path /mnt/pve/ftp-yautja
        server 192.168.69.239
        share NAS
        content backup,vztmpl
        prune-backups keep-all=1
        username marius

cifs: ftp-newton
        path /mnt/pve/ftp-newton
        server 192.168.69.7
        share server-backups
        content snippets,backup,rootdir,images,vztmpl
        prune-backups keep-all=1
        username marius

cifs: ISO
        path /mnt/pve/ISO
        server 192.168.69.7
        share isos
        content iso
        prune-backups keep-all=1
        username marius
 
Do you mean lvm: local-lvm-yautja?

Can you show me how your storage is allocated with lsblk?
 
Do you mean lvm: local-lvm-yautja?
Yes.


Code:
root@yautja:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop1                          7:1    0    10G  0 loop
loop2                          7:2    0    64G  0 loop
sda                            8:0    0 447.1G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part
└─sda3                         8:3    0 446.6G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   3.3G  0 lvm 
  │ └─pve-data-tpool         253:4    0 320.1G  0 lvm 
  │   ├─pve-data             253:5    0 320.1G  1 lvm 
  │   ├─pve-vm--309--disk--0 253:6    0    30G  0 lvm 
  │   ├─pve-vm--188--disk--0 253:7    0     4M  0 lvm 
  │   ├─pve-vm--188--disk--1 253:8    0    64G  0 lvm 
  │   ├─pve-vm--105--disk--0 253:9    0    50G  0 lvm 
  │   ├─pve-vm--900--disk--0 253:10   0    35G  0 lvm 
  │   ├─pve-vm--104--disk--0 253:11   0    50G  0 lvm 
  │   ├─pve-vm--203--disk--0 253:12   0    50G  0 lvm 
  │   ├─pve-vm--204--disk--0 253:13   0    50G  0 lvm 
  │   ├─pve-vm--107--disk--0 253:14   0    50G  0 lvm 
  │   ├─pve-vm--107--disk--1 253:15   0    50G  0 lvm 
  │   ├─pve-vm--106--disk--0 253:16   0    32G  0 lvm 
  │   └─pve-vm--555--disk--0 253:17   0    18G  0 lvm 
  └─pve-data_tdata           253:3    0 320.1G  0 lvm 
    └─pve-data-tpool         253:4    0 320.1G  0 lvm 
      ├─pve-data             253:5    0 320.1G  1 lvm 
      ├─pve-vm--309--disk--0 253:6    0    30G  0 lvm 
      ├─pve-vm--188--disk--0 253:7    0     4M  0 lvm 
      ├─pve-vm--188--disk--1 253:8    0    64G  0 lvm 
      ├─pve-vm--105--disk--0 253:9    0    50G  0 lvm 
      ├─pve-vm--900--disk--0 253:10   0    35G  0 lvm 
      ├─pve-vm--104--disk--0 253:11   0    50G  0 lvm 
      ├─pve-vm--203--disk--0 253:12   0    50G  0 lvm 
      ├─pve-vm--204--disk--0 253:13   0    50G  0 lvm 
      ├─pve-vm--107--disk--0 253:14   0    50G  0 lvm 
      ├─pve-vm--107--disk--1 253:15   0    50G  0 lvm 
      ├─pve-vm--106--disk--0 253:16   0    32G  0 lvm 
      └─pve-vm--555--disk--0 253:17   0    18G  0 lvm 
sdb                            8:16   0   3.6T  0 disk
└─sdb1                         8:17   0   3.6T  0 part /mnt/pve/storage2
sdc                            8:32   0   3.6T  0 disk
└─sdc1                         8:33   0   3.6T  0 part /mnt/pve/storage3
nvme0n1                      259:0    0 238.5G  0 disk
└─nvme0n1p1                  259:1    0 238.5G  0 part /mnt/pve/NVME
root@yautja:~#
 
lol nevermind, I'm dumb, I should have checked this. It looks like there are some leftover disks from older VMs that never got deleted for some reason. Apologies
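
For anyone who finds this later: you can compare the thin volumes against the guests that still exist and remove the orphans. A rough sketch (the VMID below is just an example; double-check before removing anything, since lvremove is destructive):

Code:
# list all volumes in the pve volume group with their usage:
lvs pve

# list the guests that still exist on this node:
qm list
pct list

# remove a leftover disk whose VMID no longer exists:
lvremove pve/vm-309-disk-0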
 
It happens to the best of us. If that solves your issue, don't forget to mark the thread as 'Solved'.

If not, we can look further.
 
In the web UI, there is a Server View on the left. Open the drop-down of your node and look for the storage there.
Once you have selected the storage, you should see an option to show the VM disks on it.

I can't seem to access sda3
The disks are not stored as files here, but as their own block devices. You can think of it a bit like each disk being its own partition.
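
A quick sketch of what that looks like from the shell (storage name taken from your config):

Code:
# each VM disk is its own device node under the volume group:
ls -l /dev/pve/

# or list the volumes Proxmox sees on that storage:
pvesm list local-lvm-yautja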
 
In the web UI, there is a Server View on the left. Open the drop-down of your node and look for the storage there.
Once you have selected the storage, you should see an option to show the VM disks on it.


The disks are not stored as files here, but as their own block devices. You can think of it a bit like each disk being its own partition.
That's the thing. There's literally nothing listed under VM Disks, and yet the storage is still full.
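
In case it helps, the pool's actual fill level can also be checked from the shell (a sketch, assuming the default pve/data thin pool):

Code:
# show the thin pool's data and metadata usage, including hidden volumes:
lvs -a pve

# compare with what each configured storage reports:
pvesm status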
 

Attachments

  • vm.png (14.9 KB)
