Disk is full - how to migrate items

TestInProd

I'm not sure how I managed to do this, but I must have messed up the initial config of my PVE server.

I have 3x NVME:
  • 1x 500GB for OS
  • 2x 1TB for VMs
I apparently pointed the LVM/local storage to the OS drive rather than the NVMEs meant for the VMs. After setting up several VMs, I logged into PVE today and realized I couldn't control anything; the system seemed to be in 'read-only' mode because the OS drive was full. While the GUI was somewhat responsive, I couldn't reboot VMs or PVE itself. I forced it off, turned it back on, and all is good for the moment.

How should I move the existing VMs, etc, to the 1TB NVME?

 
First you need to free up space. PVE won't fully work with a full root filesystem.
You should also check why your root filesystem is full. Even if you don't monitor your LVM thin pool and let the VMs fill it up until it fails and guest data is lost, that shouldn't affect the root filesystem. Those are different LVs, each with its own capacity. So moving VMs to other disks won't free up your root filesystem (unless you changed your default storage configs and allowed PVE to store virtual disks as "raw" or "qcow2" files on the "local" storage... which wouldn't be recommended...).
When installing PVE with the 500GB disk selected as the OS drive, you can't also tell the installer to use those 1TB NVMes for VMs. That has to be done later via the webUI. By default the PVE installer will create an ext4/xfs-formatted LV for your root filesystem, an LV for swap, and an LVM thin pool for storing your VMs, all on the system/boot disk.
How should I move the existing VMs, etc, to the 1TB NVME?
Create a new storage for your VMs/LXCs using those 1TB SSDs. Each virtual disk has a "Move disk" button you can use to move virtual disks between storages.
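If you prefer the CLI over the webUI, a minimal sketch could look like the following. This assumes the two 1TB NVMes are /dev/nvme1n1 and /dev/nvme2n1 and that they are empty; the VG name "vmdata", the storage ID "nvme-thin" and VM 100's "scsi0" disk are just example names for your setup:

Bash:
# ASSUMPTION: /dev/nvme1n1 and /dev/nvme2n1 are the empty 1TB NVMes - pvcreate/vgcreate wipe them!
pvcreate /dev/nvme1n1 /dev/nvme2n1
vgcreate vmdata /dev/nvme1n1 /dev/nvme2n1

# Create a thin pool in the new VG (leave some room so the pool metadata fits)
lvcreate -l 95%FREE --type thin-pool -n data vmdata

# Register it in PVE as an LVM-Thin storage for VM and LXC disks
pvesm add lvmthin nvme-thin --vgname vmdata --thinpool data --content images,rootdir

# Move one virtual disk to the new storage (same as the "Move disk" button in the webUI);
# --delete 1 removes the old copy after a successful move
qm disk move 100 scsi0 nvme-thin --delete 1

Alternatively, the webUI can create a VG, thin pool and storage entry on a single unused disk in one step under Node -> Disks -> LVM-Thin -> "Create: Thinpool".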
 

Thank you for the reply! The only thing that I want to clarify is this:
You should also check why your root filesystem is full.
...
So moving VMs to other disks won't free up your root filesystem

The root LV appears to only have 96GB allocated, so I'm guessing that system updates and log files filled it. To fix this whole issue, my plan would be:
  1. Create new storage on the 1TB SSDs
  2. Move the VMs over to it
  3. Then somehow shrink the LVM LV that's on the 500GB SSD and expand the root LV
Would that be correct? Is there anything else that I should do? Here's what that SSD looks like:

Code:
nvme0n1                                          259:0    0 465.8G  0 disk
├─nvme0n1p1                                      259:1    0  1007K  0 part
├─nvme0n1p2                                      259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                                      259:3    0 464.8G  0 part
  ├─pve-swap                                     252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                     252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                               252:2    0   3.4G  0 lvm 
  │ └─pve-data-tpool                             252:4    0 337.9G  0 lvm 
  │   ├─pve-data                                 252:5    0 337.9G  1 lvm 
  │   ├─pve-vm--100--disk--0                     252:6    0    32G  0 lvm 
  │   ├─pve-vm--101--disk--0                     252:7    0     4M  0 lvm 
  │   ├─pve-vm--101--disk--1                     252:8    0    60G  0 lvm 
  │   ├─pve-vm--101--disk--2                     252:9    0     4M  0 lvm 
  │   ├─pve-vm--102--disk--0                     252:10   0    10G  0 lvm 
  │   ├─pve-vm--104--disk--0                     252:11   0    40G  0 lvm 
  │   ├─pve-vm--103--state--Configured           252:12   0   8.5G  0 lvm 
  │   ├─pve-vm--103--state--Working              252:13   0   8.5G  0 lvm 
  │   ├─pve-vm--103--disk--0                     252:14   0    20G  0 lvm 
  │   ├─pve-vm--105--state--IdM_Setup            252:15   0   8.5G  0 lvm 
  │   ├─pve-vm--105--disk--0                     252:16   0    32G  0 lvm 
  │   └─pve-vm--105--state--FreeRADIUS_Installed 252:17   0   8.5G  0 lvm 
  └─pve-data_tdata                               252:3    0 337.9G  0 lvm 
    └─pve-data-tpool                             252:4    0 337.9G  0 lvm 
      ├─pve-data                                 252:5    0 337.9G  1 lvm 
      ├─pve-vm--100--disk--0                     252:6    0    32G  0 lvm 
      ├─pve-vm--101--disk--0                     252:7    0     4M  0 lvm 
      ├─pve-vm--101--disk--1                     252:8    0    60G  0 lvm 
      ├─pve-vm--101--disk--2                     252:9    0     4M  0 lvm 
      ├─pve-vm--102--disk--0                     252:10   0    10G  0 lvm 
      ├─pve-vm--104--disk--0                     252:11   0    40G  0 lvm 
      ├─pve-vm--103--state--Configured           252:12   0   8.5G  0 lvm 
      ├─pve-vm--103--state--Working              252:13   0   8.5G  0 lvm 
      ├─pve-vm--103--disk--0                     252:14   0    20G  0 lvm 
      ├─pve-vm--105--state--IdM_Setup            252:15   0   8.5G  0 lvm 
      ├─pve-vm--105--disk--0                     252:16   0    32G  0 lvm 
      └─pve-vm--105--state--FreeRADIUS_Installed 252:17   0   8.5G  0 lvm
 
I am curious what's taking up all of the space, now. Looking around, I can't find much that would add up to 96GB.

Code:
root@pve:/# find / -type f -size +60M 2>/dev/null
/sys/devices/pci0000:00/0000:00:02.0/resource2_wc
/sys/devices/pci0000:00/0000:00:02.0/resource2
/var/lib/vz/template/iso/virtio-win-0.1.240.iso
/var/lib/vz/template/iso/ubuntu-22.04.3-live-server-amd64.iso
/var/lib/vz/template/iso/rhel-9.3-x86_64-boot.iso
/var/lib/vz/dump/vzdump-qemu-102-2024_04_06-11_42_22.vma.zst
/proc/kcore
/usr/share/kvm/edk2-aarch64-code.fd
/usr/share/kvm/edk2-arm-vars.fd
/usr/share/kvm/edk2-arm-code.fd

root@pve:/# du -a / | sort -n -r | head -n 20 2>/dev/null
10364076        /
7862060 /var
7649204 /var/lib
7530852 /var/lib/vz
4643392 /var/lib/vz/template
4643384 /var/lib/vz/template/iso
2887452 /var/lib/vz/dump
2887440 /var/lib/vz/dump/vzdump-qemu-102-2024_04_06-11_42_22.vma.zst
2327088 /usr
2083396 /var/lib/vz/template/iso/ubuntu-22.04.3-live-server-amd64.iso
1358568 /usr/lib
922628  /var/lib/vz/template/iso/rhel-9.3-x86_64-boot.iso
663620  /usr/share
612816  /var/lib/vz/template/iso/virtio-win-0.1.240.iso
536416  /usr/lib/modules
536412  /usr/lib/modules/6.5.11-8-pve
519644  /usr/lib/modules/6.5.11-8-pve/kernel
377128  /usr/lib/modules/6.5.11-8-pve/kernel/drivers
337928  /usr/lib/x86_64-linux-gnu
 
Then somehow shrink the LVM LV that's on the 500GB SSD and expand the root LV
LVM thin pools can only be destroyed; shrinking is not possible. The forum's search function should show you lots of threads on how to do this.

lvs will show you how full your LVM thin pool is; df -h will show how full the root filesystem is.

You should analyse what's consuming all the space on the root filesystem instead of just guessing. See for example here: https://www.cyberciti.biz/faq/linux-find-largest-file-in-directory-recursively-using-find-du/
"ncdu" is also a nice tool to check what's consuming space, but you need some space to install it first.
 
In case you can't find anything: did you ever edit your fstab or similar to mount some filesystem manually and then set that up as a directory storage without setting the "is_mountpoint" option?
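For reference, if that were the case, the directory storage should get the "is_mountpoint" option so PVE refuses to write into the empty mountpoint directory while the filesystem isn't actually mounted. A minimal sketch, assuming a hypothetical directory storage with the ID "mydir":

Bash:
# Hypothetical storage ID "mydir" backed by a manually mounted filesystem
pvesm set mydir --is_mountpoint yes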
 
In case you can't find anything: did you ever edit your fstab or similar to mount some filesystem manually and then set that up as a directory storage without setting the "is_mountpoint" option?

Thank you so much for the guidance!

I must have been mistaken - root doesn't appear to be full at all. It's the LVM thin pool that seems to have filled up and was causing issues. Odd that it caused issues with PVE itself though (couldn't use a shell, couldn't reboot, etc). Unless maybe /tmp was full and the forced restart fixed that, or something along those lines.

LVM thin pools can only be destroyed; shrinking is not possible.
Would it be a good idea to destroy the LVM thin pool on the OS disk (after migrating VMs) and only use the newly created one? I should be able to then expand the root LV, yes?

Bash:
root@pve:/# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  1.9M  6.3G   1% /run
/dev/mapper/pve-root   94G  9.9G   80G  12% /
tmpfs                  32G   46M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
efivarfs              192K  141K   47K  76% /sys/firmware/efi/efivars
/dev/nvme0n1p2       1022M   12M 1011M   2% /boot/efi
/dev/fuse             128M   44K  128M   1% /etc/pve
tmpfs                 6.3G     0  6.3G   0% /run/user/0


root@pve:/# lvs
  LV                                      VG  Attr       LSize   Pool Origin                        Data%  Meta%  Move Log Cpy%Sync Convert
  data                                    pve twi-aotz-- 337.86g                                    15.95  1.06                           
  root                                    pve -wi-ao----  96.00g                                                                           
  snap_vm-103-disk-0_Configured           pve Vri---tz-k  20.00g data                                                                     
  snap_vm-103-disk-0_Working              pve Vri---tz-k  20.00g data                                                                     
  snap_vm-105-disk-0_FreeRADIUS_Installed pve Vri---tz-k  32.00g data vm-105-disk-0                                                       
  snap_vm-105-disk-0_IdM_Setup            pve Vri---tz-k  32.00g data                                                                     
  swap                                    pve -wi-ao----   8.00g                                                                           
  vm-100-disk-0                           pve Vwi-a-tz--  32.00g data                               5.60                                   
  vm-102-disk-0                           pve Vwi-a-tz--  10.00g data                               73.59                                 
  vm-103-disk-0                           pve Vwi-a-tz--  20.00g data snap_vm-103-disk-0_Configured 51.29                                 
  vm-103-state-Configured                 pve Vwi-a-tz--  <8.49g data                               18.47                                 
  vm-103-state-Working                    pve Vwi-a-tz--  <8.49g data                               43.89                                 
  vm-104-disk-0                           pve Vwi-a-tz--  40.00g data                               25.80                                 
  vm-105-disk-0                           pve Vwi-a-tz--  32.00g data snap_vm-105-disk-0_IdM_Setup  11.36                                 
  vm-105-state-FreeRADIUS_Installed       pve Vwi-a-tz--  <8.49g data                               29.74                                 
  vm-105-state-IdM_Setup                  pve Vwi-a-tz--  <8.49g data                               32.73
 
Would it be a good idea to destroy the LVM thin pool on the OS disk (after migrating VMs) and only use the newly created one? I should be able to then expand the root LV, yes?
If you don't want to store any guests there, you could destroy it and expand the root LV into the freed space so you can store more ISOs/templates/backups.

So according to your output, the root filesystem is only 12% full and the thin pool only 16%.
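A rough sketch of how that could look on the CLI, once lvs confirms that no guest disk, snapshot or state volume is left in the pool. This assumes the default storage ID "local-lvm" and the default ext4 root filesystem (for xfs you would use xfs_growfs instead of resize2fs):

Bash:
# Remove the now-unused default LVM-Thin storage from PVE's storage config
pvesm remove local-lvm

# Destroy the thin pool - this deletes everything still stored in it!
lvremove pve/data

# Grow the root LV into the freed space and resize the filesystem to match
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root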
 
