lvm out of space

Mjp9119

New Member
Jun 30, 2019
Hi,
I have a VM with an IO error.
The reason seems to be that its hard disk is out of space. The VM is running SUSE 15.
I can't log in to the VM, and I have some other problems too.

I can log in to the Proxmox web GUI.

So I'm considering adding another physical disk.
After that, I would incorporate it into the LVM, which is now 100% full.

All this in the hope that the extra space would normalize my VM's behaviour without breaking anything in it.

Is that possible?

If so, how should I proceed?

LVM seems to be configured with thin provisioning.
Thanks in advance
 
please provide the output of:
Code:
lsblk
pvs
vgs
lvs -a

That should help analyze the situation.
 
Thanks, here goes the result for lsblk
Code:
root@pve-h15:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 931.5G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0   931G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.1G  0 lvm
  │ └─pve-data-tpool         253:4    0 794.8G  0 lvm
  │   ├─pve-data             253:5    0 794.8G  0 lvm
  │   └─pve-vm--100--disk--0 253:6    0   921G  0 lvm
  └─pve-data_tdata           253:3    0 794.8G  0 lvm
    └─pve-data-tpool         253:4    0 794.8G  0 lvm
      ├─pve-data             253:5    0 794.8G  0 lvm
      └─pve-vm--100--disk--0 253:6    0   921G  0 lvm
sdb                            8:16   0 931.5G  0 disk
└─sdb1                         8:17   0 931.5G  0 part /data

for pvs
Code:
root@pve-h15:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda3  pve lvm2 a--  931.01g 15.99g

For vgs
Code:
root@pve-h15:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   4   0 wz--n- 931.01g 15.99g


and for lvs -a
Code:
root@pve-h15:~# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotzD- 794.79g             100.00 4.54
  [data_tdata]    pve Twi-ao---- 794.79g
  [data_tmeta]    pve ewi-ao----   8.11g
  [lvol0_pmspare] pve ewi-------   8.11g
  root            pve -wi-ao----  96.00g
  swap            pve -wi-ao----   8.00g
  vm-100-disk-0   pve Vwi-aotz-- 921.00g data        86.30
 
Code:
root@pve-h15:~# lvs -a
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-aotzD- 794.79g             100.00 4.54

Your thin pool pve/data is full - you need to reduce the used space (vm-100-disk-0 seems to use up most of it).
(Alternatively you could extend the VG with another disk - but keep in mind that you have no redundancy in that case, and losing one disk means you'll lose everything on both disks.)
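
For reference, a rough sketch of what extending the VG with another disk would look like (the device name /dev/sdc is just an assumption - check lsblk for the actual name of the new disk):
Code:
# create an LVM physical volume on the new disk (hypothetical /dev/sdc)
pvcreate /dev/sdc
# add it to the existing 'pve' volume group
vgextend pve /dev/sdc
# then grow the thin pool's data LV into the new space, e.g. by 100G
lvextend -L +100G pve/data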

I hope that helps!
 
Thanks!
Yes, it helps enormously.

I assume that reducing the used space in vm100 should be done from inside this VM.
I can't access it.

The OS does not allow me to log in at its console, I can't access it via the network, and the console from the web GUI freezes at start.

So I've given up on that.

But I was thinking of "extending the VG with another disk" for the sole purpose of getting my vm100 back to normal.
That way I could get in and delete some files.

So this is my question:

can you please tell me which commands I should use, and how to use them, for that purpose
(extending the VG with another disk)?

I intend to remove that disk afterwards, once everything is OK with vm100's storage.

Thank you very much for your support!
 
First of all: make sure you have a working and tested backup!! (Otherwise, if something goes wrong, you could lose data.)

going through your output again - I think that the 'pve' volume group still has a bit of space:
Code:
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   4   0 wz--n- 931.01g 15.99g

In that case you should be good by just running `lvextend` (check its manpage for how to invoke it: `man lvextend`).
Since it's a thin pool, consider increasing the metadata LV as well - see our wiki page on the topic:
https://pve.proxmox.com/wiki/LVM2#Resize_metadata_pool
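
A minimal sketch, assuming you want to give part of the remaining free space in the 'pve' VG to the pool (adjust the sizes to what vgs reports):
Code:
# grow the thin pool's data LV by 10G out of the free space in the pve VG
lvextend -L +10G pve/data
# grow the thin pool's metadata LV as well (see the wiki page above)
lvextend --poolmetadatasize +1G pve/data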

I hope this helps!
 
Hi, I'm still trying to solve my boot freeze problem on vm 100 (SUSE Leap).

After following the last advice, I found this command, which I would use to increase the pool and the metadata:

Code:
lvresize  --size +1G --poolmetadatasize +16M  pve-data_tdata/pve-data-tpool

but it complains with this:

Code:
lvresize  --size +1G --poolmetadatasize +16M  pve-data_tdata/pve-data-tpool
  Volume group "pve-data_tdata" not found
  Cannot process volume group pve-data_tdata

This is the output of lsblk again:
Code:
 lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 931.5G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0   931G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.1G  0 lvm
  │ └─pve-data-tpool         253:4    0 794.8G  0 lvm
  │   ├─pve-data             253:5    0 794.8G  0 lvm
  │   └─pve-vm--100--disk--0 253:6    0   921G  0 lvm
  └─pve-data_tdata           253:3    0 794.8G  0 lvm
    └─pve-data-tpool         253:4    0 794.8G  0 lvm
      ├─pve-data             253:5    0 794.8G  0 lvm
      └─pve-vm--100--disk--0 253:6    0   921G  0 lvm
sdb                            8:16   0 931.5G  0 disk
└─sdb1                         8:17   0 931.5G  0 part /data

My intention is to add 1 GB to the pool and 16 MB to the metadata (numbers I picked arbitrarily).
How should I write the command?
 
This is an UPDATE

After trying to figure it out by myself without any result, I noticed the LV and VG columns in the output of lvs issued earlier,
so I decided to give them a try and they did the magic!!

I issued:
Code:
 lvresize  --size +1G --poolmetadatasize +16M  pve/data

and the result wasn't completely clean, but it ended with a successful resize anyway:
Code:
 lvresize  --size +1G --poolmetadatasize +16M  pve/data
  WARNING: Sum of all thin volume sizes (921.00 GiB) exceeds the size of thin pools and the amount of free space in volume group (15.98 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Size of logical volume pve/data_tmeta changed from 8.11 GiB (2077 extents) to 8.13 GiB (2081 extents).
  WARNING: Sum of all thin volume sizes (921.00 GiB) exceeds the size of thin pools and the amount of free space in volume group (14.98 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Size of logical volume pve/data_tdata changed from 794.79 GiB (203466 extents) to 795.79 GiB (203722 extents).
  Logical volume pve/data_tdata successfully resized.
After that I restarted my vm100 and SUSE came up alive again!!
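
In case it helps anyone else: it looks like the LVM tools want the VG/LV name (pve/data here), not the device-mapper name that lsblk shows (pve-data-tpool). A quick way to list the right names (just a sketch of what I checked):
Code:
# list logical volumes by VG and LV name, which is how lvresize/lvextend address them
lvs -o vg_name,lv_name,lv_size pve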

So, let me express my gratitude once more; your help has been enormous. Thank you.

And by the way, could anyone explain those two warnings?

Thanks
Mauricio
 
These warnings are actually quite worrisome, since you have over-provisioned your volume group. That is, you have created thin volumes, which initially only take as much physical space in your volume group as they actually use, but once they grow towards their full size they will try to occupy more space in the volume group than you actually have.

This has already been the reason for your previous issue, and I assume you will run into the same issue again pretty soon. You will have to reduce the size of the disk for vm-100 no matter what! Better start backing up the data from vm-100, this is bound to be trouble…
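
A quick way to see the over-provisioning from the host (a sketch using standard lvs fields):
Code:
# compare the virtual size of the thin volume against the pool size and fill level
lvs -o lv_name,lv_size,pool_lv,data_percent pve
# vm-100-disk-0 is 921G virtual, while its pool pve/data is only ~796G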
 
ok, thanks for your reply.

"...but once they grow towards their full size they will try to occupy more space in the volume group than you actually have..."

Let me ask about this "rise to full usage".
Is this rise produced by normal use inside vm100 (meaning that if I restrict disk I/O in it I would be buying some time),
or is it produced at the Proxmox level (the hypervisor itself), in which case I can't "control" it from inside vm100?

Hope I made myself clear
Thanks for your help
 
Actually, either of them can, and will, contribute to this issue. So do snapshots (both LVM-level and VM-level). As the warning stated, you have provisioned 8GB more space than you actually have. Also, LVM thin storage is copy-on-write (COW), which always writes new data to the volume first and only removes/releases the "overwritten" data afterwards - otherwise snapshots wouldn't be possible.

As the VG continues to fill up, you will surely hit that issue again.
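
The second warning also points to thin pool auto-extension. As a sketch (and only a mitigation - it buys time while the VG still has free space, it does not fix the over-provisioning), the relevant settings live in /etc/lvm/lvm.conf:
Code:
# /etc/lvm/lvm.conf - activation section
activation {
    # auto-extend a thin pool once its data usage reaches 80%...
    thin_pool_autoextend_threshold = 80
    # ...growing it by 20% of its current size each time
    thin_pool_autoextend_percent = 20
}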
 
OK, thank you so much.
It happens that I didn't install or configure a single bit on this server,
except of course for solving the boot problem on vm100 with your valuable help.

That said, I am now responsible for this server, so
I'm considering wiping it and reinstalling it, hopefully in a correct manner.

My question is:
would reinstalling Proxmox be the best option in order to avoid future problems of this kind,
or could the LVM pool be safely resized without reinstallation?

thanks in advance
Mauricio
 
You might be able to reduce your LVs, but that would also depend on the filesystem used inside the VM. Also… if you think about re-installing the server, make sure that you back up your VM first.
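
For the backup, a minimal sketch (the storage name "local" is an assumption - use whichever backup-capable storage exists on your node):
Code:
# full backup of VM 100 while it is stopped, written to the 'local' storage
vzdump 100 --mode stop --storage local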
 
