local-lvm (host) is full - VM doesn't start

bigstep
Sep 15, 2019
Hello all,
Please help with this situation: the local-lvm disk is full and the VM doesn't start :(
Can you please indicate how I can delete some files and make space?
Thanks a lot.
I am totally new to this :(
[screenshot attached]
 
If you click on local-lvm on the left, select the Content option, and attach a screenshot, we can see if there is anything you might be able to move.
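
The same content list is also visible from the shell with the Proxmox storage tool (assuming the storage really is named local-lvm):

pvesm list local-lvm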
 
From SSH, can you run the following and paste the output here:

vgdisplay
lvdisplay
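
A compact alternative is "lvs", which includes a Data% column for thin pools and makes a full pool easy to spot (assuming the default VG name, pve):

lvs pve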
 
root@host:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 16
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 223.04 GiB
PE Size 4.00 MiB
Total PE 57097
Alloc PE / Size 55561 / 217.04 GiB
Free PE / Size 1536 / 6.00 GiB
VG UUID XxhNYq-Boum-4Wrg-lE3p-yLKK-RRmc-kf30Ll

--- Volume group ---
VG Name Mihai
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.09 TiB
PE Size 4.00 MiB
Total PE 286159
Alloc PE / Size 258560 / 1010.00 GiB
Free PE / Size 27599 / 107.81 GiB
VG UUID PTAhiV-205i-Tblb-Rx2i-BBSQ-4aaP-0YUncE
 
root@host:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID WibCPN-QduK-kiK3-2SKB-ql2q-m3ph-yCDqkQ
LV Write Access read/write
LV Creation host, time proxmox, 2019-05-24 05:13:48 +0300
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID 6ISfkv-uK6Y-CF19-9s27-XZlP-UvbS-5FxkB7
LV Write Access read/write
LV Creation host, time proxmox, 2019-05-24 05:13:48 +0300
LV Status available
# open 1
LV Size 65.75 GiB
Current LE 16832
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID FObMYZ-WBod-RV9C-eYTs-0gmN-o0iw-y2ZFsy
LV Write Access read/write
LV Creation host, time proxmox, 2019-05-24 05:13:49 +0300
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 2
LV Size 140.42 GiB
Allocated pool data 100.00%
Allocated metadata 4.83%
Current LE 35947
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

--- Logical volume ---
LV Path /dev/pve/vm-100-disk-0
LV Name vm-100-disk-0
VG Name pve
LV UUID LFtNgq-7r20-rVuV-E8mU-Pgzj-M5em-aoG1Kq
LV Write Access read/write
LV Creation host, time host, 2019-05-27 10:07:48 +0300
LV Pool name data
LV Status available
# open 1
LV Size 180.00 GiB
Mapped size 78.01%
Current LE 46080
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7

--- Logical volume ---
LV Path /dev/Mihai/vm-100-disk-0
LV Name vm-100-disk-0
VG Name Mihai
LV UUID NVhKw9-sHcO-5WL1-yOPO-serC-53W6-lWHkyP
LV Write Access read/write
LV Creation host, time host, 2019-05-27 11:40:34 +0300
LV Status available
# open 1
LV Size 1010.00 GiB
Current LE 258560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
 
You have 6 GiB free on the pve VG.

You can add 5 GiB to the thin pool LV that holds the disks; this will let the VM boot, but you'll then need to either remove some files or shrink the raw disk to get back some free space. The thin pool in /dev/pve is only 140 GiB, but you created a raw disk that can grow to at most 180 GiB.

lvextend -L +5G /dev/pve/data
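
Once that's done, "lvs pve/data" should show the pool's Data% back under 100% (assuming the pool is the "data" LV from your lvdisplay output):

lvs pve/data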

Can you attach the VM.conf file contents?
 
Thank you. I added the space like you said and the VM started. The thing is that on C: and E: there is only 30 GB occupied; the rest is free on those partitions.

How do I get the VM.conf file?
 
cat /etc/pve/qemu-server/100.conf
 
root@host:~# cat /etc/pve/qemu-server/100.conf
bootdisk: sata0
cores: 12
ide2: none,media=cdrom
memory: 32000
name: Windows
net0: e1000=A6:65:D1:98:ED:64,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
sata0: local-lvm:vm-100-disk-0,size=180G
sata1: Hard1T:vm-100-disk-0,size=1010G
scsihw: virtio-scsi-pci
smbios1: uuid=1eae0b23-f957-4af2-93ee-20b1bdf1d178
sockets: 1
vmgenid: a88ecfd4-d87b-456a-a2ad-8e51bb6a6922
 
OK, for discard to work (to clear up the free space) you need to add a new small 1 GiB disk (use Hard1T); select SCSI from the drop-down list when you add the disk.
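
If you'd rather do it from the shell, something like this should create the same 1 GiB SCSI disk (assuming VM ID 100 and that the scsi1 slot is free, since scsi0 is needed for the step below):

qm set 100 --scsi1 Hard1T:1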

Download https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso from within Windows and use the drivers on it to install the missing driver for the new SCSI disk.

You should then be good to shut down the VM and edit the config file:

Change "sata0: local-lvm:vm-100-disk-0,size=180G" to "scsi0: local-lvm:vm-100-disk-0,discard=on,size=180G"

Please make sure you back up any important files, as there is a chance the OS may still not boot.
 
The VM booted after I extended the space like you mentioned; now it works.
It looks to me like the initial configuration might not have been the best,
as I have no idea what could be occupying so much space. This is a new machine, and the configuration was supposed to be 2 x 256 GB SSD and 1 x 1 TB HDD as backup.
 
The 2 x 256 GB SSDs were supposed to be like a mirror: if one fails there is always the other, and the 1 TB HDD is for backups.
I will add the extra disk; the thing is, I do not know what could be using so much space.
[screenshot attached]
 
You don't have discard set up, so when files are written to C: and then deleted, the deletion doesn't actually get passed through from the VM to the LVM storage.
 
And can I set up discard now? Will it impact the data that is in the files now?

Once you have done the steps I listed earlier and added the discard statement in the config, you just need to wait for Windows to run its internal cleanup. You can force this / change how often it runs here: https://www.howtogeek.com/257196/ho...nabled-for-your-ssd-and-enable-it-if-it-isnt/
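If I remember right, you can also check and force this from an administrator command prompt inside Windows with the built-in tools: "fsutil behavior query DisableDeleteNotify" reports 0 when TRIM/discard is enabled, and "defrag C: /L" forces a retrim of the volume.

fsutil behavior query DisableDeleteNotify
defrag C: /L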

You should then notice the disk space decrease within the LVM.

It will have no effect on the current data.
 
Hi, help please
Why is the local-lvm on my newly installed server saying it is full?
[screenshot attached]
root@vdi:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <1.82 TiB
PE Size 4.00 MiB
Total PE 476790
Alloc PE / Size 472598 / 1.80 TiB
Free PE / Size 4192 / <16.38 GiB
VG UUID SpOYzI-lAQW-5HHz-5TxC-8mYK-lv1i-RZFxw2

root@vdi:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID VTmBzm-DirC-NCFl-YQgo-WH7r-J9Dq-dkNWZC
LV Write Access read/write
LV Creation host, time proxmox, 2022-08-31 09:24:59 -0500
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID kEv1GE-Ma1v-LQZG-0wVa-a3Vj-ExTh-Yb2U5u
LV Write Access read/write
LV Creation host, time proxmox, 2022-08-31 09:24:59 -0500
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID ipEc3b-scke-rFIF-UbMw-lHJ3-gCf0-efnVDx
LV Write Access read/write
LV Creation host, time proxmox, 2022-08-31 09:25:00 -0500
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size 1.67 TiB
Allocated pool data 0.00%
Allocated metadata 0.15%
Current LE 437878
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

root@vdi:~#
 
Something there doesn't fit: "local-lvm" is shown as "LVM" and not as "LVM-Thin", but you have a 1.67 TiB thin pool according to lvdisplay.
Did you edit the storage.cfg or edit the storages using the webUI?
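
For comparison, a default LVM-Thin entry in /etc/pve/storage.cfg looks like this (default names shown; yours may differ):

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images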
 
