local-pvm FULL (?) ERROR: startup for container...

d_G

New Member
May 17, 2024
Hi,
the container was running fine for a long time, then:

Code:
run_buffer: 571 Script exited with status 32
lxc_init: 845 Failed to run lxc.hook.pre-start for container "100"
__lxc_start: 2034 Failed to initialize container "100"
TASK ERROR: startup for container '100' failed

local-pvm is full... (!) I assume this was the reason for the container issue.
How do I flush it?
Thank you.
 
local-pvm? Maybe you mean local-lvm?

You provide NO details of your system and expect help!
Thank you for the reply; let me know what info you need.
Yes: local-lvm (pve) is 100% full.
EDIT: local (pve) is at 31%.
Proxmox VE 8.2.2, with only one disk.
Code:
pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        13952032         4344560         8876952   31.14%
local-lvm     lvmthin     active        12181504        12181504               0  100.00%
I suppose that's why it was impossible for the LXC to boot up...
 
You need to migrate to a larger disk and expand the lvm-thin, or delete some unnecessary things that are using that storage,
or add an additional disk and move some VM disks to that new storage (create an additional lvm-thin or ZFS).
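One way to do the "delete some unnecessary things" variant for a container is to mount its rootfs from the host; a sketch, assuming CT 100 (from the error above) and that the thin pool is still readable:

```shell
pct stop 100                      # the CT must be stopped first
pct mount 100                     # mounts its rootfs under /var/lib/lxc/100/rootfs
# find the biggest directories inside the container:
du -xh --max-depth=2 /var/lib/lxc/100/rootfs | sort -h | tail
# ...delete what is not needed, then:
pct unmount 100
```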

Don't try to expand the lvm-thin with another PV, because then you are putting the whole thing at risk if one disk dies (you can do this, though, if it has redundancy underneath, e.g. if it's running on at least a RAID1).

Looks like you overcommitted and the VM disks expanded to 100% of the lvm-thin space - do you have TRIM (discard=on) active on the virtual disks?
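For the discard check, a sketch using VMID 100; the scsi0 disk name is an assumption, adjust it to the actual config:

```shell
# VM disks: check / enable the discard flag, then trim from inside the guest
qm config 100 | grep -i discard
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
fstrim -av            # run inside the guest after enabling discard

# LXC containers: Proxmox can trim the CT's volumes straight from the host
pct fstrim 100
```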
 
Thank you for the reply, Kingneutron.
Proxmox is installed on one disk, on 32 GB in ext4 (the disk is 512 GB); nothing else is installed,
and the LXC CT has 40 GB.
I never realized that the container could fill all the space. :eek:
I cannot enter the CT via shell...
No VMs installed, no disk expansion done.
No TRIM check done... :oops:

> delete some unnecessary things that are using that storage
But how do I achieve that?
Thanks.
 
Something is up with your disk usage/partitioning. You are showing a local (dir) storage of only 14 GB & a local-lvm of about 12 GB. That totals about 26 GB of a 512 GB disk - where is the rest? Did you partition it yourself? Did you copy an image to the disk? How did you expect to run a CT of 40 GB?

Maybe start by showing the output of lsblk
 
Code:
# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1                      259:0    0 238.5G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0    31G  0 part
  ├─pve-swap                 252:0    0   3.8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  13.6G  0 lvm  /
  ├─pve-data_tmeta           252:2    0     1G  0 lvm
  │ └─pve-data-tpool         252:4    0  11.6G  0 lvm
  │   ├─pve-data             252:5    0  11.6G  1 lvm
  │   └─pve-vm--100--disk--0 252:6    0    40G  0 lvm
  └─pve-data_tdata           252:3    0  11.6G  0 lvm
    └─pve-data-tpool         252:4    0  11.6G  0 lvm
      ├─pve-data             252:5    0  11.6G  1 lvm
      └─pve-vm--100--disk--0 252:6    0    40G  0 lvm

I was wrong about this unit; it is a 256 GB disk...
I specified the size during the creation of the Debian LXC and it was working well.
 
Code:
nvme0n1                      259:0    0 238.5G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0    31G  0 part

So as you can see yourself, of your 238.5G NVMe only about 32G is used by all its partitions.

IDK how you set this up to cause this - but in principle (you can search this stuff in regular Linux tutorials), what you need to do (in this order):

1. Boot into live-Linux media &, using GParted, increase the nvme0n1p3 partition to use the rest of the unused disk.

2. Then you have to deal with the PV (Physical Volume) with something like pvresize /dev/nvme0n1p3

3. Then you'll need to deal with the LV (Logical Volume) with something like lvresize -l +100%FREE pve/data (you may/should deal with metadata too).
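Steps 2-3 might look like this once the partition has been grown; a sketch using the device/VG names from this thread, so double-check them before running anything:

```shell
pvresize /dev/nvme0n1p3                    # step 2: grow the PV into the enlarged partition
lvextend --poolmetadatasize +1G pve/data   # give the thin pool's metadata headroom first
lvextend -l +100%FREE pve/data             # step 3: grow the thin pool itself
lvs                                        # Data% of pve/data should now be well below 100%
```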

PLEASE NOTE: MAKE SURE YOU HAVE FULL & RESTORABLE BACKUPS OF EVERYTHING BEFORE DOING ANY OF THIS! YOU'VE BEEN WARNED!
 
Sorry I forgot to mention - in your case - I'd just start with a fresh install - it will be a whole lot easier!
 
> IDK how you set this up to cause this
About the 32 GB: with the Proxmox installer it is quite easy to set an amount of space.

So there is no way to enter the container (EDIT: from pve) to delete some files?
Keeping in mind that the CT was up and running well before all the space filled up: to avoid this in the future, could it be an error of the disk itself, or of Docker?
Must the amount specified for a CT be counted within the 32 GB (in my case) chosen during the installation of VE?
EDIT: after restarting the pve, today the CT is green (it was greyed out yesterday), but of course when entering its shell I see only a blinking cursor, no login/password prompt.

Code:
 lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotzD- <11.62g             100.00 1.97                           
  root          pve -wi-ao----  13.62g                                                   
  swap          pve -wi-ao----   3.75g                                                   
  vm-100-disk-0 pve Vwi-aotz--  40.00g data        29.04
 
I don't understand most of your mumbo jumbo - but I've already explained to you what is going on & how to fix it.

As you can see from your lvs output, you have used all of the LVM space available (your vm-100-disk-0 is at 29% of 40 GB - so 11.6 GB used), which together with root & swap fills up the whole 32 GB partition. Why would you leave a disk in this state, with most of it unused & unpartitioned?
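The 29% figure lines up with the pool size; a quick check of the arithmetic (numbers taken from the lvs output above):

```shell
# 29.04% of the CT's 40 GiB volume = the space it really occupies in the
# thin pool -- essentially the pool's entire <11.62g capacity.
awk 'BEGIN { printf "%.1f GiB\n", 40 * 29.04 / 100 }'
```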

Do yourself a favor and reinstall (properly this time). Just accept a standard install - you should be good to go for your needs.

If you have anything you really need - just make a backup to another drive before reinstalling.
 
I understand you don't like to listen - YOU ARE GOING TO BRICK YOUR DEVICE
You seem not to have a basic grasp of what is what in Linux systems. I think it is time you do some googling of the basics: disk partitioning, PVs, LVs, thin provisioning, etc. You should also start learning where & how VM & LXC disks reside & how they use their space in Proxmox.
 
