After Proxmox VE 6 to VE 7 upgrade, LXC containers fail to start due to missing “data” storage and volume parsing errors

prassant

New Member
Aug 8, 2025
Hello everyone,


I’m relatively new to Proxmox and recently upgraded my company’s Proxmox servers from VE 6 to VE 7. The upgrade itself went mostly fine, but I ran into some issues after rebooting.


I had to add the HW address to my vmbr0 interface for the server to come back online properly. Now Proxmox is accessible, but my LXC containers won’t start.
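For reference, the vmbr0 change was just pinning the bridge MAC address in /etc/network/interfaces; the NIC name and addresses below are placeholders, not my real values:

Code:
# /etc/network/interfaces (excerpt) - placeholder addresses and NIC name
auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # the line I had to add so the server came back up with the expected MAC
        hwaddress aa:bb:cc:dd:ee:ff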


The problem seems to be that after the upgrade, my containers’ rootfs configurations point to a storage called data which did not exist before the upgrade — previously everything was on local.


If I look into the local storage (/var/lib/vz), I can see my VM disks there, so the VMs themselves can start, but none of the LXC container volumes appear in the Proxmox UI or can be mounted.


I have removed the data storage entry from my storage.cfg because it was never there before.
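In case it helps, these are the checks I plan to run next to see whether the container volumes still exist anywhere (the paths are just my guesses):

Code:
# any leftover thin LVs that could be the old container disks?
lvs -a
# a dir storage keeps container volumes under images/<vmid>/
ls -l /var/lib/vz/images/
# broad search for anything that looks like a container/VM disk image
find / -xdev -name 'vm-*-disk-*' 2>/dev/null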


Here is some information about my system:
Code:
lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   1.8T  0 disk
├─sda1              8:1    0  19.5G  0 part /
├─sda2              8:2    0   3.9G  0 part [SWAP]
├─sda3              8:3    0     1K  0 part
└─sda5              8:5    0   1.8T  0 part
  ├─pve-local     253:0    0   1.7T  0 lvm  /var/lib/vz
  └─pve-backup    253:1    0 146.5G  0 lvm  /var/lib/vz/dump
sdb                 8:16   1   1.8T  0 disk
├─lms-lms_tmeta   253:2    0   116M  0 lvm
│ └─lms-lms-tpool 253:4    0   1.8T  0 lvm
│   └─lms-lms     253:5    0   1.8T  1 lvm
└─lms-lms_tdata   253:3    0   1.8T  0 lvm
  └─lms-lms-tpool 253:4    0   1.8T  0 lvm
    └─lms-lms     253:5    0   1.8T  1 lvm


And my current /etc/pve/storage.cfg looks like this:
Code:
dir: local
        path /var/lib/vz
        content rootdir,images,iso,snippets,vztmpl
        maxfiles 0

lvmthin: lms
        thinpool lms
        vgname lms
        content rootdir,images

dir: backup
        path /var/lib/vz/dump
        content images,backup,iso
        nodes ns397318
        prune-backups keep-all=1
        shared 0


My containers’ config files (like /etc/pve/lxc/102.conf) still reference local:vm-102-disk-0 as the rootfs, but that volume cannot be found, and starting the container fails with errors like:

Code:
TASK ERROR: unable to parse volume filename 'vm-102-disk-0'

I suspect the upgrade renamed the storage or changed how storage names are handled, but I don’t have a data storage defined, and my LXC containers’ rootfs volumes aren’t showing up under the local storage directory as I would expect.
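From reading the storage documentation, my (possibly wrong) understanding is that volume IDs look different on a directory storage than on an LVM-thin storage, which might be why the volume name can no longer be parsed:

Code:
# dir storage (e.g. "local"): <storage>:<vmid>/<filename>, including an extension
rootfs: local:102/vm-102-disk-0.raw,size=8G
# lvmthin storage (e.g. "lms"): <storage>:<LV name>, no path and no extension
rootfs: lms:vm-102-disk-0,size=8G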


Does anyone have an idea on how to fix the missing container volumes and properly remount them so that my containers can start again?


Thank you in advance!
 
better late than never, I guess ;)

please post the following:

"pveversion -v"
"pct config 102"
"pvesm status"
"pvesm list local"
"pvesm list lsm"

thanks!
 
Thanks :D

Of course:

Code:
pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.158-2-pve)
pve-manager: 7.4-20 (running version: 7.4-20/5d6e3351)
pve-kernel-5.15: 7.4-15
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.158-2-pve: 5.15.158-2
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.78-1-pve: 5.4.78-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.3
libpve-apiclient-perl: 3.2-2
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-5
libpve-http-server-perl: 4.3.0
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-4
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.7-1
proxmox-backup-file-restore: 2.4.7-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.4
pve-cluster: 7.3-3
pve-container: 4.4-7
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+3
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.10-1
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-7
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
Code:
pct config 102
hostname: reverseProxy
memory: 2048
net0: name=eth0,bridge=vmbr2,firewall=1,gw=192.168.10.254,hwaddr=FA:59:8F:84:6C:E5,ip=192.168.10.2/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local:vm-102-disk-0,size=8G
swap: 2048
unprivileged: 1


Code:
pvesm status
Name          Type     Status           Total            Used       Available        %
backup         dir     active       150993700        15495296       127802228   10.26%
lms        lvmthin     active      1943474176               0      1943474176    0.00%
local          dir     active      1747271788         9863780      1648625160    0.56%


Code:
root@ns397318:~# pvesm list local
Volid                                                     Format  Type            Size VMID
local:100/vm-100-disk-0.qcow2                             qcow2   images    8589934592 100
local:iso/centreon-19.10-1.el7.x86_64.iso                 iso     iso       1337137152
local:iso/pfSense-CE-2.4.4-RELEASE-p3-amd64.iso           iso     iso        696539136
local:iso/ubuntu-18.04.3-desktop-amd64.iso                iso     iso       2082816000
local:iso/ubuntu-20.04.1-live-server-amd64.iso            iso     iso        958398464
local:iso/ubuntu-24.04-live-server-amd64.iso              iso     iso       2754981888
local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz tgz     vztmpl     213430501
local:vztmpl/ubuntu-19.10-standard_19.10-1_amd64.tar.gz   tgz     vztmpl     219093821
local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz   tgz     vztmpl     214203058



Code:
pvesm list lsm
storage 'lsm' does not exist

pvesm list lms
Volid Format  Type      Size VMID



Here is all the data.
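Since pvesm reports the lms pool as empty, I also plan to check the thin pool directly with lvs. If a leftover LV for CT 102 does turn up there, my understanding (please correct me if I'm wrong) is that the rootfs could be repointed roughly like this:

Code:
# does a vm-102-disk-0 LV still exist on the lms thin pool?
lvs lms
# if so, repoint the container's rootfs at it (size taken from the old config)
pct set 102 --rootfs lms:vm-102-disk-0,size=8G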