PVE 6.0-6 / LXC / Latest (pvetest) breaks horribly.

devinacosta
I am running Proxmox 6.0-6 (pvetest) and I have 3 LXC containers that went into a bizarre state. Yesterday, after the system was updated with the latest pvetest packages, I noticed that the configs for the LXC instances got mangled. What is strange is that it seems to be corrupting the LXC .conf files, and parts of them go missing. For instance, on vmid 1201 the config file shows:

root@admin-virt01:/etc/pve/lxc# cat 1201.conf
hostname: wazuh-manager
lock: snapshot
memory: 8192
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.241.100.5,hwaddr=02:5A:3C:

[vzdump]
#vzdump backup snapshot
hostname: wazuh-manager
memory: 8192
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.241.100.5,hwaddr=02:5A:3C:
snapstate: prepare
snaptime: 1567324341

All 3 instances had their config files messed up by Proxmox. A backup was attempted last night, and they got stuck. It appears that Proxmox is truncating half of the MAC address and corrupting the configuration. I also noticed it was messing with my "arch" configuration option as well.
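
For reference, once the underlying bug is fixed I assume the stale snapshot state left behind by the failed backup can be cleared from the CLI with something like the following (assuming the half-finished snapshot is the usual "vzdump" one that snapshot-mode backups create):

# release the stale "lock: snapshot" left over from the failed backup
pct unlock 1201
# drop the half-created vzdump snapshot; --force removes it from the
# config even if the storage-side snapshot is already gone
pct delsnapshot 1201 vzdump --force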


(screenshot attached)



The GUI now shows a bizarre error message as well (screenshot attached).

Something is horribly broken with the latest (pvetest) updates to LXC.


Sometimes when I cat the /etc/pve/lxc/1201.conf file, it shows up like this. What is with the "-nodes/admin-virt01/lxc/1201.conf.tmp.3762183" prefix? It's like corosync keeps rewriting the config file:

-nodes/admin-virt01/lxc/1201.conf.tmp.3762183arch: amd64
cores: 4
hostname: wazuh-manager
memory: 8192
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.241.146.5,hwaddr=02:5A:3C:30:77:0C,ip=10.241.147.101/20,type=veth
onboot: 1
ostype: centos
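
Since /etc/pve is just the pmxcfs FUSE view, I also wanted to check what is actually stored in the cluster config database on disk, to see whether the corruption is in the database itself or only in how pmxcfs serves the file. A rough sketch of that check (read-only; the table and column names are from memory, so treat them as assumptions):

# the authoritative copy of /etc/pve lives in the pmxcfs sqlite database
sqlite3 /var/lib/pve-cluster/config.db \
  "SELECT name, length(data) FROM tree WHERE name = '1201.conf';"
# dump the stored contents of the container config for comparison
sqlite3 /var/lib/pve-cluster/config.db \
  "SELECT data FROM tree WHERE name = '1201.conf';"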


Also, when I delete the container and then try to re-create it by restoring from a backup file, it shows this:




Task viewer: CT 1201 - Restore

vm 1201 - unable to parse config: Ý-nodes/admin-virt01/lxc/1201.conf.tmp.3762183arch: amd64
vm 1201 - unable to parse config: r
/dev/rbd3
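
To see whether the corruption is already baked into the backup itself, the container config stored inside the vzdump archive can be dumped directly. A sketch, assuming the default local dump path, an lzo-compressed archive, and the archive layout I remember (adjust the file name to the real backup):

# vzdump keeps the container config inside the archive as ./etc/vzdump/pct.conf
# (lzop must be installed for tar to read a .tar.lzo archive)
tar -xOf /var/lib/vz/dump/vzdump-lxc-1201-*.tar.lzo ./etc/vzdump/pct.conf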


Package versions

proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.2-pve1
ceph-fuse: 14.2.2-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.11-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-6
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2
 
It appears that downgrading to pve-cluster 6.0-5 fixes the issue. It looks like something in pve-cluster 6.0-6 is horribly broken.
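
In case it helps anyone else, the downgrade itself is just a matter of pinning the package back (assuming the 6.0-5 package is still available from the repository or in the local apt cache):

# roll pve-cluster back to the previous version until a fix is released
apt install pve-cluster=6.0-5
# restart pmxcfs so /etc/pve is served by the downgraded version
systemctl restart pve-cluster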
 
Thanks for the report! - We are currently working on a fix.
 
